Jan 27 18:42:36 crc systemd[1]: Starting Kubernetes Kubelet...
Jan 27 18:42:36 crc restorecon[4682]: Relabeled /var/lib/kubelet/config.json from system_u:object_r:unlabeled_t:s0 to system_u:object_r:container_var_lib_t:s0
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/device-plugins not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/device-plugins/kubelet.sock not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/volumes/kubernetes.io~configmap/nginx-conf/..2025_02_23_05_40_35.4114275528/nginx.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/22e96971 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/21c98286 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/0f1869e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/46889d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/5b6a5969 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/6c7921f5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4804f443 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/2a46b283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/a6b5573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4f88ee5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/5a4eee4b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/cd87c521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/38602af4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/1483b002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/0346718b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/d3ed4ada not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/3bb473a5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/8cd075a9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/00ab4760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/54a21c09 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/70478888 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/43802770 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/955a0edc not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/bca2d009 not reset as customized by admin to system_u:object_r:container_file_t:s0:c140,c1009
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/b295f9bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/bc46ea27 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5731fc1b not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5e1b2a3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/943f0936 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/3f764ee4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/8695e3f9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/aed7aa86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/c64d7448 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/0ba16bd2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/207a939f not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/54aa8cdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/1f5fa595 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/bf9c8153 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/47fba4ea not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/7ae55ce9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7906a268 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/ce43fa69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7fc7ea3a not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/d8c38b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/9ef015fb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/b9db6a41 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/b1733d79 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/afccd338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/9df0a185 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/18938cf8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/7ab4eb23 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/56930be6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_35.630010865 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/0d8e3722 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/d22b2e76 not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/e036759f not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/2734c483 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/57878fe7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/3f3c2e58 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/375bec3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/7bc41e08 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/48c7a72d not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/4b66701f not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/a5a1c202 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_40.1388695756 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/26f3df5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/6d8fb21d not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/50e94777 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208473b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/ec9e08ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3b787c39 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208eaed5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/93aa3a2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3c697968 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/ba950ec9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/cb5cdb37 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/f2df9827 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/fedaa673 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/9ca2df95 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/b2d7460e not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2207853c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/241c1c29 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2d910eaf not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/c6c0f2e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/399edc97 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8049f7cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/0cec5484 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/312446d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c406,c828
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8e56a35d not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/2d30ddb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/eca8053d not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/c3a25c9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c168,c522
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/b9609c22 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/e8b0eca9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/b36a9c3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/38af7b07 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/ae821620 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/baa23338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/2c534809 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/59b29eae not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/c91a8e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/4d87494a not reset as customized by admin to system_u:object_r:container_file_t:s0:c442,c857
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/1e33ca63 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/8dea7be2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d0b04a99 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d84f01e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/4109059b not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/a7258a3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/05bdf2b6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/f3261b51 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/315d045e not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/5fdcf278 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/d053f757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/c2850dc7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fcfb0b2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c7ac9b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fa0c0d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c609b6ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/2be6c296 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/89a32653 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/4eb9afeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/13af6efa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/b03f9724 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/e3d105cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/3aed4d83 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/0765fa6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/2cefc627 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/3dcc6345 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/365af391 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b1130c0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/236a5913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b9432e26 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/5ddb0e3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/986dc4fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/8a23ff9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/9728ae68 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/665f31d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/136c9b42 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/98a1575b not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/cac69136 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/5deb77a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/2ae53400 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/e46f2326 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/dc688d3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/3497c3cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/177eb008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/af5a2afa not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/d780cb1f not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/49b0f374 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/26fbb125 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/cf14125a not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/b7f86972 not reset as customized by
admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/e51d739c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/88ba6a69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/669a9acf not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/5cd51231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/75349ec7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/15c26839 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/45023dcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/2bb66a50 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/64d03bdd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/ab8e7ca0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/bb9be25f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/9a0b61d3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/d471b9d2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/8cb76b8e not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/11a00840 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/ec355a92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/992f735e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d59cdbbc not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 27 18:42:36 crc restorecon[4682]: 
/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/72133ff0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/c56c834c not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d13724c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/0a498258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa471982 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fc900d92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa7d68da not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/4bacf9b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/424021b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/fc2e31a3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/f51eefac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/c8997f2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/7481f599 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 27 18:42:36 crc restorecon[4682]: 
/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/fdafea19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/d0e1c571 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/ee398915 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/682bb6b8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a3e67855 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a989f289 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/915431bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/7796fdab not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/dcdb5f19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/a3aaa88c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 27 18:42:36 crc restorecon[4682]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/5508e3e6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/160585de not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/e99f8da3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/8bc85570 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/a5861c91 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/84db1135 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/9e1a6043 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/c1aba1c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/d55ccd6d not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/971cc9f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/8f2e3dcf not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/ceb35e9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/1c192745 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/5209e501 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/f83de4df not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/e7b978ac not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/c64304a1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 27 18:42:36 crc restorecon[4682]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/5384386b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/cce3e3ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/8fb75465 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/740f573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/32fd1134 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/0a861bd3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/80363026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/bfa952a8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c129,c158 Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..2025_02_23_05_33_31.333075221 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/793bf43d not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/7db1bb6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/4f6a0368 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/c12c7d86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/36c4a773 not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/4c1e98ae not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/a4c8115c not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/setup/7db1802e not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver/a008a7ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-syncer/2c836bac not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-regeneration-controller/0ce62299 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-insecure-readyz/945d2457 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c97,c980 Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-check-endpoints/7d5c1dd8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:36 crc restorecon[4682]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:36 crc restorecon[4682]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/index.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:36 crc restorecon[4682]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:36 crc restorecon[4682]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:36 crc restorecon[4682]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/bundle-v1.15.0.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/channel.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/package.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/bc8d0691 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/6b76097a not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/34d1af30 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/312ba61c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/645d5dd1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/16e825f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/4cf51fc9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/2a23d348 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/075dbd49 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/dd585ddd not reset as customized by admin to system_u:object_r:container_file_t:s0:c377,c642
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/17ebd0ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c343
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/005579f4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_23_11.1287037894 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/bf5f3b9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/af276eb7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/ea28e322 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/692e6683 not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/871746a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/4eb2e958 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/ca9b62da not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/0edd6fce not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/containers/controller-manager/89b4555f not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/655fcd71 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/0d43c002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/e68efd17 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/9acf9b65 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/5ae3ff11 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/1e59206a not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/27af16d1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c304,c1017
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/7918e729 not reset as customized by admin to system_u:object_r:container_file_t:s0:c853,c893
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/5d976d0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c585,c981
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/d7f55cbb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/f0812073 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/1a56cbeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/7fdd437e not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/cdfb5652 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/fix-audit-permissions/fb93119e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver/f1e8fc0e not reset as customized by admin to
system_u:object_r:container_file_t:s0:c336,c787 Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver-check-endpoints/218511f3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server/serving-certs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/ca8af7b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/72cc8a75 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/6e8a3760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4c3455c0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 27 18:42:36 crc restorecon[4682]: 
/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/2278acb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4b453e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/3ec09bda not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2/cacerts.bin not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java/cacerts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl/ca-bundle.trust.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/email-ca-bundle.pem not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/objsign-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2ae6433e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fde84897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75680d2e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/openshift-service-serving-signer_1740288168.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/facfc4fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f5a969c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CFCA_EV_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9ef4a08a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ingress-operator_1740288202.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2f332aed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/248c8271.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:42:36 crc restorecon[4682]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d10a21f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ACCVRAIZ1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a94d09e5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c9a4d3b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40193066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd8c0d63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b936d1c6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CA_Disig_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4fd49c6c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM_SERVIDORES_SEGUROS.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b81b93f0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f9a69fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:42:36 crc restorecon[4682]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b30d5fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ANF_Secure_Server_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b433981b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93851c9e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9282e51c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7dd1bc4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Actalis_Authentication_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/930ac5d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f47b495.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e113c810.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5931b5bc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Commercial.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:42:36 crc restorecon[4682]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2b349938.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e48193cf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/302904dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a716d4ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Networking.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93bc0acc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/86212b19.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b727005e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbc54cab.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f51bb24c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c28a8a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:42:36 crc restorecon[4682]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9c8dfbd4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ccc52f49.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cb1c3204.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ce5e74ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd08c599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6d41d539.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb5fa911.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e35234b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:42:36 crc restorecon[4682]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8cb5ee0f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a7c655d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f8fc53da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/de6d66f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d41b5e2a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/41a3f684.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1df5a75f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_2011.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e36a6752.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b872f2b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9576d26b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/228f89db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:42:36 crc restorecon[4682]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_ECC_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb717492.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d21b73c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b1b94ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/595e996b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_RSA_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b46e03d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/128f4b91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_3_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81f2d2b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Autoridad_de_Certificacion_Firmaprofesional_CIF_A62634068.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3bde41ac.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d16a5865.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:42:36 crc restorecon[4682]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_EC-384_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0179095f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ffa7f1eb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9482e63a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4dae3dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e359ba6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7e067d03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/95aff9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7746a63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Baltimore_CyberTrust_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/653b494a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:42:36 crc restorecon[4682]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3ad48a91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_2_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/54657681.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/82223c44.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8de2f56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d9dafe4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d96b65e2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee64a828.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40547a79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5a3f0ff8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a780d93.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:42:36 crc restorecon[4682]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/34d996fb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/eed8c118.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/89c02a45.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b1159c4c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d6325660.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4c339cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8312c4c1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_E1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8508e720.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5fdd185d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:42:36 crc restorecon[4682]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48bec511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/69105f4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b9bc432.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/32888f65.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:42:36 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b03dec0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/219d9499.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5acf816d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbf06781.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc99f41e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AAA_Certificate_Services.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/985c1f52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8794b4e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_BR_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7c037b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ef954a4e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_EV_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2add47b6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/90c5a3c8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0f3e76e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/53a1b57a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_EV_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5ad8a5d6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/68dd7389.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d04f354.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d6437c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/062cdee6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bd43e1dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7f3d5d1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c491639e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3513523f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/399e7759.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/feffd413.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d18e9066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/607986c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c90bc37d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1b0f7e5c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e08bfd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dd8e9d41.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed39abd0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a3418fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bc3f2570.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_High_Assurance_EV_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/244b5494.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81b9768f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4be590e0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_ECC_P384_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9846683b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/252252d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e8e7201.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_RSA4096_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d52c538d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c44cc0c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Trusted_Root_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75d1b2ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a2c66da8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ecccd8db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust.net_Certification_Authority__2048_.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/aee5f10d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e7271e8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0e59380.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4c3982f2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b99d060.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf64f35b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0a775a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/002c0b4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cc450945.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_EC1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/106f3e4d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b3fb433b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4042bcee.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/02265526.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/455f1b52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0d69c7e1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9f727ac7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5e98733a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0cd152c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc4d6a89.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6187b673.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/FIRMAPROFESIONAL_CA_ROOT-A_WEB.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ba8887ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/068570d1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f081611a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48a195d8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GDCA_TrustAUTH_R5_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f6fa695.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab59055e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b92fd57f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GLOBALTRUST_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fa5da96b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ec40989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7719f463.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1001acf7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f013ecaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/626dceaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c559d742.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1d3472b9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9479c8c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a81e292b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4bfab552.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e071171e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/57bcb2da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_ECC_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab5346f4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5046c355.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_RSA_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/865fbdf9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da0cfd1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/85cde254.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_ECC_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbb3f32b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureSign_RootCA11.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5860aaa6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/31188b5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HiPKI_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c7f1359b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f15c80c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hongkong_Post_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/09789157.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/18856ac4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e09d511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Commercial_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cf701eeb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d06393bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Public_Sector_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/10531352.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Izenpe.com.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureTrust_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0ed035a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsec_e-Szigno_Root_CA_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8160b96c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8651083.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2c63f966.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_ECC_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d89cda1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/01419da9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_RSA_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7a5b843.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_RSA_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf53fb88.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9591a472.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3afde786.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Gold_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NAVER_Global_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3fb36b73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d39b0a2c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a89d74c2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd58d51e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7db1890.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NetLock_Arany__Class_Gold__F__tan__s__tv__ny.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/988a38cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/60afe812.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f39fc864.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5443e9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GB_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e73d606e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dfc0fe80.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b66938e9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e1eab7c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GC_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/773e07ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c899c73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d59297b8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ddcda989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_1_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/749e9e03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/52b525c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7e8dc79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a819ef2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/08063a00.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b483515.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/064e0aa9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1f58a078.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6f7454b3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7fa05551.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76faf6c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9339512a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f387163d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee37c333.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e18bfb83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e442e424.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fe8a2cd8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/23f4c490.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5cd81ad7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0c70a8d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7892ad52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SZAFIR_ROOT_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4f316efb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_RSA_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/06dc52d5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/583d0756.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0bf05006.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/88950faa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9046744a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c860d51.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_RSA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6fa5da56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/33ee480d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Secure_Global_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/63a2c897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_ECC_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bdacca6f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ff34af3f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbff3a01.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_ECC_RootCA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_C1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/406c9bb1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_C3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Services_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Silver_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/99e1b953.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/14bc7599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TUBITAK_Kamu_SM_SSL_Kok_Sertifikasi_-_Surum_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a3adc42.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f459871d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_ECC_Root_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_RSA_Root_2023.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TeliaSonera_Root_CA_v1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telia_Root_CA_v2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f103249.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f058632f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-certificates.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9bf03295.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/98aaf404.0 not reset as customized by admin to
system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1cef98f5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/073bfcc5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2923b3f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f249de83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/edcbddb5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P256_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b5697b0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ae85e5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b74d2bd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 
18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P384_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d887a5bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9aef356c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TunTrust_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd64f3fc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e13665f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Extended_Validation_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f5dc4f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da7377f6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Global_G2_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c01eb047.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/304d27c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed858448.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:42:37 crc restorecon[4682]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f30dd6ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/04f60c28.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_ECC_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fc5a8f99.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/35105088.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee532fd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/XRamp_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/706f604c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76579174.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d86cdd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:42:37 crc restorecon[4682]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/882de061.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f618aec.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a9d40e02.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e-Szigno_Root_CA_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e868b802.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/83e9984f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ePKI_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca6e4ad9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d6523ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4b718d9b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/869fbf79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/containers/registry/f8d22bdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:42:37 crc 
restorecon[4682]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/6e8bbfac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/54dd7996 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/a4f1bb05 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/207129da not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/c1df39e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/15b8f1cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 27 18:42:37 crc restorecon[4682]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/77bd6913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/2382c1b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/704ce128 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/70d16fe0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/bfb95535 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/57a8e8e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/1b9d3e5e not reset as customized by admin to system_u:object_r:container_file_t:s0:c107,c917 Jan 27 18:42:37 crc restorecon[4682]: 
/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/fddb173c not reset as customized by admin to system_u:object_r:container_file_t:s0:c202,c983 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/95d3c6c4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/bfb5fff5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/2aef40aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/c0391cad not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/1119e69d not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/660608b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/8220bd53 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/85f99d5c not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/4b0225f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/9c2a3394 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/e820b243 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/1ca52ea0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/e6988e45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 27 18:42:37 crc restorecon[4682]: 
/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/6655f00b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/98bc3986 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/08e3458a not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/2a191cb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/6c4eeefb not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/f61a549c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/24891863 not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/fbdfd89c not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/9b63b3bc not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c37,c572 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/8acde6d6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/node-driver-registrar/59ecbba3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/csi-provisioner/685d4be3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 27 18:42:37 crc restorecon[4682]: 
/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/containers/route-controller-manager/feaea55e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 
18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg/catalog.json not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/63709497 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/d966b7fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/f5773757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/81c9edb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/57bf57ee not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/86f5e6aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/0aabe31d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/d2af85c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/09d157d9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator/catalog.json not reset as
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 
18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus/catalog.json not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc 
restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c0fe7256 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c30319e4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/e6b1dd45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/2bb643f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/920de426 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/70fa1e87 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/a1c12a2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/9442e6c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/5b45ec72 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/3c9f3a59 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/1091c11b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/9a6821c6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/ec0c35e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/517f37e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/6214fe78 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/ba189c8b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/351e4f31 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/c0f219ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/8069f607 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/559c3d82 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/605ad488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/148df488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/3bf6dcb4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c133,c223 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/022a2feb not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/938c3924 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/729fe23e not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/1fd5cbd4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/a96697e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/e155ddca not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/10dd0e0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 27 18:42:37 crc restorecon[4682]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/etc-hosts not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c682,c947 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/6f2c8392 not reset as customized by admin to system_u:object_r:container_file_t:s0:c267,c588 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/bd241ad9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/plugins not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/plugins/csi-hostpath not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/plugins/csi-hostpath/csi.sock not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/plugins/kubernetes.io not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/plugins/kubernetes.io/csi not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983 not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/vol_data.json not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 27 18:42:37 crc restorecon[4682]: /var/lib/kubelet/plugins_registry not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 27 18:42:37 crc restorecon[4682]: Relabeled /var/usrlocal/bin/kubenswrapper from system_u:object_r:bin_t:s0 to system_u:object_r:kubelet_exec_t:s0 Jan 27 18:42:37 crc kubenswrapper[4853]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 27 18:42:37 crc kubenswrapper[4853]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. Jan 27 18:42:37 crc kubenswrapper[4853]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 27 18:42:37 crc kubenswrapper[4853]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jan 27 18:42:37 crc kubenswrapper[4853]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 27 18:42:37 crc kubenswrapper[4853]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 27 18:42:37 crc kubenswrapper[4853]: I0127 18:42:37.910543 4853 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 27 18:42:37 crc kubenswrapper[4853]: W0127 18:42:37.913362 4853 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Jan 27 18:42:37 crc kubenswrapper[4853]: W0127 18:42:37.913383 4853 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Jan 27 18:42:37 crc kubenswrapper[4853]: W0127 18:42:37.913388 4853 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Jan 27 18:42:37 crc kubenswrapper[4853]: W0127 18:42:37.913392 4853 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Jan 27 18:42:37 crc kubenswrapper[4853]: W0127 18:42:37.913396 4853 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Jan 27 18:42:37 crc kubenswrapper[4853]: W0127 18:42:37.913400 4853 feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 27 18:42:37 crc kubenswrapper[4853]: W0127 18:42:37.913404 4853 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Jan 27 18:42:37 crc kubenswrapper[4853]: W0127 18:42:37.913408 4853 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Jan 27 18:42:37 crc kubenswrapper[4853]: W0127 18:42:37.913412 4853 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Jan 27 18:42:37 crc kubenswrapper[4853]: W0127 18:42:37.913417 4853 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 27 18:42:37 crc kubenswrapper[4853]: W0127 18:42:37.913421 4853 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 27 18:42:37 crc kubenswrapper[4853]: W0127 18:42:37.913426 4853 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Jan 27 18:42:37 crc kubenswrapper[4853]: W0127 18:42:37.913434 4853 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Jan 27 18:42:37 crc kubenswrapper[4853]: W0127 18:42:37.913438 4853 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Jan 27 18:42:37 crc kubenswrapper[4853]: W0127 18:42:37.913442 4853 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 27 18:42:37 crc kubenswrapper[4853]: W0127 18:42:37.913447 4853 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 27 18:42:37 crc kubenswrapper[4853]: W0127 18:42:37.913451 4853 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
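[The deprecation warnings above all point at the same migration: these kubelet flags should be expressed in the KubeletConfiguration file referenced by the kubelet's --config flag. The long run of "unrecognized feature gate" warnings from feature_gate.go:330 is the upstream kubelet reacting to OpenShift-side gate names it does not register; these are warnings only, and startup continues. A minimal sketch of the config-file equivalents follows — the endpoint path, taint, reservations, and threshold values are illustrative assumptions, not values read from this node:

apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
# --container-runtime-endpoint -> containerRuntimeEndpoint
# (assumed CRI-O socket path; substitute this node's actual endpoint)
containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
# --volume-plugin-dir -> volumePluginDir (illustrative path)
volumePluginDir: /etc/kubernetes/kubelet-plugins/volume/exec
# --register-with-taints -> registerWithTaints (illustrative taint)
registerWithTaints:
- key: node-role.kubernetes.io/master
  effect: NoSchedule
# --system-reserved -> systemReserved (illustrative reservations)
systemReserved:
  cpu: 500m
  memory: 1Gi
# --minimum-container-ttl-duration is deprecated outright; per the warning,
# eviction thresholds replace it (illustrative threshold)
evictionHard:
  memory.available: 100Mi
# Feature gates are passed the same way; the kubelet warns on names it does not
# know (the OpenShift-specific gates logged above) and honors the ones it does.
featureGates:
  KMSv1: true   # logged above as a deprecated gate being set

(--pod-infra-container-image has no config-file equivalent: as the warning notes, the sandbox image is now taken from the CRI runtime's own configuration.)]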
Jan 27 18:42:37 crc kubenswrapper[4853]: W0127 18:42:37.913456 4853 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Jan 27 18:42:37 crc kubenswrapper[4853]: W0127 18:42:37.913460 4853 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Jan 27 18:42:37 crc kubenswrapper[4853]: W0127 18:42:37.913463 4853 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 27 18:42:37 crc kubenswrapper[4853]: W0127 18:42:37.913467 4853 feature_gate.go:330] unrecognized feature gate: SignatureStores Jan 27 18:42:37 crc kubenswrapper[4853]: W0127 18:42:37.913470 4853 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Jan 27 18:42:37 crc kubenswrapper[4853]: W0127 18:42:37.913474 4853 feature_gate.go:330] unrecognized feature gate: PlatformOperators Jan 27 18:42:37 crc kubenswrapper[4853]: W0127 18:42:37.913478 4853 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Jan 27 18:42:37 crc kubenswrapper[4853]: W0127 18:42:37.913481 4853 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Jan 27 18:42:37 crc kubenswrapper[4853]: W0127 18:42:37.913485 4853 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Jan 27 18:42:37 crc kubenswrapper[4853]: W0127 18:42:37.913489 4853 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Jan 27 18:42:37 crc kubenswrapper[4853]: W0127 18:42:37.913492 4853 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Jan 27 18:42:37 crc kubenswrapper[4853]: W0127 18:42:37.913495 4853 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Jan 27 18:42:37 crc kubenswrapper[4853]: W0127 18:42:37.913499 4853 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 27 18:42:37 crc kubenswrapper[4853]: W0127 18:42:37.913502 4853 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Jan 27 18:42:37 crc kubenswrapper[4853]: W0127 18:42:37.913506 4853 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Jan 27 18:42:37 crc kubenswrapper[4853]: W0127 18:42:37.913509 4853 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 27 18:42:37 crc kubenswrapper[4853]: W0127 18:42:37.913513 4853 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Jan 27 18:42:37 crc kubenswrapper[4853]: W0127 18:42:37.913518 4853 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Jan 27 18:42:37 crc kubenswrapper[4853]: W0127 18:42:37.913532 4853 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Jan 27 18:42:37 crc kubenswrapper[4853]: W0127 18:42:37.913537 4853 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 27 18:42:37 crc kubenswrapper[4853]: W0127 18:42:37.913541 4853 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Jan 27 18:42:37 crc kubenswrapper[4853]: W0127 18:42:37.913545 4853 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 27 18:42:37 crc kubenswrapper[4853]: W0127 18:42:37.913549 4853 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Jan 27 18:42:37 crc kubenswrapper[4853]: W0127 18:42:37.913553 4853 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Jan 27 18:42:37 crc kubenswrapper[4853]: W0127 18:42:37.913557 4853 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
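The "unrecognized feature gate" warnings are benign at this level: they name OpenShift cluster-scoped gates (GatewayAPI, PinnedImages, and so on) that the kubelet's own gate registry does not define, so it warns and continues. Because the gate set is parsed on every pass, the same names recur; a sketch that reduces the noise to a distinct-gate tally (dump path and regex are assumptions):

    import re
    from collections import Counter

    LOG_PATH = "kubelet-journal.log"  # assumed dump location

    UNRECOGNIZED = re.compile(r"unrecognized feature gate: (\w+)")

    with open(LOG_PATH, encoding="utf-8") as fh:
        counts = Counter(UNRECOGNIZED.findall(fh.read()))

    # Equal counts across gates usually just mean the whole gate set
    # was re-parsed that many times, not that anything is wrong.
    for gate, n in counts.most_common():
        print(f"{n:3d}  {gate}")
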
Jan 27 18:42:37 crc kubenswrapper[4853]: W0127 18:42:37.913561 4853 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Jan 27 18:42:37 crc kubenswrapper[4853]: W0127 18:42:37.913567 4853 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Jan 27 18:42:37 crc kubenswrapper[4853]: W0127 18:42:37.913571 4853 feature_gate.go:330] unrecognized feature gate: NewOLM Jan 27 18:42:37 crc kubenswrapper[4853]: W0127 18:42:37.913575 4853 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Jan 27 18:42:37 crc kubenswrapper[4853]: W0127 18:42:37.913579 4853 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Jan 27 18:42:37 crc kubenswrapper[4853]: W0127 18:42:37.913583 4853 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Jan 27 18:42:37 crc kubenswrapper[4853]: W0127 18:42:37.913587 4853 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Jan 27 18:42:37 crc kubenswrapper[4853]: W0127 18:42:37.913591 4853 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 27 18:42:37 crc kubenswrapper[4853]: W0127 18:42:37.913595 4853 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Jan 27 18:42:37 crc kubenswrapper[4853]: W0127 18:42:37.913599 4853 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 27 18:42:37 crc kubenswrapper[4853]: W0127 18:42:37.913603 4853 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 27 18:42:37 crc kubenswrapper[4853]: W0127 18:42:37.913606 4853 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 27 18:42:37 crc kubenswrapper[4853]: W0127 18:42:37.913610 4853 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 27 18:42:37 crc kubenswrapper[4853]: W0127 18:42:37.913613 4853 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Jan 27 18:42:37 crc kubenswrapper[4853]: W0127 18:42:37.913617 4853 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Jan 27 18:42:37 crc kubenswrapper[4853]: W0127 18:42:37.913621 4853 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Jan 27 18:42:37 crc kubenswrapper[4853]: W0127 18:42:37.913625 4853 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Jan 27 18:42:37 crc kubenswrapper[4853]: W0127 18:42:37.913629 4853 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 27 18:42:37 crc kubenswrapper[4853]: W0127 18:42:37.913632 4853 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Jan 27 18:42:37 crc kubenswrapper[4853]: W0127 18:42:37.913636 4853 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Jan 27 18:42:37 crc kubenswrapper[4853]: W0127 18:42:37.913643 4853 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Jan 27 18:42:37 crc kubenswrapper[4853]: W0127 18:42:37.913648 4853 feature_gate.go:330] unrecognized feature gate: Example Jan 27 18:42:37 crc kubenswrapper[4853]: W0127 18:42:37.913652 4853 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Jan 27 18:42:37 crc kubenswrapper[4853]: W0127 18:42:37.913656 4853 feature_gate.go:330] unrecognized feature gate: OVNObservability Jan 27 18:42:37 crc kubenswrapper[4853]: W0127 18:42:37.913660 4853 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Jan 27 18:42:37 crc kubenswrapper[4853]: W0127 18:42:37.913664 4853 feature_gate.go:330] unrecognized 
feature gate: EtcdBackendQuota Jan 27 18:42:37 crc kubenswrapper[4853]: W0127 18:42:37.913668 4853 feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 27 18:42:37 crc kubenswrapper[4853]: W0127 18:42:37.913672 4853 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Jan 27 18:42:37 crc kubenswrapper[4853]: W0127 18:42:37.913676 4853 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Jan 27 18:42:37 crc kubenswrapper[4853]: I0127 18:42:37.914411 4853 flags.go:64] FLAG: --address="0.0.0.0" Jan 27 18:42:37 crc kubenswrapper[4853]: I0127 18:42:37.914428 4853 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]" Jan 27 18:42:37 crc kubenswrapper[4853]: I0127 18:42:37.914439 4853 flags.go:64] FLAG: --anonymous-auth="true" Jan 27 18:42:37 crc kubenswrapper[4853]: I0127 18:42:37.914445 4853 flags.go:64] FLAG: --application-metrics-count-limit="100" Jan 27 18:42:37 crc kubenswrapper[4853]: I0127 18:42:37.914450 4853 flags.go:64] FLAG: --authentication-token-webhook="false" Jan 27 18:42:37 crc kubenswrapper[4853]: I0127 18:42:37.914455 4853 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s" Jan 27 18:42:37 crc kubenswrapper[4853]: I0127 18:42:37.914461 4853 flags.go:64] FLAG: --authorization-mode="AlwaysAllow" Jan 27 18:42:37 crc kubenswrapper[4853]: I0127 18:42:37.914467 4853 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s" Jan 27 18:42:37 crc kubenswrapper[4853]: I0127 18:42:37.914471 4853 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s" Jan 27 18:42:37 crc kubenswrapper[4853]: I0127 18:42:37.914477 4853 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id" Jan 27 18:42:37 crc kubenswrapper[4853]: I0127 18:42:37.914481 4853 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig" Jan 27 18:42:37 crc kubenswrapper[4853]: I0127 18:42:37.914486 4853 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki" Jan 27 18:42:37 crc kubenswrapper[4853]: I0127 18:42:37.914490 4853 flags.go:64] FLAG: --cgroup-driver="cgroupfs" Jan 27 18:42:37 crc kubenswrapper[4853]: I0127 18:42:37.914495 4853 flags.go:64] FLAG: --cgroup-root="" Jan 27 18:42:37 crc kubenswrapper[4853]: I0127 18:42:37.914499 4853 flags.go:64] FLAG: --cgroups-per-qos="true" Jan 27 18:42:37 crc kubenswrapper[4853]: I0127 18:42:37.914503 4853 flags.go:64] FLAG: --client-ca-file="" Jan 27 18:42:37 crc kubenswrapper[4853]: I0127 18:42:37.914507 4853 flags.go:64] FLAG: --cloud-config="" Jan 27 18:42:37 crc kubenswrapper[4853]: I0127 18:42:37.914511 4853 flags.go:64] FLAG: --cloud-provider="" Jan 27 18:42:37 crc kubenswrapper[4853]: I0127 18:42:37.914515 4853 flags.go:64] FLAG: --cluster-dns="[]" Jan 27 18:42:37 crc kubenswrapper[4853]: I0127 18:42:37.914546 4853 flags.go:64] FLAG: --cluster-domain="" Jan 27 18:42:37 crc kubenswrapper[4853]: I0127 18:42:37.914550 4853 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf" Jan 27 18:42:37 crc kubenswrapper[4853]: I0127 18:42:37.914554 4853 flags.go:64] FLAG: --config-dir="" Jan 27 18:42:37 crc kubenswrapper[4853]: I0127 18:42:37.914558 4853 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json" Jan 27 18:42:37 crc kubenswrapper[4853]: I0127 18:42:37.914563 4853 flags.go:64] FLAG: --container-log-max-files="5" Jan 27 18:42:37 crc kubenswrapper[4853]: I0127 18:42:37.914569 4853 flags.go:64] FLAG: --container-log-max-size="10Mi" Jan 27 18:42:37 crc kubenswrapper[4853]: I0127 18:42:37.914573 4853 flags.go:64] FLAG: 
--container-runtime-endpoint="/var/run/crio/crio.sock" Jan 27 18:42:37 crc kubenswrapper[4853]: I0127 18:42:37.914577 4853 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock" Jan 27 18:42:37 crc kubenswrapper[4853]: I0127 18:42:37.914581 4853 flags.go:64] FLAG: --containerd-namespace="k8s.io" Jan 27 18:42:37 crc kubenswrapper[4853]: I0127 18:42:37.914585 4853 flags.go:64] FLAG: --contention-profiling="false" Jan 27 18:42:37 crc kubenswrapper[4853]: I0127 18:42:37.914589 4853 flags.go:64] FLAG: --cpu-cfs-quota="true" Jan 27 18:42:37 crc kubenswrapper[4853]: I0127 18:42:37.914594 4853 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms" Jan 27 18:42:37 crc kubenswrapper[4853]: I0127 18:42:37.914598 4853 flags.go:64] FLAG: --cpu-manager-policy="none" Jan 27 18:42:37 crc kubenswrapper[4853]: I0127 18:42:37.914602 4853 flags.go:64] FLAG: --cpu-manager-policy-options="" Jan 27 18:42:37 crc kubenswrapper[4853]: I0127 18:42:37.914608 4853 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s" Jan 27 18:42:37 crc kubenswrapper[4853]: I0127 18:42:37.914612 4853 flags.go:64] FLAG: --enable-controller-attach-detach="true" Jan 27 18:42:37 crc kubenswrapper[4853]: I0127 18:42:37.914616 4853 flags.go:64] FLAG: --enable-debugging-handlers="true" Jan 27 18:42:37 crc kubenswrapper[4853]: I0127 18:42:37.914625 4853 flags.go:64] FLAG: --enable-load-reader="false" Jan 27 18:42:37 crc kubenswrapper[4853]: I0127 18:42:37.914629 4853 flags.go:64] FLAG: --enable-server="true" Jan 27 18:42:37 crc kubenswrapper[4853]: I0127 18:42:37.914634 4853 flags.go:64] FLAG: --enforce-node-allocatable="[pods]" Jan 27 18:42:37 crc kubenswrapper[4853]: I0127 18:42:37.914641 4853 flags.go:64] FLAG: --event-burst="100" Jan 27 18:42:37 crc kubenswrapper[4853]: I0127 18:42:37.914645 4853 flags.go:64] FLAG: --event-qps="50" Jan 27 18:42:37 crc kubenswrapper[4853]: I0127 18:42:37.914649 4853 flags.go:64] FLAG: --event-storage-age-limit="default=0" Jan 27 18:42:37 crc kubenswrapper[4853]: I0127 18:42:37.914654 4853 flags.go:64] FLAG: --event-storage-event-limit="default=0" Jan 27 18:42:37 crc kubenswrapper[4853]: I0127 18:42:37.914658 4853 flags.go:64] FLAG: --eviction-hard="" Jan 27 18:42:37 crc kubenswrapper[4853]: I0127 18:42:37.914663 4853 flags.go:64] FLAG: --eviction-max-pod-grace-period="0" Jan 27 18:42:37 crc kubenswrapper[4853]: I0127 18:42:37.914667 4853 flags.go:64] FLAG: --eviction-minimum-reclaim="" Jan 27 18:42:37 crc kubenswrapper[4853]: I0127 18:42:37.914671 4853 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s" Jan 27 18:42:37 crc kubenswrapper[4853]: I0127 18:42:37.914675 4853 flags.go:64] FLAG: --eviction-soft="" Jan 27 18:42:37 crc kubenswrapper[4853]: I0127 18:42:37.914680 4853 flags.go:64] FLAG: --eviction-soft-grace-period="" Jan 27 18:42:37 crc kubenswrapper[4853]: I0127 18:42:37.914684 4853 flags.go:64] FLAG: --exit-on-lock-contention="false" Jan 27 18:42:37 crc kubenswrapper[4853]: I0127 18:42:37.914688 4853 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false" Jan 27 18:42:37 crc kubenswrapper[4853]: I0127 18:42:37.914692 4853 flags.go:64] FLAG: --experimental-mounter-path="" Jan 27 18:42:37 crc kubenswrapper[4853]: I0127 18:42:37.914696 4853 flags.go:64] FLAG: --fail-cgroupv1="false" Jan 27 18:42:37 crc kubenswrapper[4853]: I0127 18:42:37.914700 4853 flags.go:64] FLAG: --fail-swap-on="true" Jan 27 18:42:37 crc kubenswrapper[4853]: I0127 18:42:37.914704 4853 flags.go:64] FLAG: --feature-gates="" Jan 27 18:42:37 crc kubenswrapper[4853]: I0127 18:42:37.914709 4853 
flags.go:64] FLAG: --file-check-frequency="20s" Jan 27 18:42:37 crc kubenswrapper[4853]: I0127 18:42:37.914713 4853 flags.go:64] FLAG: --global-housekeeping-interval="1m0s" Jan 27 18:42:37 crc kubenswrapper[4853]: I0127 18:42:37.914718 4853 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge" Jan 27 18:42:37 crc kubenswrapper[4853]: I0127 18:42:37.914722 4853 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1" Jan 27 18:42:37 crc kubenswrapper[4853]: I0127 18:42:37.914726 4853 flags.go:64] FLAG: --healthz-port="10248" Jan 27 18:42:37 crc kubenswrapper[4853]: I0127 18:42:37.914730 4853 flags.go:64] FLAG: --help="false" Jan 27 18:42:37 crc kubenswrapper[4853]: I0127 18:42:37.914734 4853 flags.go:64] FLAG: --hostname-override="" Jan 27 18:42:37 crc kubenswrapper[4853]: I0127 18:42:37.914738 4853 flags.go:64] FLAG: --housekeeping-interval="10s" Jan 27 18:42:37 crc kubenswrapper[4853]: I0127 18:42:37.914742 4853 flags.go:64] FLAG: --http-check-frequency="20s" Jan 27 18:42:37 crc kubenswrapper[4853]: I0127 18:42:37.914747 4853 flags.go:64] FLAG: --image-credential-provider-bin-dir="" Jan 27 18:42:37 crc kubenswrapper[4853]: I0127 18:42:37.914751 4853 flags.go:64] FLAG: --image-credential-provider-config="" Jan 27 18:42:37 crc kubenswrapper[4853]: I0127 18:42:37.914754 4853 flags.go:64] FLAG: --image-gc-high-threshold="85" Jan 27 18:42:37 crc kubenswrapper[4853]: I0127 18:42:37.914758 4853 flags.go:64] FLAG: --image-gc-low-threshold="80" Jan 27 18:42:37 crc kubenswrapper[4853]: I0127 18:42:37.914763 4853 flags.go:64] FLAG: --image-service-endpoint="" Jan 27 18:42:37 crc kubenswrapper[4853]: I0127 18:42:37.914767 4853 flags.go:64] FLAG: --kernel-memcg-notification="false" Jan 27 18:42:37 crc kubenswrapper[4853]: I0127 18:42:37.914771 4853 flags.go:64] FLAG: --kube-api-burst="100" Jan 27 18:42:37 crc kubenswrapper[4853]: I0127 18:42:37.914776 4853 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf" Jan 27 18:42:37 crc kubenswrapper[4853]: I0127 18:42:37.914786 4853 flags.go:64] FLAG: --kube-api-qps="50" Jan 27 18:42:37 crc kubenswrapper[4853]: I0127 18:42:37.914790 4853 flags.go:64] FLAG: --kube-reserved="" Jan 27 18:42:37 crc kubenswrapper[4853]: I0127 18:42:37.914795 4853 flags.go:64] FLAG: --kube-reserved-cgroup="" Jan 27 18:42:37 crc kubenswrapper[4853]: I0127 18:42:37.914799 4853 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig" Jan 27 18:42:37 crc kubenswrapper[4853]: I0127 18:42:37.914803 4853 flags.go:64] FLAG: --kubelet-cgroups="" Jan 27 18:42:37 crc kubenswrapper[4853]: I0127 18:42:37.914807 4853 flags.go:64] FLAG: --local-storage-capacity-isolation="true" Jan 27 18:42:37 crc kubenswrapper[4853]: I0127 18:42:37.914811 4853 flags.go:64] FLAG: --lock-file="" Jan 27 18:42:37 crc kubenswrapper[4853]: I0127 18:42:37.914815 4853 flags.go:64] FLAG: --log-cadvisor-usage="false" Jan 27 18:42:37 crc kubenswrapper[4853]: I0127 18:42:37.914819 4853 flags.go:64] FLAG: --log-flush-frequency="5s" Jan 27 18:42:37 crc kubenswrapper[4853]: I0127 18:42:37.914823 4853 flags.go:64] FLAG: --log-json-info-buffer-size="0" Jan 27 18:42:37 crc kubenswrapper[4853]: I0127 18:42:37.914830 4853 flags.go:64] FLAG: --log-json-split-stream="false" Jan 27 18:42:37 crc kubenswrapper[4853]: I0127 18:42:37.914834 4853 flags.go:64] FLAG: --log-text-info-buffer-size="0" Jan 27 18:42:37 crc kubenswrapper[4853]: I0127 18:42:37.914838 4853 flags.go:64] FLAG: --log-text-split-stream="false" Jan 27 18:42:37 crc kubenswrapper[4853]: I0127 18:42:37.914842 4853 flags.go:64] FLAG: 
--logging-format="text" Jan 27 18:42:37 crc kubenswrapper[4853]: I0127 18:42:37.914846 4853 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id" Jan 27 18:42:37 crc kubenswrapper[4853]: I0127 18:42:37.914851 4853 flags.go:64] FLAG: --make-iptables-util-chains="true" Jan 27 18:42:37 crc kubenswrapper[4853]: I0127 18:42:37.914855 4853 flags.go:64] FLAG: --manifest-url="" Jan 27 18:42:37 crc kubenswrapper[4853]: I0127 18:42:37.914859 4853 flags.go:64] FLAG: --manifest-url-header="" Jan 27 18:42:37 crc kubenswrapper[4853]: I0127 18:42:37.914865 4853 flags.go:64] FLAG: --max-housekeeping-interval="15s" Jan 27 18:42:37 crc kubenswrapper[4853]: I0127 18:42:37.914869 4853 flags.go:64] FLAG: --max-open-files="1000000" Jan 27 18:42:37 crc kubenswrapper[4853]: I0127 18:42:37.914874 4853 flags.go:64] FLAG: --max-pods="110" Jan 27 18:42:37 crc kubenswrapper[4853]: I0127 18:42:37.914878 4853 flags.go:64] FLAG: --maximum-dead-containers="-1" Jan 27 18:42:37 crc kubenswrapper[4853]: I0127 18:42:37.914882 4853 flags.go:64] FLAG: --maximum-dead-containers-per-container="1" Jan 27 18:42:37 crc kubenswrapper[4853]: I0127 18:42:37.914886 4853 flags.go:64] FLAG: --memory-manager-policy="None" Jan 27 18:42:37 crc kubenswrapper[4853]: I0127 18:42:37.914891 4853 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s" Jan 27 18:42:37 crc kubenswrapper[4853]: I0127 18:42:37.914895 4853 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s" Jan 27 18:42:37 crc kubenswrapper[4853]: I0127 18:42:37.914899 4853 flags.go:64] FLAG: --node-ip="192.168.126.11" Jan 27 18:42:37 crc kubenswrapper[4853]: I0127 18:42:37.914903 4853 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos" Jan 27 18:42:37 crc kubenswrapper[4853]: I0127 18:42:37.914913 4853 flags.go:64] FLAG: --node-status-max-images="50" Jan 27 18:42:37 crc kubenswrapper[4853]: I0127 18:42:37.914918 4853 flags.go:64] FLAG: --node-status-update-frequency="10s" Jan 27 18:42:37 crc kubenswrapper[4853]: I0127 18:42:37.914922 4853 flags.go:64] FLAG: --oom-score-adj="-999" Jan 27 18:42:37 crc kubenswrapper[4853]: I0127 18:42:37.914926 4853 flags.go:64] FLAG: --pod-cidr="" Jan 27 18:42:37 crc kubenswrapper[4853]: I0127 18:42:37.914930 4853 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33549946e22a9ffa738fd94b1345f90921bc8f92fa6137784cb33c77ad806f9d" Jan 27 18:42:37 crc kubenswrapper[4853]: I0127 18:42:37.914936 4853 flags.go:64] FLAG: --pod-manifest-path="" Jan 27 18:42:37 crc kubenswrapper[4853]: I0127 18:42:37.914940 4853 flags.go:64] FLAG: --pod-max-pids="-1" Jan 27 18:42:37 crc kubenswrapper[4853]: I0127 18:42:37.914945 4853 flags.go:64] FLAG: --pods-per-core="0" Jan 27 18:42:37 crc kubenswrapper[4853]: I0127 18:42:37.914954 4853 flags.go:64] FLAG: --port="10250" Jan 27 18:42:37 crc kubenswrapper[4853]: I0127 18:42:37.914958 4853 flags.go:64] FLAG: --protect-kernel-defaults="false" Jan 27 18:42:37 crc kubenswrapper[4853]: I0127 18:42:37.914962 4853 flags.go:64] FLAG: --provider-id="" Jan 27 18:42:37 crc kubenswrapper[4853]: I0127 18:42:37.914966 4853 flags.go:64] FLAG: --qos-reserved="" Jan 27 18:42:37 crc kubenswrapper[4853]: I0127 18:42:37.914970 4853 flags.go:64] FLAG: --read-only-port="10255" Jan 27 18:42:37 crc kubenswrapper[4853]: I0127 18:42:37.914974 4853 flags.go:64] FLAG: --register-node="true" Jan 27 18:42:37 crc kubenswrapper[4853]: I0127 18:42:37.914979 4853 flags.go:64] FLAG: 
--register-schedulable="true" Jan 27 18:42:37 crc kubenswrapper[4853]: I0127 18:42:37.914982 4853 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule" Jan 27 18:42:37 crc kubenswrapper[4853]: I0127 18:42:37.914990 4853 flags.go:64] FLAG: --registry-burst="10" Jan 27 18:42:37 crc kubenswrapper[4853]: I0127 18:42:37.914994 4853 flags.go:64] FLAG: --registry-qps="5" Jan 27 18:42:37 crc kubenswrapper[4853]: I0127 18:42:37.914998 4853 flags.go:64] FLAG: --reserved-cpus="" Jan 27 18:42:37 crc kubenswrapper[4853]: I0127 18:42:37.915002 4853 flags.go:64] FLAG: --reserved-memory="" Jan 27 18:42:37 crc kubenswrapper[4853]: I0127 18:42:37.915008 4853 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf" Jan 27 18:42:37 crc kubenswrapper[4853]: I0127 18:42:37.915012 4853 flags.go:64] FLAG: --root-dir="/var/lib/kubelet" Jan 27 18:42:37 crc kubenswrapper[4853]: I0127 18:42:37.915016 4853 flags.go:64] FLAG: --rotate-certificates="false" Jan 27 18:42:37 crc kubenswrapper[4853]: I0127 18:42:37.915021 4853 flags.go:64] FLAG: --rotate-server-certificates="false" Jan 27 18:42:37 crc kubenswrapper[4853]: I0127 18:42:37.915025 4853 flags.go:64] FLAG: --runonce="false" Jan 27 18:42:37 crc kubenswrapper[4853]: I0127 18:42:37.915029 4853 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service" Jan 27 18:42:37 crc kubenswrapper[4853]: I0127 18:42:37.915033 4853 flags.go:64] FLAG: --runtime-request-timeout="2m0s" Jan 27 18:42:37 crc kubenswrapper[4853]: I0127 18:42:37.915039 4853 flags.go:64] FLAG: --seccomp-default="false" Jan 27 18:42:37 crc kubenswrapper[4853]: I0127 18:42:37.915043 4853 flags.go:64] FLAG: --serialize-image-pulls="true" Jan 27 18:42:37 crc kubenswrapper[4853]: I0127 18:42:37.915047 4853 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s" Jan 27 18:42:37 crc kubenswrapper[4853]: I0127 18:42:37.915051 4853 flags.go:64] FLAG: --storage-driver-db="cadvisor" Jan 27 18:42:37 crc kubenswrapper[4853]: I0127 18:42:37.915056 4853 flags.go:64] FLAG: --storage-driver-host="localhost:8086" Jan 27 18:42:37 crc kubenswrapper[4853]: I0127 18:42:37.915060 4853 flags.go:64] FLAG: --storage-driver-password="root" Jan 27 18:42:37 crc kubenswrapper[4853]: I0127 18:42:37.915064 4853 flags.go:64] FLAG: --storage-driver-secure="false" Jan 27 18:42:37 crc kubenswrapper[4853]: I0127 18:42:37.915068 4853 flags.go:64] FLAG: --storage-driver-table="stats" Jan 27 18:42:37 crc kubenswrapper[4853]: I0127 18:42:37.915072 4853 flags.go:64] FLAG: --storage-driver-user="root" Jan 27 18:42:37 crc kubenswrapper[4853]: I0127 18:42:37.915076 4853 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s" Jan 27 18:42:37 crc kubenswrapper[4853]: I0127 18:42:37.915080 4853 flags.go:64] FLAG: --sync-frequency="1m0s" Jan 27 18:42:37 crc kubenswrapper[4853]: I0127 18:42:37.915091 4853 flags.go:64] FLAG: --system-cgroups="" Jan 27 18:42:37 crc kubenswrapper[4853]: I0127 18:42:37.915095 4853 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi" Jan 27 18:42:37 crc kubenswrapper[4853]: I0127 18:42:37.915101 4853 flags.go:64] FLAG: --system-reserved-cgroup="" Jan 27 18:42:37 crc kubenswrapper[4853]: I0127 18:42:37.915105 4853 flags.go:64] FLAG: --tls-cert-file="" Jan 27 18:42:37 crc kubenswrapper[4853]: I0127 18:42:37.915109 4853 flags.go:64] FLAG: --tls-cipher-suites="[]" Jan 27 18:42:37 crc kubenswrapper[4853]: I0127 18:42:37.915134 4853 flags.go:64] FLAG: --tls-min-version="" Jan 27 18:42:37 crc kubenswrapper[4853]: I0127 18:42:37.915143 4853 flags.go:64] 
FLAG: --tls-private-key-file="" Jan 27 18:42:37 crc kubenswrapper[4853]: I0127 18:42:37.915148 4853 flags.go:64] FLAG: --topology-manager-policy="none" Jan 27 18:42:37 crc kubenswrapper[4853]: I0127 18:42:37.915152 4853 flags.go:64] FLAG: --topology-manager-policy-options="" Jan 27 18:42:37 crc kubenswrapper[4853]: I0127 18:42:37.915156 4853 flags.go:64] FLAG: --topology-manager-scope="container" Jan 27 18:42:37 crc kubenswrapper[4853]: I0127 18:42:37.915163 4853 flags.go:64] FLAG: --v="2" Jan 27 18:42:37 crc kubenswrapper[4853]: I0127 18:42:37.915169 4853 flags.go:64] FLAG: --version="false" Jan 27 18:42:37 crc kubenswrapper[4853]: I0127 18:42:37.915176 4853 flags.go:64] FLAG: --vmodule="" Jan 27 18:42:37 crc kubenswrapper[4853]: I0127 18:42:37.915181 4853 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec" Jan 27 18:42:37 crc kubenswrapper[4853]: I0127 18:42:37.915185 4853 flags.go:64] FLAG: --volume-stats-agg-period="1m0s" Jan 27 18:42:37 crc kubenswrapper[4853]: W0127 18:42:37.915991 4853 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Jan 27 18:42:37 crc kubenswrapper[4853]: W0127 18:42:37.915999 4853 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Jan 27 18:42:37 crc kubenswrapper[4853]: W0127 18:42:37.916004 4853 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Jan 27 18:42:37 crc kubenswrapper[4853]: W0127 18:42:37.916009 4853 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Jan 27 18:42:37 crc kubenswrapper[4853]: W0127 18:42:37.916013 4853 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Jan 27 18:42:37 crc kubenswrapper[4853]: W0127 18:42:37.916017 4853 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Jan 27 18:42:37 crc kubenswrapper[4853]: W0127 18:42:37.916021 4853 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Jan 27 18:42:37 crc kubenswrapper[4853]: W0127 18:42:37.916025 4853 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 27 18:42:37 crc kubenswrapper[4853]: W0127 18:42:37.916028 4853 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 27 18:42:37 crc kubenswrapper[4853]: W0127 18:42:37.916032 4853 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Jan 27 18:42:37 crc kubenswrapper[4853]: W0127 18:42:37.916037 4853 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Jan 27 18:42:37 crc kubenswrapper[4853]: W0127 18:42:37.916040 4853 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Jan 27 18:42:37 crc kubenswrapper[4853]: W0127 18:42:37.916044 4853 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 27 18:42:37 crc kubenswrapper[4853]: W0127 18:42:37.916047 4853 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 27 18:42:37 crc kubenswrapper[4853]: W0127 18:42:37.916051 4853 feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 27 18:42:37 crc kubenswrapper[4853]: W0127 18:42:37.916054 4853 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 27 18:42:37 crc kubenswrapper[4853]: W0127 18:42:37.916058 4853 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Jan 27 18:42:37 crc kubenswrapper[4853]: W0127 18:42:37.916063 4853 
feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 27 18:42:37 crc kubenswrapper[4853]: W0127 18:42:37.916066 4853 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Jan 27 18:42:37 crc kubenswrapper[4853]: W0127 18:42:37.916070 4853 feature_gate.go:330] unrecognized feature gate: SignatureStores Jan 27 18:42:37 crc kubenswrapper[4853]: W0127 18:42:37.916073 4853 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Jan 27 18:42:37 crc kubenswrapper[4853]: W0127 18:42:37.916077 4853 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 27 18:42:37 crc kubenswrapper[4853]: W0127 18:42:37.916080 4853 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Jan 27 18:42:37 crc kubenswrapper[4853]: W0127 18:42:37.916083 4853 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Jan 27 18:42:37 crc kubenswrapper[4853]: W0127 18:42:37.916087 4853 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Jan 27 18:42:37 crc kubenswrapper[4853]: W0127 18:42:37.916090 4853 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Jan 27 18:42:37 crc kubenswrapper[4853]: W0127 18:42:37.916094 4853 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 27 18:42:37 crc kubenswrapper[4853]: W0127 18:42:37.916105 4853 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Jan 27 18:42:37 crc kubenswrapper[4853]: W0127 18:42:37.916109 4853 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Jan 27 18:42:37 crc kubenswrapper[4853]: W0127 18:42:37.916112 4853 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Jan 27 18:42:37 crc kubenswrapper[4853]: W0127 18:42:37.916116 4853 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Jan 27 18:42:37 crc kubenswrapper[4853]: W0127 18:42:37.916155 4853 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Jan 27 18:42:37 crc kubenswrapper[4853]: W0127 18:42:37.916159 4853 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 27 18:42:37 crc kubenswrapper[4853]: W0127 18:42:37.916163 4853 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Jan 27 18:42:37 crc kubenswrapper[4853]: W0127 18:42:37.916166 4853 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 27 18:42:37 crc kubenswrapper[4853]: W0127 18:42:37.916170 4853 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Jan 27 18:42:37 crc kubenswrapper[4853]: W0127 18:42:37.916173 4853 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Jan 27 18:42:37 crc kubenswrapper[4853]: W0127 18:42:37.916178 4853 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
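The flags.go:64 FLAG dump earlier in this section echoes every kubelet flag with its effective value, defaults included, which makes it a convenient source of truth (for example --config="/etc/kubernetes/kubelet.conf", --node-ip="192.168.126.11", --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi"). A sketch that turns the dump into a dict, e.g. for diffing two nodes; dump path and regex are assumptions:

    import re

    LOG_PATH = "kubelet-journal.log"  # assumed dump location

    # Matches e.g.: flags.go:64] FLAG: --node-ip="192.168.126.11"
    FLAG_RE = re.compile(r'FLAG: (--[\w-]+)="([^"]*)"')

    with open(LOG_PATH, encoding="utf-8") as fh:
        flags = dict(FLAG_RE.findall(fh.read()))

    print(flags["--config"])           # /etc/kubernetes/kubelet.conf
    print(flags["--system-reserved"])  # cpu=200m,ephemeral-storage=350Mi,memory=350Mi
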
Jan 27 18:42:37 crc kubenswrapper[4853]: W0127 18:42:37.916182 4853 feature_gate.go:330] unrecognized feature gate: OVNObservability Jan 27 18:42:37 crc kubenswrapper[4853]: W0127 18:42:37.916186 4853 feature_gate.go:330] unrecognized feature gate: NewOLM Jan 27 18:42:37 crc kubenswrapper[4853]: W0127 18:42:37.916189 4853 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Jan 27 18:42:37 crc kubenswrapper[4853]: W0127 18:42:37.916193 4853 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 27 18:42:37 crc kubenswrapper[4853]: W0127 18:42:37.916197 4853 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Jan 27 18:42:37 crc kubenswrapper[4853]: W0127 18:42:37.916200 4853 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Jan 27 18:42:37 crc kubenswrapper[4853]: W0127 18:42:37.916203 4853 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 27 18:42:37 crc kubenswrapper[4853]: W0127 18:42:37.916207 4853 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Jan 27 18:42:37 crc kubenswrapper[4853]: W0127 18:42:37.916210 4853 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Jan 27 18:42:37 crc kubenswrapper[4853]: W0127 18:42:37.916214 4853 feature_gate.go:330] unrecognized feature gate: PlatformOperators Jan 27 18:42:37 crc kubenswrapper[4853]: W0127 18:42:37.916217 4853 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 27 18:42:37 crc kubenswrapper[4853]: W0127 18:42:37.916221 4853 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Jan 27 18:42:37 crc kubenswrapper[4853]: W0127 18:42:37.916226 4853 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Jan 27 18:42:37 crc kubenswrapper[4853]: W0127 18:42:37.916230 4853 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Jan 27 18:42:37 crc kubenswrapper[4853]: W0127 18:42:37.916234 4853 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Jan 27 18:42:37 crc kubenswrapper[4853]: W0127 18:42:37.916238 4853 feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 27 18:42:37 crc kubenswrapper[4853]: W0127 18:42:37.916242 4853 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Jan 27 18:42:37 crc kubenswrapper[4853]: W0127 18:42:37.916245 4853 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Jan 27 18:42:37 crc kubenswrapper[4853]: W0127 18:42:37.916249 4853 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Jan 27 18:42:37 crc kubenswrapper[4853]: W0127 18:42:37.916252 4853 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 27 18:42:37 crc kubenswrapper[4853]: W0127 18:42:37.916256 4853 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Jan 27 18:42:37 crc kubenswrapper[4853]: W0127 18:42:37.916260 4853 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Jan 27 18:42:37 crc kubenswrapper[4853]: W0127 18:42:37.916263 4853 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Jan 27 18:42:37 crc kubenswrapper[4853]: W0127 18:42:37.916267 4853 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 27 18:42:37 crc kubenswrapper[4853]: W0127 18:42:37.916270 4853 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Jan 27 18:42:37 crc kubenswrapper[4853]: W0127 18:42:37.916281 4853 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Jan 27 18:42:37 crc kubenswrapper[4853]: W0127 18:42:37.916285 4853 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Jan 27 18:42:37 crc kubenswrapper[4853]: W0127 18:42:37.916288 4853 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Jan 27 18:42:37 crc kubenswrapper[4853]: W0127 18:42:37.916291 4853 feature_gate.go:330] unrecognized feature gate: Example Jan 27 18:42:37 crc kubenswrapper[4853]: W0127 18:42:37.916295 4853 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Jan 27 18:42:37 crc kubenswrapper[4853]: W0127 18:42:37.916299 4853 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Jan 27 18:42:37 crc kubenswrapper[4853]: W0127 18:42:37.916302 4853 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Jan 27 18:42:37 crc kubenswrapper[4853]: W0127 18:42:37.916311 4853 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Jan 27 18:42:37 crc kubenswrapper[4853]: I0127 18:42:37.916322 4853 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Jan 27 18:42:37 crc kubenswrapper[4853]: I0127 18:42:37.924982 4853 server.go:491] "Kubelet version" kubeletVersion="v1.31.5" Jan 27 18:42:37 crc kubenswrapper[4853]: I0127 18:42:37.925459 4853 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 27 18:42:37 crc 
kubenswrapper[4853]: W0127 18:42:37.925573 4853 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Jan 27 18:42:37 crc kubenswrapper[4853]: W0127 18:42:37.925585 4853 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 27 18:42:37 crc kubenswrapper[4853]: W0127 18:42:37.925592 4853 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Jan 27 18:42:37 crc kubenswrapper[4853]: W0127 18:42:37.925597 4853 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 27 18:42:37 crc kubenswrapper[4853]: W0127 18:42:37.925605 4853 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Jan 27 18:42:37 crc kubenswrapper[4853]: W0127 18:42:37.925612 4853 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Jan 27 18:42:37 crc kubenswrapper[4853]: W0127 18:42:37.925616 4853 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Jan 27 18:42:37 crc kubenswrapper[4853]: W0127 18:42:37.925621 4853 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Jan 27 18:42:37 crc kubenswrapper[4853]: W0127 18:42:37.925626 4853 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 27 18:42:37 crc kubenswrapper[4853]: W0127 18:42:37.925630 4853 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Jan 27 18:42:37 crc kubenswrapper[4853]: W0127 18:42:37.925635 4853 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Jan 27 18:42:37 crc kubenswrapper[4853]: W0127 18:42:37.925639 4853 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Jan 27 18:42:37 crc kubenswrapper[4853]: W0127 18:42:37.925645 4853 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Jan 27 18:42:37 crc kubenswrapper[4853]: W0127 18:42:37.925650 4853 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 27 18:42:37 crc kubenswrapper[4853]: W0127 18:42:37.925654 4853 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Jan 27 18:42:37 crc kubenswrapper[4853]: W0127 18:42:37.925658 4853 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Jan 27 18:42:37 crc kubenswrapper[4853]: W0127 18:42:37.925662 4853 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Jan 27 18:42:37 crc kubenswrapper[4853]: W0127 18:42:37.925667 4853 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Jan 27 18:42:37 crc kubenswrapper[4853]: W0127 18:42:37.925671 4853 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 27 18:42:37 crc kubenswrapper[4853]: W0127 18:42:37.925675 4853 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Jan 27 18:42:37 crc kubenswrapper[4853]: W0127 18:42:37.925679 4853 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Jan 27 18:42:37 crc kubenswrapper[4853]: W0127 18:42:37.925684 4853 feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 27 18:42:37 crc kubenswrapper[4853]: W0127 18:42:37.925688 4853 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Jan 27 18:42:37 crc kubenswrapper[4853]: W0127 18:42:37.925693 4853 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Jan 27 18:42:37 crc kubenswrapper[4853]: W0127 18:42:37.925698 4853 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 27 18:42:37 crc 
kubenswrapper[4853]: W0127 18:42:37.925704 4853 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Jan 27 18:42:37 crc kubenswrapper[4853]: W0127 18:42:37.925709 4853 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 27 18:42:37 crc kubenswrapper[4853]: W0127 18:42:37.925714 4853 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Jan 27 18:42:37 crc kubenswrapper[4853]: W0127 18:42:37.925731 4853 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Jan 27 18:42:37 crc kubenswrapper[4853]: W0127 18:42:37.925736 4853 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Jan 27 18:42:37 crc kubenswrapper[4853]: W0127 18:42:37.925740 4853 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Jan 27 18:42:37 crc kubenswrapper[4853]: W0127 18:42:37.925745 4853 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Jan 27 18:42:37 crc kubenswrapper[4853]: W0127 18:42:37.925749 4853 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 27 18:42:37 crc kubenswrapper[4853]: W0127 18:42:37.925754 4853 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Jan 27 18:42:37 crc kubenswrapper[4853]: W0127 18:42:37.925758 4853 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 27 18:42:37 crc kubenswrapper[4853]: W0127 18:42:37.925762 4853 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 27 18:42:37 crc kubenswrapper[4853]: W0127 18:42:37.925767 4853 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Jan 27 18:42:37 crc kubenswrapper[4853]: W0127 18:42:37.925772 4853 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Jan 27 18:42:37 crc kubenswrapper[4853]: W0127 18:42:37.925776 4853 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 27 18:42:37 crc kubenswrapper[4853]: W0127 18:42:37.925782 4853 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Jan 27 18:42:37 crc kubenswrapper[4853]: W0127 18:42:37.925789 4853 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Jan 27 18:42:37 crc kubenswrapper[4853]: W0127 18:42:37.925794 4853 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Jan 27 18:42:37 crc kubenswrapper[4853]: W0127 18:42:37.925800 4853 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Jan 27 18:42:37 crc kubenswrapper[4853]: W0127 18:42:37.925804 4853 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 27 18:42:37 crc kubenswrapper[4853]: W0127 18:42:37.925810 4853 feature_gate.go:330] unrecognized feature gate: Example Jan 27 18:42:37 crc kubenswrapper[4853]: W0127 18:42:37.925814 4853 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Jan 27 18:42:37 crc kubenswrapper[4853]: W0127 18:42:37.925818 4853 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 27 18:42:37 crc kubenswrapper[4853]: W0127 18:42:37.925822 4853 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Jan 27 18:42:37 crc kubenswrapper[4853]: W0127 18:42:37.925825 4853 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 27 18:42:37 crc kubenswrapper[4853]: W0127 18:42:37.925830 4853 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Jan 27 18:42:37 crc kubenswrapper[4853]: W0127 18:42:37.925835 4853 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Jan 27 18:42:37 crc kubenswrapper[4853]: W0127 18:42:37.925840 4853 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Jan 27 18:42:37 crc kubenswrapper[4853]: W0127 18:42:37.925845 4853 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Jan 27 18:42:37 crc kubenswrapper[4853]: W0127 18:42:37.925849 4853 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Jan 27 18:42:37 crc kubenswrapper[4853]: W0127 18:42:37.925853 4853 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Jan 27 18:42:37 crc kubenswrapper[4853]: W0127 18:42:37.925857 4853 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Jan 27 18:42:37 crc kubenswrapper[4853]: W0127 18:42:37.925861 4853 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Jan 27 18:42:37 crc kubenswrapper[4853]: W0127 18:42:37.925864 4853 feature_gate.go:330] unrecognized feature gate: NewOLM Jan 27 18:42:37 crc kubenswrapper[4853]: W0127 18:42:37.925868 4853 feature_gate.go:330] unrecognized feature gate: OVNObservability Jan 27 18:42:37 crc kubenswrapper[4853]: W0127 18:42:37.925871 4853 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Jan 27 18:42:37 crc kubenswrapper[4853]: W0127 18:42:37.925875 4853 feature_gate.go:330] unrecognized feature gate: SignatureStores Jan 27 18:42:37 crc kubenswrapper[4853]: W0127 18:42:37.925878 4853 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Jan 27 18:42:37 crc kubenswrapper[4853]: W0127 18:42:37.925882 4853 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Jan 27 18:42:37 crc kubenswrapper[4853]: W0127 18:42:37.925885 4853 feature_gate.go:330] unrecognized feature gate: PlatformOperators Jan 27 18:42:37 crc kubenswrapper[4853]: W0127 18:42:37.925889 4853 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Jan 27 18:42:37 crc kubenswrapper[4853]: W0127 18:42:37.925892 4853 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Jan 27 18:42:37 crc kubenswrapper[4853]: W0127 18:42:37.925896 4853 feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 27 18:42:37 crc kubenswrapper[4853]: W0127 18:42:37.925899 4853 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Jan 27 18:42:37 crc kubenswrapper[4853]: W0127 18:42:37.925904 4853 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
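Alongside the unrecognized-gate warnings, the registry also flags gates that are explicitly set even though they have graduated ("Setting GA feature gate ...=true") or been deprecated ("Setting deprecated feature gate KMSv1=true"); the setting still takes effect. A much-simplified Python model of that classification, for orientation only; the real implementation is Go (k8s.io/component-base/featuregate), and the stages below are illustrative assumptions:

    KNOWN = {
        "CloudDualStackNodeIPs": "GA",
        "DisableKubeletCloudCredentialProviders": "GA",
        "ValidatingAdmissionPolicy": "GA",
        "KMSv1": "deprecated",
        "NodeSwap": "beta",
    }

    def set_gate(name: str, value: bool) -> None:
        stage = KNOWN.get(name)
        if stage is None:
            print(f"unrecognized feature gate: {name}")
        elif stage in ("GA", "deprecated"):
            print(f"Setting {stage} feature gate {name}={str(value).lower()}. "
                  "It will be removed in a future release.")
        # known alpha/beta gates are applied without a warning

    set_gate("KMSv1", True)       # deprecated: warns, still applied
    set_gate("GatewayAPI", True)  # unknown to the kubelet: warns
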
Jan 27 18:42:37 crc kubenswrapper[4853]: W0127 18:42:37.925908 4853 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Jan 27 18:42:37 crc kubenswrapper[4853]: W0127 18:42:37.925912 4853 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 27 18:42:37 crc kubenswrapper[4853]: I0127 18:42:37.925918 4853 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Jan 27 18:42:37 crc kubenswrapper[4853]: W0127 18:42:37.926036 4853 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Jan 27 18:42:37 crc kubenswrapper[4853]: W0127 18:42:37.926042 4853 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Jan 27 18:42:37 crc kubenswrapper[4853]: W0127 18:42:37.926047 4853 feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 27 18:42:37 crc kubenswrapper[4853]: W0127 18:42:37.926051 4853 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 27 18:42:37 crc kubenswrapper[4853]: W0127 18:42:37.926055 4853 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 27 18:42:37 crc kubenswrapper[4853]: W0127 18:42:37.926059 4853 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Jan 27 18:42:37 crc kubenswrapper[4853]: W0127 18:42:37.926063 4853 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
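Each parsing pass ends with the effective map, "feature gates: {map[...]}", the merged result after defaults and overrides. A sketch that parses that line into a dict (the LINE constant is abridged from the log above):

    import re

    # Abridged from the feature_gate.go:386 line above.
    LINE = ("feature gates: {map[CloudDualStackNodeIPs:true "
            "DisableKubeletCloudCredentialProviders:true KMSv1:true "
            "NodeSwap:false ValidatingAdmissionPolicy:true]}")

    PAIR = re.compile(r"(\w+):(true|false)")
    gates = {name: value == "true" for name, value in PAIR.findall(LINE)}

    print(gates["KMSv1"])     # True
    print(gates["NodeSwap"])  # False
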
Jan 27 18:42:37 crc kubenswrapper[4853]: W0127 18:42:37.926068 4853 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 27 18:42:37 crc kubenswrapper[4853]: W0127 18:42:37.926072 4853 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 27 18:42:37 crc kubenswrapper[4853]: W0127 18:42:37.926076 4853 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Jan 27 18:42:37 crc kubenswrapper[4853]: W0127 18:42:37.926079 4853 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Jan 27 18:42:37 crc kubenswrapper[4853]: W0127 18:42:37.926083 4853 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Jan 27 18:42:37 crc kubenswrapper[4853]: W0127 18:42:37.926086 4853 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Jan 27 18:42:37 crc kubenswrapper[4853]: W0127 18:42:37.926090 4853 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Jan 27 18:42:37 crc kubenswrapper[4853]: W0127 18:42:37.926094 4853 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Jan 27 18:42:37 crc kubenswrapper[4853]: W0127 18:42:37.926097 4853 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Jan 27 18:42:37 crc kubenswrapper[4853]: W0127 18:42:37.926101 4853 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 27 18:42:37 crc kubenswrapper[4853]: W0127 18:42:37.926105 4853 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Jan 27 18:42:37 crc kubenswrapper[4853]: W0127 18:42:37.926108 4853 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Jan 27 18:42:37 crc kubenswrapper[4853]: W0127 18:42:37.926112 4853 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 27 18:42:37 crc kubenswrapper[4853]: W0127 18:42:37.926115 4853 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Jan 27 18:42:37 crc kubenswrapper[4853]: W0127 18:42:37.926134 4853 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Jan 27 18:42:37 crc kubenswrapper[4853]: W0127 18:42:37.926138 4853 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Jan 27 18:42:37 crc kubenswrapper[4853]: W0127 18:42:37.926141 4853 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Jan 27 18:42:37 crc kubenswrapper[4853]: W0127 18:42:37.926147 4853 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Jan 27 18:42:37 crc kubenswrapper[4853]: W0127 18:42:37.926151 4853 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Jan 27 18:42:37 crc kubenswrapper[4853]: W0127 18:42:37.926154 4853 feature_gate.go:330] unrecognized feature gate: Example Jan 27 18:42:37 crc kubenswrapper[4853]: W0127 18:42:37.926158 4853 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Jan 27 18:42:37 crc kubenswrapper[4853]: W0127 18:42:37.926162 4853 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Jan 27 18:42:37 crc kubenswrapper[4853]: W0127 18:42:37.926165 4853 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Jan 27 18:42:37 crc kubenswrapper[4853]: W0127 18:42:37.926169 4853 feature_gate.go:330] unrecognized feature gate: NewOLM Jan 27 18:42:37 crc kubenswrapper[4853]: W0127 18:42:37.926174 4853 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Jan 27 18:42:37 crc kubenswrapper[4853]: W0127 18:42:37.926178 4853 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Jan 27 18:42:37 crc kubenswrapper[4853]: W0127 18:42:37.926182 4853 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Jan 27 18:42:37 crc kubenswrapper[4853]: W0127 18:42:37.926186 4853 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Jan 27 18:42:37 crc kubenswrapper[4853]: W0127 18:42:37.926189 4853 feature_gate.go:330] unrecognized feature gate: PlatformOperators Jan 27 18:42:37 crc kubenswrapper[4853]: W0127 18:42:37.926193 4853 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Jan 27 18:42:37 crc kubenswrapper[4853]: W0127 18:42:37.926196 4853 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Jan 27 18:42:37 crc kubenswrapper[4853]: W0127 18:42:37.926200 4853 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 27 18:42:37 crc kubenswrapper[4853]: W0127 18:42:37.926203 4853 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 27 18:42:37 crc kubenswrapper[4853]: W0127 18:42:37.926207 4853 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Jan 27 18:42:37 crc kubenswrapper[4853]: W0127 18:42:37.926211 4853 feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 27 18:42:37 crc kubenswrapper[4853]: W0127 18:42:37.926218 4853 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Jan 27 18:42:37 crc kubenswrapper[4853]: W0127 18:42:37.926222 4853 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Jan 27 18:42:37 crc kubenswrapper[4853]: W0127 18:42:37.926226 4853 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 27 18:42:37 crc kubenswrapper[4853]: W0127 18:42:37.926229 4853 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Jan 27 18:42:37 crc kubenswrapper[4853]: W0127 18:42:37.926233 4853 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Jan 27 18:42:37 crc kubenswrapper[4853]: W0127 18:42:37.926237 4853 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Jan 27 18:42:37 crc kubenswrapper[4853]: W0127 18:42:37.926240 4853 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Jan 27 18:42:37 crc kubenswrapper[4853]: W0127 18:42:37.926244 4853 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 27 18:42:37 crc kubenswrapper[4853]: W0127 18:42:37.926248 4853 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 27 18:42:37 crc kubenswrapper[4853]: W0127 18:42:37.926251 4853 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Jan 27 18:42:37 crc kubenswrapper[4853]: W0127 18:42:37.926255 4853 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Jan 27 18:42:37 crc kubenswrapper[4853]: W0127 18:42:37.926259 4853 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Jan 27 18:42:37 crc kubenswrapper[4853]: W0127 18:42:37.926262 4853 feature_gate.go:330] unrecognized feature gate: OVNObservability Jan 27 18:42:37 crc kubenswrapper[4853]: W0127 18:42:37.926266 4853 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Jan 27 18:42:37 crc kubenswrapper[4853]: W0127 18:42:37.926269 4853 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Jan 27 18:42:37 crc kubenswrapper[4853]: W0127 18:42:37.926274 4853 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. 
It will be removed in a future release. Jan 27 18:42:37 crc kubenswrapper[4853]: W0127 18:42:37.926278 4853 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 27 18:42:37 crc kubenswrapper[4853]: W0127 18:42:37.926282 4853 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 27 18:42:37 crc kubenswrapper[4853]: W0127 18:42:37.926286 4853 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Jan 27 18:42:37 crc kubenswrapper[4853]: W0127 18:42:37.926290 4853 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Jan 27 18:42:37 crc kubenswrapper[4853]: W0127 18:42:37.926293 4853 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Jan 27 18:42:37 crc kubenswrapper[4853]: W0127 18:42:37.926297 4853 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Jan 27 18:42:37 crc kubenswrapper[4853]: W0127 18:42:37.926300 4853 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Jan 27 18:42:37 crc kubenswrapper[4853]: W0127 18:42:37.926304 4853 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 27 18:42:37 crc kubenswrapper[4853]: W0127 18:42:37.926307 4853 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Jan 27 18:42:37 crc kubenswrapper[4853]: W0127 18:42:37.926311 4853 feature_gate.go:330] unrecognized feature gate: SignatureStores Jan 27 18:42:37 crc kubenswrapper[4853]: W0127 18:42:37.926314 4853 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 27 18:42:37 crc kubenswrapper[4853]: W0127 18:42:37.926318 4853 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Jan 27 18:42:37 crc kubenswrapper[4853]: W0127 18:42:37.926321 4853 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Jan 27 18:42:37 crc kubenswrapper[4853]: I0127 18:42:37.926328 4853 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Jan 27 18:42:37 crc kubenswrapper[4853]: I0127 18:42:37.927349 4853 server.go:940] "Client rotation is on, will bootstrap in background" Jan 27 18:42:37 crc kubenswrapper[4853]: I0127 18:42:37.933049 4853 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary" Jan 27 18:42:37 crc kubenswrapper[4853]: I0127 18:42:37.933174 4853 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
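"Client rotation is on, will bootstrap in background" together with the "Loading cert/key pair" line means the kubelet reuses /var/lib/kubelet/pki/kubelet-client-current.pem and schedules a renewal ahead of expiry (the rotation-deadline and CSR lines that follow). To check how much lifetime the pair has left, a sketch assuming a recent cryptography release (the not_valid_*_utc properties) and that the certificate is the first PEM block in the file:

    from datetime import datetime, timezone
    from cryptography import x509  # assumed installed

    PEM_PATH = "/var/lib/kubelet/pki/kubelet-client-current.pem"

    # The file holds the certificate and key; this parses the first
    # certificate block (assumed to come first, as the kubelet writes it).
    with open(PEM_PATH, "rb") as fh:
        cert = x509.load_pem_x509_certificate(fh.read())

    now = datetime.now(timezone.utc)
    remaining = cert.not_valid_after_utc - now
    lifetime = cert.not_valid_after_utc - cert.not_valid_before_utc
    print(f"expires {cert.not_valid_after_utc}; "
          f"{remaining.days} days left ({remaining / lifetime:.0%} of lifetime)")
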
Jan 27 18:42:37 crc kubenswrapper[4853]: I0127 18:42:37.935136 4853 server.go:997] "Starting client certificate rotation"
Jan 27 18:42:37 crc kubenswrapper[4853]: I0127 18:42:37.935165 4853 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled
Jan 27 18:42:37 crc kubenswrapper[4853]: I0127 18:42:37.935931 4853 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-02-24 05:52:08 +0000 UTC, rotation deadline is 2026-01-10 19:28:24.798097672 +0000 UTC
Jan 27 18:42:37 crc kubenswrapper[4853]: I0127 18:42:37.935981 4853 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates
Jan 27 18:42:37 crc kubenswrapper[4853]: I0127 18:42:37.958113 4853 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Jan 27 18:42:37 crc kubenswrapper[4853]: I0127 18:42:37.959960 4853 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Jan 27 18:42:37 crc kubenswrapper[4853]: E0127 18:42:37.961506 4853 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.174:6443: connect: connection refused" logger="UnhandledError"
Jan 27 18:42:37 crc kubenswrapper[4853]: I0127 18:42:37.975687 4853 log.go:25] "Validated CRI v1 runtime API"
Jan 27 18:42:38 crc kubenswrapper[4853]: I0127 18:42:38.011004 4853 log.go:25] "Validated CRI v1 image API"
Jan 27 18:42:38 crc kubenswrapper[4853]: I0127 18:42:38.012494 4853 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Jan 27 18:42:38 crc kubenswrapper[4853]: I0127 18:42:38.018332 4853 fs.go:133] Filesystem UUIDs: map[0b076daa-c26a-46d2-b3a6-72a8dbc6e257:/dev/vda4 2026-01-27-18-37-34-00:/dev/sr0 7B77-95E7:/dev/vda2 de0497b0-db1b-465a-b278-03db02455c71:/dev/vda3]
Jan 27 18:42:38 crc kubenswrapper[4853]: I0127 18:42:38.018371 4853 fs.go:134] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:42 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:43 fsType:tmpfs blockSize:0}]
Jan 27 18:42:38 crc kubenswrapper[4853]: I0127 18:42:38.038219 4853 manager.go:217] Machine: {Timestamp:2026-01-27 18:42:38.034339529 +0000 UTC m=+0.496882462 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2800000 MemoryCapacity:33654128640 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:21801e6708c44f15b81395eb736a7cec SystemUUID:f3eb2985-2316-42c4-9a73-507610f5aaf9 BootID:10ff71d2-3e1e-470a-b646-0c487d7259d5 Filesystems:[{Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:16827064320 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/run/user/1000 DeviceMajor:0 DeviceMinor:42 Capacity:3365412864 Type:vfs Inodes:821634 HasInodes:true} {Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:43 Capacity:1073741824 Type:vfs Inodes:4108170 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16827064320 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6730825728 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:3a:b6:00 Speed:0 Mtu:1500} {Name:br-int MacAddress:d6:39:55:2e:22:71 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:3a:b6:00 Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:62:c7:8b Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:fa:f4:3f Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:7d:98:57 Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:f3:f6:7e Speed:-1 Mtu:1496} {Name:eth10 MacAddress:96:27:38:ce:ca:63 Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:16:d2:80:91:c0:bd Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33654128640 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None}
Jan 27 18:42:38 crc kubenswrapper[4853]: I0127 18:42:38.039008 4853 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available.
Jan 27 18:42:38 crc kubenswrapper[4853]: I0127 18:42:38.039266 4853 manager.go:233] Version: {KernelVersion:5.14.0-427.50.2.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202502100215-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:}
Jan 27 18:42:38 crc kubenswrapper[4853]: I0127 18:42:38.041292 4853 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Jan 27 18:42:38 crc kubenswrapper[4853]: I0127 18:42:38.041793 4853 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jan 27 18:42:38 crc kubenswrapper[4853]: I0127 18:42:38.041830 4853 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Jan 27 18:42:38 crc kubenswrapper[4853]: I0127 18:42:38.042040 4853 topology_manager.go:138] "Creating topology manager with none policy"
Jan 27 18:42:38 crc kubenswrapper[4853]: I0127 18:42:38.042050 4853 container_manager_linux.go:303] "Creating device plugin manager"
Jan 27 18:42:38 crc kubenswrapper[4853]: I0127 18:42:38.042893 4853 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock"
Jan 27 18:42:38 crc kubenswrapper[4853]: I0127 18:42:38.042917 4853 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock"
Jan 27 18:42:38 crc kubenswrapper[4853]: I0127 18:42:38.043612 4853 state_mem.go:36] "Initialized new in-memory state store"
Jan 27 18:42:38 crc kubenswrapper[4853]: I0127 18:42:38.043695 4853 server.go:1245] "Using root directory" path="/var/lib/kubelet"
Jan 27 18:42:38 crc kubenswrapper[4853]: I0127 18:42:38.048786 4853 kubelet.go:418] "Attempting to sync node with API server"
Jan 27 18:42:38 crc kubenswrapper[4853]: I0127 18:42:38.048808 4853 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests"
Jan 27 18:42:38 crc kubenswrapper[4853]: I0127 18:42:38.048828 4853 file.go:69] "Watching path" path="/etc/kubernetes/manifests"
Jan 27 18:42:38 crc kubenswrapper[4853]: I0127 18:42:38.048843 4853 kubelet.go:324] "Adding apiserver pod source"
Jan 27 18:42:38 crc kubenswrapper[4853]: I0127 18:42:38.048855 4853 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jan 27 18:42:38 crc kubenswrapper[4853]: W0127 18:42:38.052218 4853 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.174:6443: connect: connection refused
Jan 27 18:42:38 crc kubenswrapper[4853]: E0127 18:42:38.052330 4853 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.174:6443: connect: connection refused" logger="UnhandledError"
Jan 27 18:42:38 crc kubenswrapper[4853]: W0127 18:42:38.052218 4853 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.174:6443: connect: connection refused
Jan 27 18:42:38 crc kubenswrapper[4853]: E0127 18:42:38.052385 4853 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.174:6443: connect: connection refused" logger="UnhandledError"
Jan 27 18:42:38 crc kubenswrapper[4853]: I0127 18:42:38.053947 4853 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.5-4.rhaos4.18.gitdad78d5.el9" apiVersion="v1"
Jan 27 18:42:38 crc kubenswrapper[4853]: I0127 18:42:38.054661 4853 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem".
Jan 27 18:42:38 crc kubenswrapper[4853]: I0127 18:42:38.055992 4853 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Jan 27 18:42:38 crc kubenswrapper[4853]: I0127 18:42:38.057722 4853 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume"
Jan 27 18:42:38 crc kubenswrapper[4853]: I0127 18:42:38.057744 4853 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir"
Jan 27 18:42:38 crc kubenswrapper[4853]: I0127 18:42:38.057755 4853 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo"
Jan 27 18:42:38 crc kubenswrapper[4853]: I0127 18:42:38.057762 4853 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path"
Jan 27 18:42:38 crc kubenswrapper[4853]: I0127 18:42:38.057773 4853 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs"
Jan 27 18:42:38 crc kubenswrapper[4853]: I0127 18:42:38.057780 4853 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret"
Jan 27 18:42:38 crc kubenswrapper[4853]: I0127 18:42:38.057788 4853 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi"
Jan 27 18:42:38 crc kubenswrapper[4853]: I0127 18:42:38.057799 4853 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api"
Jan 27 18:42:38 crc kubenswrapper[4853]: I0127 18:42:38.057808 4853 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc"
Jan 27 18:42:38 crc kubenswrapper[4853]: I0127 18:42:38.057816 4853 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap"
Jan 27 18:42:38 crc kubenswrapper[4853]: I0127 18:42:38.057826 4853 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected"
Jan 27 18:42:38 crc kubenswrapper[4853]: I0127 18:42:38.057834 4853 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume"
Jan 27 18:42:38 crc kubenswrapper[4853]: I0127 18:42:38.060186 4853 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi"
Jan 27 18:42:38 crc kubenswrapper[4853]: I0127 18:42:38.060601 4853 server.go:1280] "Started kubelet"
Jan 27 18:42:38 crc kubenswrapper[4853]: I0127 18:42:38.061621 4853 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jan 27 18:42:38 crc kubenswrapper[4853]: I0127 18:42:38.061687 4853 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Jan 27 18:42:38 crc kubenswrapper[4853]: I0127 18:42:38.061970 4853 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.174:6443: connect: connection refused
Jan 27 18:42:38 crc systemd[1]: Started Kubernetes Kubelet.
Jan 27 18:42:38 crc kubenswrapper[4853]: I0127 18:42:38.062175 4853 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jan 27 18:42:38 crc kubenswrapper[4853]: I0127 18:42:38.062623 4853 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled
Jan 27 18:42:38 crc kubenswrapper[4853]: I0127 18:42:38.062650 4853 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jan 27 18:42:38 crc kubenswrapper[4853]: I0127 18:42:38.062773 4853 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-11 03:32:33.955772656 +0000 UTC
Jan 27 18:42:38 crc kubenswrapper[4853]: E0127 18:42:38.063268 4853 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 27 18:42:38 crc kubenswrapper[4853]: E0127 18:42:38.063465 4853 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.174:6443: connect: connection refused" interval="200ms"
Jan 27 18:42:38 crc kubenswrapper[4853]: I0127 18:42:38.063739 4853 volume_manager.go:287] "The desired_state_of_world populator starts"
Jan 27 18:42:38 crc kubenswrapper[4853]: I0127 18:42:38.063764 4853 volume_manager.go:289] "Starting Kubelet Volume Manager"
Jan 27 18:42:38 crc kubenswrapper[4853]: I0127 18:42:38.063829 4853 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Jan 27 18:42:38 crc kubenswrapper[4853]: W0127 18:42:38.064141 4853 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.174:6443: connect: connection refused
Jan 27 18:42:38 crc kubenswrapper[4853]: E0127 18:42:38.064206 4853 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.174:6443: connect: connection refused" logger="UnhandledError"
Jan 27 18:42:38 crc kubenswrapper[4853]: I0127 18:42:38.066802 4853 factory.go:55] Registering systemd factory
Jan 27 18:42:38 crc kubenswrapper[4853]: I0127 18:42:38.066822 4853 factory.go:221] Registration of the systemd container factory successfully
Jan 27 18:42:38 crc kubenswrapper[4853]: I0127 18:42:38.068055 4853 factory.go:153] Registering CRI-O factory
Jan 27 18:42:38 crc kubenswrapper[4853]: I0127 18:42:38.068080 4853 factory.go:221] Registration of the crio container factory successfully
Jan 27 18:42:38 crc kubenswrapper[4853]: I0127 18:42:38.068167 4853 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory
Jan 27 18:42:38 crc kubenswrapper[4853]: I0127 18:42:38.068187 4853 factory.go:103] Registering Raw factory
Jan 27 18:42:38 crc kubenswrapper[4853]: I0127 18:42:38.068208 4853 manager.go:1196] Started watching for new ooms in manager
Jan 27 18:42:38 crc kubenswrapper[4853]: I0127 18:42:38.068705 4853 server.go:460] "Adding debug handlers to kubelet server"
Jan 27 18:42:38 crc kubenswrapper[4853]: I0127 18:42:38.068733 4853 manager.go:319] Starting recovery of all containers
Jan 27 18:42:38 crc kubenswrapper[4853]: E0127 18:42:38.068077 4853 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.174:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.188eaaabb54601bc default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-27 18:42:38.060577212 +0000 UTC m=+0.523120095,LastTimestamp:2026-01-27 18:42:38.060577212 +0000 UTC m=+0.523120095,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Jan 27 18:42:38 crc kubenswrapper[4853]: I0127 18:42:38.075359 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" seLinuxMountContext=""
Jan 27 18:42:38 crc kubenswrapper[4853]: I0127 18:42:38.075398 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" seLinuxMountContext=""
Jan 27 18:42:38 crc kubenswrapper[4853]: I0127 18:42:38.075410 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" seLinuxMountContext=""
Jan 27 18:42:38 crc kubenswrapper[4853]: I0127 18:42:38.075421 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" seLinuxMountContext=""
Jan 27 18:42:38 crc kubenswrapper[4853]: I0127 18:42:38.075430 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" seLinuxMountContext=""
Jan 27 18:42:38 crc kubenswrapper[4853]: I0127 18:42:38.075440 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" seLinuxMountContext=""
Jan 27 18:42:38 crc kubenswrapper[4853]: I0127 18:42:38.075466 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" seLinuxMountContext=""
Jan 27 18:42:38 crc kubenswrapper[4853]: I0127 18:42:38.075476 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" seLinuxMountContext=""
Jan 27 18:42:38 crc kubenswrapper[4853]: I0127 18:42:38.075489 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" seLinuxMountContext=""
Jan 27 18:42:38 crc kubenswrapper[4853]: I0127 18:42:38.075499 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" seLinuxMountContext=""
Jan 27 18:42:38 crc kubenswrapper[4853]: I0127 18:42:38.075507 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides" seLinuxMountContext=""
Jan 27 18:42:38 crc kubenswrapper[4853]: I0127 18:42:38.075516 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" seLinuxMountContext=""
Jan 27 18:42:38 crc kubenswrapper[4853]: I0127 18:42:38.075525 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3b6479f0-333b-4a96-9adf-2099afdc2447" volumeName="kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr" seLinuxMountContext=""
Jan 27 18:42:38 crc kubenswrapper[4853]: I0127 18:42:38.075536 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" seLinuxMountContext=""
Jan 27 18:42:38 crc kubenswrapper[4853]: I0127 18:42:38.075546 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" seLinuxMountContext=""
Jan 27 18:42:38 crc kubenswrapper[4853]: I0127 18:42:38.075554 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" seLinuxMountContext=""
Jan 27 18:42:38 crc kubenswrapper[4853]: I0127 18:42:38.075562 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" seLinuxMountContext=""
Jan 27 18:42:38 crc kubenswrapper[4853]: I0127 18:42:38.075571 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" seLinuxMountContext=""
Jan 27 18:42:38 crc kubenswrapper[4853]: I0127 18:42:38.075579 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" seLinuxMountContext=""
Jan 27 18:42:38 crc kubenswrapper[4853]: I0127 18:42:38.075587 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" seLinuxMountContext=""
Jan 27 18:42:38 crc kubenswrapper[4853]: I0127 18:42:38.075597 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" seLinuxMountContext=""
Jan 27 18:42:38 crc kubenswrapper[4853]: I0127 18:42:38.075605 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" seLinuxMountContext=""
Jan 27 18:42:38 crc kubenswrapper[4853]: I0127 18:42:38.075613 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" seLinuxMountContext=""
Jan 27 18:42:38 crc kubenswrapper[4853]: I0127 18:42:38.075621 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" seLinuxMountContext=""
Jan 27 18:42:38 crc kubenswrapper[4853]: I0127 18:42:38.075630 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" seLinuxMountContext=""
Jan 27 18:42:38 crc kubenswrapper[4853]: I0127 18:42:38.075639 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" seLinuxMountContext=""
Jan 27 18:42:38 crc kubenswrapper[4853]: I0127 18:42:38.075668 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" seLinuxMountContext=""
Jan 27 18:42:38 crc kubenswrapper[4853]: I0127 18:42:38.075679 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" seLinuxMountContext=""
Jan 27 18:42:38 crc kubenswrapper[4853]: I0127 18:42:38.075688 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" seLinuxMountContext=""
Jan 27 18:42:38 crc kubenswrapper[4853]: I0127 18:42:38.075697 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" seLinuxMountContext=""
Jan 27 18:42:38 crc kubenswrapper[4853]: I0127 18:42:38.075722 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" seLinuxMountContext=""
Jan 27 18:42:38 crc kubenswrapper[4853]: I0127 18:42:38.075731 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" seLinuxMountContext=""
Jan 27 18:42:38 crc kubenswrapper[4853]: I0127 18:42:38.075740 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" seLinuxMountContext=""
Jan 27 18:42:38 crc kubenswrapper[4853]: I0127 18:42:38.075749 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" seLinuxMountContext=""
Jan 27 18:42:38 crc kubenswrapper[4853]: I0127 18:42:38.075759 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" seLinuxMountContext=""
Jan 27 18:42:38 crc kubenswrapper[4853]: I0127 18:42:38.075767 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" seLinuxMountContext=""
Jan 27 18:42:38 crc kubenswrapper[4853]: I0127 18:42:38.075776 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" seLinuxMountContext=""
Jan 27 18:42:38 crc kubenswrapper[4853]: I0127 18:42:38.075785 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" seLinuxMountContext=""
Jan 27 18:42:38 crc kubenswrapper[4853]: I0127 18:42:38.075794 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" volumeName="kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" seLinuxMountContext=""
Jan 27 18:42:38 crc kubenswrapper[4853]: I0127 18:42:38.075803 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" seLinuxMountContext=""
Jan 27 18:42:38 crc kubenswrapper[4853]: I0127 18:42:38.075814 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" seLinuxMountContext=""
Jan 27 18:42:38 crc kubenswrapper[4853]: I0127 18:42:38.075824 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" seLinuxMountContext=""
Jan 27 18:42:38 crc kubenswrapper[4853]: I0127 18:42:38.075833 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" seLinuxMountContext=""
Jan 27 18:42:38 crc kubenswrapper[4853]: I0127 18:42:38.075845 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" seLinuxMountContext=""
Jan 27 18:42:38 crc kubenswrapper[4853]: I0127 18:42:38.075855 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" seLinuxMountContext=""
Jan 27 18:42:38 crc kubenswrapper[4853]: I0127 18:42:38.075865 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" seLinuxMountContext=""
Jan 27 18:42:38 crc kubenswrapper[4853]: I0127 18:42:38.075874 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" seLinuxMountContext=""
Jan 27 18:42:38 crc kubenswrapper[4853]: I0127 18:42:38.075883 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" seLinuxMountContext=""
Jan 27 18:42:38 crc kubenswrapper[4853]: I0127 18:42:38.075909 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" seLinuxMountContext=""
Jan 27 18:42:38 crc kubenswrapper[4853]: I0127 18:42:38.075918 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" seLinuxMountContext=""
Jan 27 18:42:38 crc kubenswrapper[4853]: I0127 18:42:38.075927 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" seLinuxMountContext=""
Jan 27 18:42:38 crc kubenswrapper[4853]: I0127 18:42:38.075936 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" seLinuxMountContext=""
Jan 27 18:42:38 crc kubenswrapper[4853]: I0127 18:42:38.075949 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" seLinuxMountContext=""
Jan 27 18:42:38 crc kubenswrapper[4853]: I0127 18:42:38.075958 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" seLinuxMountContext=""
Jan 27 18:42:38 crc kubenswrapper[4853]: I0127 18:42:38.075966 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" seLinuxMountContext=""
Jan 27 18:42:38 crc kubenswrapper[4853]: I0127 18:42:38.075977 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" seLinuxMountContext=""
Jan 27 18:42:38 crc kubenswrapper[4853]: I0127 18:42:38.076002 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" seLinuxMountContext=""
Jan 27 18:42:38 crc kubenswrapper[4853]: I0127 18:42:38.076014 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" seLinuxMountContext=""
Jan 27 18:42:38 crc kubenswrapper[4853]: I0127 18:42:38.076025 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" seLinuxMountContext=""
Jan 27 18:42:38 crc kubenswrapper[4853]: I0127 18:42:38.076036 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" seLinuxMountContext=""
Jan 27 18:42:38 crc kubenswrapper[4853]: I0127 18:42:38.076048 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" seLinuxMountContext=""
Jan 27 18:42:38 crc kubenswrapper[4853]: I0127 18:42:38.076057 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" seLinuxMountContext=""
Jan 27 18:42:38 crc kubenswrapper[4853]: I0127 18:42:38.076066 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert" seLinuxMountContext=""
Jan 27 18:42:38 crc kubenswrapper[4853]: I0127 18:42:38.076074 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" seLinuxMountContext=""
Jan 27 18:42:38 crc kubenswrapper[4853]: I0127 18:42:38.076082 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" seLinuxMountContext=""
Jan 27 18:42:38 crc kubenswrapper[4853]: I0127 18:42:38.076090 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" seLinuxMountContext=""
Jan 27 18:42:38 crc kubenswrapper[4853]: I0127 18:42:38.076099 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" seLinuxMountContext=""
Jan 27 18:42:38 crc kubenswrapper[4853]: I0127 18:42:38.076106 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" seLinuxMountContext=""
Jan 27 18:42:38 crc kubenswrapper[4853]: I0127 18:42:38.076115 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" seLinuxMountContext=""
Jan 27 18:42:38 crc kubenswrapper[4853]: I0127 18:42:38.076140 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" seLinuxMountContext=""
Jan 27 18:42:38 crc kubenswrapper[4853]: I0127 18:42:38.076147 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" seLinuxMountContext=""
Jan 27 18:42:38 crc kubenswrapper[4853]: I0127 18:42:38.076154 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" seLinuxMountContext=""
Jan 27 18:42:38 crc kubenswrapper[4853]: I0127 18:42:38.076198 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" seLinuxMountContext=""
Jan 27 18:42:38 crc kubenswrapper[4853]: I0127 18:42:38.076206 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" seLinuxMountContext=""
Jan 27 18:42:38 crc kubenswrapper[4853]: I0127 18:42:38.076216 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" seLinuxMountContext=""
Jan 27 18:42:38 crc kubenswrapper[4853]: I0127 18:42:38.076228 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" seLinuxMountContext=""
Jan 27 18:42:38 crc kubenswrapper[4853]: I0127 18:42:38.076240 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" seLinuxMountContext=""
Jan 27 18:42:38 crc kubenswrapper[4853]: I0127 18:42:38.076276 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" seLinuxMountContext=""
Jan 27 18:42:38 crc kubenswrapper[4853]: I0127 18:42:38.076286 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" seLinuxMountContext=""
Jan 27 18:42:38 crc kubenswrapper[4853]: I0127 18:42:38.076295 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" seLinuxMountContext=""
Jan 27 18:42:38 crc kubenswrapper[4853]: I0127 18:42:38.076304 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" seLinuxMountContext=""
Jan 27 18:42:38 crc kubenswrapper[4853]: I0127 18:42:38.076313 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" seLinuxMountContext=""
Jan 27 18:42:38 crc kubenswrapper[4853]: I0127 18:42:38.076321 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" seLinuxMountContext=""
Jan 27 18:42:38 crc kubenswrapper[4853]: I0127 18:42:38.076330 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d751cbb-f2e2-430d-9754-c882a5e924a5" volumeName="kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl" seLinuxMountContext=""
Jan 27 18:42:38 crc kubenswrapper[4853]: I0127 18:42:38.076339 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" seLinuxMountContext=""
Jan 27 18:42:38 crc kubenswrapper[4853]: I0127 18:42:38.076349 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" seLinuxMountContext=""
Jan 27 18:42:38 crc kubenswrapper[4853]: I0127 18:42:38.076357 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext=""
Jan 27 18:42:38 crc kubenswrapper[4853]: I0127 18:42:38.076366 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" seLinuxMountContext=""
Jan 27 18:42:38 crc kubenswrapper[4853]: I0127 18:42:38.076375 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb" seLinuxMountContext=""
Jan 27 18:42:38 crc kubenswrapper[4853]: I0127 18:42:38.076385 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" seLinuxMountContext=""
Jan 27 18:42:38 crc kubenswrapper[4853]: I0127 18:42:38.076394 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" seLinuxMountContext=""
Jan 27 18:42:38 crc kubenswrapper[4853]: I0127 18:42:38.076403 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44663579-783b-4372-86d6-acf235a62d72" volumeName="kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" seLinuxMountContext=""
Jan 27 18:42:38 crc kubenswrapper[4853]: I0127 18:42:38.076412 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" seLinuxMountContext=""
Jan 27 18:42:38 crc kubenswrapper[4853]: I0127 18:42:38.076420 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" seLinuxMountContext=""
Jan 27 18:42:38 crc kubenswrapper[4853]: I0127 18:42:38.076430 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" seLinuxMountContext=""
Jan 27 18:42:38 crc kubenswrapper[4853]: I0127 18:42:38.076441 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" seLinuxMountContext=""
Jan 27 18:42:38 crc kubenswrapper[4853]: I0127 18:42:38.076453 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" seLinuxMountContext=""
Jan 27 18:42:38 crc kubenswrapper[4853]: I0127 18:42:38.076463 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" seLinuxMountContext=""
Jan 27 18:42:38 crc kubenswrapper[4853]: I0127 18:42:38.076470 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" seLinuxMountContext=""
Jan 27 18:42:38 crc kubenswrapper[4853]: I0127 18:42:38.076481 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" seLinuxMountContext=""
Jan 27 18:42:38 crc kubenswrapper[4853]: I0127 18:42:38.076490 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" seLinuxMountContext=""
Jan 27 18:42:38 crc kubenswrapper[4853]: I0127 18:42:38.076498 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" seLinuxMountContext=""
Jan 27 18:42:38 crc kubenswrapper[4853]: I0127 18:42:38.076507 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" seLinuxMountContext=""
Jan 27 18:42:38 crc kubenswrapper[4853]: I0127 18:42:38.076515 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" seLinuxMountContext=""
Jan 27 18:42:38 crc kubenswrapper[4853]: I0127 18:42:38.076539 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" seLinuxMountContext=""
Jan 27 18:42:38 crc kubenswrapper[4853]: I0127 18:42:38.076548 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" seLinuxMountContext=""
Jan 27 18:42:38 crc kubenswrapper[4853]: I0127 18:42:38.076558 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" seLinuxMountContext=""
Jan 27 18:42:38 crc kubenswrapper[4853]: I0127 18:42:38.076570 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" seLinuxMountContext=""
Jan 27 18:42:38 crc kubenswrapper[4853]: I0127 18:42:38.076580 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" seLinuxMountContext=""
Jan 27 18:42:38 crc kubenswrapper[4853]: I0127 18:42:38.076589 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" seLinuxMountContext=""
Jan 27 18:42:38 crc kubenswrapper[4853]: I0127 18:42:38.078077 4853 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount"
Jan 27 18:42:38 crc kubenswrapper[4853]: I0127 18:42:38.078101 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" seLinuxMountContext=""
Jan 27 18:42:38 crc kubenswrapper[4853]: I0127 18:42:38.078114 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm" seLinuxMountContext=""
Jan 27 18:42:38 crc kubenswrapper[4853]: I0127 18:42:38.078140 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" seLinuxMountContext=""
Jan 27 18:42:38 crc kubenswrapper[4853]: I0127 18:42:38.078164 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" seLinuxMountContext=""
Jan 27 18:42:38 crc kubenswrapper[4853]: I0127 18:42:38.078173 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" seLinuxMountContext=""
Jan 27 18:42:38 crc kubenswrapper[4853]: I0127 18:42:38.078182 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" seLinuxMountContext=""
Jan 27 18:42:38 crc kubenswrapper[4853]: I0127 18:42:38.078191 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" seLinuxMountContext=""
Jan 27 18:42:38 crc kubenswrapper[4853]: I0127 18:42:38.078200 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" seLinuxMountContext=""
Jan 27 18:42:38 crc kubenswrapper[4853]: I0127 18:42:38.078208 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" seLinuxMountContext=""
Jan 27 18:42:38 crc kubenswrapper[4853]: I0127 18:42:38.078219 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" seLinuxMountContext=""
Jan 27 18:42:38 crc kubenswrapper[4853]: I0127 18:42:38.078227 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" seLinuxMountContext=""
Jan 27 18:42:38 crc kubenswrapper[4853]: I0127 18:42:38.078235 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" seLinuxMountContext=""
Jan 27 18:42:38 crc kubenswrapper[4853]: I0127 18:42:38.078244 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script" seLinuxMountContext=""
Jan 27 18:42:38 crc kubenswrapper[4853]: I0127 18:42:38.078255 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" seLinuxMountContext=""
Jan 27 18:42:38 crc kubenswrapper[4853]: I0127 18:42:38.078264 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" seLinuxMountContext=""
Jan 27 18:42:38 crc kubenswrapper[4853]: I0127 18:42:38.078272 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" seLinuxMountContext=""
Jan 27 18:42:38 crc kubenswrapper[4853]: I0127 18:42:38.078281 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" seLinuxMountContext=""
Jan 27 18:42:38 crc kubenswrapper[4853]: I0127 18:42:38.078289 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" seLinuxMountContext=""
Jan 27 18:42:38 crc kubenswrapper[4853]: I0127 18:42:38.078298 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" seLinuxMountContext=""
Jan 27 18:42:38 crc kubenswrapper[4853]: I0127 18:42:38.078308 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" seLinuxMountContext=""
Jan 27 18:42:38 crc kubenswrapper[4853]: I0127 18:42:38.078316 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" seLinuxMountContext=""
Jan 27 18:42:38 crc kubenswrapper[4853]: I0127 18:42:38.078324 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5" seLinuxMountContext=""
Jan 27 18:42:38 crc kubenswrapper[4853]: I0127 18:42:38.078333 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" seLinuxMountContext=""
Jan 27 18:42:38 crc kubenswrapper[4853]: I0127 18:42:38.078342 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" seLinuxMountContext=""
Jan 27 18:42:38 crc kubenswrapper[4853]: I0127 18:42:38.078351 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" seLinuxMountContext=""
Jan 27 18:42:38 crc kubenswrapper[4853]: I0127 18:42:38.078363 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" seLinuxMountContext=""
Jan 27 18:42:38 crc kubenswrapper[4853]: I0127 18:42:38.078374 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" seLinuxMountContext=""
Jan 27 18:42:38 crc kubenswrapper[4853]: I0127 18:42:38.078384 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" seLinuxMountContext=""
Jan 27 18:42:38 crc kubenswrapper[4853]: I0127 18:42:38.078393 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" seLinuxMountContext=""
Jan 27 18:42:38 crc kubenswrapper[4853]: I0127 18:42:38.078402 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod=""
podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" seLinuxMountContext="" Jan 27 18:42:38 crc kubenswrapper[4853]: I0127 18:42:38.078412 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" seLinuxMountContext="" Jan 27 18:42:38 crc kubenswrapper[4853]: I0127 18:42:38.078421 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" seLinuxMountContext="" Jan 27 18:42:38 crc kubenswrapper[4853]: I0127 18:42:38.078430 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" seLinuxMountContext="" Jan 27 18:42:38 crc kubenswrapper[4853]: I0127 18:42:38.078439 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" seLinuxMountContext="" Jan 27 18:42:38 crc kubenswrapper[4853]: I0127 18:42:38.078448 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" seLinuxMountContext="" Jan 27 18:42:38 crc kubenswrapper[4853]: I0127 18:42:38.078457 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" seLinuxMountContext="" Jan 27 18:42:38 crc kubenswrapper[4853]: I0127 18:42:38.078484 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" seLinuxMountContext="" Jan 27 18:42:38 crc kubenswrapper[4853]: I0127 18:42:38.078493 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" seLinuxMountContext="" Jan 27 18:42:38 crc kubenswrapper[4853]: I0127 18:42:38.078502 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" seLinuxMountContext="" Jan 27 18:42:38 crc kubenswrapper[4853]: I0127 18:42:38.078511 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" seLinuxMountContext="" Jan 27 18:42:38 crc kubenswrapper[4853]: I0127 18:42:38.078522 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" seLinuxMountContext="" Jan 27 18:42:38 crc kubenswrapper[4853]: I0127 18:42:38.078531 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" seLinuxMountContext="" Jan 27 18:42:38 crc kubenswrapper[4853]: I0127 18:42:38.078541 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" seLinuxMountContext="" Jan 27 18:42:38 crc kubenswrapper[4853]: I0127 18:42:38.078549 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" seLinuxMountContext="" Jan 27 18:42:38 crc kubenswrapper[4853]: I0127 18:42:38.078558 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" seLinuxMountContext="" Jan 27 18:42:38 crc kubenswrapper[4853]: I0127 18:42:38.078566 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" seLinuxMountContext="" Jan 27 18:42:38 crc kubenswrapper[4853]: I0127 18:42:38.078575 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" seLinuxMountContext="" Jan 27 18:42:38 crc kubenswrapper[4853]: I0127 18:42:38.078583 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert" seLinuxMountContext="" Jan 27 18:42:38 crc kubenswrapper[4853]: I0127 18:42:38.078593 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" seLinuxMountContext="" Jan 27 18:42:38 crc kubenswrapper[4853]: I0127 18:42:38.078603 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" seLinuxMountContext="" Jan 27 18:42:38 crc kubenswrapper[4853]: I0127 18:42:38.078613 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" seLinuxMountContext="" Jan 27 18:42:38 crc kubenswrapper[4853]: I0127 18:42:38.078623 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" seLinuxMountContext="" Jan 27 18:42:38 crc kubenswrapper[4853]: I0127 18:42:38.078632 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" seLinuxMountContext="" Jan 27 18:42:38 crc kubenswrapper[4853]: I0127 18:42:38.078641 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" seLinuxMountContext="" Jan 27 18:42:38 crc kubenswrapper[4853]: I0127 18:42:38.078650 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" seLinuxMountContext="" Jan 27 18:42:38 crc kubenswrapper[4853]: I0127 18:42:38.078659 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" seLinuxMountContext="" Jan 27 18:42:38 crc kubenswrapper[4853]: I0127 18:42:38.078668 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" seLinuxMountContext="" Jan 27 18:42:38 crc kubenswrapper[4853]: I0127 18:42:38.078677 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf" seLinuxMountContext="" Jan 27 18:42:38 crc kubenswrapper[4853]: I0127 18:42:38.078686 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" seLinuxMountContext="" Jan 27 18:42:38 crc kubenswrapper[4853]: I0127 18:42:38.078694 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" seLinuxMountContext="" Jan 27 18:42:38 crc kubenswrapper[4853]: I0127 18:42:38.078703 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" seLinuxMountContext="" Jan 27 18:42:38 crc kubenswrapper[4853]: I0127 18:42:38.078713 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" seLinuxMountContext="" Jan 27 18:42:38 crc kubenswrapper[4853]: I0127 18:42:38.078722 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf" seLinuxMountContext="" Jan 27 18:42:38 crc kubenswrapper[4853]: I0127 18:42:38.078730 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" seLinuxMountContext="" Jan 27 18:42:38 crc kubenswrapper[4853]: I0127 18:42:38.078740 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" seLinuxMountContext="" Jan 27 18:42:38 crc kubenswrapper[4853]: I0127 18:42:38.078749 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" volumeName="kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" seLinuxMountContext="" Jan 27 18:42:38 crc kubenswrapper[4853]: I0127 18:42:38.078757 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" seLinuxMountContext="" Jan 27 18:42:38 crc kubenswrapper[4853]: I0127 18:42:38.078769 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" seLinuxMountContext="" Jan 27 18:42:38 crc kubenswrapper[4853]: I0127 18:42:38.078781 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" seLinuxMountContext="" Jan 27 18:42:38 crc kubenswrapper[4853]: I0127 18:42:38.078793 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" seLinuxMountContext="" Jan 27 18:42:38 crc kubenswrapper[4853]: I0127 18:42:38.078805 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" seLinuxMountContext="" Jan 27 18:42:38 crc kubenswrapper[4853]: I0127 18:42:38.078815 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" seLinuxMountContext="" Jan 27 18:42:38 crc kubenswrapper[4853]: I0127 18:42:38.078823 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" seLinuxMountContext="" Jan 27 18:42:38 crc kubenswrapper[4853]: I0127 18:42:38.078834 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" 
volumeName="kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" seLinuxMountContext="" Jan 27 18:42:38 crc kubenswrapper[4853]: I0127 18:42:38.078841 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" seLinuxMountContext="" Jan 27 18:42:38 crc kubenswrapper[4853]: I0127 18:42:38.078850 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" seLinuxMountContext="" Jan 27 18:42:38 crc kubenswrapper[4853]: I0127 18:42:38.078858 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" seLinuxMountContext="" Jan 27 18:42:38 crc kubenswrapper[4853]: I0127 18:42:38.078866 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" seLinuxMountContext="" Jan 27 18:42:38 crc kubenswrapper[4853]: I0127 18:42:38.078875 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" seLinuxMountContext="" Jan 27 18:42:38 crc kubenswrapper[4853]: I0127 18:42:38.078885 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" seLinuxMountContext="" Jan 27 18:42:38 crc kubenswrapper[4853]: I0127 18:42:38.078894 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" seLinuxMountContext="" Jan 27 18:42:38 crc kubenswrapper[4853]: I0127 18:42:38.078902 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" seLinuxMountContext="" Jan 27 18:42:38 crc kubenswrapper[4853]: I0127 18:42:38.078910 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" seLinuxMountContext="" Jan 27 18:42:38 crc kubenswrapper[4853]: I0127 18:42:38.078920 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" seLinuxMountContext="" Jan 27 18:42:38 crc kubenswrapper[4853]: I0127 18:42:38.078929 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" 
volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" seLinuxMountContext="" Jan 27 18:42:38 crc kubenswrapper[4853]: I0127 18:42:38.078939 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" seLinuxMountContext="" Jan 27 18:42:38 crc kubenswrapper[4853]: I0127 18:42:38.078948 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" seLinuxMountContext="" Jan 27 18:42:38 crc kubenswrapper[4853]: I0127 18:42:38.078957 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" seLinuxMountContext="" Jan 27 18:42:38 crc kubenswrapper[4853]: I0127 18:42:38.078965 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" seLinuxMountContext="" Jan 27 18:42:38 crc kubenswrapper[4853]: I0127 18:42:38.078979 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" seLinuxMountContext="" Jan 27 18:42:38 crc kubenswrapper[4853]: I0127 18:42:38.078987 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls" seLinuxMountContext="" Jan 27 18:42:38 crc kubenswrapper[4853]: I0127 18:42:38.078996 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49ef4625-1d3a-4a9f-b595-c2433d32326d" volumeName="kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" seLinuxMountContext="" Jan 27 18:42:38 crc kubenswrapper[4853]: I0127 18:42:38.079004 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" seLinuxMountContext="" Jan 27 18:42:38 crc kubenswrapper[4853]: I0127 18:42:38.079013 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" seLinuxMountContext="" Jan 27 18:42:38 crc kubenswrapper[4853]: I0127 18:42:38.079023 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" seLinuxMountContext="" Jan 27 18:42:38 crc kubenswrapper[4853]: I0127 18:42:38.079033 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" 
volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" seLinuxMountContext="" Jan 27 18:42:38 crc kubenswrapper[4853]: I0127 18:42:38.079044 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" seLinuxMountContext="" Jan 27 18:42:38 crc kubenswrapper[4853]: I0127 18:42:38.079052 4853 reconstruct.go:97] "Volume reconstruction finished" Jan 27 18:42:38 crc kubenswrapper[4853]: I0127 18:42:38.079059 4853 reconciler.go:26] "Reconciler: start to sync state" Jan 27 18:42:38 crc kubenswrapper[4853]: I0127 18:42:38.088988 4853 manager.go:324] Recovery completed Jan 27 18:42:38 crc kubenswrapper[4853]: I0127 18:42:38.098246 4853 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 18:42:38 crc kubenswrapper[4853]: I0127 18:42:38.099552 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:42:38 crc kubenswrapper[4853]: I0127 18:42:38.099608 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:42:38 crc kubenswrapper[4853]: I0127 18:42:38.099620 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:42:38 crc kubenswrapper[4853]: I0127 18:42:38.100633 4853 cpu_manager.go:225] "Starting CPU manager" policy="none" Jan 27 18:42:38 crc kubenswrapper[4853]: I0127 18:42:38.100648 4853 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Jan 27 18:42:38 crc kubenswrapper[4853]: I0127 18:42:38.100668 4853 state_mem.go:36] "Initialized new in-memory state store" Jan 27 18:42:38 crc kubenswrapper[4853]: I0127 18:42:38.109596 4853 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 27 18:42:38 crc kubenswrapper[4853]: I0127 18:42:38.111132 4853 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jan 27 18:42:38 crc kubenswrapper[4853]: I0127 18:42:38.111177 4853 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 27 18:42:38 crc kubenswrapper[4853]: I0127 18:42:38.111216 4853 kubelet.go:2335] "Starting kubelet main sync loop" Jan 27 18:42:38 crc kubenswrapper[4853]: E0127 18:42:38.111264 4853 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 27 18:42:38 crc kubenswrapper[4853]: W0127 18:42:38.113329 4853 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.174:6443: connect: connection refused Jan 27 18:42:38 crc kubenswrapper[4853]: E0127 18:42:38.113408 4853 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.174:6443: connect: connection refused" logger="UnhandledError" Jan 27 18:42:38 crc kubenswrapper[4853]: I0127 18:42:38.117935 4853 policy_none.go:49] "None policy: Start" Jan 27 18:42:38 crc kubenswrapper[4853]: I0127 18:42:38.118533 4853 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 27 18:42:38 crc kubenswrapper[4853]: I0127 18:42:38.118558 4853 state_mem.go:35] "Initializing new in-memory state store" Jan 27 18:42:38 crc kubenswrapper[4853]: E0127 18:42:38.164485 4853 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Jan 27 18:42:38 crc kubenswrapper[4853]: I0127 18:42:38.172437 4853 manager.go:334] "Starting Device Plugin manager" Jan 27 18:42:38 crc kubenswrapper[4853]: I0127 18:42:38.172496 4853 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 27 18:42:38 crc kubenswrapper[4853]: I0127 18:42:38.172509 4853 server.go:79] "Starting device plugin registration server" Jan 27 18:42:38 crc kubenswrapper[4853]: I0127 18:42:38.172897 4853 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 27 18:42:38 crc kubenswrapper[4853]: I0127 18:42:38.172923 4853 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 27 18:42:38 crc kubenswrapper[4853]: I0127 18:42:38.173312 4853 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Jan 27 18:42:38 crc kubenswrapper[4853]: I0127 18:42:38.173377 4853 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Jan 27 18:42:38 crc kubenswrapper[4853]: I0127 18:42:38.173383 4853 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 27 18:42:38 crc kubenswrapper[4853]: E0127 18:42:38.179639 4853 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Jan 27 18:42:38 crc kubenswrapper[4853]: I0127 18:42:38.211379 4853 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-etcd/etcd-crc","openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc"] Jan 27 18:42:38 crc kubenswrapper[4853]: 
I0127 18:42:38.211493 4853 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 18:42:38 crc kubenswrapper[4853]: I0127 18:42:38.212652 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:42:38 crc kubenswrapper[4853]: I0127 18:42:38.212691 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:42:38 crc kubenswrapper[4853]: I0127 18:42:38.212705 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:42:38 crc kubenswrapper[4853]: I0127 18:42:38.212847 4853 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 18:42:38 crc kubenswrapper[4853]: I0127 18:42:38.212997 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 27 18:42:38 crc kubenswrapper[4853]: I0127 18:42:38.213064 4853 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 18:42:38 crc kubenswrapper[4853]: I0127 18:42:38.215616 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:42:38 crc kubenswrapper[4853]: I0127 18:42:38.215664 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:42:38 crc kubenswrapper[4853]: I0127 18:42:38.215677 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:42:38 crc kubenswrapper[4853]: I0127 18:42:38.215966 4853 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 18:42:38 crc kubenswrapper[4853]: I0127 18:42:38.216271 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:42:38 crc kubenswrapper[4853]: I0127 18:42:38.216320 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:42:38 crc kubenswrapper[4853]: I0127 18:42:38.216331 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:42:38 crc kubenswrapper[4853]: I0127 18:42:38.216339 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Jan 27 18:42:38 crc kubenswrapper[4853]: I0127 18:42:38.216415 4853 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 18:42:38 crc kubenswrapper[4853]: I0127 18:42:38.216987 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:42:38 crc kubenswrapper[4853]: I0127 18:42:38.217006 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:42:38 crc kubenswrapper[4853]: I0127 18:42:38.217013 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:42:38 crc kubenswrapper[4853]: I0127 18:42:38.217384 4853 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 18:42:38 crc kubenswrapper[4853]: I0127 18:42:38.217603 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 18:42:38 crc kubenswrapper[4853]: I0127 18:42:38.217652 4853 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 18:42:38 crc kubenswrapper[4853]: I0127 18:42:38.218544 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:42:38 crc kubenswrapper[4853]: I0127 18:42:38.218577 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:42:38 crc kubenswrapper[4853]: I0127 18:42:38.218587 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:42:38 crc kubenswrapper[4853]: I0127 18:42:38.218587 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:42:38 crc kubenswrapper[4853]: I0127 18:42:38.218616 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:42:38 crc kubenswrapper[4853]: I0127 18:42:38.218624 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:42:38 crc kubenswrapper[4853]: I0127 18:42:38.218713 4853 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 18:42:38 crc kubenswrapper[4853]: I0127 18:42:38.218857 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 27 18:42:38 crc kubenswrapper[4853]: I0127 18:42:38.218894 4853 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 18:42:38 crc kubenswrapper[4853]: I0127 18:42:38.219279 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:42:38 crc kubenswrapper[4853]: I0127 18:42:38.219421 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:42:38 crc kubenswrapper[4853]: I0127 18:42:38.219443 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:42:38 crc kubenswrapper[4853]: I0127 18:42:38.219311 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:42:38 crc kubenswrapper[4853]: I0127 18:42:38.219569 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:42:38 crc kubenswrapper[4853]: I0127 18:42:38.219586 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:42:38 crc kubenswrapper[4853]: I0127 18:42:38.219674 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:42:38 crc kubenswrapper[4853]: I0127 18:42:38.219733 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 27 18:42:38 crc kubenswrapper[4853]: I0127 18:42:38.219773 4853 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 18:42:38 crc kubenswrapper[4853]: I0127 18:42:38.219735 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:42:38 crc kubenswrapper[4853]: I0127 18:42:38.219893 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:42:38 crc kubenswrapper[4853]: I0127 18:42:38.220342 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:42:38 crc kubenswrapper[4853]: I0127 18:42:38.220360 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:42:38 crc kubenswrapper[4853]: I0127 18:42:38.220367 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:42:38 crc kubenswrapper[4853]: E0127 18:42:38.264522 4853 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.174:6443: connect: connection refused" interval="400ms" Jan 27 18:42:38 crc kubenswrapper[4853]: I0127 18:42:38.273255 4853 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 18:42:38 crc kubenswrapper[4853]: I0127 18:42:38.274412 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:42:38 crc kubenswrapper[4853]: I0127 18:42:38.274540 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:42:38 crc kubenswrapper[4853]: I0127 18:42:38.274642 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:42:38 crc kubenswrapper[4853]: I0127 18:42:38.274745 4853 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 27 18:42:38 crc kubenswrapper[4853]: E0127 18:42:38.275255 4853 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.174:6443: connect: connection refused" node="crc" Jan 27 18:42:38 crc kubenswrapper[4853]: I0127 18:42:38.280334 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 27 18:42:38 crc kubenswrapper[4853]: I0127 18:42:38.280375 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 27 18:42:38 crc kubenswrapper[4853]: I0127 18:42:38.280401 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod 
\"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 18:42:38 crc kubenswrapper[4853]: I0127 18:42:38.280419 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 18:42:38 crc kubenswrapper[4853]: I0127 18:42:38.280436 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 27 18:42:38 crc kubenswrapper[4853]: I0127 18:42:38.280490 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 27 18:42:38 crc kubenswrapper[4853]: I0127 18:42:38.280557 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 27 18:42:38 crc kubenswrapper[4853]: I0127 18:42:38.280575 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 27 18:42:38 crc kubenswrapper[4853]: I0127 18:42:38.280618 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 27 18:42:38 crc kubenswrapper[4853]: I0127 18:42:38.280649 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 18:42:38 crc kubenswrapper[4853]: I0127 18:42:38.280669 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 27 18:42:38 crc kubenswrapper[4853]: I0127 18:42:38.280684 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: 
\"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 27 18:42:38 crc kubenswrapper[4853]: I0127 18:42:38.280698 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 27 18:42:38 crc kubenswrapper[4853]: I0127 18:42:38.280716 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 27 18:42:38 crc kubenswrapper[4853]: I0127 18:42:38.280747 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 27 18:42:38 crc kubenswrapper[4853]: I0127 18:42:38.381777 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 18:42:38 crc kubenswrapper[4853]: I0127 18:42:38.381826 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 27 18:42:38 crc kubenswrapper[4853]: I0127 18:42:38.381844 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 27 18:42:38 crc kubenswrapper[4853]: I0127 18:42:38.381858 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 27 18:42:38 crc kubenswrapper[4853]: I0127 18:42:38.381873 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 27 18:42:38 crc kubenswrapper[4853]: I0127 18:42:38.381886 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 27 18:42:38 crc kubenswrapper[4853]: I0127 18:42:38.381900 4853 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 27 18:42:38 crc kubenswrapper[4853]: I0127 18:42:38.381916 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 27 18:42:38 crc kubenswrapper[4853]: I0127 18:42:38.381934 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 18:42:38 crc kubenswrapper[4853]: I0127 18:42:38.381948 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 18:42:38 crc kubenswrapper[4853]: I0127 18:42:38.381963 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 27 18:42:38 crc kubenswrapper[4853]: I0127 18:42:38.381976 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 27 18:42:38 crc kubenswrapper[4853]: I0127 18:42:38.381989 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 27 18:42:38 crc kubenswrapper[4853]: I0127 18:42:38.382003 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 27 18:42:38 crc kubenswrapper[4853]: I0127 18:42:38.382028 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 27 18:42:38 crc kubenswrapper[4853]: I0127 18:42:38.382430 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 27 18:42:38 crc 
kubenswrapper[4853]: I0127 18:42:38.382467 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 27 18:42:38 crc kubenswrapper[4853]: I0127 18:42:38.382444 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 18:42:38 crc kubenswrapper[4853]: I0127 18:42:38.382479 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 27 18:42:38 crc kubenswrapper[4853]: I0127 18:42:38.382495 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 27 18:42:38 crc kubenswrapper[4853]: I0127 18:42:38.382432 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 27 18:42:38 crc kubenswrapper[4853]: I0127 18:42:38.382537 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 27 18:42:38 crc kubenswrapper[4853]: I0127 18:42:38.382542 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 27 18:42:38 crc kubenswrapper[4853]: I0127 18:42:38.382500 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 18:42:38 crc kubenswrapper[4853]: I0127 18:42:38.382505 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 18:42:38 crc kubenswrapper[4853]: I0127 18:42:38.382520 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " 
pod="openshift-etcd/etcd-crc" Jan 27 18:42:38 crc kubenswrapper[4853]: I0127 18:42:38.382552 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 27 18:42:38 crc kubenswrapper[4853]: I0127 18:42:38.382526 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 27 18:42:38 crc kubenswrapper[4853]: I0127 18:42:38.382581 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 27 18:42:38 crc kubenswrapper[4853]: I0127 18:42:38.382597 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 27 18:42:38 crc kubenswrapper[4853]: I0127 18:42:38.475685 4853 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 18:42:38 crc kubenswrapper[4853]: I0127 18:42:38.477432 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:42:38 crc kubenswrapper[4853]: I0127 18:42:38.477462 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:42:38 crc kubenswrapper[4853]: I0127 18:42:38.477471 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:42:38 crc kubenswrapper[4853]: I0127 18:42:38.477490 4853 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 27 18:42:38 crc kubenswrapper[4853]: E0127 18:42:38.477974 4853 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.174:6443: connect: connection refused" node="crc" Jan 27 18:42:38 crc kubenswrapper[4853]: I0127 18:42:38.542931 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 27 18:42:38 crc kubenswrapper[4853]: I0127 18:42:38.556804 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 18:42:38 crc kubenswrapper[4853]: I0127 18:42:38.568471 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Jan 27 18:42:38 crc kubenswrapper[4853]: I0127 18:42:38.574735 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 27 18:42:38 crc kubenswrapper[4853]: I0127 18:42:38.578235 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 27 18:42:38 crc kubenswrapper[4853]: W0127 18:42:38.588199 4853 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd1b160f5dda77d281dd8e69ec8d817f9.slice/crio-44cb602946db96635227b14da37d6544d603ae56f3407e1dbe37e4cacd0a5c24 WatchSource:0}: Error finding container 44cb602946db96635227b14da37d6544d603ae56f3407e1dbe37e4cacd0a5c24: Status 404 returned error can't find the container with id 44cb602946db96635227b14da37d6544d603ae56f3407e1dbe37e4cacd0a5c24 Jan 27 18:42:38 crc kubenswrapper[4853]: W0127 18:42:38.590943 4853 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice/crio-a4d6becd845747a3f10bc8fde2f5569fe723b38cefa68f16e7291dcfcc751936 WatchSource:0}: Error finding container a4d6becd845747a3f10bc8fde2f5569fe723b38cefa68f16e7291dcfcc751936: Status 404 returned error can't find the container with id a4d6becd845747a3f10bc8fde2f5569fe723b38cefa68f16e7291dcfcc751936 Jan 27 18:42:38 crc kubenswrapper[4853]: W0127 18:42:38.593781 4853 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2139d3e2895fc6797b9c76a1b4c9886d.slice/crio-fe9fa61514652b6043941d3d2d87501a77980c237023b3820b9f1d199814df7a WatchSource:0}: Error finding container fe9fa61514652b6043941d3d2d87501a77980c237023b3820b9f1d199814df7a: Status 404 returned error can't find the container with id fe9fa61514652b6043941d3d2d87501a77980c237023b3820b9f1d199814df7a Jan 27 18:42:38 crc kubenswrapper[4853]: W0127 18:42:38.599897 4853 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3dcd261975c3d6b9a6ad6367fd4facd3.slice/crio-c155077293147c772588d842b58f761b3cfb745ae33a72b4cbc8daa4f3384155 WatchSource:0}: Error finding container c155077293147c772588d842b58f761b3cfb745ae33a72b4cbc8daa4f3384155: Status 404 returned error can't find the container with id c155077293147c772588d842b58f761b3cfb745ae33a72b4cbc8daa4f3384155 Jan 27 18:42:38 crc kubenswrapper[4853]: W0127 18:42:38.601945 4853 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf614b9022728cf315e60c057852e563e.slice/crio-d1453d33fee2672baae5403490ea06bc72817ddf9d974d638f408c178a73bfd0 WatchSource:0}: Error finding container d1453d33fee2672baae5403490ea06bc72817ddf9d974d638f408c178a73bfd0: Status 404 returned error can't find the container with id d1453d33fee2672baae5403490ea06bc72817ddf9d974d638f408c178a73bfd0 Jan 27 18:42:38 crc kubenswrapper[4853]: E0127 18:42:38.665441 4853 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.174:6443: connect: connection refused" interval="800ms" Jan 27 18:42:38 crc kubenswrapper[4853]: I0127 18:42:38.878444 4853 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 18:42:38 crc kubenswrapper[4853]: I0127 18:42:38.879872 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:42:38 crc kubenswrapper[4853]: I0127 18:42:38.879910 4853 kubelet_node_status.go:724] "Recording event message for 
node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:42:38 crc kubenswrapper[4853]: I0127 18:42:38.879923 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:42:38 crc kubenswrapper[4853]: I0127 18:42:38.879947 4853 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 27 18:42:38 crc kubenswrapper[4853]: E0127 18:42:38.880511 4853 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.174:6443: connect: connection refused" node="crc" Jan 27 18:42:38 crc kubenswrapper[4853]: W0127 18:42:38.892582 4853 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.174:6443: connect: connection refused Jan 27 18:42:38 crc kubenswrapper[4853]: E0127 18:42:38.892687 4853 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.174:6443: connect: connection refused" logger="UnhandledError" Jan 27 18:42:39 crc kubenswrapper[4853]: I0127 18:42:39.062849 4853 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.174:6443: connect: connection refused Jan 27 18:42:39 crc kubenswrapper[4853]: I0127 18:42:39.063831 4853 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-08 19:28:09.261057665 +0000 UTC Jan 27 18:42:39 crc kubenswrapper[4853]: W0127 18:42:39.100803 4853 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.174:6443: connect: connection refused Jan 27 18:42:39 crc kubenswrapper[4853]: E0127 18:42:39.100903 4853 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.174:6443: connect: connection refused" logger="UnhandledError" Jan 27 18:42:39 crc kubenswrapper[4853]: I0127 18:42:39.114604 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"d1453d33fee2672baae5403490ea06bc72817ddf9d974d638f408c178a73bfd0"} Jan 27 18:42:39 crc kubenswrapper[4853]: I0127 18:42:39.115421 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"c155077293147c772588d842b58f761b3cfb745ae33a72b4cbc8daa4f3384155"} Jan 27 18:42:39 crc kubenswrapper[4853]: I0127 18:42:39.117229 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" 
event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"fe9fa61514652b6043941d3d2d87501a77980c237023b3820b9f1d199814df7a"} Jan 27 18:42:39 crc kubenswrapper[4853]: I0127 18:42:39.118225 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"a4d6becd845747a3f10bc8fde2f5569fe723b38cefa68f16e7291dcfcc751936"} Jan 27 18:42:39 crc kubenswrapper[4853]: I0127 18:42:39.119070 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"44cb602946db96635227b14da37d6544d603ae56f3407e1dbe37e4cacd0a5c24"} Jan 27 18:42:39 crc kubenswrapper[4853]: W0127 18:42:39.295791 4853 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.174:6443: connect: connection refused Jan 27 18:42:39 crc kubenswrapper[4853]: E0127 18:42:39.295877 4853 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.174:6443: connect: connection refused" logger="UnhandledError" Jan 27 18:42:39 crc kubenswrapper[4853]: E0127 18:42:39.466037 4853 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.174:6443: connect: connection refused" interval="1.6s" Jan 27 18:42:39 crc kubenswrapper[4853]: W0127 18:42:39.570019 4853 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.174:6443: connect: connection refused Jan 27 18:42:39 crc kubenswrapper[4853]: E0127 18:42:39.570092 4853 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.174:6443: connect: connection refused" logger="UnhandledError" Jan 27 18:42:39 crc kubenswrapper[4853]: I0127 18:42:39.681132 4853 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 18:42:39 crc kubenswrapper[4853]: I0127 18:42:39.682609 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:42:39 crc kubenswrapper[4853]: I0127 18:42:39.682710 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:42:39 crc kubenswrapper[4853]: I0127 18:42:39.682721 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:42:39 crc kubenswrapper[4853]: I0127 18:42:39.682769 4853 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 27 18:42:39 crc kubenswrapper[4853]: E0127 18:42:39.683271 4853 kubelet_node_status.go:99] "Unable to register node with API 
server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.174:6443: connect: connection refused" node="crc" Jan 27 18:42:39 crc kubenswrapper[4853]: I0127 18:42:39.989326 4853 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Jan 27 18:42:39 crc kubenswrapper[4853]: E0127 18:42:39.991549 4853 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.174:6443: connect: connection refused" logger="UnhandledError" Jan 27 18:42:40 crc kubenswrapper[4853]: I0127 18:42:40.063483 4853 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.174:6443: connect: connection refused Jan 27 18:42:40 crc kubenswrapper[4853]: I0127 18:42:40.064526 4853 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-04 12:57:20.352490603 +0000 UTC Jan 27 18:42:40 crc kubenswrapper[4853]: I0127 18:42:40.127801 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"b210c08a6d15dd6d40daabfcc4cb94985e2a3957ffa501b32d5331f3d20f8908"} Jan 27 18:42:40 crc kubenswrapper[4853]: I0127 18:42:40.127871 4853 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 18:42:40 crc kubenswrapper[4853]: I0127 18:42:40.127877 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"cd606f868e97e5c0f90517a0de662da2512679e27b0ececbcaa02c9f7e79c4b6"} Jan 27 18:42:40 crc kubenswrapper[4853]: I0127 18:42:40.128078 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"e6f38d96da9d500a022fd5b7fbe4019623d08fb9733ba3b1a5f2f107f1901a28"} Jan 27 18:42:40 crc kubenswrapper[4853]: I0127 18:42:40.128177 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"0ae32d32392bded0188e0b83c05a33e57dc10be293ddee84b8b12416d0e64fc6"} Jan 27 18:42:40 crc kubenswrapper[4853]: I0127 18:42:40.129224 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:42:40 crc kubenswrapper[4853]: I0127 18:42:40.129288 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:42:40 crc kubenswrapper[4853]: I0127 18:42:40.129311 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:42:40 crc kubenswrapper[4853]: I0127 18:42:40.130627 4853 generic.go:334] "Generic (PLEG): container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" 
containerID="b9f4494451f75c64fbdca006455d3ce09b14f45939b855d782629e25af517ed0" exitCode=0 Jan 27 18:42:40 crc kubenswrapper[4853]: I0127 18:42:40.130701 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"b9f4494451f75c64fbdca006455d3ce09b14f45939b855d782629e25af517ed0"} Jan 27 18:42:40 crc kubenswrapper[4853]: I0127 18:42:40.130757 4853 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 18:42:40 crc kubenswrapper[4853]: I0127 18:42:40.131653 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:42:40 crc kubenswrapper[4853]: I0127 18:42:40.131701 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:42:40 crc kubenswrapper[4853]: I0127 18:42:40.131721 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:42:40 crc kubenswrapper[4853]: I0127 18:42:40.133087 4853 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="15ab34ce94a12a65d8b187ce438f27e37d4b45e0b9ed080c79d6e4633e777658" exitCode=0 Jan 27 18:42:40 crc kubenswrapper[4853]: I0127 18:42:40.133167 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"15ab34ce94a12a65d8b187ce438f27e37d4b45e0b9ed080c79d6e4633e777658"} Jan 27 18:42:40 crc kubenswrapper[4853]: I0127 18:42:40.133374 4853 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 18:42:40 crc kubenswrapper[4853]: I0127 18:42:40.136481 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:42:40 crc kubenswrapper[4853]: I0127 18:42:40.136558 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:42:40 crc kubenswrapper[4853]: I0127 18:42:40.136584 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:42:40 crc kubenswrapper[4853]: I0127 18:42:40.136702 4853 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="00d05692e328aacfc251e3b583e1da6d1f4f677e93d07cec2aaa7aea5c3038cc" exitCode=0 Jan 27 18:42:40 crc kubenswrapper[4853]: I0127 18:42:40.136827 4853 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 18:42:40 crc kubenswrapper[4853]: I0127 18:42:40.136855 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"00d05692e328aacfc251e3b583e1da6d1f4f677e93d07cec2aaa7aea5c3038cc"} Jan 27 18:42:40 crc kubenswrapper[4853]: I0127 18:42:40.138508 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:42:40 crc kubenswrapper[4853]: I0127 18:42:40.138573 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:42:40 crc kubenswrapper[4853]: I0127 18:42:40.138601 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Jan 27 18:42:40 crc kubenswrapper[4853]: I0127 18:42:40.142007 4853 generic.go:334] "Generic (PLEG): container finished" podID="d1b160f5dda77d281dd8e69ec8d817f9" containerID="7e783bc12efaa8b16a12346ff490c56587678e9c57bc396046989f216d49373b" exitCode=0 Jan 27 18:42:40 crc kubenswrapper[4853]: I0127 18:42:40.142102 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerDied","Data":"7e783bc12efaa8b16a12346ff490c56587678e9c57bc396046989f216d49373b"} Jan 27 18:42:40 crc kubenswrapper[4853]: I0127 18:42:40.142185 4853 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 18:42:40 crc kubenswrapper[4853]: I0127 18:42:40.142289 4853 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 18:42:40 crc kubenswrapper[4853]: I0127 18:42:40.143365 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:42:40 crc kubenswrapper[4853]: I0127 18:42:40.143403 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:42:40 crc kubenswrapper[4853]: I0127 18:42:40.143421 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:42:40 crc kubenswrapper[4853]: I0127 18:42:40.143916 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:42:40 crc kubenswrapper[4853]: I0127 18:42:40.143961 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:42:40 crc kubenswrapper[4853]: I0127 18:42:40.143983 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:42:41 crc kubenswrapper[4853]: W0127 18:42:41.049357 4853 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.174:6443: connect: connection refused Jan 27 18:42:41 crc kubenswrapper[4853]: E0127 18:42:41.049750 4853 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.174:6443: connect: connection refused" logger="UnhandledError" Jan 27 18:42:41 crc kubenswrapper[4853]: I0127 18:42:41.063015 4853 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.174:6443: connect: connection refused Jan 27 18:42:41 crc kubenswrapper[4853]: I0127 18:42:41.065182 4853 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-16 20:27:25.673281697 +0000 UTC Jan 27 18:42:41 crc kubenswrapper[4853]: E0127 18:42:41.066656 4853 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.174:6443: connect: connection 
refused" interval="3.2s" Jan 27 18:42:41 crc kubenswrapper[4853]: W0127 18:42:41.088993 4853 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.174:6443: connect: connection refused Jan 27 18:42:41 crc kubenswrapper[4853]: E0127 18:42:41.089089 4853 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.174:6443: connect: connection refused" logger="UnhandledError" Jan 27 18:42:41 crc kubenswrapper[4853]: I0127 18:42:41.147064 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"9ab7681c5d4c9e9e1e003ecff21e3a39e40164693ef6b8fcdded71650dcff4ab"} Jan 27 18:42:41 crc kubenswrapper[4853]: I0127 18:42:41.147110 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"60ef6ab0f3537b63366418829fb851bf2b21df5c3509f1e6ea61a3ba0530f537"} Jan 27 18:42:41 crc kubenswrapper[4853]: I0127 18:42:41.147137 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"d34d7b284bf0d4da5b618f3afc8546d8de1c57118035eca06d1d8d53afd59503"} Jan 27 18:42:41 crc kubenswrapper[4853]: I0127 18:42:41.147209 4853 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 18:42:41 crc kubenswrapper[4853]: I0127 18:42:41.148067 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:42:41 crc kubenswrapper[4853]: I0127 18:42:41.148094 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:42:41 crc kubenswrapper[4853]: I0127 18:42:41.148106 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:42:41 crc kubenswrapper[4853]: I0127 18:42:41.149268 4853 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="d972c76164f33d83bd6d211da2b4ecec08c5d88dd4cd474c5d5a8122649ea1cd" exitCode=0 Jan 27 18:42:41 crc kubenswrapper[4853]: I0127 18:42:41.149326 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"d972c76164f33d83bd6d211da2b4ecec08c5d88dd4cd474c5d5a8122649ea1cd"} Jan 27 18:42:41 crc kubenswrapper[4853]: I0127 18:42:41.149393 4853 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 18:42:41 crc kubenswrapper[4853]: I0127 18:42:41.150704 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:42:41 crc kubenswrapper[4853]: I0127 18:42:41.150732 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:42:41 crc kubenswrapper[4853]: I0127 18:42:41.150745 4853 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:42:41 crc kubenswrapper[4853]: I0127 18:42:41.155814 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"0058055ea5c0242bdb042c45a00203d344383e845de280c81bc7695416563666"} Jan 27 18:42:41 crc kubenswrapper[4853]: I0127 18:42:41.155860 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"58d32f27b39964fe7195a0e25264b28a01c6c29753912edac25d07eef4f37cc7"} Jan 27 18:42:41 crc kubenswrapper[4853]: I0127 18:42:41.155876 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"a941071ee01b24389cae64dfc6cd851cde6b352121235ebf9e9bec77f8bcfb69"} Jan 27 18:42:41 crc kubenswrapper[4853]: I0127 18:42:41.155887 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"7cfb91267c8977bea938d9584cf8047ae6f6b4bff6a9d74c623f8903cd3416ba"} Jan 27 18:42:41 crc kubenswrapper[4853]: I0127 18:42:41.160614 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"9d1c368397595a923d917720ba80fdbcdd3700eaf983e6f50f1be14332fc13b3"} Jan 27 18:42:41 crc kubenswrapper[4853]: I0127 18:42:41.160685 4853 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 18:42:41 crc kubenswrapper[4853]: I0127 18:42:41.160781 4853 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 18:42:41 crc kubenswrapper[4853]: I0127 18:42:41.161893 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:42:41 crc kubenswrapper[4853]: I0127 18:42:41.161921 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:42:41 crc kubenswrapper[4853]: I0127 18:42:41.161932 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:42:41 crc kubenswrapper[4853]: I0127 18:42:41.162937 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:42:41 crc kubenswrapper[4853]: I0127 18:42:41.162972 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:42:41 crc kubenswrapper[4853]: I0127 18:42:41.162985 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:42:41 crc kubenswrapper[4853]: I0127 18:42:41.283528 4853 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 18:42:41 crc kubenswrapper[4853]: I0127 18:42:41.285585 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:42:41 crc kubenswrapper[4853]: I0127 18:42:41.285642 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 
18:42:41 crc kubenswrapper[4853]: I0127 18:42:41.285657 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:42:41 crc kubenswrapper[4853]: I0127 18:42:41.285688 4853 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 27 18:42:41 crc kubenswrapper[4853]: E0127 18:42:41.286343 4853 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.174:6443: connect: connection refused" node="crc" Jan 27 18:42:42 crc kubenswrapper[4853]: I0127 18:42:42.065418 4853 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-12 13:32:44.540983817 +0000 UTC Jan 27 18:42:42 crc kubenswrapper[4853]: I0127 18:42:42.165231 4853 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="b9435046e30c8cda89d2ab6f312406bc2cc6db0c03ae6cdea1eb5acf8a804df1" exitCode=0 Jan 27 18:42:42 crc kubenswrapper[4853]: I0127 18:42:42.165308 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"b9435046e30c8cda89d2ab6f312406bc2cc6db0c03ae6cdea1eb5acf8a804df1"} Jan 27 18:42:42 crc kubenswrapper[4853]: I0127 18:42:42.165327 4853 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 18:42:42 crc kubenswrapper[4853]: I0127 18:42:42.166402 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:42:42 crc kubenswrapper[4853]: I0127 18:42:42.166480 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:42:42 crc kubenswrapper[4853]: I0127 18:42:42.166512 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:42:42 crc kubenswrapper[4853]: I0127 18:42:42.170563 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"56bbd9ff3f17c00d1976630c99651087c4c06ff17df7e451fdda5d01c65dbfa6"} Jan 27 18:42:42 crc kubenswrapper[4853]: I0127 18:42:42.170616 4853 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 18:42:42 crc kubenswrapper[4853]: I0127 18:42:42.170664 4853 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 18:42:42 crc kubenswrapper[4853]: I0127 18:42:42.170663 4853 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 18:42:42 crc kubenswrapper[4853]: I0127 18:42:42.170699 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 27 18:42:42 crc kubenswrapper[4853]: I0127 18:42:42.171552 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:42:42 crc kubenswrapper[4853]: I0127 18:42:42.171585 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:42:42 crc kubenswrapper[4853]: I0127 18:42:42.171598 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" 
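Each kubernetes.io/kubelet-serving record above reports the same expiration (2026-02-24 05:53:03 UTC) but a different rotation deadline, because the certificate manager re-derives a jittered rotation point inside the certificate's validity window on every pass. The stdlib Go sketch below illustrates that idea only; the 70–90% band is an assumed jitter range, not a value taken from the log, and the issuance time is invented.

package main

import (
	"fmt"
	"math/rand"
	"time"
)

// rotationDeadline picks a jittered point inside the certificate's validity
// window; the 70-90% band here is an assumption for illustration.
func rotationDeadline(notBefore, notAfter time.Time) time.Time {
	lifetime := notAfter.Sub(notBefore)
	frac := 0.7 + 0.2*rand.Float64()
	return notBefore.Add(time.Duration(float64(lifetime) * frac))
}

func main() {
	notAfter, _ := time.Parse(time.RFC3339, "2026-02-24T05:53:03Z") // expiration from the log
	notBefore := notAfter.AddDate(-1, 0, 0)                         // assumed issuance a year earlier
	for i := 0; i < 3; i++ {
		// Re-evaluating yields a different deadline each time, which is why
		// consecutive log records report different dates.
		fmt.Println("rotation deadline is", rotationDeadline(notBefore, notAfter).UTC())
	}
}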
Jan 27 18:42:42 crc kubenswrapper[4853]: I0127 18:42:42.171552 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 18:42:42 crc kubenswrapper[4853]: I0127 18:42:42.171681 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 18:42:42 crc kubenswrapper[4853]: I0127 18:42:42.171695 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 18:42:42 crc kubenswrapper[4853]: I0127 18:42:42.174388 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 18:42:42 crc kubenswrapper[4853]: I0127 18:42:42.174415 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 18:42:42 crc kubenswrapper[4853]: I0127 18:42:42.174427 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 18:42:43 crc kubenswrapper[4853]: I0127 18:42:43.066427 4853 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-16 04:22:35.477047733 +0000 UTC
Jan 27 18:42:43 crc kubenswrapper[4853]: I0127 18:42:43.177230 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"d7d4896cb23e4a91356e7fabb5dfcf05e46bf7e8c34b6bd73d63a16422d015ba"}
Jan 27 18:42:43 crc kubenswrapper[4853]: I0127 18:42:43.177272 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"14e9331353060208631cfca86d748339935f8581d50a234f2864eaffdd98ff39"}
Jan 27 18:42:43 crc kubenswrapper[4853]: I0127 18:42:43.177284 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"a2dcf34028a2a1a0d41f57ebced90e711bfa6d0f199a2b12bc2bcd6d0642d10b"}
Jan 27 18:42:43 crc kubenswrapper[4853]: I0127 18:42:43.177294 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"8e57571b0b75d3ac6eaa28ddbc474ddf47f632c158372a965dfe4c30f54d8a2d"}
Jan 27 18:42:43 crc kubenswrapper[4853]: I0127 18:42:43.177302 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"a02f84ba42baba137145a54bfa742b12b0f94aaf41801347eb8a803a6acd1b2d"}
Jan 27 18:42:43 crc kubenswrapper[4853]: I0127 18:42:43.177243 4853 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jan 27 18:42:43 crc kubenswrapper[4853]: I0127 18:42:43.177338 4853 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 27 18:42:43 crc kubenswrapper[4853]: I0127 18:42:43.177383 4853 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 27 18:42:43 crc kubenswrapper[4853]: I0127 18:42:43.177338 4853 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 27 18:42:43 crc kubenswrapper[4853]: I0127 18:42:43.178312 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 18:42:43 crc kubenswrapper[4853]: I0127 18:42:43.178360 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 18:42:43 crc kubenswrapper[4853]: I0127 18:42:43.178378 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 18:42:43 crc kubenswrapper[4853]: I0127 18:42:43.178385 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 18:42:43 crc kubenswrapper[4853]: I0127 18:42:43.178411 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 18:42:43 crc kubenswrapper[4853]: I0127 18:42:43.178426 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 18:42:43 crc kubenswrapper[4853]: I0127 18:42:43.178426 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 18:42:43 crc kubenswrapper[4853]: I0127 18:42:43.178496 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 18:42:43 crc kubenswrapper[4853]: I0127 18:42:43.178518 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 18:42:44 crc kubenswrapper[4853]: I0127 18:42:44.066909 4853 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-16 01:27:42.093452941 +0000 UTC
Jan 27 18:42:44 crc kubenswrapper[4853]: I0127 18:42:44.181158 4853 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 27 18:42:44 crc kubenswrapper[4853]: I0127 18:42:44.182725 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 18:42:44 crc kubenswrapper[4853]: I0127 18:42:44.182793 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 18:42:44 crc kubenswrapper[4853]: I0127 18:42:44.182807 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 18:42:44 crc kubenswrapper[4853]: I0127 18:42:44.364665 4853 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates
Jan 27 18:42:44 crc kubenswrapper[4853]: I0127 18:42:44.486784 4853 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 27 18:42:44 crc kubenswrapper[4853]: I0127 18:42:44.488020 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 18:42:44 crc kubenswrapper[4853]: I0127 18:42:44.488083 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 18:42:44 crc kubenswrapper[4853]: I0127 18:42:44.488095 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 18:42:44 crc kubenswrapper[4853]: I0127 18:42:44.488115 4853 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Jan 27 18:42:44 crc kubenswrapper[4853]: I0127 18:42:44.512938 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 27 18:42:44 crc kubenswrapper[4853]: I0127 18:42:44.513073 4853 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jan 27 18:42:44 crc kubenswrapper[4853]: I0127 18:42:44.513110 4853 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 27 18:42:44 crc kubenswrapper[4853]: I0127 18:42:44.514095 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 18:42:44 crc kubenswrapper[4853]: I0127 18:42:44.514151 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 18:42:44 crc kubenswrapper[4853]: I0127 18:42:44.514164 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 18:42:44 crc kubenswrapper[4853]: I0127 18:42:44.995701 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 27 18:42:45 crc kubenswrapper[4853]: I0127 18:42:45.067824 4853 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-15 00:27:04.684109774 +0000 UTC
Jan 27 18:42:45 crc kubenswrapper[4853]: I0127 18:42:45.183272 4853 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jan 27 18:42:45 crc kubenswrapper[4853]: I0127 18:42:45.183329 4853 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 27 18:42:45 crc kubenswrapper[4853]: I0127 18:42:45.184040 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 18:42:45 crc kubenswrapper[4853]: I0127 18:42:45.184063 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 18:42:45 crc kubenswrapper[4853]: I0127 18:42:45.184071 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 18:42:45 crc kubenswrapper[4853]: I0127 18:42:45.356037 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 27 18:42:45 crc kubenswrapper[4853]: I0127 18:42:45.356271 4853 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 27 18:42:45 crc kubenswrapper[4853]: I0127 18:42:45.357414 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 18:42:45 crc kubenswrapper[4853]: I0127 18:42:45.357447 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 18:42:45 crc kubenswrapper[4853]: I0127 18:42:45.357456 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 18:42:46 crc kubenswrapper[4853]: I0127 18:42:46.068341 4853 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-12 05:15:01.436637834 +0000 UTC
Jan 27 18:42:46 crc kubenswrapper[4853]: I0127 18:42:46.371883 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 27 18:42:46 crc kubenswrapper[4853]: I0127 18:42:46.372081 4853 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 27 18:42:46 crc kubenswrapper[4853]: I0127 18:42:46.373163 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 18:42:46 crc kubenswrapper[4853]: I0127 18:42:46.373194 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 18:42:46 crc kubenswrapper[4853]: I0127 18:42:46.373206 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 18:42:46 crc kubenswrapper[4853]: I0127 18:42:46.424927 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-crc"
Jan 27 18:42:46 crc kubenswrapper[4853]: I0127 18:42:46.425252 4853 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 27 18:42:46 crc kubenswrapper[4853]: I0127 18:42:46.426549 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 18:42:46 crc kubenswrapper[4853]: I0127 18:42:46.426633 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 18:42:46 crc kubenswrapper[4853]: I0127 18:42:46.426661 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 18:42:46 crc kubenswrapper[4853]: I0127 18:42:46.905893 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 27 18:42:46 crc kubenswrapper[4853]: I0127 18:42:46.906100 4853 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 27 18:42:46 crc kubenswrapper[4853]: I0127 18:42:46.907301 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 18:42:46 crc kubenswrapper[4853]: I0127 18:42:46.907354 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 18:42:46 crc kubenswrapper[4853]: I0127 18:42:46.907367 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 18:42:46 crc kubenswrapper[4853]: I0127 18:42:46.909780 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 27 18:42:47 crc kubenswrapper[4853]: I0127 18:42:47.069188 4853 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-16 22:10:55.041980716 +0000 UTC
Jan 27 18:42:47 crc kubenswrapper[4853]: I0127 18:42:47.187387 4853 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 27 18:42:47 crc kubenswrapper[4853]: I0127 18:42:47.188462 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 18:42:47 crc kubenswrapper[4853]: I0127 18:42:47.188498 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 18:42:47 crc kubenswrapper[4853]: I0127 18:42:47.188511 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 18:42:47 crc kubenswrapper[4853]: I0127 18:42:47.526888 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc"
Jan 27 18:42:47 crc kubenswrapper[4853]: I0127 18:42:47.527144 4853 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 27 18:42:47 crc kubenswrapper[4853]: I0127 18:42:47.528398 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 18:42:47 crc kubenswrapper[4853]: I0127 18:42:47.528431 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 18:42:47 crc kubenswrapper[4853]: I0127 18:42:47.528445 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 18:42:48 crc kubenswrapper[4853]: I0127 18:42:48.070218 4853 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-09 12:29:24.386934038 +0000 UTC
Jan 27 18:42:48 crc kubenswrapper[4853]: I0127 18:42:48.133792 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 27 18:42:48 crc kubenswrapper[4853]: E0127 18:42:48.179813 4853 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Jan 27 18:42:48 crc kubenswrapper[4853]: I0127 18:42:48.189061 4853 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 27 18:42:48 crc kubenswrapper[4853]: I0127 18:42:48.190157 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 18:42:48 crc kubenswrapper[4853]: I0127 18:42:48.190195 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 18:42:48 crc kubenswrapper[4853]: I0127 18:42:48.190208 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 18:42:48 crc kubenswrapper[4853]: I0127 18:42:48.356159 4853 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 27 18:42:48 crc kubenswrapper[4853]: I0127 18:42:48.356265 4853 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 27 18:42:49 crc kubenswrapper[4853]: I0127 18:42:49.070768 4853 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-08 21:09:35.205678019 +0000 UTC
Jan 27 18:42:49 crc kubenswrapper[4853]: I0127 18:42:49.964886 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 27 18:42:49 crc kubenswrapper[4853]: I0127 18:42:49.965301 4853 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 27 18:42:49 crc kubenswrapper[4853]: I0127 18:42:49.967465 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 18:42:49 crc kubenswrapper[4853]: I0127 18:42:49.967536 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
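The SyncLoop (probe) records trace the usual ordering for each static pod: the startup probe reports unhealthy, eventually started, and only then do readiness results carry weight (kube-controller-manager goes unhealthy at 18:42:45.356037 and 18:42:46.905893, started at 18:42:46.909780, and ready shortly after). The small Go sketch below illustrates that gating; the type and event names are invented for illustration and are not kubelet types.

package main

import "fmt"

// probeEvent condenses the fields of the SyncLoop (probe) records; the type
// and values are illustrative only.
type probeEvent struct {
	probe  string // "startup" or "readiness"
	status string // "unhealthy", "started", "" (unknown), "ready"
}

func main() {
	started := false
	events := []probeEvent{ // condensed from the kube-controller-manager records above
		{"startup", "unhealthy"},
		{"startup", "started"},
		{"readiness", ""},
		{"readiness", "ready"},
	}
	for _, e := range events {
		if e.probe == "startup" {
			started = e.status == "started"
		} else if e.probe == "readiness" && !started {
			// Readiness results are not acted on until the startup probe succeeds.
			continue
		}
		fmt.Printf("probe=%q status=%q started=%v\n", e.probe, e.status, started)
	}
}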
Jan 27 18:42:49 crc kubenswrapper[4853]: I0127 18:42:49.967562 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 18:42:49 crc kubenswrapper[4853]: I0127 18:42:49.974459 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 27 18:42:50 crc kubenswrapper[4853]: I0127 18:42:50.071187 4853 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-16 11:15:08.685765614 +0000 UTC
Jan 27 18:42:50 crc kubenswrapper[4853]: I0127 18:42:50.193965 4853 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 27 18:42:50 crc kubenswrapper[4853]: I0127 18:42:50.194702 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 18:42:50 crc kubenswrapper[4853]: I0127 18:42:50.194744 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 18:42:50 crc kubenswrapper[4853]: I0127 18:42:50.194757 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 18:42:51 crc kubenswrapper[4853]: I0127 18:42:51.072181 4853 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-04 16:21:17.649254354 +0000 UTC
Jan 27 18:42:51 crc kubenswrapper[4853]: W0127 18:42:51.891793 4853 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": net/http: TLS handshake timeout
Jan 27 18:42:51 crc kubenswrapper[4853]: I0127 18:42:51.891909 4853 trace.go:236] Trace[505241944]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (27-Jan-2026 18:42:41.890) (total time: 10001ms):
Jan 27 18:42:51 crc kubenswrapper[4853]: Trace[505241944]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (18:42:51.891)
Jan 27 18:42:51 crc kubenswrapper[4853]: Trace[505241944]: [10.001439537s] [10.001439537s] END
Jan 27 18:42:51 crc kubenswrapper[4853]: E0127 18:42:51.891947 4853 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError"
Jan 27 18:42:52 crc kubenswrapper[4853]: I0127 18:42:52.003426 4853 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403}
Jan 27 18:42:52 crc kubenswrapper[4853]: I0127 18:42:52.003485 4853 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403"
Jan 27 18:42:52 crc kubenswrapper[4853]: I0127 18:42:52.007885 4853 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403}
Jan 27 18:42:52 crc kubenswrapper[4853]: I0127 18:42:52.007937 4853 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403"
Jan 27 18:42:52 crc kubenswrapper[4853]: I0127 18:42:52.072997 4853 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-27 20:08:57.830496712 +0000 UTC
Jan 27 18:42:52 crc kubenswrapper[4853]: I0127 18:42:52.199052 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log"
Jan 27 18:42:52 crc kubenswrapper[4853]: I0127 18:42:52.200548 4853 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="56bbd9ff3f17c00d1976630c99651087c4c06ff17df7e451fdda5d01c65dbfa6" exitCode=255
Jan 27 18:42:52 crc kubenswrapper[4853]: I0127 18:42:52.200588 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"56bbd9ff3f17c00d1976630c99651087c4c06ff17df7e451fdda5d01c65dbfa6"}
Jan 27 18:42:52 crc kubenswrapper[4853]: I0127 18:42:52.200708 4853 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 27 18:42:52 crc kubenswrapper[4853]: I0127 18:42:52.201365 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 18:42:52 crc kubenswrapper[4853]: I0127 18:42:52.201435 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 18:42:52 crc kubenswrapper[4853]: I0127 18:42:52.201445 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 18:42:52 crc kubenswrapper[4853]: I0127 18:42:52.201918 4853 scope.go:117] "RemoveContainer" containerID="56bbd9ff3f17c00d1976630c99651087c4c06ff17df7e451fdda5d01c65dbfa6"
Jan 27 18:42:53 crc kubenswrapper[4853]: I0127 18:42:53.073581 4853 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-29 18:40:33.877358246 +0000 UTC
Jan 27 18:42:53 crc kubenswrapper[4853]: I0127 18:42:53.205051 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log"
Jan 27 18:42:53 crc kubenswrapper[4853]: I0127 18:42:53.206997 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"213668bb8a0478fe7fba2a64a45dd4aec0dfd6794a29977633b4e0fd1bcbe352"}
event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"213668bb8a0478fe7fba2a64a45dd4aec0dfd6794a29977633b4e0fd1bcbe352"} Jan 27 18:42:53 crc kubenswrapper[4853]: I0127 18:42:53.207245 4853 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 18:42:53 crc kubenswrapper[4853]: I0127 18:42:53.208047 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:42:53 crc kubenswrapper[4853]: I0127 18:42:53.208079 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:42:53 crc kubenswrapper[4853]: I0127 18:42:53.208091 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:42:54 crc kubenswrapper[4853]: I0127 18:42:54.074413 4853 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-20 17:42:02.004326879 +0000 UTC Jan 27 18:42:54 crc kubenswrapper[4853]: I0127 18:42:54.519053 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 18:42:54 crc kubenswrapper[4853]: I0127 18:42:54.519206 4853 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 18:42:54 crc kubenswrapper[4853]: I0127 18:42:54.519605 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 18:42:54 crc kubenswrapper[4853]: I0127 18:42:54.520179 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:42:54 crc kubenswrapper[4853]: I0127 18:42:54.520204 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:42:54 crc kubenswrapper[4853]: I0127 18:42:54.520212 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:42:54 crc kubenswrapper[4853]: I0127 18:42:54.525333 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 18:42:55 crc kubenswrapper[4853]: I0127 18:42:55.075023 4853 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-28 20:02:54.140276091 +0000 UTC Jan 27 18:42:55 crc kubenswrapper[4853]: I0127 18:42:55.211948 4853 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 18:42:55 crc kubenswrapper[4853]: I0127 18:42:55.213231 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:42:55 crc kubenswrapper[4853]: I0127 18:42:55.213290 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:42:55 crc kubenswrapper[4853]: I0127 18:42:55.213313 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:42:56 crc kubenswrapper[4853]: I0127 18:42:56.075939 4853 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-20 21:39:00.950368439 +0000 UTC Jan 27 18:42:56 crc kubenswrapper[4853]: I0127 18:42:56.214603 4853 
kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 18:42:56 crc kubenswrapper[4853]: I0127 18:42:56.215668 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:42:56 crc kubenswrapper[4853]: I0127 18:42:56.215729 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:42:56 crc kubenswrapper[4853]: I0127 18:42:56.215746 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:42:56 crc kubenswrapper[4853]: E0127 18:42:56.994731 4853 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": context deadline exceeded" interval="6.4s" Jan 27 18:42:56 crc kubenswrapper[4853]: I0127 18:42:56.998389 4853 trace.go:236] Trace[1047684590]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (27-Jan-2026 18:42:45.886) (total time: 11111ms): Jan 27 18:42:56 crc kubenswrapper[4853]: Trace[1047684590]: ---"Objects listed" error: 11111ms (18:42:56.998) Jan 27 18:42:56 crc kubenswrapper[4853]: Trace[1047684590]: [11.11135439s] [11.11135439s] END Jan 27 18:42:56 crc kubenswrapper[4853]: I0127 18:42:56.998462 4853 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.000950 4853 trace.go:236] Trace[1714659596]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (27-Jan-2026 18:42:44.894) (total time: 12106ms): Jan 27 18:42:57 crc kubenswrapper[4853]: Trace[1714659596]: ---"Objects listed" error: 12106ms (18:42:57.000) Jan 27 18:42:57 crc kubenswrapper[4853]: Trace[1714659596]: [12.106818113s] [12.106818113s] END Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.000975 4853 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Jan 27 18:42:57 crc kubenswrapper[4853]: E0127 18:42:57.002576 4853 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes \"crc\" is forbidden: autoscaling.openshift.io/ManagedNode infra config cache not synchronized" node="crc" Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.003751 4853 trace.go:236] Trace[1833449869]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (27-Jan-2026 18:42:42.628) (total time: 14374ms): Jan 27 18:42:57 crc kubenswrapper[4853]: Trace[1833449869]: ---"Objects listed" error: 14374ms (18:42:57.003) Jan 27 18:42:57 crc kubenswrapper[4853]: Trace[1833449869]: [14.374855767s] [14.374855767s] END Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.003769 4853 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.016052 4853 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.035714 4853 reconstruct.go:205] "DevicePaths of reconstructed volumes updated" Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.051304 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.057283 4853 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.062363 4853 apiserver.go:52] "Watching apiserver" Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.064024 4853 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.064357 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-console/networking-console-plugin-85b44fc459-gdk6g","openshift-network-diagnostics/network-check-source-55646444c4-trplf","openshift-network-diagnostics/network-check-target-xd92c","openshift-network-node-identity/network-node-identity-vrzqb","openshift-network-operator/iptables-alerter-4ln5h","openshift-network-operator/network-operator-58b4c7f79c-55gtf","openshift-kube-controller-manager/kube-controller-manager-crc"] Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.064769 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.064800 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 18:42:57 crc kubenswrapper[4853]: E0127 18:42:57.064859 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.064879 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.064920 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 27 18:42:57 crc kubenswrapper[4853]: E0127 18:42:57.064961 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.065004 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.065025 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 27 18:42:57 crc kubenswrapper[4853]: E0127 18:42:57.065048 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.066747 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.067416 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.067448 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.067856 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.067983 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.067998 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.068881 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.070026 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.070288 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.076519 4853 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-08 14:41:23.530516453 +0000 UTC Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.079781 4853 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.086728 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c77668e-2627-46e1-b7f5-a99455378d85\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6f38d96da9d500a022fd5b7fbe4019623d08fb9733ba3b1a5f2f107f1901a28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ae32d32392bded0188e0b83c05a33e57dc10be293ddee84b8b12416d0e64fc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd606f868e97e5c0f90517a0de662da2512679e27b0ececbcaa02c9f7e79c4b6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b210c08a6d15dd6d40daabfcc4cb94985e2a3957ffa501b32d5331f3d20f8908\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:42:38Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.106957 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.118915 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.130821 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.139942 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c77668e-2627-46e1-b7f5-a99455378d85\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6f38d96da9d500a022fd5b7fbe4019623d08fb9733ba3b1a5f2f107f1901a28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ae32d32392bded0188e0b83c05a33e57dc10be293ddee84b8b12416d0e64fc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cer
t-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd606f868e97e5c0f90517a0de662da2512679e27b0ececbcaa02c9f7e79c4b6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b210c08a6d15dd6d40daabfcc4cb94985e2a3957ffa501b32d5331f3d20f8908\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:42:38Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.152192 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.164359 4853 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.165394 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.177329 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.188044 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.197640 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 18:42:57 crc kubenswrapper[4853]: E0127 18:42:57.222609 4853 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-crc\" already exists" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.237110 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.237174 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.237196 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.237218 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.237244 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.237268 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.237289 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.237312 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.237332 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.237352 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.237374 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.237395 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.237415 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.237433 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.237452 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.237472 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.237491 4853 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.237513 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.237536 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.237585 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.237575 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.237628 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.237651 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.237653 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.237680 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.237702 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.237674 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.237810 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.237842 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.237864 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.237867 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.237896 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.237909 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.237930 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.237947 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.237964 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.237979 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.237997 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") " Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.238024 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.238055 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.238056 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.238080 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.238065 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.238103 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.238111 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.238157 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.238161 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.238173 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.238304 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.238310 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.238327 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.238355 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.238384 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.238407 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.238433 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.238456 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.238468 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.238479 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.238504 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.238528 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") " Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.238536 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.238558 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.238581 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.238605 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.238628 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.238656 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.238682 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: 
\"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.238710 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.238737 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.238762 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.238786 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.238807 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.238830 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.238865 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.238943 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.238978 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.239010 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.239041 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.239070 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.239239 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.239269 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.239291 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.239315 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.239340 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.239362 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.239390 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.239478 4853 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.239508 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.239541 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.239575 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") " Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.239604 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.239632 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.239657 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.239729 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.239760 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.239793 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 27 18:42:57 
crc kubenswrapper[4853]: I0127 18:42:57.239832 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.239861 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.239884 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.239908 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.239931 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.239955 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.239979 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.240000 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.240023 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.240045 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 27 18:42:57 crc 
kubenswrapper[4853]: I0127 18:42:57.240084 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.240110 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.240156 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.240180 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.240203 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.240226 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.240249 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.240278 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.240301 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.240323 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: 
\"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.240344 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.240366 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.240388 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.240409 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.240432 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.240458 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.240481 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.240511 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.240534 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.240559 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod 
\"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.240582 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.240607 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.240629 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.238608 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.238672 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.238801 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.238852 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.238876 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.238980 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.239009 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.239048 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.239157 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.239470 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.239537 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.239585 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.239771 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.240422 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.240459 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.240521 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.240666 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.241120 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.240741 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.240827 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.240895 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.240892 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.241123 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.241185 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.241153 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.241242 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.241294 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.241417 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.241506 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.241522 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.241613 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.241634 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.241757 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.241828 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.241875 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). 
InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.241902 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.242068 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.242086 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.242096 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.242248 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.242321 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.242353 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.242388 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.242540 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.242563 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.242571 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.242596 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.242644 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.242697 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.242734 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:42:57 crc kubenswrapper[4853]: E0127 18:42:57.242752 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 18:42:57.742683154 +0000 UTC m=+20.205226037 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.242761 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.242940 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.243138 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.243177 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.243203 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.243225 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.243248 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod 
\"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.243283 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.243308 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.243357 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.243397 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.243419 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.243444 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.243470 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.243495 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.243523 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.243547 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" 
(UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.243572 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.243602 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.243634 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.243667 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.243695 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.243753 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.243776 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.243799 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.243821 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.243844 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod 
\"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.243868 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.243891 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.243914 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.243922 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.243936 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.243960 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.243982 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.244004 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.244027 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.244051 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: 
\"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.244074 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.244099 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.244113 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.244155 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.244182 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.244205 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.244228 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.244251 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.244273 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 27 18:42:57 crc 
kubenswrapper[4853]: I0127 18:42:57.244277 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.244300 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.244324 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.244327 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.244348 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.244372 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.244396 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.244420 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.244568 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.244593 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.244594 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.244616 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.244640 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.244645 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.244664 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.244707 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.244727 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.244744 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") " Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.244762 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 
27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.244780 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.244798 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.244814 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.244829 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.244828 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.244845 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.244910 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.244949 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.244978 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.245006 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.245042 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.245070 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.245107 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.245166 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.245205 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: 
\"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.245488 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.245578 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.245668 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.245774 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.245800 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.245840 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.245972 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.246227 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.246310 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.246353 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.246379 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.246426 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.246451 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.246492 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.246644 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.247036 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.247090 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.247184 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.247266 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.247310 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.247825 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.247884 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.248014 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.248223 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.248354 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.248550 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.248757 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.249172 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.249314 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.250089 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.250140 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.250158 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.250443 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.250623 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.250731 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.250813 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.250855 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.251224 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.251338 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.251468 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.251611 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.251677 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.251681 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.251703 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.251896 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.251907 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.252182 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.252509 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.252885 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.252791 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.253457 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.253479 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.253803 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.253649 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.253782 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.254091 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.254167 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.254390 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.247900 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.254525 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.254614 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.254656 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.254684 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.254710 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.254736 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.254760 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.254787 4853 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.254809 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.254835 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.254926 4853 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.254942 4853 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\"" Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.254956 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\"" Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.254969 4853 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\"" Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.254981 4853 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\"" Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.254995 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\"" Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.255008 4853 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\"" Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.255022 4853 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.255034 4853 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: 
\"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\"" Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.255047 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\"" Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.255060 4853 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.255080 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\"" Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.255254 4853 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.255270 4853 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\"" Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.255285 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\"" Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.255317 4853 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.255330 4853 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.255340 4853 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.255350 4853 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\"" Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.255359 4853 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\"" Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.255368 4853 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.255377 4853 reconciler_common.go:293] "Volume detached for volume 
\"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\"" Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.255386 4853 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\"" Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.255395 4853 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.255404 4853 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.255415 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\"" Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.255424 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\"" Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.255433 4853 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\"" Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.255443 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\"" Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.255456 4853 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\"" Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.255468 4853 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\"" Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.255480 4853 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.255491 4853 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.255503 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.255515 4853 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: 
\"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.256541 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\"" Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.256569 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\"" Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.256582 4853 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.256592 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\"" Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.256601 4853 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.256614 4853 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.256627 4853 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\"" Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.256637 4853 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.256670 4853 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.256685 4853 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.256699 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\"" Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.256711 4853 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.256720 4853 reconciler_common.go:293] "Volume detached for 
volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\"" Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.256758 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\"" Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.256767 4853 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\"" Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.256776 4853 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.256800 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\"" Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.256809 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\"" Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.256817 4853 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.256827 4853 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.256835 4853 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.256844 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.256854 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\"" Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.256864 4853 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.256886 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\"" Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.256894 4853 reconciler_common.go:293] "Volume 
detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\"" Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.256903 4853 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.256911 4853 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.256920 4853 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\"" Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.256929 4853 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\"" Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.256938 4853 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.256947 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\"" Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.256956 4853 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.256981 4853 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.256989 4853 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\"" Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.256999 4853 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.257008 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\"" Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.258324 4853 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\"" Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.258353 4853 reconciler_common.go:293] 
"Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.258372 4853 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\"" Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.258390 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\"" Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.258497 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\"" Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.258510 4853 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\"" Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.258522 4853 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.258536 4853 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.258549 4853 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.258561 4853 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.258575 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\"" Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.258586 4853 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\"" Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.258598 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.258610 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\"" Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.258624 4853 reconciler_common.go:293] "Volume 
detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.258637 4853 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.259136 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\"" Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.259333 4853 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.259354 4853 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\"" Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.259366 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.259422 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\"" Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.259439 4853 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\"" Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.259452 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\"" Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.259463 4853 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.259481 4853 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\"" Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.259494 4853 reconciler_common.go:293] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.259532 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\"" Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.259549 4853 reconciler_common.go:293] "Volume detached for volume 
\"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\"" Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.259570 4853 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\"" Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.259584 4853 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\"" Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.259597 4853 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\"" Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.259615 4853 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.259627 4853 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.259639 4853 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.259651 4853 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\"" Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.259669 4853 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.259681 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\"" Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.259692 4853 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.259704 4853 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.259722 4853 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.259735 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") 
on node \"crc\" DevicePath \"\"" Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.259747 4853 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.255288 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.255340 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.255381 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.255925 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.255967 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.256217 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.256292 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.256668 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.257001 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.257226 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:42:57 crc kubenswrapper[4853]: E0127 18:42:57.257453 4853 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.260009 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:42:57 crc kubenswrapper[4853]: E0127 18:42:57.260020 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-27 18:42:57.75999645 +0000 UTC m=+20.222539333 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.257668 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.257687 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.257534 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.257889 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.257911 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.257974 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.257996 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.258010 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.258028 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). 
InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.257641 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.258237 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.258273 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.258349 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.258611 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.258700 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.259555 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.259807 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.258229 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.260591 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.260733 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.260808 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.261177 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.261355 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.261584 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.261613 4853 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.261838 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.261877 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.262490 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.262799 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.262911 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.263049 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.263287 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.263326 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.263351 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.263310 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.263410 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.263569 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.264222 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.264378 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.264452 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.264525 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.264460 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.265322 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.265363 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.259760 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\"" Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.265810 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 27 18:42:57 crc kubenswrapper[4853]: E0127 18:42:57.265876 4853 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 27 18:42:57 crc kubenswrapper[4853]: E0127 18:42:57.266049 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. 
No retries permitted until 2026-01-27 18:42:57.765910225 +0000 UTC m=+20.228453108 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered
Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.266425 4853 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\""
Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.266452 4853 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\""
Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.266477 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\""
Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.267260 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h"
Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.272295 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.272523 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.273044 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
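The nginx-conf failure is the same not-registered race as the secret above, this time for a ConfigMap; the iptables-alerter-script success shows what a SetUp looks like once the object is available: the plugin materializes each key as a file under the pod's volume directory. The kubelet publishes such payloads through a timestamped data directory plus a ..data symlink swap, so a consumer never observes a half-written update. Below is a simplified sketch of that write-then-rename idea, assuming POSIX rename semantics; the paths and helper names are illustrative, not the kubelet's actual AtomicWriter layout.

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// publish writes the payload into a fresh directory, then atomically
// retargets the "..data" symlink at it. Error handling is simplified.
func publish(root string, payload map[string][]byte) error {
	dataDir, err := os.MkdirTemp(root, "..tmp_")
	if err != nil {
		return err
	}
	for name, content := range payload {
		if err := os.WriteFile(filepath.Join(dataDir, name), content, 0o644); err != nil {
			return err
		}
	}
	tmpLink := filepath.Join(root, "..data_tmp")
	os.Remove(tmpLink) // ignore error: link may not exist yet
	if err := os.Symlink(filepath.Base(dataDir), tmpLink); err != nil {
		return err
	}
	// rename(2) over the existing symlink is atomic, so readers see either
	// the old payload or the new one, never a mixture of the two.
	return os.Rename(tmpLink, filepath.Join(root, "..data"))
}

func main() {
	root, _ := os.MkdirTemp("", "volume")
	err := publish(root, map[string][]byte{"nginx.conf": []byte("worker_processes 1;\n")})
	fmt.Println("publish:", err)
}

All slow work (directory creation, file writes) happens off to the side; the only step visible to readers is the single rename, which is why updated ConfigMap contents appear in running pods all at once.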
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.273604 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.274540 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.275648 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.276101 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.276748 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.280696 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.281482 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.281772 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:42:57 crc kubenswrapper[4853]: E0127 18:42:57.282371 4853 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 27 18:42:57 crc kubenswrapper[4853]: E0127 18:42:57.282392 4853 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 27 18:42:57 crc kubenswrapper[4853]: E0127 18:42:57.282451 4853 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 18:42:57 crc kubenswrapper[4853]: E0127 18:42:57.282577 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-27 18:42:57.782488351 +0000 UTC m=+20.245031234 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.283071 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.283341 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:42:57 crc kubenswrapper[4853]: E0127 18:42:57.283429 4853 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 27 18:42:57 crc kubenswrapper[4853]: E0127 18:42:57.283442 4853 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 27 18:42:57 crc kubenswrapper[4853]: E0127 18:42:57.283452 4853 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.283466 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:42:57 crc kubenswrapper[4853]: E0127 18:42:57.283484 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-27 18:42:57.783474247 +0000 UTC m=+20.246017130 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.283791 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.283820 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.284401 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.284477 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.284541 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.285906 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.287712 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.288264 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.293432 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.298830 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.301547 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.306185 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.366921 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.367018 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.367030 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.367075 4853 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.367088 4853 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\"" Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.367099 4853 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.367110 4853 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.367153 4853 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\"" Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.367164 4853 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.367176 4853 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on 
node \"crc\" DevicePath \"\"" Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.367186 4853 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.367196 4853 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.367207 4853 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.367217 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\"" Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.367228 4853 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.367239 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\"" Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.367250 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\"" Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.367261 4853 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.367273 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\"" Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.367267 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.367283 4853 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.367365 4853 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.367391 4853 
reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.367403 4853 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.367414 4853 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\"" Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.367423 4853 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.367433 4853 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.367442 4853 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.367450 4853 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.367474 4853 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.367483 4853 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\"" Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.367492 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\"" Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.367501 4853 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.367510 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\"" Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.367519 4853 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 18:42:57 crc 
kubenswrapper[4853]: I0127 18:42:57.367527 4853 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.367549 4853 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\"" Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.367560 4853 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.367568 4853 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\"" Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.367576 4853 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\"" Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.367589 4853 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.367602 4853 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\"" Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.367631 4853 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.367643 4853 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\"" Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.367652 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\"" Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.367661 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\"" Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.367669 4853 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.367678 4853 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.367686 4853 reconciler_common.go:293] "Volume 
detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.367709 4853 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.367718 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\"" Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.367726 4853 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.367738 4853 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.367750 4853 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\"" Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.367762 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\"" Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.367793 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\"" Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.367802 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\"" Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.367810 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\"" Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.367818 4853 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.367827 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\"" Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.367835 4853 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\"" Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.367843 4853 
reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\"" Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.367871 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\"" Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.367883 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\"" Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.367895 4853 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.367908 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\"" Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.367917 4853 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.367943 4853 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.367952 4853 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.367960 4853 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.367968 4853 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.367977 4853 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.367987 4853 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\"" Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.367996 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\"" Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 
18:42:57.368021 4853 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.368030 4853 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.368038 4853 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\"" Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.368046 4853 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.368055 4853 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.368062 4853 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\"" Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.383110 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.395623 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.409055 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 27 18:42:57 crc kubenswrapper[4853]: W0127 18:42:57.409540 4853 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podef543e1b_8068_4ea3_b32a_61027b32e95d.slice/crio-4aa1a56e23f5d844b9103e0719a01e5ea1c4892ba3a93dc05aed4091d5ebc7ff WatchSource:0}: Error finding container 4aa1a56e23f5d844b9103e0719a01e5ea1c4892ba3a93dc05aed4091d5ebc7ff: Status 404 returned error can't find the container with id 4aa1a56e23f5d844b9103e0719a01e5ea1c4892ba3a93dc05aed4091d5ebc7ff Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.560350 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc" Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.574866 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.578208 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc" Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.580398 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"] Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.587508 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.598886 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.611278 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.625765 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.637559 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c77668e-2627-46e1-b7f5-a99455378d85\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6f38d96da9d500a022fd5b7fbe4019623d08fb9733ba3b1a5f2f107f1901a28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ae32d32392bded0188e0b83c05a33e57dc10be293ddee84b8b12416d0e64fc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPat
h\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd606f868e97e5c0f90517a0de662da2512679e27b0ececbcaa02c9f7e79c4b6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b210c08a6d15dd6d40daabfcc4cb94985e2a3957ffa501b32d5331f3d20f8908\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:42:38Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.648511 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.660868 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.672113 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.685254 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.700256 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.711369 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.729822 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a5b9c54a-360e-431b-b35a-eea549616487\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e57571b0b75d3ac6eaa28ddbc474ddf47f632c158372a965dfe4c30f54d8a2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2dcf34028a2a1a0d41f57ebced90e711bfa6d0f199a2b12bc2bcd6d0642d10b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"runn
ing\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://14e9331353060208631cfca86d748339935f8581d50a234f2864eaffdd98ff39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7d4896cb23e4a91356e7fabb5dfcf05e46bf7e8c34b6bd73d63a16422d015ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a02f84ba42baba137145a54bfa742b12b0f94aaf41801347eb8a803a6acd1b2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15ab34ce94a12a65d8b187ce438f27e37d4b45e0b9ed080c79d6e4633e777658\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://15ab34ce94a12a65d8b187ce4
38f27e37d4b45e0b9ed080c79d6e4633e777658\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:42:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d972c76164f33d83bd6d211da2b4ecec08c5d88dd4cd474c5d5a8122649ea1cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d972c76164f33d83bd6d211da2b4ecec08c5d88dd4cd474c5d5a8122649ea1cd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:42:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:40Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b9435046e30c8cda89d2ab6f312406bc2cc6db0c03ae6cdea1eb5acf8a804df1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9435046e30c8cda89d2ab6f312406bc2cc6db0c03ae6cdea1eb5acf8a804df1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:42:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:42:38Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.744592 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c77668e-2627-46e1-b7f5-a99455378d85\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6f38d96da9d500a022fd5b7fbe4019623d08fb9733ba3b1a5f2f107f1901a28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ae32d32392bded0188e0b83c05a33e57dc10be293ddee84b8b12416d0e64fc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd606f868e97e5c0f90517a0de662da2512679e27b0ececbcaa02c9f7e79c4b6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b210c08a6d15dd6d40daabfcc4cb94985e2a3957ffa501b32d5331f3d20f8908\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:42:38Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.758486 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.772986 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 18:42:57 crc kubenswrapper[4853]: E0127 18:42:57.773215 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 18:42:58.773181957 +0000 UTC m=+21.235724840 (durationBeforeRetry 1s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.773610 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.773720 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 18:42:57 crc kubenswrapper[4853]: E0127 18:42:57.773899 4853 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 27 18:42:57 crc kubenswrapper[4853]: E0127 18:42:57.773937 4853 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 27 18:42:57 crc kubenswrapper[4853]: E0127 18:42:57.774038 4853 nestedpendingoperations.go:348] 
Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-27 18:42:58.7740202 +0000 UTC m=+21.236563083 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 27 18:42:57 crc kubenswrapper[4853]: E0127 18:42:57.774156 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-27 18:42:58.774141713 +0000 UTC m=+21.236684676 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.874523 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 18:42:57 crc kubenswrapper[4853]: I0127 18:42:57.874913 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 18:42:57 crc kubenswrapper[4853]: E0127 18:42:57.874721 4853 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 27 18:42:57 crc kubenswrapper[4853]: E0127 18:42:57.875119 4853 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 27 18:42:57 crc kubenswrapper[4853]: E0127 18:42:57.875224 4853 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 18:42:57 crc kubenswrapper[4853]: E0127 18:42:57.875016 4853 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 27 18:42:57 crc kubenswrapper[4853]: E0127 18:42:57.875381 4853 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 27 18:42:57 
crc kubenswrapper[4853]: E0127 18:42:57.875410 4853 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 18:42:57 crc kubenswrapper[4853]: E0127 18:42:57.875540 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-27 18:42:58.875386706 +0000 UTC m=+21.337929589 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 18:42:57 crc kubenswrapper[4853]: E0127 18:42:57.875643 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-27 18:42:58.875632112 +0000 UTC m=+21.338174995 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 18:42:58 crc kubenswrapper[4853]: I0127 18:42:58.076935 4853 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-06 14:50:28.907568418 +0000 UTC Jan 27 18:42:58 crc kubenswrapper[4853]: I0127 18:42:58.115738 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes" Jan 27 18:42:58 crc kubenswrapper[4853]: I0127 18:42:58.116297 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes" Jan 27 18:42:58 crc kubenswrapper[4853]: I0127 18:42:58.117054 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes" Jan 27 18:42:58 crc kubenswrapper[4853]: I0127 18:42:58.117706 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes" Jan 27 18:42:58 crc kubenswrapper[4853]: I0127 18:42:58.118298 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes" Jan 27 18:42:58 crc kubenswrapper[4853]: I0127 18:42:58.118790 4853 kubelet_volumes.go:163] "Cleaned up orphaned 
pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes" Jan 27 18:42:58 crc kubenswrapper[4853]: I0127 18:42:58.119415 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes" Jan 27 18:42:58 crc kubenswrapper[4853]: I0127 18:42:58.120038 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d611f23-29be-4491-8495-bee1670e935f" path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes" Jan 27 18:42:58 crc kubenswrapper[4853]: I0127 18:42:58.120685 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes" Jan 27 18:42:58 crc kubenswrapper[4853]: I0127 18:42:58.121189 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes" Jan 27 18:42:58 crc kubenswrapper[4853]: I0127 18:42:58.121686 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes" Jan 27 18:42:58 crc kubenswrapper[4853]: I0127 18:42:58.122383 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes" Jan 27 18:42:58 crc kubenswrapper[4853]: I0127 18:42:58.122831 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes" Jan 27 18:42:58 crc kubenswrapper[4853]: I0127 18:42:58.123360 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes" Jan 27 18:42:58 crc kubenswrapper[4853]: I0127 18:42:58.123843 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes" Jan 27 18:42:58 crc kubenswrapper[4853]: I0127 18:42:58.126885 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes" Jan 27 18:42:58 crc kubenswrapper[4853]: I0127 18:42:58.127195 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:42:58Z is after 2025-08-24T17:21:41Z" Jan 27 18:42:58 crc kubenswrapper[4853]: I0127 18:42:58.127499 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes" Jan 27 18:42:58 crc kubenswrapper[4853]: I0127 18:42:58.127861 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes" Jan 27 18:42:58 crc kubenswrapper[4853]: I0127 18:42:58.128820 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes" Jan 27 18:42:58 crc kubenswrapper[4853]: I0127 18:42:58.129390 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes" Jan 27 18:42:58 crc kubenswrapper[4853]: I0127 18:42:58.129858 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes" Jan 27 18:42:58 crc kubenswrapper[4853]: I0127 18:42:58.130894 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes" Jan 27 18:42:58 crc kubenswrapper[4853]: I0127 18:42:58.131323 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" 
path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes" Jan 27 18:42:58 crc kubenswrapper[4853]: I0127 18:42:58.132377 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes" Jan 27 18:42:58 crc kubenswrapper[4853]: I0127 18:42:58.132854 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes" Jan 27 18:42:58 crc kubenswrapper[4853]: I0127 18:42:58.133869 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes" Jan 27 18:42:58 crc kubenswrapper[4853]: I0127 18:42:58.134496 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes" Jan 27 18:42:58 crc kubenswrapper[4853]: I0127 18:42:58.135455 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes" Jan 27 18:42:58 crc kubenswrapper[4853]: I0127 18:42:58.135970 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes" Jan 27 18:42:58 crc kubenswrapper[4853]: I0127 18:42:58.136782 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes" Jan 27 18:42:58 crc kubenswrapper[4853]: I0127 18:42:58.137314 4853 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6" Jan 27 18:42:58 crc kubenswrapper[4853]: I0127 18:42:58.137413 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes" Jan 27 18:42:58 crc kubenswrapper[4853]: I0127 18:42:58.139033 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes" Jan 27 18:42:58 crc kubenswrapper[4853]: I0127 18:42:58.140099 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes" Jan 27 18:42:58 crc kubenswrapper[4853]: I0127 18:42:58.140483 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes" Jan 27 18:42:58 crc kubenswrapper[4853]: I0127 18:42:58.142105 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes" Jan 27 18:42:58 crc kubenswrapper[4853]: I0127 18:42:58.143111 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" 
path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes" Jan 27 18:42:58 crc kubenswrapper[4853]: I0127 18:42:58.143309 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:42:58Z is after 2025-08-24T17:21:41Z" Jan 27 18:42:58 crc kubenswrapper[4853]: I0127 18:42:58.143709 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes" Jan 27 18:42:58 crc kubenswrapper[4853]: I0127 18:42:58.144769 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes" Jan 27 18:42:58 crc kubenswrapper[4853]: I0127 18:42:58.145434 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes" Jan 27 18:42:58 crc kubenswrapper[4853]: I0127 18:42:58.146278 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" 
path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes" Jan 27 18:42:58 crc kubenswrapper[4853]: I0127 18:42:58.146855 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes" Jan 27 18:42:58 crc kubenswrapper[4853]: I0127 18:42:58.147848 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes" Jan 27 18:42:58 crc kubenswrapper[4853]: I0127 18:42:58.148521 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes" Jan 27 18:42:58 crc kubenswrapper[4853]: I0127 18:42:58.149335 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes" Jan 27 18:42:58 crc kubenswrapper[4853]: I0127 18:42:58.149826 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes" Jan 27 18:42:58 crc kubenswrapper[4853]: I0127 18:42:58.150655 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes" Jan 27 18:42:58 crc kubenswrapper[4853]: I0127 18:42:58.151367 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes" Jan 27 18:42:58 crc kubenswrapper[4853]: I0127 18:42:58.152202 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes" Jan 27 18:42:58 crc kubenswrapper[4853]: I0127 18:42:58.152617 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes" Jan 27 18:42:58 crc kubenswrapper[4853]: I0127 18:42:58.153503 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes" Jan 27 18:42:58 crc kubenswrapper[4853]: I0127 18:42:58.154069 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes" Jan 27 18:42:58 crc kubenswrapper[4853]: I0127 18:42:58.154630 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes" Jan 27 18:42:58 crc kubenswrapper[4853]: I0127 18:42:58.155502 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes" Jan 27 18:42:58 crc kubenswrapper[4853]: I0127 18:42:58.156879 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:42:58Z is after 2025-08-24T17:21:41Z" Jan 27 18:42:58 crc kubenswrapper[4853]: I0127 18:42:58.172988 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:42:58Z is after 2025-08-24T17:21:41Z" Jan 27 18:42:58 crc kubenswrapper[4853]: I0127 18:42:58.194944 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:42:58Z is after 2025-08-24T17:21:41Z" Jan 27 18:42:58 crc kubenswrapper[4853]: I0127 18:42:58.212864 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:42:58Z is after 2025-08-24T17:21:41Z"
Jan 27 18:42:58 crc kubenswrapper[4853]: I0127 18:42:58.221818 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"6f980f1d965582a58bd18d993850b5bd5f21849c1d1c0275c81fdbb25d549e6c"}
Jan 27 18:42:58 crc kubenswrapper[4853]: I0127 18:42:58.221876 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"2d7f749a199e089657a3da426246951f998e8fbd05e7728be752c5ed38033880"}
Jan 27 18:42:58 crc kubenswrapper[4853]: I0127 18:42:58.224590 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"b0094cddd36b68bc371d745e276dcb756217d28f70d1edd22199f06a5ca4468f"}
Jan 27 18:42:58 crc kubenswrapper[4853]: I0127 18:42:58.226797 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"3d7d5a0e2cb909a09c6325d43fed80f9ac231f17c4904fda6da22031d974a50b"}
Jan 27 18:42:58 crc kubenswrapper[4853]: I0127 18:42:58.226826 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"7fba84b5abd317487efa9758191c6d4e0e878809711ccecdd2f98c2c7659c890"}
Jan 27 18:42:58 crc kubenswrapper[4853]: I0127 18:42:58.226840 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"4aa1a56e23f5d844b9103e0719a01e5ea1c4892ba3a93dc05aed4091d5ebc7ff"}
Jan 27 18:42:58 crc kubenswrapper[4853]: E0127 18:42:58.246353 4853 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"etcd-crc\" already exists" pod="openshift-etcd/etcd-crc"
Jan 27 18:42:58 crc kubenswrapper[4853]: I0127 18:42:58.257671 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a5b9c54a-360e-431b-b35a-eea549616487\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e57571b0b75d3ac6eaa28ddbc474ddf47f632c158372a965dfe4c30f54d8a2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2dcf34028a2a1a0d41f57ebced90e711bfa6d0f199a2b12bc2bcd6d0642d10b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://14e9331353060208631cfca86d748339935f8581d50a234f2864eaffdd98ff39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7d4896cb23e4a91356e7fabb5dfcf05e46bf7e
8c34b6bd73d63a16422d015ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a02f84ba42baba137145a54bfa742b12b0f94aaf41801347eb8a803a6acd1b2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15ab34ce94a12a65d8b187ce438f27e37d4b45e0b9ed080c79d6e4633e777658\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://15ab34ce94a12a65d8b187ce438f27e37d4b45e0b9ed080c79d6e4633e777658\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:42:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d972c76164f33d83bd6d211da2b4ecec08c5d88dd4cd474c5d5a8122649ea1cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d972c76164f33d83bd6d211da2b4ecec08c5d88dd4cd474c5d5a8122649ea1cd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:42:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:40Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b9435046e30c8cda89d2ab6f312406bc2cc6db0c03ae6cdea1eb5acf8a804df1\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9435046e30c8cda89d2ab6f312406bc2cc6db0c03ae6cdea1eb5acf8a804df1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:42:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:42:38Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:42:58Z is after 2025-08-24T17:21:41Z" Jan 27 18:42:58 crc kubenswrapper[4853]: I0127 18:42:58.285213 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c77668e-2627-46e1-b7f5-a99455378d85\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6f38d96da9d500a022fd5b7fbe4019623d08fb9733ba3b1a5f2f107f1901a28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ae32d32392bded0188e0b83c05a33e57dc10be293ddee84b8b12416d0e64fc6\\\",\\\"image\\\":\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd606f868e97e5c0f90517a0de662da2512679e27b0ececbcaa02c9f7e79c4b6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b210c08a6d15dd6d40daabfcc4cb94985e2a3957ffa501b32d5331f3d20f8908\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:42:38Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:42:58Z is after 2025-08-24T17:21:41Z" Jan 27 18:42:58 crc kubenswrapper[4853]: I0127 18:42:58.304579 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:42:58Z is after 2025-08-24T17:21:41Z" Jan 27 18:42:58 crc kubenswrapper[4853]: I0127 18:42:58.332803 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a5b9c54a-360e-431b-b35a-eea549616487\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e57571b0b75d3ac6eaa28ddbc474ddf47f632c158372a965dfe4c30f54d8a2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2dcf34028a2a1a0d41f57ebced90e711bfa6d0f199a2b12bc2bcd6d0642d10b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://14e9331353060208631cfca86d748339935f8581d50a234f2864eaffdd98ff39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7d4896cb23e4a91356e7fabb5dfcf05e46bf7e
8c34b6bd73d63a16422d015ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a02f84ba42baba137145a54bfa742b12b0f94aaf41801347eb8a803a6acd1b2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15ab34ce94a12a65d8b187ce438f27e37d4b45e0b9ed080c79d6e4633e777658\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://15ab34ce94a12a65d8b187ce438f27e37d4b45e0b9ed080c79d6e4633e777658\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:42:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d972c76164f33d83bd6d211da2b4ecec08c5d88dd4cd474c5d5a8122649ea1cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d972c76164f33d83bd6d211da2b4ecec08c5d88dd4cd474c5d5a8122649ea1cd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:42:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:40Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b9435046e30c8cda89d2ab6f312406bc2cc6db0c03ae6cdea1eb5acf8a804df1\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9435046e30c8cda89d2ab6f312406bc2cc6db0c03ae6cdea1eb5acf8a804df1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:42:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:42:38Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:42:58Z is after 2025-08-24T17:21:41Z" Jan 27 18:42:58 crc kubenswrapper[4853]: I0127 18:42:58.350139 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c77668e-2627-46e1-b7f5-a99455378d85\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6f38d96da9d500a022fd5b7fbe4019623d08fb9733ba3b1a5f2f107f1901a28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ae32d32392bded0188e0b83c05a33e57dc10be293ddee84b8b12416d0e64fc6\\\",\\\"image\\\":\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd606f868e97e5c0f90517a0de662da2512679e27b0ececbcaa02c9f7e79c4b6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b210c08a6d15dd6d40daabfcc4cb94985e2a3957ffa501b32d5331f3d20f8908\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:42:38Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:42:58Z is after 2025-08-24T17:21:41Z" Jan 27 18:42:58 crc kubenswrapper[4853]: I0127 18:42:58.368185 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f980f1d965582a58bd18d993850b5bd5f21849c1d1c0275c81fdbb25d549e6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:42:58Z is after 2025-08-24T17:21:41Z" Jan 27 18:42:58 crc kubenswrapper[4853]: I0127 18:42:58.381275 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:42:58Z is after 2025-08-24T17:21:41Z" Jan 27 18:42:58 crc kubenswrapper[4853]: I0127 18:42:58.393559 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:42:58Z is after 2025-08-24T17:21:41Z" Jan 27 18:42:58 crc kubenswrapper[4853]: I0127 18:42:58.407595 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d7d5a0e2cb909a09c6325d43fed80f9ac231f17c4904fda6da22031d974a50b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7fba84b5abd317487efa9758191c6d4e0e878809711ccecdd2f98c2c7659c890\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:42:58Z is after 2025-08-24T17:21:41Z" Jan 27 18:42:58 crc kubenswrapper[4853]: I0127 18:42:58.424135 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:42:58Z is after 2025-08-24T17:21:41Z"
Jan 27 18:42:58 crc kubenswrapper[4853]: I0127 18:42:58.782763 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 27 18:42:58 crc kubenswrapper[4853]: E0127 18:42:58.782807 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 18:43:00.782776382 +0000 UTC m=+23.245319305 (durationBeforeRetry 2s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 18:42:58 crc kubenswrapper[4853]: I0127 18:42:58.783137 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 27 18:42:58 crc kubenswrapper[4853]: I0127 18:42:58.783174 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 27 18:42:58 crc kubenswrapper[4853]: E0127 18:42:58.783258 4853 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered
Jan 27 18:42:58 crc kubenswrapper[4853]: E0127 18:42:58.783346 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-27 18:43:00.783320106 +0000 UTC m=+23.245863029 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered
Jan 27 18:42:58 crc kubenswrapper[4853]: E0127 18:42:58.783274 4853 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered
Jan 27 18:42:58 crc kubenswrapper[4853]: E0127 18:42:58.783410 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-27 18:43:00.783396678 +0000 UTC m=+23.245939561 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered
Jan 27 18:42:58 crc kubenswrapper[4853]: I0127 18:42:58.884535 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 27 18:42:58 crc kubenswrapper[4853]: I0127 18:42:58.884585 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 27 18:42:58 crc kubenswrapper[4853]: E0127 18:42:58.884729 4853 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Jan 27 18:42:58 crc kubenswrapper[4853]: E0127 18:42:58.884749 4853 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Jan 27 18:42:58 crc kubenswrapper[4853]: E0127 18:42:58.884761 4853 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 27 18:42:58 crc kubenswrapper[4853]: E0127 18:42:58.884811 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-27 18:43:00.884795285 +0000 UTC m=+23.347338168 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 27 18:42:58 crc kubenswrapper[4853]: E0127 18:42:58.885186 4853 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Jan 27 18:42:58 crc kubenswrapper[4853]: E0127 18:42:58.885206 4853 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Jan 27 18:42:58 crc kubenswrapper[4853]: E0127 18:42:58.885216 4853 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 27 18:42:58 crc kubenswrapper[4853]: E0127 18:42:58.885244 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-27 18:43:00.885234717 +0000 UTC m=+23.347777600 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 27 18:42:59 crc kubenswrapper[4853]: I0127 18:42:59.077502 4853 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-04 01:17:19.191216906 +0000 UTC
Jan 27 18:42:59 crc kubenswrapper[4853]: I0127 18:42:59.112243 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 27 18:42:59 crc kubenswrapper[4853]: I0127 18:42:59.112297 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 27 18:42:59 crc kubenswrapper[4853]: E0127 18:42:59.112395 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 27 18:42:59 crc kubenswrapper[4853]: E0127 18:42:59.112466 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 27 18:42:59 crc kubenswrapper[4853]: I0127 18:42:59.112271 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 27 18:42:59 crc kubenswrapper[4853]: E0127 18:42:59.112575 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 27 18:42:59 crc kubenswrapper[4853]: I0127 18:42:59.231033 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log"
Jan 27 18:42:59 crc kubenswrapper[4853]: I0127 18:42:59.231655 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log"
Jan 27 18:42:59 crc kubenswrapper[4853]: I0127 18:42:59.233813 4853 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="213668bb8a0478fe7fba2a64a45dd4aec0dfd6794a29977633b4e0fd1bcbe352" exitCode=255
Jan 27 18:42:59 crc kubenswrapper[4853]: I0127 18:42:59.233875 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"213668bb8a0478fe7fba2a64a45dd4aec0dfd6794a29977633b4e0fd1bcbe352"}
Jan 27 18:42:59 crc kubenswrapper[4853]: I0127 18:42:59.233942 4853 scope.go:117] "RemoveContainer" containerID="56bbd9ff3f17c00d1976630c99651087c4c06ff17df7e451fdda5d01c65dbfa6"
Jan 27 18:42:59 crc kubenswrapper[4853]: I0127 18:42:59.254615 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"]
Jan 27 18:42:59 crc kubenswrapper[4853]: I0127 18:42:59.254742 4853 scope.go:117] "RemoveContainer" containerID="213668bb8a0478fe7fba2a64a45dd4aec0dfd6794a29977633b4e0fd1bcbe352"
Jan 27 18:42:59 crc kubenswrapper[4853]: E0127 18:42:59.254931 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792"
Jan 27 18:42:59 crc kubenswrapper[4853]: I0127 18:42:59.266057 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a5b9c54a-360e-431b-b35a-eea549616487\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e57571b0b75d3ac6eaa28ddbc474ddf47f632c158372a965dfe4c30f54d8a2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2dcf34028a2a1a0d41f57ebced90e711bfa6d0f199a2b12bc2bcd6d0642d10b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://14e9331353060208631cfca86d748339935f8581d50a234f2864eaffdd98ff39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7d4896cb23e4a91356e7fabb5dfcf05e46bf7e
8c34b6bd73d63a16422d015ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a02f84ba42baba137145a54bfa742b12b0f94aaf41801347eb8a803a6acd1b2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15ab34ce94a12a65d8b187ce438f27e37d4b45e0b9ed080c79d6e4633e777658\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://15ab34ce94a12a65d8b187ce438f27e37d4b45e0b9ed080c79d6e4633e777658\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:42:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d972c76164f33d83bd6d211da2b4ecec08c5d88dd4cd474c5d5a8122649ea1cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d972c76164f33d83bd6d211da2b4ecec08c5d88dd4cd474c5d5a8122649ea1cd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:42:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:40Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b9435046e30c8cda89d2ab6f312406bc2cc6db0c03ae6cdea1eb5acf8a804df1\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9435046e30c8cda89d2ab6f312406bc2cc6db0c03ae6cdea1eb5acf8a804df1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:42:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:42:38Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:42:59Z is after 2025-08-24T17:21:41Z" Jan 27 18:42:59 crc kubenswrapper[4853]: I0127 18:42:59.307088 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c77668e-2627-46e1-b7f5-a99455378d85\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6f38d96da9d500a022fd5b7fbe4019623d08fb9733ba3b1a5f2f107f1901a28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ae32d32392bded0188e0b83c05a33e57dc10be293ddee84b8b12416d0e64fc6\\\",\\\"image\\\":\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd606f868e97e5c0f90517a0de662da2512679e27b0ececbcaa02c9f7e79c4b6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b210c08a6d15dd6d40daabfcc4cb94985e2a3957ffa501b32d5331f3d20f8908\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:42:38Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:42:59Z is after 2025-08-24T17:21:41Z" Jan 27 18:42:59 crc kubenswrapper[4853]: I0127 18:42:59.329210 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:42:59Z is after 2025-08-24T17:21:41Z" Jan 27 18:42:59 crc kubenswrapper[4853]: I0127 18:42:59.344095 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:42:59Z is after 2025-08-24T17:21:41Z" Jan 27 18:42:59 crc kubenswrapper[4853]: I0127 18:42:59.358783 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:42:59Z is after 2025-08-24T17:21:41Z" Jan 27 18:42:59 crc kubenswrapper[4853]: I0127 18:42:59.372541 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d7d5a0e2cb909a09c6325d43fed80f9ac231f17c4904fda6da22031d974a50b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7fba84b5abd317487efa9758191c6d4e0e878809711ccecdd2f98c2c7659c890\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:42:59Z is after 2025-08-24T17:21:41Z" Jan 27 18:42:59 crc kubenswrapper[4853]: I0127 18:42:59.384765 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:42:59Z is after 2025-08-24T17:21:41Z" Jan 27 18:42:59 crc kubenswrapper[4853]: I0127 18:42:59.398666 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f980f1d965582a58bd18d993850b5bd5f21849c1d1c0275c81fdbb25d549e6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:42:59Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:00 crc kubenswrapper[4853]: I0127 18:43:00.078606 4853 certificate_manager.go:356] 
kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-17 22:12:50.808844068 +0000 UTC Jan 27 18:43:00 crc kubenswrapper[4853]: I0127 18:43:00.237502 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"b5a0b2fe5dbcc81055c001720c8ab69b7bd1989fe60d9ef4063c96c53a3f4536"} Jan 27 18:43:00 crc kubenswrapper[4853]: I0127 18:43:00.238638 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Jan 27 18:43:00 crc kubenswrapper[4853]: I0127 18:43:00.241471 4853 scope.go:117] "RemoveContainer" containerID="213668bb8a0478fe7fba2a64a45dd4aec0dfd6794a29977633b4e0fd1bcbe352" Jan 27 18:43:00 crc kubenswrapper[4853]: E0127 18:43:00.241608 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Jan 27 18:43:00 crc kubenswrapper[4853]: I0127 18:43:00.250702 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f980f1d965582a58bd18d993850b5bd5f21849c1d1c0275c81fdbb25d549e6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed 
to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:00Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:00 crc kubenswrapper[4853]: I0127 18:43:00.263587 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:00Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:00 crc kubenswrapper[4853]: I0127 18:43:00.277357 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:00Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:00 crc kubenswrapper[4853]: I0127 18:43:00.290548 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d7d5a0e2cb909a09c6325d43fed80f9ac231f17c4904fda6da22031d974a50b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7fba84b5abd317487efa9758191c6d4e0e878809711ccecdd2f98c2c7659c890\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:00Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:00 crc kubenswrapper[4853]: I0127 18:43:00.302999 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5a0b2fe5dbcc81055c001720c8ab69b7bd1989fe60d9ef4063c96c53a3f4536\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:00Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:00 crc kubenswrapper[4853]: I0127 18:43:00.315335 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:00Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:00 crc kubenswrapper[4853]: I0127 18:43:00.329638 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eccf4b23-863a-490c-ba35-1b03d360e200\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7cfb91267c8977bea938d9584cf8047ae6f6b4bff6a9d74c623f8903cd3416ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58d32f27b39964fe7195a0e25264b28a01c6c29753912edac25d07eef4f37cc7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a941071ee01b24389cae64dfc6cd851cde6b352121235ebf9e9bec77f8bcfb69\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://213668bb8a0478fe7fba2a64a45dd4aec0dfd6794a29977633b4e0fd1bcbe352\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://56bbd9ff3f17c00d1976630c99651087c4c06ff17df7e451fdda5d01c65dbfa6\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T18:42:52Z\\\",\\\"message\\\":\\\"W0127 18:42:41.292842 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0127 
18:42:41.293253 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769539361 cert, and key in /tmp/serving-cert-874633040/serving-signer.crt, /tmp/serving-cert-874633040/serving-signer.key\\\\nI0127 18:42:41.502411 1 observer_polling.go:159] Starting file observer\\\\nW0127 18:42:41.505561 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0127 18:42:41.505848 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 18:42:41.509632 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-874633040/tls.crt::/tmp/serving-cert-874633040/tls.key\\\\\\\"\\\\nF0127 18:42:51.991247 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:41Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://213668bb8a0478fe7fba2a64a45dd4aec0dfd6794a29977633b4e0fd1bcbe352\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T18:42:58Z\\\",\\\"message\\\":\\\"le observer\\\\nW0127 18:42:57.781245 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 18:42:57.782660 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 18:42:57.784271 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3951302993/tls.crt::/tmp/serving-cert-3951302993/tls.key\\\\\\\"\\\\nI0127 18:42:58.344426 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 18:42:58.346603 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 18:42:58.346626 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 18:42:58.346650 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 18:42:58.346655 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 18:42:58.351830 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 18:42:58.351854 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 18:42:58.351860 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 18:42:58.351866 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 18:42:58.351870 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 18:42:58.351874 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nI0127 18:42:58.351873 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery 
information is complete\\\\nW0127 18:42:58.351879 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0127 18:42:58.354790 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0058055ea5c0242bdb042c45a00203d344383e845de280c81bc7695416563666\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:40Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://00d05692e328aacfc251e3b583e1da6d1f4f677e93d07cec2aaa7aea5c3038cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://00d05692e328aacfc251e3b583e1da6d1f4f677e93d07cec2aaa7aea5c3038cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:42:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:42:38Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:00Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:00 crc kubenswrapper[4853]: I0127 18:43:00.348540 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a5b9c54a-360e-431b-b35a-eea549616487\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e57571b0b75d3ac6eaa28ddbc474ddf47f632c158372a965dfe4c30f54d8a2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2dcf34028a2a1a0d41f57ebced90e711bfa6d0f199a2b12bc2bcd6d0642d10b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://14e9331353060208631cfca86d748339935f8581d50a234f2864eaffdd98ff39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7d4896cb23e4a91356e7fabb5dfcf05e46bf7e
8c34b6bd73d63a16422d015ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a02f84ba42baba137145a54bfa742b12b0f94aaf41801347eb8a803a6acd1b2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15ab34ce94a12a65d8b187ce438f27e37d4b45e0b9ed080c79d6e4633e777658\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://15ab34ce94a12a65d8b187ce438f27e37d4b45e0b9ed080c79d6e4633e777658\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:42:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d972c76164f33d83bd6d211da2b4ecec08c5d88dd4cd474c5d5a8122649ea1cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d972c76164f33d83bd6d211da2b4ecec08c5d88dd4cd474c5d5a8122649ea1cd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:42:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:40Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b9435046e30c8cda89d2ab6f312406bc2cc6db0c03ae6cdea1eb5acf8a804df1\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9435046e30c8cda89d2ab6f312406bc2cc6db0c03ae6cdea1eb5acf8a804df1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:42:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:42:38Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:00Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:00 crc kubenswrapper[4853]: I0127 18:43:00.363160 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c77668e-2627-46e1-b7f5-a99455378d85\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6f38d96da9d500a022fd5b7fbe4019623d08fb9733ba3b1a5f2f107f1901a28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ae32d32392bded0188e0b83c05a33e57dc10be293ddee84b8b12416d0e64fc6\\\",\\\"image\\\":\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd606f868e97e5c0f90517a0de662da2512679e27b0ececbcaa02c9f7e79c4b6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b210c08a6d15dd6d40daabfcc4cb94985e2a3957ffa501b32d5331f3d20f8908\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:42:38Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:00Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:00 crc kubenswrapper[4853]: I0127 18:43:00.378623 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:00Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:00 crc kubenswrapper[4853]: I0127 18:43:00.393545 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eccf4b23-863a-490c-ba35-1b03d360e200\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7cfb91267c8977bea938d9584cf8047ae6f6b4bff6a9d74c623f8903cd3416ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58d32f27b39964fe7195a0e25264b28a01c6c29753912edac25d07eef4f37cc7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a941071ee01b24389cae64dfc6cd851cde6b352121235ebf9e9bec77f8bcfb69\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://213668bb8a0478fe7fba2a64a45dd4aec0dfd6794a29977633b4e0fd1bcbe352\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://213668bb8a0478fe7fba2a64a45dd4aec0dfd6794a29977633b4e0fd1bcbe352\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T18:42:58Z\\\",\\\"message\\\":\\\"le observer\\\\nW0127 18:42:57.781245 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 18:42:57.782660 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 18:42:57.784271 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3951302993/tls.crt::/tmp/serving-cert-3951302993/tls.key\\\\\\\"\\\\nI0127 18:42:58.344426 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 18:42:58.346603 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 18:42:58.346626 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 18:42:58.346650 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 18:42:58.346655 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 18:42:58.351830 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 18:42:58.351854 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 18:42:58.351860 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 18:42:58.351866 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 18:42:58.351870 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 18:42:58.351874 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nI0127 18:42:58.351873 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0127 18:42:58.351879 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0127 18:42:58.354790 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:52Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0058055ea5c0242bdb042c45a00203d344383e845de280c81bc7695416563666\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:40Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://00d05692e328aacfc251e3b583e1da6d1f4f677e93d07cec2aaa7aea5c3038cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://00d05692e328aacfc251e3b583e1da6d1f4f677e93d07cec2aaa7aea5c3038cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:42:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:42:38Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:00Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:00 crc kubenswrapper[4853]: I0127 18:43:00.411297 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a5b9c54a-360e-431b-b35a-eea549616487\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e57571b0b75d3ac6eaa28ddbc474ddf47f632c158372a965dfe4c30f54d8a2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2dcf34028a2a1a0d41f57ebced90e711bfa6d0f199a2b12bc2bcd6d0642d10b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://14e9331353060208631cfca86d748339935f8581d50a234f2864eaffdd98ff39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7d4896cb23e4a91356e7fabb5dfcf05e46bf7e
8c34b6bd73d63a16422d015ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a02f84ba42baba137145a54bfa742b12b0f94aaf41801347eb8a803a6acd1b2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15ab34ce94a12a65d8b187ce438f27e37d4b45e0b9ed080c79d6e4633e777658\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://15ab34ce94a12a65d8b187ce438f27e37d4b45e0b9ed080c79d6e4633e777658\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:42:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d972c76164f33d83bd6d211da2b4ecec08c5d88dd4cd474c5d5a8122649ea1cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d972c76164f33d83bd6d211da2b4ecec08c5d88dd4cd474c5d5a8122649ea1cd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:42:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:40Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b9435046e30c8cda89d2ab6f312406bc2cc6db0c03ae6cdea1eb5acf8a804df1\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9435046e30c8cda89d2ab6f312406bc2cc6db0c03ae6cdea1eb5acf8a804df1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:42:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:42:38Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:00Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:00 crc kubenswrapper[4853]: I0127 18:43:00.423063 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c77668e-2627-46e1-b7f5-a99455378d85\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6f38d96da9d500a022fd5b7fbe4019623d08fb9733ba3b1a5f2f107f1901a28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ae32d32392bded0188e0b83c05a33e57dc10be293ddee84b8b12416d0e64fc6\\\",\\\"image\\\":\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd606f868e97e5c0f90517a0de662da2512679e27b0ececbcaa02c9f7e79c4b6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b210c08a6d15dd6d40daabfcc4cb94985e2a3957ffa501b32d5331f3d20f8908\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:42:38Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:00Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:00 crc kubenswrapper[4853]: I0127 18:43:00.434809 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5a0b2fe5dbcc81055c001720c8ab69b7bd1989fe60d9ef4063c96c53a3f4536\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:00Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:00 crc kubenswrapper[4853]: I0127 18:43:00.447915 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f980f1d965582a58bd18d993850b5bd5f21849c1d1c0275c81fdbb25d549e6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:00Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:00 crc kubenswrapper[4853]: I0127 18:43:00.459833 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:00Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:00 crc kubenswrapper[4853]: I0127 18:43:00.471372 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:00Z is after 2025-08-24T17:21:41Z"
Jan 27 18:43:00 crc kubenswrapper[4853]: I0127 18:43:00.483269 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d7d5a0e2cb909a09c6325d43fed80f9ac231f17c4904fda6da22031d974a50b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7fba84b5abd317487efa9758191c6d4e0e878809711ccecdd2f98c2c7659c890\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:00Z is after 2025-08-24T17:21:41Z"
Jan 27 18:43:00 crc kubenswrapper[4853]: I0127 18:43:00.799345 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 27 18:43:00 crc kubenswrapper[4853]: I0127 18:43:00.799448 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 27 18:43:00 crc kubenswrapper[4853]: I0127 18:43:00.799479 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 27 18:43:00 crc kubenswrapper[4853]: E0127 18:43:00.799553 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 18:43:04.799503852 +0000 UTC m=+27.262046765 (durationBeforeRetry 4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 18:43:00 crc kubenswrapper[4853]: E0127 18:43:00.799575 4853 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered
Jan 27 18:43:00 crc kubenswrapper[4853]: E0127 18:43:00.799604 4853 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered
Jan 27 18:43:00 crc kubenswrapper[4853]: E0127 18:43:00.799626 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. 
No retries permitted until 2026-01-27 18:43:04.799615535 +0000 UTC m=+27.262158418 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 27 18:43:00 crc kubenswrapper[4853]: E0127 18:43:00.799642 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-27 18:43:04.799632486 +0000 UTC m=+27.262175369 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 27 18:43:00 crc kubenswrapper[4853]: I0127 18:43:00.900187 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 18:43:00 crc kubenswrapper[4853]: I0127 18:43:00.900244 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 18:43:00 crc kubenswrapper[4853]: E0127 18:43:00.900358 4853 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 27 18:43:00 crc kubenswrapper[4853]: E0127 18:43:00.900374 4853 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 27 18:43:00 crc kubenswrapper[4853]: E0127 18:43:00.900386 4853 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 18:43:00 crc kubenswrapper[4853]: E0127 18:43:00.900419 4853 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 27 18:43:00 crc kubenswrapper[4853]: E0127 18:43:00.900451 4853 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 27 18:43:00 crc kubenswrapper[4853]: E0127 18:43:00.900465 4853 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod 
openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 18:43:00 crc kubenswrapper[4853]: E0127 18:43:00.900436 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-27 18:43:04.900423046 +0000 UTC m=+27.362965929 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 18:43:00 crc kubenswrapper[4853]: E0127 18:43:00.900556 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-27 18:43:04.900537489 +0000 UTC m=+27.363080362 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 18:43:01 crc kubenswrapper[4853]: I0127 18:43:01.079629 4853 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-12 04:42:31.981948044 +0000 UTC Jan 27 18:43:01 crc kubenswrapper[4853]: I0127 18:43:01.112410 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 18:43:01 crc kubenswrapper[4853]: I0127 18:43:01.112428 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 18:43:01 crc kubenswrapper[4853]: I0127 18:43:01.112550 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 18:43:01 crc kubenswrapper[4853]: E0127 18:43:01.112659 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 18:43:01 crc kubenswrapper[4853]: E0127 18:43:01.112783 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 18:43:01 crc kubenswrapper[4853]: E0127 18:43:01.112876 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 18:43:02 crc kubenswrapper[4853]: I0127 18:43:02.080199 4853 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-30 19:38:40.735412178 +0000 UTC Jan 27 18:43:03 crc kubenswrapper[4853]: I0127 18:43:03.081196 4853 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-29 11:35:04.645891164 +0000 UTC Jan 27 18:43:03 crc kubenswrapper[4853]: I0127 18:43:03.111838 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 18:43:03 crc kubenswrapper[4853]: I0127 18:43:03.111862 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 18:43:03 crc kubenswrapper[4853]: I0127 18:43:03.111848 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 18:43:03 crc kubenswrapper[4853]: E0127 18:43:03.111957 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 18:43:03 crc kubenswrapper[4853]: E0127 18:43:03.112101 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 18:43:03 crc kubenswrapper[4853]: E0127 18:43:03.112229 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 18:43:03 crc kubenswrapper[4853]: I0127 18:43:03.169163 4853 csr.go:261] certificate signing request csr-r72zj is approved, waiting to be issued Jan 27 18:43:03 crc kubenswrapper[4853]: I0127 18:43:03.181513 4853 csr.go:257] certificate signing request csr-r72zj is issued Jan 27 18:43:03 crc kubenswrapper[4853]: I0127 18:43:03.224243 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/node-resolver-l59xt"] Jan 27 18:43:03 crc kubenswrapper[4853]: I0127 18:43:03.224569 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-l59xt" Jan 27 18:43:03 crc kubenswrapper[4853]: I0127 18:43:03.226039 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Jan 27 18:43:03 crc kubenswrapper[4853]: I0127 18:43:03.232716 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-2hzcp"] Jan 27 18:43:03 crc kubenswrapper[4853]: I0127 18:43:03.233117 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-2hzcp" Jan 27 18:43:03 crc kubenswrapper[4853]: I0127 18:43:03.234099 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Jan 27 18:43:03 crc kubenswrapper[4853]: I0127 18:43:03.234776 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Jan 27 18:43:03 crc kubenswrapper[4853]: I0127 18:43:03.236591 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Jan 27 18:43:03 crc kubenswrapper[4853]: I0127 18:43:03.236946 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Jan 27 18:43:03 crc kubenswrapper[4853]: I0127 18:43:03.236973 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Jan 27 18:43:03 crc kubenswrapper[4853]: I0127 18:43:03.237019 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Jan 27 18:43:03 crc kubenswrapper[4853]: I0127 18:43:03.249645 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f980f1d965582a58bd18d993850b5bd5f21849c1d1c0275c81fdbb25d549e6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:03Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:03 crc kubenswrapper[4853]: I0127 18:43:03.267979 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:03Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:03 crc kubenswrapper[4853]: I0127 18:43:03.284081 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:03Z is after 2025-08-24T17:21:41Z"
Jan 27 18:43:03 crc kubenswrapper[4853]: I0127 18:43:03.297532 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d7d5a0e2cb909a09c6325d43fed80f9ac231f17c4904fda6da22031d974a50b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7fba84b5abd317487efa9758191c6d4e0e878809711ccecdd2f98c2c7659c890\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:03Z is after 2025-08-24T17:21:41Z"
Jan 27 18:43:03 crc kubenswrapper[4853]: I0127 18:43:03.313310 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-w4d5n"]
Jan 27 18:43:03 crc kubenswrapper[4853]: I0127 18:43:03.313485 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5a0b2fe5dbcc81055c001720c8ab69b7bd1989fe60d9ef4063c96c53a3f4536\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:03Z is after 2025-08-24T17:21:41Z"
Jan 27 18:43:03 crc kubenswrapper[4853]: I0127 18:43:03.314063 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-w4d5n" Jan 27 18:43:03 crc kubenswrapper[4853]: I0127 18:43:03.316858 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Jan 27 18:43:03 crc kubenswrapper[4853]: I0127 18:43:03.317168 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Jan 27 18:43:03 crc kubenswrapper[4853]: I0127 18:43:03.317543 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Jan 27 18:43:03 crc kubenswrapper[4853]: I0127 18:43:03.318051 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Jan 27 18:43:03 crc kubenswrapper[4853]: I0127 18:43:03.318383 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Jan 27 18:43:03 crc kubenswrapper[4853]: I0127 18:43:03.320662 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dnhhx\" (UniqueName: \"kubernetes.io/projected/f9e82bc6-1fab-4815-a64e-2ebbf8b72315-kube-api-access-dnhhx\") pod \"node-resolver-l59xt\" (UID: \"f9e82bc6-1fab-4815-a64e-2ebbf8b72315\") " pod="openshift-dns/node-resolver-l59xt" Jan 27 18:43:03 crc kubenswrapper[4853]: I0127 18:43:03.320700 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/c088d143-dd9c-4c77-b9a3-3a0113306f41-serviceca\") pod \"node-ca-2hzcp\" (UID: \"c088d143-dd9c-4c77-b9a3-3a0113306f41\") " pod="openshift-image-registry/node-ca-2hzcp" Jan 27 18:43:03 crc kubenswrapper[4853]: I0127 18:43:03.320737 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/f9e82bc6-1fab-4815-a64e-2ebbf8b72315-hosts-file\") pod \"node-resolver-l59xt\" (UID: \"f9e82bc6-1fab-4815-a64e-2ebbf8b72315\") " pod="openshift-dns/node-resolver-l59xt" Jan 27 18:43:03 crc kubenswrapper[4853]: I0127 18:43:03.320782 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2zqtv\" (UniqueName: \"kubernetes.io/projected/c088d143-dd9c-4c77-b9a3-3a0113306f41-kube-api-access-2zqtv\") pod \"node-ca-2hzcp\" (UID: \"c088d143-dd9c-4c77-b9a3-3a0113306f41\") " pod="openshift-image-registry/node-ca-2hzcp" Jan 27 18:43:03 crc kubenswrapper[4853]: I0127 18:43:03.320803 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/c088d143-dd9c-4c77-b9a3-3a0113306f41-host\") pod \"node-ca-2hzcp\" (UID: \"c088d143-dd9c-4c77-b9a3-3a0113306f41\") " pod="openshift-image-registry/node-ca-2hzcp" Jan 27 18:43:03 crc kubenswrapper[4853]: I0127 18:43:03.321269 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-6gqj2"] Jan 27 18:43:03 crc kubenswrapper[4853]: I0127 18:43:03.321618 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-6gqj2" Jan 27 18:43:03 crc kubenswrapper[4853]: I0127 18:43:03.323058 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Jan 27 18:43:03 crc kubenswrapper[4853]: I0127 18:43:03.323580 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Jan 27 18:43:03 crc kubenswrapper[4853]: I0127 18:43:03.323872 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Jan 27 18:43:03 crc kubenswrapper[4853]: I0127 18:43:03.323972 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Jan 27 18:43:03 crc kubenswrapper[4853]: I0127 18:43:03.326960 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Jan 27 18:43:03 crc kubenswrapper[4853]: I0127 18:43:03.338739 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:03Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:03 crc kubenswrapper[4853]: I0127 18:43:03.351030 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-l59xt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f9e82bc6-1fab-4815-a64e-2ebbf8b72315\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:03Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:03Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnhhx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:43:03Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-l59xt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:03Z is 
after 2025-08-24T17:21:41Z"
Jan 27 18:43:03 crc kubenswrapper[4853]: I0127 18:43:03.366083 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eccf4b23-863a-490c-ba35-1b03d360e200\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7cfb91267c8977bea938d9584cf8047ae6f6b4bff6a9d74c623f8903cd3416ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58d32f27b39964fe7195a0e25264b28a01c6c29753912edac25d07eef4f37cc7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a941071ee01b24389cae64dfc6cd851cde6b352121235ebf9e9bec77f8bcfb69\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://213668bb8a0478fe7fba2a64a45dd4aec0dfd6794a29977633b4e0fd1bcbe352\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://213668bb8a0478fe7fba2a64a45dd4aec0dfd6794a29977633b4e0fd1bcbe352\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T18:42:58Z\\\",\\\"message\\\":\\\"le observer\\\\nW0127 18:42:57.781245 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 18:42:57.782660 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 18:42:57.784271 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3951302993/tls.crt::/tmp/serving-cert-3951302993/tls.key\\\\\\\"\\\\nI0127 18:42:58.344426 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 18:42:58.346603 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 18:42:58.346626 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 18:42:58.346650 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 18:42:58.346655 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 18:42:58.351830 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 18:42:58.351854 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 18:42:58.351860 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 18:42:58.351866 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 18:42:58.351870 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 18:42:58.351874 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nI0127 18:42:58.351873 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0127 18:42:58.351879 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0127 18:42:58.354790 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:52Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0058055ea5c0242bdb042c45a00203d344383e845de280c81bc7695416563666\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:40Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://00d05692e328aacfc251e3b583e1da6d1f4f677e93d07cec2aaa7aea5c3038cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://00d05692e328aacfc251e3b583e1da6d1f4f677e93d07cec2aaa7aea5c3038cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:42:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:42:38Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:03Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:03 crc kubenswrapper[4853]: I0127 18:43:03.384355 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a5b9c54a-360e-431b-b35a-eea549616487\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e57571b0b75d3ac6eaa28ddbc474ddf47f632c158372a965dfe4c30f54d8a2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2dcf34028a2a1a0d41f57ebced90e711bfa6d0f199a2b12bc2bcd6d0642d10b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://14e9331353060208631cfca86d748339935f8581d50a234f2864eaffdd98ff39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7d4896cb23e4a91356e7fabb5dfcf05e46bf7e
8c34b6bd73d63a16422d015ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a02f84ba42baba137145a54bfa742b12b0f94aaf41801347eb8a803a6acd1b2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15ab34ce94a12a65d8b187ce438f27e37d4b45e0b9ed080c79d6e4633e777658\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://15ab34ce94a12a65d8b187ce438f27e37d4b45e0b9ed080c79d6e4633e777658\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:42:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d972c76164f33d83bd6d211da2b4ecec08c5d88dd4cd474c5d5a8122649ea1cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d972c76164f33d83bd6d211da2b4ecec08c5d88dd4cd474c5d5a8122649ea1cd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:42:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:40Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b9435046e30c8cda89d2ab6f312406bc2cc6db0c03ae6cdea1eb5acf8a804df1\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9435046e30c8cda89d2ab6f312406bc2cc6db0c03ae6cdea1eb5acf8a804df1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:42:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:42:38Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:03Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:03 crc kubenswrapper[4853]: I0127 18:43:03.397383 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c77668e-2627-46e1-b7f5-a99455378d85\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6f38d96da9d500a022fd5b7fbe4019623d08fb9733ba3b1a5f2f107f1901a28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ae32d32392bded0188e0b83c05a33e57dc10be293ddee84b8b12416d0e64fc6\\\",\\\"image\\\":\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd606f868e97e5c0f90517a0de662da2512679e27b0ececbcaa02c9f7e79c4b6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b210c08a6d15dd6d40daabfcc4cb94985e2a3957ffa501b32d5331f3d20f8908\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:42:38Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:03Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:03 crc kubenswrapper[4853]: I0127 18:43:03.403052 4853 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 18:43:03 crc kubenswrapper[4853]: I0127 18:43:03.404757 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:03 crc kubenswrapper[4853]: I0127 18:43:03.404786 4853 kubelet_node_status.go:724] "Recording event message 
for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:03 crc kubenswrapper[4853]: I0127 18:43:03.404796 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:03 crc kubenswrapper[4853]: I0127 18:43:03.405343 4853 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 27 18:43:03 crc kubenswrapper[4853]: I0127 18:43:03.421310 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/c088d143-dd9c-4c77-b9a3-3a0113306f41-host\") pod \"node-ca-2hzcp\" (UID: \"c088d143-dd9c-4c77-b9a3-3a0113306f41\") " pod="openshift-image-registry/node-ca-2hzcp" Jan 27 18:43:03 crc kubenswrapper[4853]: I0127 18:43:03.421356 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/dd2c07de-2ac9-4074-9fb0-519cfaf37f69-system-cni-dir\") pod \"multus-w4d5n\" (UID: \"dd2c07de-2ac9-4074-9fb0-519cfaf37f69\") " pod="openshift-multus/multus-w4d5n" Jan 27 18:43:03 crc kubenswrapper[4853]: I0127 18:43:03.421386 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/dd2c07de-2ac9-4074-9fb0-519cfaf37f69-host-run-k8s-cni-cncf-io\") pod \"multus-w4d5n\" (UID: \"dd2c07de-2ac9-4074-9fb0-519cfaf37f69\") " pod="openshift-multus/multus-w4d5n" Jan 27 18:43:03 crc kubenswrapper[4853]: I0127 18:43:03.421419 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/b8a89b1e-bef8-4cb7-930c-480d3125778c-rootfs\") pod \"machine-config-daemon-6gqj2\" (UID: \"b8a89b1e-bef8-4cb7-930c-480d3125778c\") " pod="openshift-machine-config-operator/machine-config-daemon-6gqj2" Jan 27 18:43:03 crc kubenswrapper[4853]: I0127 18:43:03.421439 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/dd2c07de-2ac9-4074-9fb0-519cfaf37f69-host-run-netns\") pod \"multus-w4d5n\" (UID: \"dd2c07de-2ac9-4074-9fb0-519cfaf37f69\") " pod="openshift-multus/multus-w4d5n" Jan 27 18:43:03 crc kubenswrapper[4853]: I0127 18:43:03.421466 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/c088d143-dd9c-4c77-b9a3-3a0113306f41-host\") pod \"node-ca-2hzcp\" (UID: \"c088d143-dd9c-4c77-b9a3-3a0113306f41\") " pod="openshift-image-registry/node-ca-2hzcp" Jan 27 18:43:03 crc kubenswrapper[4853]: I0127 18:43:03.421524 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/dd2c07de-2ac9-4074-9fb0-519cfaf37f69-cni-binary-copy\") pod \"multus-w4d5n\" (UID: \"dd2c07de-2ac9-4074-9fb0-519cfaf37f69\") " pod="openshift-multus/multus-w4d5n" Jan 27 18:43:03 crc kubenswrapper[4853]: I0127 18:43:03.421684 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/dd2c07de-2ac9-4074-9fb0-519cfaf37f69-multus-conf-dir\") pod \"multus-w4d5n\" (UID: \"dd2c07de-2ac9-4074-9fb0-519cfaf37f69\") " pod="openshift-multus/multus-w4d5n" Jan 27 18:43:03 crc kubenswrapper[4853]: I0127 18:43:03.421787 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/b8a89b1e-bef8-4cb7-930c-480d3125778c-mcd-auth-proxy-config\") pod \"machine-config-daemon-6gqj2\" (UID: \"b8a89b1e-bef8-4cb7-930c-480d3125778c\") " pod="openshift-machine-config-operator/machine-config-daemon-6gqj2" Jan 27 18:43:03 crc kubenswrapper[4853]: I0127 18:43:03.421848 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2zqtv\" (UniqueName: \"kubernetes.io/projected/c088d143-dd9c-4c77-b9a3-3a0113306f41-kube-api-access-2zqtv\") pod \"node-ca-2hzcp\" (UID: \"c088d143-dd9c-4c77-b9a3-3a0113306f41\") " pod="openshift-image-registry/node-ca-2hzcp" Jan 27 18:43:03 crc kubenswrapper[4853]: I0127 18:43:03.421885 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/dd2c07de-2ac9-4074-9fb0-519cfaf37f69-os-release\") pod \"multus-w4d5n\" (UID: \"dd2c07de-2ac9-4074-9fb0-519cfaf37f69\") " pod="openshift-multus/multus-w4d5n" Jan 27 18:43:03 crc kubenswrapper[4853]: I0127 18:43:03.421904 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/dd2c07de-2ac9-4074-9fb0-519cfaf37f69-host-var-lib-cni-bin\") pod \"multus-w4d5n\" (UID: \"dd2c07de-2ac9-4074-9fb0-519cfaf37f69\") " pod="openshift-multus/multus-w4d5n" Jan 27 18:43:03 crc kubenswrapper[4853]: I0127 18:43:03.421921 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/dd2c07de-2ac9-4074-9fb0-519cfaf37f69-host-run-multus-certs\") pod \"multus-w4d5n\" (UID: \"dd2c07de-2ac9-4074-9fb0-519cfaf37f69\") " pod="openshift-multus/multus-w4d5n" Jan 27 18:43:03 crc kubenswrapper[4853]: I0127 18:43:03.421957 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/dd2c07de-2ac9-4074-9fb0-519cfaf37f69-multus-socket-dir-parent\") pod \"multus-w4d5n\" (UID: \"dd2c07de-2ac9-4074-9fb0-519cfaf37f69\") " pod="openshift-multus/multus-w4d5n" Jan 27 18:43:03 crc kubenswrapper[4853]: I0127 18:43:03.421974 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/dd2c07de-2ac9-4074-9fb0-519cfaf37f69-host-var-lib-cni-multus\") pod \"multus-w4d5n\" (UID: \"dd2c07de-2ac9-4074-9fb0-519cfaf37f69\") " pod="openshift-multus/multus-w4d5n" Jan 27 18:43:03 crc kubenswrapper[4853]: I0127 18:43:03.421996 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dnhhx\" (UniqueName: \"kubernetes.io/projected/f9e82bc6-1fab-4815-a64e-2ebbf8b72315-kube-api-access-dnhhx\") pod \"node-resolver-l59xt\" (UID: \"f9e82bc6-1fab-4815-a64e-2ebbf8b72315\") " pod="openshift-dns/node-resolver-l59xt" Jan 27 18:43:03 crc kubenswrapper[4853]: I0127 18:43:03.422013 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xbhn6\" (UniqueName: \"kubernetes.io/projected/b8a89b1e-bef8-4cb7-930c-480d3125778c-kube-api-access-xbhn6\") pod \"machine-config-daemon-6gqj2\" (UID: \"b8a89b1e-bef8-4cb7-930c-480d3125778c\") " pod="openshift-machine-config-operator/machine-config-daemon-6gqj2" Jan 27 18:43:03 crc kubenswrapper[4853]: I0127 18:43:03.422029 
4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/dd2c07de-2ac9-4074-9fb0-519cfaf37f69-host-var-lib-kubelet\") pod \"multus-w4d5n\" (UID: \"dd2c07de-2ac9-4074-9fb0-519cfaf37f69\") " pod="openshift-multus/multus-w4d5n" Jan 27 18:43:03 crc kubenswrapper[4853]: I0127 18:43:03.422057 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/c088d143-dd9c-4c77-b9a3-3a0113306f41-serviceca\") pod \"node-ca-2hzcp\" (UID: \"c088d143-dd9c-4c77-b9a3-3a0113306f41\") " pod="openshift-image-registry/node-ca-2hzcp" Jan 27 18:43:03 crc kubenswrapper[4853]: I0127 18:43:03.422077 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/b8a89b1e-bef8-4cb7-930c-480d3125778c-proxy-tls\") pod \"machine-config-daemon-6gqj2\" (UID: \"b8a89b1e-bef8-4cb7-930c-480d3125778c\") " pod="openshift-machine-config-operator/machine-config-daemon-6gqj2" Jan 27 18:43:03 crc kubenswrapper[4853]: I0127 18:43:03.422098 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/f9e82bc6-1fab-4815-a64e-2ebbf8b72315-hosts-file\") pod \"node-resolver-l59xt\" (UID: \"f9e82bc6-1fab-4815-a64e-2ebbf8b72315\") " pod="openshift-dns/node-resolver-l59xt" Jan 27 18:43:03 crc kubenswrapper[4853]: I0127 18:43:03.422113 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/dd2c07de-2ac9-4074-9fb0-519cfaf37f69-hostroot\") pod \"multus-w4d5n\" (UID: \"dd2c07de-2ac9-4074-9fb0-519cfaf37f69\") " pod="openshift-multus/multus-w4d5n" Jan 27 18:43:03 crc kubenswrapper[4853]: I0127 18:43:03.422157 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/dd2c07de-2ac9-4074-9fb0-519cfaf37f69-multus-daemon-config\") pod \"multus-w4d5n\" (UID: \"dd2c07de-2ac9-4074-9fb0-519cfaf37f69\") " pod="openshift-multus/multus-w4d5n" Jan 27 18:43:03 crc kubenswrapper[4853]: I0127 18:43:03.422177 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/dd2c07de-2ac9-4074-9fb0-519cfaf37f69-multus-cni-dir\") pod \"multus-w4d5n\" (UID: \"dd2c07de-2ac9-4074-9fb0-519cfaf37f69\") " pod="openshift-multus/multus-w4d5n" Jan 27 18:43:03 crc kubenswrapper[4853]: I0127 18:43:03.422193 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8jkvb\" (UniqueName: \"kubernetes.io/projected/dd2c07de-2ac9-4074-9fb0-519cfaf37f69-kube-api-access-8jkvb\") pod \"multus-w4d5n\" (UID: \"dd2c07de-2ac9-4074-9fb0-519cfaf37f69\") " pod="openshift-multus/multus-w4d5n" Jan 27 18:43:03 crc kubenswrapper[4853]: I0127 18:43:03.422212 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/dd2c07de-2ac9-4074-9fb0-519cfaf37f69-cnibin\") pod \"multus-w4d5n\" (UID: \"dd2c07de-2ac9-4074-9fb0-519cfaf37f69\") " pod="openshift-multus/multus-w4d5n" Jan 27 18:43:03 crc kubenswrapper[4853]: I0127 18:43:03.422229 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/dd2c07de-2ac9-4074-9fb0-519cfaf37f69-etc-kubernetes\") pod \"multus-w4d5n\" (UID: \"dd2c07de-2ac9-4074-9fb0-519cfaf37f69\") " pod="openshift-multus/multus-w4d5n" Jan 27 18:43:03 crc kubenswrapper[4853]: I0127 18:43:03.422235 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/f9e82bc6-1fab-4815-a64e-2ebbf8b72315-hosts-file\") pod \"node-resolver-l59xt\" (UID: \"f9e82bc6-1fab-4815-a64e-2ebbf8b72315\") " pod="openshift-dns/node-resolver-l59xt" Jan 27 18:43:03 crc kubenswrapper[4853]: I0127 18:43:03.423369 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/c088d143-dd9c-4c77-b9a3-3a0113306f41-serviceca\") pod \"node-ca-2hzcp\" (UID: \"c088d143-dd9c-4c77-b9a3-3a0113306f41\") " pod="openshift-image-registry/node-ca-2hzcp" Jan 27 18:43:03 crc kubenswrapper[4853]: I0127 18:43:03.424839 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c77668e-2627-46e1-b7f5-a99455378d85\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6f38d96da9d500a022fd5b7fbe4019623d08fb9733ba3b1a5f2f107f1901a28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ae32d32392bded0188e0b83c05a33e57dc10be293ddee84b8b12416d0e64fc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\"
:\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd606f868e97e5c0f90517a0de662da2512679e27b0ececbcaa02c9f7e79c4b6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b210c08a6d15dd6d40daabfcc4cb94985e2a3957ffa501b32d5331f3d20f8908\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:42:38Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:03Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:03 crc kubenswrapper[4853]: I0127 18:43:03.444317 4853 kubelet_node_status.go:115] "Node was previously registered" node="crc" Jan 27 18:43:03 crc kubenswrapper[4853]: I0127 18:43:03.444641 4853 kubelet_node_status.go:79] "Successfully registered node" node="crc" Jan 27 18:43:03 crc kubenswrapper[4853]: I0127 18:43:03.445667 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:03 crc kubenswrapper[4853]: I0127 18:43:03.445705 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:03 crc kubenswrapper[4853]: I0127 18:43:03.445719 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:03 crc kubenswrapper[4853]: I0127 18:43:03.445735 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:03 crc kubenswrapper[4853]: I0127 18:43:03.445747 4853 
setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:03Z","lastTransitionTime":"2026-01-27T18:43:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:43:03 crc kubenswrapper[4853]: I0127 18:43:03.460819 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2zqtv\" (UniqueName: \"kubernetes.io/projected/c088d143-dd9c-4c77-b9a3-3a0113306f41-kube-api-access-2zqtv\") pod \"node-ca-2hzcp\" (UID: \"c088d143-dd9c-4c77-b9a3-3a0113306f41\") " pod="openshift-image-registry/node-ca-2hzcp" Jan 27 18:43:03 crc kubenswrapper[4853]: I0127 18:43:03.461870 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:03Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:03 crc kubenswrapper[4853]: I0127 18:43:03.463762 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dnhhx\" (UniqueName: \"kubernetes.io/projected/f9e82bc6-1fab-4815-a64e-2ebbf8b72315-kube-api-access-dnhhx\") pod \"node-resolver-l59xt\" (UID: \"f9e82bc6-1fab-4815-a64e-2ebbf8b72315\") " pod="openshift-dns/node-resolver-l59xt" Jan 27 18:43:03 crc kubenswrapper[4853]: I0127 18:43:03.508438 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:03Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:03 crc kubenswrapper[4853]: I0127 18:43:03.523130 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/dd2c07de-2ac9-4074-9fb0-519cfaf37f69-multus-conf-dir\") pod \"multus-w4d5n\" (UID: \"dd2c07de-2ac9-4074-9fb0-519cfaf37f69\") " pod="openshift-multus/multus-w4d5n" Jan 27 18:43:03 crc kubenswrapper[4853]: I0127 18:43:03.523387 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/b8a89b1e-bef8-4cb7-930c-480d3125778c-mcd-auth-proxy-config\") pod \"machine-config-daemon-6gqj2\" (UID: \"b8a89b1e-bef8-4cb7-930c-480d3125778c\") " pod="openshift-machine-config-operator/machine-config-daemon-6gqj2" Jan 27 18:43:03 crc kubenswrapper[4853]: I0127 18:43:03.523489 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/dd2c07de-2ac9-4074-9fb0-519cfaf37f69-os-release\") pod \"multus-w4d5n\" (UID: \"dd2c07de-2ac9-4074-9fb0-519cfaf37f69\") " pod="openshift-multus/multus-w4d5n" Jan 27 18:43:03 crc kubenswrapper[4853]: I0127 18:43:03.523605 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/dd2c07de-2ac9-4074-9fb0-519cfaf37f69-host-var-lib-cni-bin\") pod \"multus-w4d5n\" (UID: \"dd2c07de-2ac9-4074-9fb0-519cfaf37f69\") " pod="openshift-multus/multus-w4d5n" Jan 27 18:43:03 crc kubenswrapper[4853]: I0127 18:43:03.523691 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/dd2c07de-2ac9-4074-9fb0-519cfaf37f69-host-var-lib-cni-bin\") pod \"multus-w4d5n\" (UID: \"dd2c07de-2ac9-4074-9fb0-519cfaf37f69\") " pod="openshift-multus/multus-w4d5n" Jan 27 18:43:03 crc kubenswrapper[4853]: I0127 18:43:03.523652 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/dd2c07de-2ac9-4074-9fb0-519cfaf37f69-os-release\") pod \"multus-w4d5n\" (UID: \"dd2c07de-2ac9-4074-9fb0-519cfaf37f69\") " pod="openshift-multus/multus-w4d5n" Jan 27 18:43:03 crc kubenswrapper[4853]: I0127 18:43:03.523271 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: 
\"kubernetes.io/host-path/dd2c07de-2ac9-4074-9fb0-519cfaf37f69-multus-conf-dir\") pod \"multus-w4d5n\" (UID: \"dd2c07de-2ac9-4074-9fb0-519cfaf37f69\") " pod="openshift-multus/multus-w4d5n" Jan 27 18:43:03 crc kubenswrapper[4853]: I0127 18:43:03.523851 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/dd2c07de-2ac9-4074-9fb0-519cfaf37f69-host-run-multus-certs\") pod \"multus-w4d5n\" (UID: \"dd2c07de-2ac9-4074-9fb0-519cfaf37f69\") " pod="openshift-multus/multus-w4d5n" Jan 27 18:43:03 crc kubenswrapper[4853]: I0127 18:43:03.523706 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/dd2c07de-2ac9-4074-9fb0-519cfaf37f69-host-run-multus-certs\") pod \"multus-w4d5n\" (UID: \"dd2c07de-2ac9-4074-9fb0-519cfaf37f69\") " pod="openshift-multus/multus-w4d5n" Jan 27 18:43:03 crc kubenswrapper[4853]: I0127 18:43:03.524048 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/dd2c07de-2ac9-4074-9fb0-519cfaf37f69-multus-socket-dir-parent\") pod \"multus-w4d5n\" (UID: \"dd2c07de-2ac9-4074-9fb0-519cfaf37f69\") " pod="openshift-multus/multus-w4d5n" Jan 27 18:43:03 crc kubenswrapper[4853]: I0127 18:43:03.524255 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/dd2c07de-2ac9-4074-9fb0-519cfaf37f69-host-var-lib-cni-multus\") pod \"multus-w4d5n\" (UID: \"dd2c07de-2ac9-4074-9fb0-519cfaf37f69\") " pod="openshift-multus/multus-w4d5n" Jan 27 18:43:03 crc kubenswrapper[4853]: I0127 18:43:03.524338 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/dd2c07de-2ac9-4074-9fb0-519cfaf37f69-host-var-lib-cni-multus\") pod \"multus-w4d5n\" (UID: \"dd2c07de-2ac9-4074-9fb0-519cfaf37f69\") " pod="openshift-multus/multus-w4d5n" Jan 27 18:43:03 crc kubenswrapper[4853]: I0127 18:43:03.524058 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/b8a89b1e-bef8-4cb7-930c-480d3125778c-mcd-auth-proxy-config\") pod \"machine-config-daemon-6gqj2\" (UID: \"b8a89b1e-bef8-4cb7-930c-480d3125778c\") " pod="openshift-machine-config-operator/machine-config-daemon-6gqj2" Jan 27 18:43:03 crc kubenswrapper[4853]: I0127 18:43:03.524100 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/dd2c07de-2ac9-4074-9fb0-519cfaf37f69-multus-socket-dir-parent\") pod \"multus-w4d5n\" (UID: \"dd2c07de-2ac9-4074-9fb0-519cfaf37f69\") " pod="openshift-multus/multus-w4d5n" Jan 27 18:43:03 crc kubenswrapper[4853]: I0127 18:43:03.524534 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xbhn6\" (UniqueName: \"kubernetes.io/projected/b8a89b1e-bef8-4cb7-930c-480d3125778c-kube-api-access-xbhn6\") pod \"machine-config-daemon-6gqj2\" (UID: \"b8a89b1e-bef8-4cb7-930c-480d3125778c\") " pod="openshift-machine-config-operator/machine-config-daemon-6gqj2" Jan 27 18:43:03 crc kubenswrapper[4853]: I0127 18:43:03.524638 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: 
\"kubernetes.io/host-path/dd2c07de-2ac9-4074-9fb0-519cfaf37f69-host-var-lib-kubelet\") pod \"multus-w4d5n\" (UID: \"dd2c07de-2ac9-4074-9fb0-519cfaf37f69\") " pod="openshift-multus/multus-w4d5n" Jan 27 18:43:03 crc kubenswrapper[4853]: I0127 18:43:03.524778 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/b8a89b1e-bef8-4cb7-930c-480d3125778c-proxy-tls\") pod \"machine-config-daemon-6gqj2\" (UID: \"b8a89b1e-bef8-4cb7-930c-480d3125778c\") " pod="openshift-machine-config-operator/machine-config-daemon-6gqj2" Jan 27 18:43:03 crc kubenswrapper[4853]: I0127 18:43:03.524886 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/dd2c07de-2ac9-4074-9fb0-519cfaf37f69-hostroot\") pod \"multus-w4d5n\" (UID: \"dd2c07de-2ac9-4074-9fb0-519cfaf37f69\") " pod="openshift-multus/multus-w4d5n" Jan 27 18:43:03 crc kubenswrapper[4853]: I0127 18:43:03.524968 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/dd2c07de-2ac9-4074-9fb0-519cfaf37f69-hostroot\") pod \"multus-w4d5n\" (UID: \"dd2c07de-2ac9-4074-9fb0-519cfaf37f69\") " pod="openshift-multus/multus-w4d5n" Jan 27 18:43:03 crc kubenswrapper[4853]: I0127 18:43:03.524692 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/dd2c07de-2ac9-4074-9fb0-519cfaf37f69-host-var-lib-kubelet\") pod \"multus-w4d5n\" (UID: \"dd2c07de-2ac9-4074-9fb0-519cfaf37f69\") " pod="openshift-multus/multus-w4d5n" Jan 27 18:43:03 crc kubenswrapper[4853]: I0127 18:43:03.525094 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/dd2c07de-2ac9-4074-9fb0-519cfaf37f69-multus-daemon-config\") pod \"multus-w4d5n\" (UID: \"dd2c07de-2ac9-4074-9fb0-519cfaf37f69\") " pod="openshift-multus/multus-w4d5n" Jan 27 18:43:03 crc kubenswrapper[4853]: I0127 18:43:03.525211 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/dd2c07de-2ac9-4074-9fb0-519cfaf37f69-multus-cni-dir\") pod \"multus-w4d5n\" (UID: \"dd2c07de-2ac9-4074-9fb0-519cfaf37f69\") " pod="openshift-multus/multus-w4d5n" Jan 27 18:43:03 crc kubenswrapper[4853]: I0127 18:43:03.525329 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8jkvb\" (UniqueName: \"kubernetes.io/projected/dd2c07de-2ac9-4074-9fb0-519cfaf37f69-kube-api-access-8jkvb\") pod \"multus-w4d5n\" (UID: \"dd2c07de-2ac9-4074-9fb0-519cfaf37f69\") " pod="openshift-multus/multus-w4d5n" Jan 27 18:43:03 crc kubenswrapper[4853]: I0127 18:43:03.525443 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/dd2c07de-2ac9-4074-9fb0-519cfaf37f69-cnibin\") pod \"multus-w4d5n\" (UID: \"dd2c07de-2ac9-4074-9fb0-519cfaf37f69\") " pod="openshift-multus/multus-w4d5n" Jan 27 18:43:03 crc kubenswrapper[4853]: I0127 18:43:03.525497 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/dd2c07de-2ac9-4074-9fb0-519cfaf37f69-multus-cni-dir\") pod \"multus-w4d5n\" (UID: \"dd2c07de-2ac9-4074-9fb0-519cfaf37f69\") " pod="openshift-multus/multus-w4d5n" Jan 27 18:43:03 crc kubenswrapper[4853]: I0127 18:43:03.525548 4853 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/dd2c07de-2ac9-4074-9fb0-519cfaf37f69-multus-daemon-config\") pod \"multus-w4d5n\" (UID: \"dd2c07de-2ac9-4074-9fb0-519cfaf37f69\") " pod="openshift-multus/multus-w4d5n" Jan 27 18:43:03 crc kubenswrapper[4853]: I0127 18:43:03.525554 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/dd2c07de-2ac9-4074-9fb0-519cfaf37f69-cnibin\") pod \"multus-w4d5n\" (UID: \"dd2c07de-2ac9-4074-9fb0-519cfaf37f69\") " pod="openshift-multus/multus-w4d5n" Jan 27 18:43:03 crc kubenswrapper[4853]: I0127 18:43:03.525678 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/dd2c07de-2ac9-4074-9fb0-519cfaf37f69-etc-kubernetes\") pod \"multus-w4d5n\" (UID: \"dd2c07de-2ac9-4074-9fb0-519cfaf37f69\") " pod="openshift-multus/multus-w4d5n" Jan 27 18:43:03 crc kubenswrapper[4853]: I0127 18:43:03.525534 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/dd2c07de-2ac9-4074-9fb0-519cfaf37f69-etc-kubernetes\") pod \"multus-w4d5n\" (UID: \"dd2c07de-2ac9-4074-9fb0-519cfaf37f69\") " pod="openshift-multus/multus-w4d5n" Jan 27 18:43:03 crc kubenswrapper[4853]: I0127 18:43:03.525879 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/dd2c07de-2ac9-4074-9fb0-519cfaf37f69-system-cni-dir\") pod \"multus-w4d5n\" (UID: \"dd2c07de-2ac9-4074-9fb0-519cfaf37f69\") " pod="openshift-multus/multus-w4d5n" Jan 27 18:43:03 crc kubenswrapper[4853]: I0127 18:43:03.525980 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/dd2c07de-2ac9-4074-9fb0-519cfaf37f69-host-run-k8s-cni-cncf-io\") pod \"multus-w4d5n\" (UID: \"dd2c07de-2ac9-4074-9fb0-519cfaf37f69\") " pod="openshift-multus/multus-w4d5n" Jan 27 18:43:03 crc kubenswrapper[4853]: I0127 18:43:03.526051 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/dd2c07de-2ac9-4074-9fb0-519cfaf37f69-host-run-k8s-cni-cncf-io\") pod \"multus-w4d5n\" (UID: \"dd2c07de-2ac9-4074-9fb0-519cfaf37f69\") " pod="openshift-multus/multus-w4d5n" Jan 27 18:43:03 crc kubenswrapper[4853]: I0127 18:43:03.526013 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/dd2c07de-2ac9-4074-9fb0-519cfaf37f69-system-cni-dir\") pod \"multus-w4d5n\" (UID: \"dd2c07de-2ac9-4074-9fb0-519cfaf37f69\") " pod="openshift-multus/multus-w4d5n" Jan 27 18:43:03 crc kubenswrapper[4853]: I0127 18:43:03.526226 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/b8a89b1e-bef8-4cb7-930c-480d3125778c-rootfs\") pod \"machine-config-daemon-6gqj2\" (UID: \"b8a89b1e-bef8-4cb7-930c-480d3125778c\") " pod="openshift-machine-config-operator/machine-config-daemon-6gqj2" Jan 27 18:43:03 crc kubenswrapper[4853]: I0127 18:43:03.526311 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/b8a89b1e-bef8-4cb7-930c-480d3125778c-rootfs\") pod \"machine-config-daemon-6gqj2\" (UID: \"b8a89b1e-bef8-4cb7-930c-480d3125778c\") " 
pod="openshift-machine-config-operator/machine-config-daemon-6gqj2" Jan 27 18:43:03 crc kubenswrapper[4853]: I0127 18:43:03.526411 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/dd2c07de-2ac9-4074-9fb0-519cfaf37f69-host-run-netns\") pod \"multus-w4d5n\" (UID: \"dd2c07de-2ac9-4074-9fb0-519cfaf37f69\") " pod="openshift-multus/multus-w4d5n" Jan 27 18:43:03 crc kubenswrapper[4853]: I0127 18:43:03.526420 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/dd2c07de-2ac9-4074-9fb0-519cfaf37f69-host-run-netns\") pod \"multus-w4d5n\" (UID: \"dd2c07de-2ac9-4074-9fb0-519cfaf37f69\") " pod="openshift-multus/multus-w4d5n" Jan 27 18:43:03 crc kubenswrapper[4853]: I0127 18:43:03.526572 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/dd2c07de-2ac9-4074-9fb0-519cfaf37f69-cni-binary-copy\") pod \"multus-w4d5n\" (UID: \"dd2c07de-2ac9-4074-9fb0-519cfaf37f69\") " pod="openshift-multus/multus-w4d5n" Jan 27 18:43:03 crc kubenswrapper[4853]: I0127 18:43:03.527016 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/dd2c07de-2ac9-4074-9fb0-519cfaf37f69-cni-binary-copy\") pod \"multus-w4d5n\" (UID: \"dd2c07de-2ac9-4074-9fb0-519cfaf37f69\") " pod="openshift-multus/multus-w4d5n" Jan 27 18:43:03 crc kubenswrapper[4853]: E0127 18:43:03.527138 4853 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:43:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:43:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:03Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:43:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:43:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:03Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"10ff71d2-3e1e-470a-b646-0c487d7259d5\\\",\\\"systemUUID\\\":\\\"f3eb2985-2316-42c4-9a73-507610f5aaf9\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:03Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:03 crc kubenswrapper[4853]: I0127 18:43:03.529620 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/b8a89b1e-bef8-4cb7-930c-480d3125778c-proxy-tls\") pod \"machine-config-daemon-6gqj2\" (UID: \"b8a89b1e-bef8-4cb7-930c-480d3125778c\") " 
pod="openshift-machine-config-operator/machine-config-daemon-6gqj2" Jan 27 18:43:03 crc kubenswrapper[4853]: I0127 18:43:03.534248 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:03Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:03 crc kubenswrapper[4853]: I0127 18:43:03.535101 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:03 crc kubenswrapper[4853]: I0127 18:43:03.535271 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:03 crc kubenswrapper[4853]: I0127 18:43:03.535353 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:03 crc kubenswrapper[4853]: I0127 18:43:03.535437 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:03 crc kubenswrapper[4853]: I0127 18:43:03.535502 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:03Z","lastTransitionTime":"2026-01-27T18:43:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin 
returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:43:03 crc kubenswrapper[4853]: I0127 18:43:03.537640 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-l59xt" Jan 27 18:43:03 crc kubenswrapper[4853]: I0127 18:43:03.543246 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8jkvb\" (UniqueName: \"kubernetes.io/projected/dd2c07de-2ac9-4074-9fb0-519cfaf37f69-kube-api-access-8jkvb\") pod \"multus-w4d5n\" (UID: \"dd2c07de-2ac9-4074-9fb0-519cfaf37f69\") " pod="openshift-multus/multus-w4d5n" Jan 27 18:43:03 crc kubenswrapper[4853]: I0127 18:43:03.545065 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xbhn6\" (UniqueName: \"kubernetes.io/projected/b8a89b1e-bef8-4cb7-930c-480d3125778c-kube-api-access-xbhn6\") pod \"machine-config-daemon-6gqj2\" (UID: \"b8a89b1e-bef8-4cb7-930c-480d3125778c\") " pod="openshift-machine-config-operator/machine-config-daemon-6gqj2" Jan 27 18:43:03 crc kubenswrapper[4853]: I0127 18:43:03.546795 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-2hzcp" Jan 27 18:43:03 crc kubenswrapper[4853]: E0127 18:43:03.550559 4853 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:43:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:43:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:03Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:43:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:43:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:03Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"10ff71d2-3e1e-470a-b646-0c487d7259d5\\\",\\\"systemUUID\\\":\\\"f3eb2985-2316-42c4-9a73-507610f5aaf9\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:03Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:03 crc kubenswrapper[4853]: I0127 18:43:03.567729 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:03 crc kubenswrapper[4853]: I0127 18:43:03.567724 4853 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d7d5a0e2cb909a09c6325d43fed80f9ac231f17c4904fda6da22031d974a50b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7fba84b5abd317487efa9758191c6d4e0e878809711ccecdd2f98c2c7659c890\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:03Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:03 crc kubenswrapper[4853]: I0127 18:43:03.567765 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:03 crc kubenswrapper[4853]: I0127 18:43:03.567776 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:03 crc kubenswrapper[4853]: 
I0127 18:43:03.567792 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:03 crc kubenswrapper[4853]: I0127 18:43:03.567804 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:03Z","lastTransitionTime":"2026-01-27T18:43:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:43:03 crc kubenswrapper[4853]: E0127 18:43:03.581244 4853 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:43:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:43:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:03Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:43:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:43:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:03Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"10ff71d2-3e1e-470a-b646-0c487d7259d5\\\",\\\"systemUUID\\\":\\\"f3eb2985-2316-42c4-9a73-507610f5aaf9\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:03Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:03 crc kubenswrapper[4853]: I0127 18:43:03.583276 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-w4d5n" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dd2c07de-2ac9-4074-9fb0-519cfaf37f69\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:03Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:03Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8jkvb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:43:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-w4d5n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": 
failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:03Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:03 crc kubenswrapper[4853]: I0127 18:43:03.585967 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:03 crc kubenswrapper[4853]: I0127 18:43:03.585994 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:03 crc kubenswrapper[4853]: I0127 18:43:03.586002 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:03 crc kubenswrapper[4853]: I0127 18:43:03.586014 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:03 crc kubenswrapper[4853]: I0127 18:43:03.586023 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:03Z","lastTransitionTime":"2026-01-27T18:43:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:43:03 crc kubenswrapper[4853]: I0127 18:43:03.595416 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eccf4b23-863a-490c-ba35-1b03d360e200\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7cfb91267c8977bea938d9584cf8047ae6f6b4bff6a9d74c623f8903cd3416ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58d32f27b39964fe7195a0e25264b28a01c6c29753912edac25d07eef4f37cc7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a941071ee01b24389cae64dfc6cd851cde6b352121235ebf9e9bec77f8bcfb69\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://213668bb8a0478fe7fba2a64a45dd4aec0dfd6794a29977633b4e0fd1bcbe352\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://213668bb8a0478fe7fba2a64a45dd4aec0dfd6794a29977633b4e0fd1bcbe352\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T18:42:58Z\\\",\\\"message\\\":\\\"le observer\\\\nW0127 18:42:57.781245 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 18:42:57.782660 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 18:42:57.784271 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3951302993/tls.crt::/tmp/serving-cert-3951302993/tls.key\\\\\\\"\\\\nI0127 18:42:58.344426 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 18:42:58.346603 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 18:42:58.346626 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 18:42:58.346650 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 18:42:58.346655 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 18:42:58.351830 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 18:42:58.351854 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 18:42:58.351860 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 18:42:58.351866 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 18:42:58.351870 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 18:42:58.351874 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nI0127 18:42:58.351873 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0127 18:42:58.351879 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0127 18:42:58.354790 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:52Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0058055ea5c0242bdb042c45a00203d344383e845de280c81bc7695416563666\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:40Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://00d05692e328aacfc251e3b583e1da6d1f4f677e93d07cec2aaa7aea5c3038cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://00d05692e328aacfc251e3b583e1da6d1f4f677e93d07cec2aaa7aea5c3038cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:42:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:42:38Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:03Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:03 crc kubenswrapper[4853]: E0127 18:43:03.598442 4853 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:43:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:43:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:03Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:43:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:43:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:03Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\
"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":45063
7738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"10ff71d2-3e1e-470a-b646-0c487d7259d5\\\",\\\"systemUUID\\\":\\\"f3eb2985-2316-42c4-9a73-507610f5aaf9\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:03Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:03 crc kubenswrapper[4853]: I0127 18:43:03.601826 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:03 crc kubenswrapper[4853]: I0127 18:43:03.601866 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:03 crc kubenswrapper[4853]: I0127 18:43:03.601879 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:03 crc kubenswrapper[4853]: I0127 18:43:03.601900 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:03 crc kubenswrapper[4853]: I0127 18:43:03.601914 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:03Z","lastTransitionTime":"2026-01-27T18:43:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:03 crc kubenswrapper[4853]: E0127 18:43:03.618599 4853 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:43:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:43:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:03Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:43:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:43:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:03Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"10ff71d2-3e1e-470a-b646-0c487d7259d5\\\",\\\"systemUUID\\\":\\\"f3eb2985-2316-42c4-9a73-507610f5aaf9\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:03Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:03 crc kubenswrapper[4853]: E0127 18:43:03.618764 4853 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 27 18:43:03 crc kubenswrapper[4853]: I0127 18:43:03.620513 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 27 18:43:03 crc kubenswrapper[4853]: I0127 18:43:03.620535 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:03 crc kubenswrapper[4853]: I0127 18:43:03.620543 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:03 crc kubenswrapper[4853]: I0127 18:43:03.620555 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:03 crc kubenswrapper[4853]: I0127 18:43:03.620565 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:03Z","lastTransitionTime":"2026-01-27T18:43:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:43:03 crc kubenswrapper[4853]: I0127 18:43:03.623175 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a5b9c54a-360e-431b-b35a-eea549616487\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e57571b0b75d3ac6eaa28ddbc474ddf47f632c158372a965dfe4c30f54d8a2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2dcf34028a2a1a0d41f57ebced90e711bfa6d0f199a2b12bc2bcd6d0642d10b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"sta
rted\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://14e9331353060208631cfca86d748339935f8581d50a234f2864eaffdd98ff39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7d4896cb23e4a91356e7fabb5dfcf05e46bf7e8c34b6bd73d63a16422d015ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a02f84ba42baba137145a54bfa742b12b0f94aaf41801347eb8a803a6acd1b2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15ab34ce94a12a65d8b187ce438f27e37d4b45e0b9ed080c79d6e4633e777658\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":
\\\"cri-o://15ab34ce94a12a65d8b187ce438f27e37d4b45e0b9ed080c79d6e4633e777658\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:42:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d972c76164f33d83bd6d211da2b4ecec08c5d88dd4cd474c5d5a8122649ea1cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d972c76164f33d83bd6d211da2b4ecec08c5d88dd4cd474c5d5a8122649ea1cd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:42:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:40Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b9435046e30c8cda89d2ab6f312406bc2cc6db0c03ae6cdea1eb5acf8a804df1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9435046e30c8cda89d2ab6f312406bc2cc6db0c03ae6cdea1eb5acf8a804df1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:42:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:42:38Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:03Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:03 crc kubenswrapper[4853]: I0127 18:43:03.627723 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-w4d5n" Jan 27 18:43:03 crc kubenswrapper[4853]: I0127 18:43:03.635668 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-6gqj2" Jan 27 18:43:03 crc kubenswrapper[4853]: I0127 18:43:03.641636 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-l59xt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f9e82bc6-1fab-4815-a64e-2ebbf8b72315\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:03Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:03Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnhhx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:43:03Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-l59xt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:03Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:03 crc kubenswrapper[4853]: W0127 18:43:03.656382 4853 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb8a89b1e_bef8_4cb7_930c_480d3125778c.slice/crio-182c97e3bc463b6809e3818b4184d23c6759869ec7c4315b4db755d8a1b423f7 WatchSource:0}: Error finding container 182c97e3bc463b6809e3818b4184d23c6759869ec7c4315b4db755d8a1b423f7: Status 404 returned error can't find the container with id 182c97e3bc463b6809e3818b4184d23c6759869ec7c4315b4db755d8a1b423f7 Jan 27 18:43:03 crc kubenswrapper[4853]: I0127 18:43:03.657217 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f980f1d965582a58bd18d993850b5bd5f21849c1d1c0275c81fdbb25d549e6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:03Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:03 crc kubenswrapper[4853]: I0127 18:43:03.669891 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5a0b2fe5dbcc81055c001720c8ab69b7bd1989fe60d9ef4063c96c53a3f4536\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:03Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:03 crc kubenswrapper[4853]: I0127 18:43:03.682792 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-2hzcp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c088d143-dd9c-4c77-b9a3-3a0113306f41\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:03Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2zqtv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:43:03Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-2hzcp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:03Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:03 crc kubenswrapper[4853]: I0127 18:43:03.699748 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-6gqj2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b8a89b1e-bef8-4cb7-930c-480d3125778c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:03Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:03Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xbhn6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xbhn6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:43:03Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-6gqj2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:03Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:03 crc kubenswrapper[4853]: I0127 18:43:03.718745 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-ght98"] Jan 27 18:43:03 crc kubenswrapper[4853]: I0127 18:43:03.719506 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-ght98" Jan 27 18:43:03 crc kubenswrapper[4853]: I0127 18:43:03.721664 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Jan 27 18:43:03 crc kubenswrapper[4853]: I0127 18:43:03.722540 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Jan 27 18:43:03 crc kubenswrapper[4853]: I0127 18:43:03.726448 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:03 crc kubenswrapper[4853]: I0127 18:43:03.726491 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:03 crc kubenswrapper[4853]: I0127 18:43:03.726503 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:03 crc kubenswrapper[4853]: I0127 18:43:03.726520 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:03 crc kubenswrapper[4853]: I0127 18:43:03.726532 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:03Z","lastTransitionTime":"2026-01-27T18:43:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:43:03 crc kubenswrapper[4853]: I0127 18:43:03.735716 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f980f1d965582a58bd18d993850b5bd5f21849c1d1c0275c81fdbb25d549e6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:03Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:03 crc kubenswrapper[4853]: I0127 18:43:03.751964 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5a0b2fe5dbcc81055c001720c8ab69b7bd1989fe60d9ef4063c96c53a3f4536\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:03Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:03 crc kubenswrapper[4853]: I0127 18:43:03.763996 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-2hzcp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c088d143-dd9c-4c77-b9a3-3a0113306f41\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:03Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2zqtv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:43:03Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-2hzcp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:03Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:03 crc kubenswrapper[4853]: I0127 18:43:03.776379 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-6gqj2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b8a89b1e-bef8-4cb7-930c-480d3125778c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:03Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:03Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xbhn6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xbhn6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:43:03Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-6gqj2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:03Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:03 crc kubenswrapper[4853]: I0127 18:43:03.793683 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c77668e-2627-46e1-b7f5-a99455378d85\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6f38d96da9d500a022fd5b7fbe4019623d08fb9733ba3b1a5f2f107f1901a28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ae32d32392bded0188e0b83c05a33e57dc10be293ddee84b8b12416d0e64fc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd606f868e97e5c0f90517a0de662da2512679e27b0ececbcaa02c9f7e79c4b6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b210c08a6d15dd6d40daabfcc4cb94985e2a3957ffa501b32d5331f3d20f8908\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:42:38Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:03Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:03 crc kubenswrapper[4853]: I0127 18:43:03.806703 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:03Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:03 crc kubenswrapper[4853]: I0127 18:43:03.823155 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:03Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:03 crc kubenswrapper[4853]: I0127 18:43:03.829326 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:03 crc kubenswrapper[4853]: I0127 18:43:03.829367 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:03 crc kubenswrapper[4853]: I0127 18:43:03.829379 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:03 crc kubenswrapper[4853]: I0127 18:43:03.829394 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:03 crc kubenswrapper[4853]: I0127 18:43:03.829405 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:03Z","lastTransitionTime":"2026-01-27T18:43:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:03 crc kubenswrapper[4853]: I0127 18:43:03.829895 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ttzfq\" (UniqueName: \"kubernetes.io/projected/ce95829c-f3fb-493c-bf9a-a3515fe6ddac-kube-api-access-ttzfq\") pod \"multus-additional-cni-plugins-ght98\" (UID: \"ce95829c-f3fb-493c-bf9a-a3515fe6ddac\") " pod="openshift-multus/multus-additional-cni-plugins-ght98" Jan 27 18:43:03 crc kubenswrapper[4853]: I0127 18:43:03.830077 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/ce95829c-f3fb-493c-bf9a-a3515fe6ddac-os-release\") pod \"multus-additional-cni-plugins-ght98\" (UID: \"ce95829c-f3fb-493c-bf9a-a3515fe6ddac\") " pod="openshift-multus/multus-additional-cni-plugins-ght98" Jan 27 18:43:03 crc kubenswrapper[4853]: I0127 18:43:03.830278 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/ce95829c-f3fb-493c-bf9a-a3515fe6ddac-system-cni-dir\") pod \"multus-additional-cni-plugins-ght98\" (UID: \"ce95829c-f3fb-493c-bf9a-a3515fe6ddac\") " pod="openshift-multus/multus-additional-cni-plugins-ght98" Jan 27 18:43:03 crc kubenswrapper[4853]: I0127 18:43:03.830459 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/ce95829c-f3fb-493c-bf9a-a3515fe6ddac-cni-binary-copy\") pod \"multus-additional-cni-plugins-ght98\" (UID: \"ce95829c-f3fb-493c-bf9a-a3515fe6ddac\") " pod="openshift-multus/multus-additional-cni-plugins-ght98" Jan 27 18:43:03 crc kubenswrapper[4853]: I0127 18:43:03.830662 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/ce95829c-f3fb-493c-bf9a-a3515fe6ddac-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-ght98\" (UID: \"ce95829c-f3fb-493c-bf9a-a3515fe6ddac\") " pod="openshift-multus/multus-additional-cni-plugins-ght98" Jan 27 18:43:03 crc kubenswrapper[4853]: I0127 18:43:03.831722 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/ce95829c-f3fb-493c-bf9a-a3515fe6ddac-cnibin\") pod \"multus-additional-cni-plugins-ght98\" (UID: \"ce95829c-f3fb-493c-bf9a-a3515fe6ddac\") " pod="openshift-multus/multus-additional-cni-plugins-ght98" Jan 27 18:43:03 crc kubenswrapper[4853]: I0127 18:43:03.831867 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/ce95829c-f3fb-493c-bf9a-a3515fe6ddac-tuning-conf-dir\") pod \"multus-additional-cni-plugins-ght98\" (UID: \"ce95829c-f3fb-493c-bf9a-a3515fe6ddac\") " pod="openshift-multus/multus-additional-cni-plugins-ght98" Jan 27 18:43:03 crc kubenswrapper[4853]: I0127 18:43:03.862670 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:03Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:03 crc kubenswrapper[4853]: I0127 18:43:03.880290 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d7d5a0e2cb909a09c6325d43fed80f9ac231f17c4904fda6da22031d974a50b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7fba84b5abd317487efa9758191c6d4e0e878809711ccecdd2f98c2c7659c890\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:03Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:03 crc kubenswrapper[4853]: I0127 18:43:03.894955 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-w4d5n" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dd2c07de-2ac9-4074-9fb0-519cfaf37f69\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:03Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:03Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8jkvb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:43:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-w4d5n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": 
failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:03Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:03 crc kubenswrapper[4853]: I0127 18:43:03.912230 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eccf4b23-863a-490c-ba35-1b03d360e200\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7cfb91267c8977bea938d9584cf8047ae6f6b4bff6a9d74c623f8903cd3416ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58d32f27b39964fe7195a0e25264b28a01c6c29753912edac25d07eef4f37cc7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a941071ee01b24389cae64dfc6cd851cde6b352121235ebf9e9bec77f8bcfb69\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserve
r-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://213668bb8a0478fe7fba2a64a45dd4aec0dfd6794a29977633b4e0fd1bcbe352\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://213668bb8a0478fe7fba2a64a45dd4aec0dfd6794a29977633b4e0fd1bcbe352\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T18:42:58Z\\\",\\\"message\\\":\\\"le observer\\\\nW0127 18:42:57.781245 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 18:42:57.782660 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 18:42:57.784271 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3951302993/tls.crt::/tmp/serving-cert-3951302993/tls.key\\\\\\\"\\\\nI0127 18:42:58.344426 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 18:42:58.346603 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 18:42:58.346626 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 18:42:58.346650 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 18:42:58.346655 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 18:42:58.351830 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 18:42:58.351854 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 18:42:58.351860 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 18:42:58.351866 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 18:42:58.351870 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 18:42:58.351874 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nI0127 18:42:58.351873 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0127 18:42:58.351879 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0127 18:42:58.354790 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:52Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s 
restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0058055ea5c0242bdb042c45a00203d344383e845de280c81bc7695416563666\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:40Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://00d05692e328aacfc251e3b583e1da6d1f4f677e93d07cec2aaa7aea5c3038cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://00d05692e328aacfc251e3b583e1da6d1f4f677e93d07cec2aaa7aea5c3038cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:42:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:42:38Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:03Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:03 crc kubenswrapper[4853]: I0127 18:43:03.932368 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/ce95829c-f3fb-493c-bf9a-a3515fe6ddac-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-ght98\" (UID: \"ce95829c-f3fb-493c-bf9a-a3515fe6ddac\") " pod="openshift-multus/multus-additional-cni-plugins-ght98" Jan 27 18:43:03 crc kubenswrapper[4853]: I0127 18:43:03.932415 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/ce95829c-f3fb-493c-bf9a-a3515fe6ddac-cnibin\") pod \"multus-additional-cni-plugins-ght98\" (UID: \"ce95829c-f3fb-493c-bf9a-a3515fe6ddac\") " pod="openshift-multus/multus-additional-cni-plugins-ght98" Jan 27 18:43:03 crc kubenswrapper[4853]: I0127 18:43:03.932459 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/ce95829c-f3fb-493c-bf9a-a3515fe6ddac-tuning-conf-dir\") pod \"multus-additional-cni-plugins-ght98\" (UID: \"ce95829c-f3fb-493c-bf9a-a3515fe6ddac\") " pod="openshift-multus/multus-additional-cni-plugins-ght98" Jan 27 18:43:03 crc kubenswrapper[4853]: I0127 18:43:03.932491 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ttzfq\" (UniqueName: \"kubernetes.io/projected/ce95829c-f3fb-493c-bf9a-a3515fe6ddac-kube-api-access-ttzfq\") pod \"multus-additional-cni-plugins-ght98\" (UID: \"ce95829c-f3fb-493c-bf9a-a3515fe6ddac\") " pod="openshift-multus/multus-additional-cni-plugins-ght98" Jan 27 18:43:03 crc kubenswrapper[4853]: I0127 18:43:03.932502 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:03 crc kubenswrapper[4853]: I0127 18:43:03.932531 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/ce95829c-f3fb-493c-bf9a-a3515fe6ddac-os-release\") pod \"multus-additional-cni-plugins-ght98\" (UID: \"ce95829c-f3fb-493c-bf9a-a3515fe6ddac\") " pod="openshift-multus/multus-additional-cni-plugins-ght98" Jan 27 18:43:03 crc kubenswrapper[4853]: I0127 18:43:03.932540 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:03 crc kubenswrapper[4853]: I0127 18:43:03.932552 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:03 crc kubenswrapper[4853]: I0127 18:43:03.932573 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:03 crc kubenswrapper[4853]: I0127 18:43:03.932572 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/ce95829c-f3fb-493c-bf9a-a3515fe6ddac-tuning-conf-dir\") pod \"multus-additional-cni-plugins-ght98\" (UID: \"ce95829c-f3fb-493c-bf9a-a3515fe6ddac\") " pod="openshift-multus/multus-additional-cni-plugins-ght98" Jan 27 18:43:03 crc kubenswrapper[4853]: I0127 18:43:03.932592 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/ce95829c-f3fb-493c-bf9a-a3515fe6ddac-system-cni-dir\") pod \"multus-additional-cni-plugins-ght98\" (UID: \"ce95829c-f3fb-493c-bf9a-a3515fe6ddac\") " pod="openshift-multus/multus-additional-cni-plugins-ght98" Jan 27 18:43:03 crc kubenswrapper[4853]: I0127 18:43:03.932584 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:03Z","lastTransitionTime":"2026-01-27T18:43:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:03 crc kubenswrapper[4853]: I0127 18:43:03.932552 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/ce95829c-f3fb-493c-bf9a-a3515fe6ddac-system-cni-dir\") pod \"multus-additional-cni-plugins-ght98\" (UID: \"ce95829c-f3fb-493c-bf9a-a3515fe6ddac\") " pod="openshift-multus/multus-additional-cni-plugins-ght98" Jan 27 18:43:03 crc kubenswrapper[4853]: I0127 18:43:03.932829 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/ce95829c-f3fb-493c-bf9a-a3515fe6ddac-cni-binary-copy\") pod \"multus-additional-cni-plugins-ght98\" (UID: \"ce95829c-f3fb-493c-bf9a-a3515fe6ddac\") " pod="openshift-multus/multus-additional-cni-plugins-ght98" Jan 27 18:43:03 crc kubenswrapper[4853]: I0127 18:43:03.932488 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/ce95829c-f3fb-493c-bf9a-a3515fe6ddac-cnibin\") pod \"multus-additional-cni-plugins-ght98\" (UID: \"ce95829c-f3fb-493c-bf9a-a3515fe6ddac\") " pod="openshift-multus/multus-additional-cni-plugins-ght98" Jan 27 18:43:03 crc kubenswrapper[4853]: I0127 18:43:03.932896 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/ce95829c-f3fb-493c-bf9a-a3515fe6ddac-os-release\") pod \"multus-additional-cni-plugins-ght98\" (UID: \"ce95829c-f3fb-493c-bf9a-a3515fe6ddac\") " pod="openshift-multus/multus-additional-cni-plugins-ght98" Jan 27 18:43:03 crc kubenswrapper[4853]: I0127 18:43:03.933263 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/ce95829c-f3fb-493c-bf9a-a3515fe6ddac-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-ght98\" (UID: \"ce95829c-f3fb-493c-bf9a-a3515fe6ddac\") " pod="openshift-multus/multus-additional-cni-plugins-ght98" Jan 27 18:43:03 crc kubenswrapper[4853]: I0127 18:43:03.935771 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/ce95829c-f3fb-493c-bf9a-a3515fe6ddac-cni-binary-copy\") pod \"multus-additional-cni-plugins-ght98\" (UID: \"ce95829c-f3fb-493c-bf9a-a3515fe6ddac\") " pod="openshift-multus/multus-additional-cni-plugins-ght98" Jan 27 18:43:03 crc kubenswrapper[4853]: I0127 18:43:03.952203 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ttzfq\" (UniqueName: \"kubernetes.io/projected/ce95829c-f3fb-493c-bf9a-a3515fe6ddac-kube-api-access-ttzfq\") pod \"multus-additional-cni-plugins-ght98\" (UID: \"ce95829c-f3fb-493c-bf9a-a3515fe6ddac\") " pod="openshift-multus/multus-additional-cni-plugins-ght98" Jan 27 18:43:03 crc kubenswrapper[4853]: I0127 18:43:03.961018 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a5b9c54a-360e-431b-b35a-eea549616487\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e57571b0b75d3ac6eaa28ddbc474ddf47f632c158372a965dfe4c30f54d8a2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2dcf34028a2a1a0d41f57ebced90e711bfa6d0f199a2b12bc2bcd6d0642d10b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://14e9331353060208631cfca86d748339935f8581d50a234f2864eaffdd98ff39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7d4896cb23e4a91356e7fabb5dfcf05e46bf7e
8c34b6bd73d63a16422d015ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a02f84ba42baba137145a54bfa742b12b0f94aaf41801347eb8a803a6acd1b2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15ab34ce94a12a65d8b187ce438f27e37d4b45e0b9ed080c79d6e4633e777658\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://15ab34ce94a12a65d8b187ce438f27e37d4b45e0b9ed080c79d6e4633e777658\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:42:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d972c76164f33d83bd6d211da2b4ecec08c5d88dd4cd474c5d5a8122649ea1cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d972c76164f33d83bd6d211da2b4ecec08c5d88dd4cd474c5d5a8122649ea1cd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:42:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:40Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b9435046e30c8cda89d2ab6f312406bc2cc6db0c03ae6cdea1eb5acf8a804df1\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9435046e30c8cda89d2ab6f312406bc2cc6db0c03ae6cdea1eb5acf8a804df1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:42:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:42:38Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:03Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:03 crc kubenswrapper[4853]: I0127 18:43:03.972518 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-l59xt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f9e82bc6-1fab-4815-a64e-2ebbf8b72315\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:03Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnhhx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:43:03Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-l59xt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:03Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:03 crc kubenswrapper[4853]: I0127 18:43:03.991889 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ght98" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ce95829c-f3fb-493c-bf9a-a3515fe6ddac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:03Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:03Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttzfq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttzfq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttzfq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-ttzfq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttzfq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttzfq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttzfq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:43:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ght98\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:03Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:04 crc kubenswrapper[4853]: I0127 18:43:04.034845 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:04 crc kubenswrapper[4853]: I0127 18:43:04.034894 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:04 crc kubenswrapper[4853]: I0127 18:43:04.034904 4853 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:04 crc kubenswrapper[4853]: I0127 18:43:04.034922 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:04 crc kubenswrapper[4853]: I0127 18:43:04.034939 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:04Z","lastTransitionTime":"2026-01-27T18:43:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:43:04 crc kubenswrapper[4853]: I0127 18:43:04.061248 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-ght98" Jan 27 18:43:04 crc kubenswrapper[4853]: W0127 18:43:04.070800 4853 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podce95829c_f3fb_493c_bf9a_a3515fe6ddac.slice/crio-ac191fd83e36060a7cd73f9c30919e6110db9326d4915937842d6ffbf60129f5 WatchSource:0}: Error finding container ac191fd83e36060a7cd73f9c30919e6110db9326d4915937842d6ffbf60129f5: Status 404 returned error can't find the container with id ac191fd83e36060a7cd73f9c30919e6110db9326d4915937842d6ffbf60129f5 Jan 27 18:43:04 crc kubenswrapper[4853]: I0127 18:43:04.081348 4853 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-20 08:14:01.408533325 +0000 UTC Jan 27 18:43:04 crc kubenswrapper[4853]: I0127 18:43:04.137048 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:04 crc kubenswrapper[4853]: I0127 18:43:04.137327 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:04 crc kubenswrapper[4853]: I0127 18:43:04.137403 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:04 crc kubenswrapper[4853]: I0127 18:43:04.137497 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:04 crc kubenswrapper[4853]: I0127 18:43:04.137590 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:04Z","lastTransitionTime":"2026-01-27T18:43:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:43:04 crc kubenswrapper[4853]: I0127 18:43:04.163162 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-hdtbk"] Jan 27 18:43:04 crc kubenswrapper[4853]: I0127 18:43:04.163971 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-hdtbk" Jan 27 18:43:04 crc kubenswrapper[4853]: I0127 18:43:04.167644 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Jan 27 18:43:04 crc kubenswrapper[4853]: I0127 18:43:04.167952 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Jan 27 18:43:04 crc kubenswrapper[4853]: I0127 18:43:04.168104 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Jan 27 18:43:04 crc kubenswrapper[4853]: I0127 18:43:04.168175 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Jan 27 18:43:04 crc kubenswrapper[4853]: I0127 18:43:04.168317 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Jan 27 18:43:04 crc kubenswrapper[4853]: I0127 18:43:04.168441 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Jan 27 18:43:04 crc kubenswrapper[4853]: I0127 18:43:04.169931 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Jan 27 18:43:04 crc kubenswrapper[4853]: I0127 18:43:04.183250 4853 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2027-01-27 18:38:03 +0000 UTC, rotation deadline is 2026-10-20 12:41:01.337876123 +0000 UTC Jan 27 18:43:04 crc kubenswrapper[4853]: I0127 18:43:04.183309 4853 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 6377h57m57.15456974s for next certificate rotation Jan 27 18:43:04 crc kubenswrapper[4853]: I0127 18:43:04.196060 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c77668e-2627-46e1-b7f5-a99455378d85\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6f38d96da9d500a022fd5b7fbe4019623d08fb9733ba3b1a5f2f107f1901a28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ae32d32392bded0188e0b83c05a33e57dc10be293ddee84b8b12416d0e64fc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd606f868e97e5c0f90517a0de662da2512679e27b0ececbcaa02c9f7e79c4b6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b210c08a6d15dd6d40daabfcc4cb94985e2a3957ffa501b32d5331f3d20f8908\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:42:38Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:04Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:04 crc kubenswrapper[4853]: I0127 18:43:04.207806 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:04Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:04 crc kubenswrapper[4853]: I0127 18:43:04.220807 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:04Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:04 crc kubenswrapper[4853]: I0127 18:43:04.234634 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:04Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:04 crc kubenswrapper[4853]: I0127 18:43:04.235087 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/ebbc7598-422a-43ad-ae98-88e57ec80b9c-systemd-units\") pod \"ovnkube-node-hdtbk\" (UID: \"ebbc7598-422a-43ad-ae98-88e57ec80b9c\") " pod="openshift-ovn-kubernetes/ovnkube-node-hdtbk" Jan 27 18:43:04 crc kubenswrapper[4853]: I0127 18:43:04.235105 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/ebbc7598-422a-43ad-ae98-88e57ec80b9c-host-slash\") pod \"ovnkube-node-hdtbk\" (UID: \"ebbc7598-422a-43ad-ae98-88e57ec80b9c\") " pod="openshift-ovn-kubernetes/ovnkube-node-hdtbk" Jan 27 18:43:04 crc kubenswrapper[4853]: I0127 18:43:04.235153 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ebbc7598-422a-43ad-ae98-88e57ec80b9c-host-cni-netd\") pod \"ovnkube-node-hdtbk\" (UID: \"ebbc7598-422a-43ad-ae98-88e57ec80b9c\") " pod="openshift-ovn-kubernetes/ovnkube-node-hdtbk" Jan 27 18:43:04 crc kubenswrapper[4853]: I0127 18:43:04.235186 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/ebbc7598-422a-43ad-ae98-88e57ec80b9c-host-run-netns\") pod \"ovnkube-node-hdtbk\" (UID: \"ebbc7598-422a-43ad-ae98-88e57ec80b9c\") " pod="openshift-ovn-kubernetes/ovnkube-node-hdtbk" Jan 27 18:43:04 crc kubenswrapper[4853]: I0127 18:43:04.235213 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ebbc7598-422a-43ad-ae98-88e57ec80b9c-env-overrides\") pod \"ovnkube-node-hdtbk\" (UID: \"ebbc7598-422a-43ad-ae98-88e57ec80b9c\") " pod="openshift-ovn-kubernetes/ovnkube-node-hdtbk" Jan 27 18:43:04 crc kubenswrapper[4853]: I0127 18:43:04.235237 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cq4vs\" (UniqueName: \"kubernetes.io/projected/ebbc7598-422a-43ad-ae98-88e57ec80b9c-kube-api-access-cq4vs\") pod \"ovnkube-node-hdtbk\" (UID: \"ebbc7598-422a-43ad-ae98-88e57ec80b9c\") " pod="openshift-ovn-kubernetes/ovnkube-node-hdtbk" Jan 27 18:43:04 crc kubenswrapper[4853]: I0127 18:43:04.235253 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: 
\"kubernetes.io/host-path/ebbc7598-422a-43ad-ae98-88e57ec80b9c-run-systemd\") pod \"ovnkube-node-hdtbk\" (UID: \"ebbc7598-422a-43ad-ae98-88e57ec80b9c\") " pod="openshift-ovn-kubernetes/ovnkube-node-hdtbk" Jan 27 18:43:04 crc kubenswrapper[4853]: I0127 18:43:04.235267 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/ebbc7598-422a-43ad-ae98-88e57ec80b9c-host-run-ovn-kubernetes\") pod \"ovnkube-node-hdtbk\" (UID: \"ebbc7598-422a-43ad-ae98-88e57ec80b9c\") " pod="openshift-ovn-kubernetes/ovnkube-node-hdtbk" Jan 27 18:43:04 crc kubenswrapper[4853]: I0127 18:43:04.235282 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/ebbc7598-422a-43ad-ae98-88e57ec80b9c-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-hdtbk\" (UID: \"ebbc7598-422a-43ad-ae98-88e57ec80b9c\") " pod="openshift-ovn-kubernetes/ovnkube-node-hdtbk" Jan 27 18:43:04 crc kubenswrapper[4853]: I0127 18:43:04.235300 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/ebbc7598-422a-43ad-ae98-88e57ec80b9c-ovnkube-config\") pod \"ovnkube-node-hdtbk\" (UID: \"ebbc7598-422a-43ad-ae98-88e57ec80b9c\") " pod="openshift-ovn-kubernetes/ovnkube-node-hdtbk" Jan 27 18:43:04 crc kubenswrapper[4853]: I0127 18:43:04.235317 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/ebbc7598-422a-43ad-ae98-88e57ec80b9c-log-socket\") pod \"ovnkube-node-hdtbk\" (UID: \"ebbc7598-422a-43ad-ae98-88e57ec80b9c\") " pod="openshift-ovn-kubernetes/ovnkube-node-hdtbk" Jan 27 18:43:04 crc kubenswrapper[4853]: I0127 18:43:04.235333 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/ebbc7598-422a-43ad-ae98-88e57ec80b9c-ovnkube-script-lib\") pod \"ovnkube-node-hdtbk\" (UID: \"ebbc7598-422a-43ad-ae98-88e57ec80b9c\") " pod="openshift-ovn-kubernetes/ovnkube-node-hdtbk" Jan 27 18:43:04 crc kubenswrapper[4853]: I0127 18:43:04.235358 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/ebbc7598-422a-43ad-ae98-88e57ec80b9c-run-ovn\") pod \"ovnkube-node-hdtbk\" (UID: \"ebbc7598-422a-43ad-ae98-88e57ec80b9c\") " pod="openshift-ovn-kubernetes/ovnkube-node-hdtbk" Jan 27 18:43:04 crc kubenswrapper[4853]: I0127 18:43:04.235381 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/ebbc7598-422a-43ad-ae98-88e57ec80b9c-host-kubelet\") pod \"ovnkube-node-hdtbk\" (UID: \"ebbc7598-422a-43ad-ae98-88e57ec80b9c\") " pod="openshift-ovn-kubernetes/ovnkube-node-hdtbk" Jan 27 18:43:04 crc kubenswrapper[4853]: I0127 18:43:04.235395 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/ebbc7598-422a-43ad-ae98-88e57ec80b9c-node-log\") pod \"ovnkube-node-hdtbk\" (UID: \"ebbc7598-422a-43ad-ae98-88e57ec80b9c\") " pod="openshift-ovn-kubernetes/ovnkube-node-hdtbk" Jan 27 18:43:04 crc kubenswrapper[4853]: I0127 18:43:04.235408 4853 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ebbc7598-422a-43ad-ae98-88e57ec80b9c-var-lib-openvswitch\") pod \"ovnkube-node-hdtbk\" (UID: \"ebbc7598-422a-43ad-ae98-88e57ec80b9c\") " pod="openshift-ovn-kubernetes/ovnkube-node-hdtbk" Jan 27 18:43:04 crc kubenswrapper[4853]: I0127 18:43:04.235423 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ebbc7598-422a-43ad-ae98-88e57ec80b9c-etc-openvswitch\") pod \"ovnkube-node-hdtbk\" (UID: \"ebbc7598-422a-43ad-ae98-88e57ec80b9c\") " pod="openshift-ovn-kubernetes/ovnkube-node-hdtbk" Jan 27 18:43:04 crc kubenswrapper[4853]: I0127 18:43:04.235438 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ebbc7598-422a-43ad-ae98-88e57ec80b9c-run-openvswitch\") pod \"ovnkube-node-hdtbk\" (UID: \"ebbc7598-422a-43ad-ae98-88e57ec80b9c\") " pod="openshift-ovn-kubernetes/ovnkube-node-hdtbk" Jan 27 18:43:04 crc kubenswrapper[4853]: I0127 18:43:04.235456 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/ebbc7598-422a-43ad-ae98-88e57ec80b9c-ovn-node-metrics-cert\") pod \"ovnkube-node-hdtbk\" (UID: \"ebbc7598-422a-43ad-ae98-88e57ec80b9c\") " pod="openshift-ovn-kubernetes/ovnkube-node-hdtbk" Jan 27 18:43:04 crc kubenswrapper[4853]: I0127 18:43:04.235564 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/ebbc7598-422a-43ad-ae98-88e57ec80b9c-host-cni-bin\") pod \"ovnkube-node-hdtbk\" (UID: \"ebbc7598-422a-43ad-ae98-88e57ec80b9c\") " pod="openshift-ovn-kubernetes/ovnkube-node-hdtbk" Jan 27 18:43:04 crc kubenswrapper[4853]: I0127 18:43:04.239464 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:04 crc kubenswrapper[4853]: I0127 18:43:04.239492 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:04 crc kubenswrapper[4853]: I0127 18:43:04.239503 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:04 crc kubenswrapper[4853]: I0127 18:43:04.239521 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:04 crc kubenswrapper[4853]: I0127 18:43:04.239532 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:04Z","lastTransitionTime":"2026-01-27T18:43:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:04 crc kubenswrapper[4853]: I0127 18:43:04.250817 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d7d5a0e2cb909a09c6325d43fed80f9ac231f17c4904fda6da22031d974a50b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7fba84b5abd317487efa9758191c6d4e0e878809711ccecdd2f98c2c7659c890\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:04Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:04 crc kubenswrapper[4853]: I0127 18:43:04.252353 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-w4d5n" 
event={"ID":"dd2c07de-2ac9-4074-9fb0-519cfaf37f69","Type":"ContainerStarted","Data":"9662d5a5a46a3043eeddbb21a4c7da0545beb87a0da7da3e96dfe3b3cf803430"} Jan 27 18:43:04 crc kubenswrapper[4853]: I0127 18:43:04.252422 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-w4d5n" event={"ID":"dd2c07de-2ac9-4074-9fb0-519cfaf37f69","Type":"ContainerStarted","Data":"a337d485c98e76118beb49319ec904dcb11248b7ba99f48346153ca02c02bac7"} Jan 27 18:43:04 crc kubenswrapper[4853]: I0127 18:43:04.254017 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-2hzcp" event={"ID":"c088d143-dd9c-4c77-b9a3-3a0113306f41","Type":"ContainerStarted","Data":"1b1b85e99c8381dbf8e86fb211770c49ebd11497f15c73254d6482b45d507654"} Jan 27 18:43:04 crc kubenswrapper[4853]: I0127 18:43:04.254051 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-2hzcp" event={"ID":"c088d143-dd9c-4c77-b9a3-3a0113306f41","Type":"ContainerStarted","Data":"8ab5a409ae37d37f6560ff42f0aa620dffc9f5916e5352d9e329e38920a9d1e2"} Jan 27 18:43:04 crc kubenswrapper[4853]: I0127 18:43:04.256133 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-l59xt" event={"ID":"f9e82bc6-1fab-4815-a64e-2ebbf8b72315","Type":"ContainerStarted","Data":"6d6f7b775ee615931d713c3fcf51828673cd8414a08c46c04eb38abd37c58d5b"} Jan 27 18:43:04 crc kubenswrapper[4853]: I0127 18:43:04.256169 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-l59xt" event={"ID":"f9e82bc6-1fab-4815-a64e-2ebbf8b72315","Type":"ContainerStarted","Data":"85d94e987695b715f16a230e109d66130c4bc6160b48e00340b7290d1b96cab2"} Jan 27 18:43:04 crc kubenswrapper[4853]: I0127 18:43:04.256918 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-ght98" event={"ID":"ce95829c-f3fb-493c-bf9a-a3515fe6ddac","Type":"ContainerStarted","Data":"ac191fd83e36060a7cd73f9c30919e6110db9326d4915937842d6ffbf60129f5"} Jan 27 18:43:04 crc kubenswrapper[4853]: I0127 18:43:04.258345 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6gqj2" event={"ID":"b8a89b1e-bef8-4cb7-930c-480d3125778c","Type":"ContainerStarted","Data":"d886a21d3eb5af71f0dc43a77a71a2d80c11b89013b8b77dd9e680bed9c09a54"} Jan 27 18:43:04 crc kubenswrapper[4853]: I0127 18:43:04.258367 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6gqj2" event={"ID":"b8a89b1e-bef8-4cb7-930c-480d3125778c","Type":"ContainerStarted","Data":"36ec3ff8ba2e6f89b4c9a8177dc986ce198013a4b5392fc600a191a9d854241a"} Jan 27 18:43:04 crc kubenswrapper[4853]: I0127 18:43:04.258378 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6gqj2" event={"ID":"b8a89b1e-bef8-4cb7-930c-480d3125778c","Type":"ContainerStarted","Data":"182c97e3bc463b6809e3818b4184d23c6759869ec7c4315b4db755d8a1b423f7"} Jan 27 18:43:04 crc kubenswrapper[4853]: I0127 18:43:04.265548 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-w4d5n" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dd2c07de-2ac9-4074-9fb0-519cfaf37f69\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:03Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:03Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8jkvb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:43:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-w4d5n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": 
failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:04Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:04 crc kubenswrapper[4853]: I0127 18:43:04.279419 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eccf4b23-863a-490c-ba35-1b03d360e200\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7cfb91267c8977bea938d9584cf8047ae6f6b4bff6a9d74c623f8903cd3416ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58d32f27b39964fe7195a0e25264b28a01c6c29753912edac25d07eef4f37cc7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a941071ee01b24389cae64dfc6cd851cde6b352121235ebf9e9bec77f8bcfb69\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserve
r-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://213668bb8a0478fe7fba2a64a45dd4aec0dfd6794a29977633b4e0fd1bcbe352\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://213668bb8a0478fe7fba2a64a45dd4aec0dfd6794a29977633b4e0fd1bcbe352\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T18:42:58Z\\\",\\\"message\\\":\\\"le observer\\\\nW0127 18:42:57.781245 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 18:42:57.782660 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 18:42:57.784271 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3951302993/tls.crt::/tmp/serving-cert-3951302993/tls.key\\\\\\\"\\\\nI0127 18:42:58.344426 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 18:42:58.346603 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 18:42:58.346626 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 18:42:58.346650 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 18:42:58.346655 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 18:42:58.351830 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 18:42:58.351854 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 18:42:58.351860 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 18:42:58.351866 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 18:42:58.351870 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 18:42:58.351874 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nI0127 18:42:58.351873 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0127 18:42:58.351879 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0127 18:42:58.354790 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:52Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s 
restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0058055ea5c0242bdb042c45a00203d344383e845de280c81bc7695416563666\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:40Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://00d05692e328aacfc251e3b583e1da6d1f4f677e93d07cec2aaa7aea5c3038cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://00d05692e328aacfc251e3b583e1da6d1f4f677e93d07cec2aaa7aea5c3038cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:42:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:42:38Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:04Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:04 crc kubenswrapper[4853]: I0127 18:43:04.316313 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a5b9c54a-360e-431b-b35a-eea549616487\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e57571b0b75d3ac6eaa28ddbc474ddf47f632c158372a965dfe4c30f54d8a2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2dcf34028a2a1a0d41f57ebced90e711bfa6d0f199a2b12bc2bcd6d0642d10b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://14e9331353060208631cfca86d748339935f8581d50a234f2864eaffdd98ff39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7d4896cb23e4a91356e7fabb5dfcf05e46bf7e
8c34b6bd73d63a16422d015ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a02f84ba42baba137145a54bfa742b12b0f94aaf41801347eb8a803a6acd1b2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15ab34ce94a12a65d8b187ce438f27e37d4b45e0b9ed080c79d6e4633e777658\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://15ab34ce94a12a65d8b187ce438f27e37d4b45e0b9ed080c79d6e4633e777658\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:42:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d972c76164f33d83bd6d211da2b4ecec08c5d88dd4cd474c5d5a8122649ea1cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d972c76164f33d83bd6d211da2b4ecec08c5d88dd4cd474c5d5a8122649ea1cd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:42:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:40Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b9435046e30c8cda89d2ab6f312406bc2cc6db0c03ae6cdea1eb5acf8a804df1\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9435046e30c8cda89d2ab6f312406bc2cc6db0c03ae6cdea1eb5acf8a804df1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:42:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:42:38Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:04Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:04 crc kubenswrapper[4853]: I0127 18:43:04.332422 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-l59xt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f9e82bc6-1fab-4815-a64e-2ebbf8b72315\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:03Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnhhx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:43:03Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-l59xt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:04Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:04 crc kubenswrapper[4853]: I0127 18:43:04.336050 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/ebbc7598-422a-43ad-ae98-88e57ec80b9c-run-systemd\") pod \"ovnkube-node-hdtbk\" (UID: \"ebbc7598-422a-43ad-ae98-88e57ec80b9c\") " pod="openshift-ovn-kubernetes/ovnkube-node-hdtbk" Jan 27 18:43:04 crc kubenswrapper[4853]: I0127 18:43:04.336101 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/ebbc7598-422a-43ad-ae98-88e57ec80b9c-host-run-ovn-kubernetes\") pod \"ovnkube-node-hdtbk\" (UID: \"ebbc7598-422a-43ad-ae98-88e57ec80b9c\") " pod="openshift-ovn-kubernetes/ovnkube-node-hdtbk" Jan 27 18:43:04 crc kubenswrapper[4853]: I0127 18:43:04.336160 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/ebbc7598-422a-43ad-ae98-88e57ec80b9c-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-hdtbk\" (UID: \"ebbc7598-422a-43ad-ae98-88e57ec80b9c\") " pod="openshift-ovn-kubernetes/ovnkube-node-hdtbk" Jan 27 18:43:04 crc kubenswrapper[4853]: I0127 18:43:04.336206 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/ebbc7598-422a-43ad-ae98-88e57ec80b9c-run-systemd\") pod \"ovnkube-node-hdtbk\" (UID: \"ebbc7598-422a-43ad-ae98-88e57ec80b9c\") " pod="openshift-ovn-kubernetes/ovnkube-node-hdtbk" Jan 27 18:43:04 crc kubenswrapper[4853]: I0127 18:43:04.336215 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/ebbc7598-422a-43ad-ae98-88e57ec80b9c-log-socket\") pod \"ovnkube-node-hdtbk\" (UID: \"ebbc7598-422a-43ad-ae98-88e57ec80b9c\") " pod="openshift-ovn-kubernetes/ovnkube-node-hdtbk" Jan 27 18:43:04 crc kubenswrapper[4853]: I0127 18:43:04.336270 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: 
\"kubernetes.io/host-path/ebbc7598-422a-43ad-ae98-88e57ec80b9c-log-socket\") pod \"ovnkube-node-hdtbk\" (UID: \"ebbc7598-422a-43ad-ae98-88e57ec80b9c\") " pod="openshift-ovn-kubernetes/ovnkube-node-hdtbk" Jan 27 18:43:04 crc kubenswrapper[4853]: I0127 18:43:04.336299 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/ebbc7598-422a-43ad-ae98-88e57ec80b9c-ovnkube-config\") pod \"ovnkube-node-hdtbk\" (UID: \"ebbc7598-422a-43ad-ae98-88e57ec80b9c\") " pod="openshift-ovn-kubernetes/ovnkube-node-hdtbk" Jan 27 18:43:04 crc kubenswrapper[4853]: I0127 18:43:04.336326 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/ebbc7598-422a-43ad-ae98-88e57ec80b9c-host-run-ovn-kubernetes\") pod \"ovnkube-node-hdtbk\" (UID: \"ebbc7598-422a-43ad-ae98-88e57ec80b9c\") " pod="openshift-ovn-kubernetes/ovnkube-node-hdtbk" Jan 27 18:43:04 crc kubenswrapper[4853]: I0127 18:43:04.336340 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/ebbc7598-422a-43ad-ae98-88e57ec80b9c-ovnkube-script-lib\") pod \"ovnkube-node-hdtbk\" (UID: \"ebbc7598-422a-43ad-ae98-88e57ec80b9c\") " pod="openshift-ovn-kubernetes/ovnkube-node-hdtbk" Jan 27 18:43:04 crc kubenswrapper[4853]: I0127 18:43:04.336367 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/ebbc7598-422a-43ad-ae98-88e57ec80b9c-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-hdtbk\" (UID: \"ebbc7598-422a-43ad-ae98-88e57ec80b9c\") " pod="openshift-ovn-kubernetes/ovnkube-node-hdtbk" Jan 27 18:43:04 crc kubenswrapper[4853]: I0127 18:43:04.336391 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/ebbc7598-422a-43ad-ae98-88e57ec80b9c-run-ovn\") pod \"ovnkube-node-hdtbk\" (UID: \"ebbc7598-422a-43ad-ae98-88e57ec80b9c\") " pod="openshift-ovn-kubernetes/ovnkube-node-hdtbk" Jan 27 18:43:04 crc kubenswrapper[4853]: I0127 18:43:04.336480 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/ebbc7598-422a-43ad-ae98-88e57ec80b9c-host-kubelet\") pod \"ovnkube-node-hdtbk\" (UID: \"ebbc7598-422a-43ad-ae98-88e57ec80b9c\") " pod="openshift-ovn-kubernetes/ovnkube-node-hdtbk" Jan 27 18:43:04 crc kubenswrapper[4853]: I0127 18:43:04.336527 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/ebbc7598-422a-43ad-ae98-88e57ec80b9c-node-log\") pod \"ovnkube-node-hdtbk\" (UID: \"ebbc7598-422a-43ad-ae98-88e57ec80b9c\") " pod="openshift-ovn-kubernetes/ovnkube-node-hdtbk" Jan 27 18:43:04 crc kubenswrapper[4853]: I0127 18:43:04.336561 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ebbc7598-422a-43ad-ae98-88e57ec80b9c-etc-openvswitch\") pod \"ovnkube-node-hdtbk\" (UID: \"ebbc7598-422a-43ad-ae98-88e57ec80b9c\") " pod="openshift-ovn-kubernetes/ovnkube-node-hdtbk" Jan 27 18:43:04 crc kubenswrapper[4853]: I0127 18:43:04.336594 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/ebbc7598-422a-43ad-ae98-88e57ec80b9c-var-lib-openvswitch\") pod \"ovnkube-node-hdtbk\" (UID: \"ebbc7598-422a-43ad-ae98-88e57ec80b9c\") " pod="openshift-ovn-kubernetes/ovnkube-node-hdtbk" Jan 27 18:43:04 crc kubenswrapper[4853]: I0127 18:43:04.336617 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ebbc7598-422a-43ad-ae98-88e57ec80b9c-run-openvswitch\") pod \"ovnkube-node-hdtbk\" (UID: \"ebbc7598-422a-43ad-ae98-88e57ec80b9c\") " pod="openshift-ovn-kubernetes/ovnkube-node-hdtbk" Jan 27 18:43:04 crc kubenswrapper[4853]: I0127 18:43:04.336670 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/ebbc7598-422a-43ad-ae98-88e57ec80b9c-ovn-node-metrics-cert\") pod \"ovnkube-node-hdtbk\" (UID: \"ebbc7598-422a-43ad-ae98-88e57ec80b9c\") " pod="openshift-ovn-kubernetes/ovnkube-node-hdtbk" Jan 27 18:43:04 crc kubenswrapper[4853]: I0127 18:43:04.336701 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/ebbc7598-422a-43ad-ae98-88e57ec80b9c-host-cni-bin\") pod \"ovnkube-node-hdtbk\" (UID: \"ebbc7598-422a-43ad-ae98-88e57ec80b9c\") " pod="openshift-ovn-kubernetes/ovnkube-node-hdtbk" Jan 27 18:43:04 crc kubenswrapper[4853]: I0127 18:43:04.336723 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/ebbc7598-422a-43ad-ae98-88e57ec80b9c-systemd-units\") pod \"ovnkube-node-hdtbk\" (UID: \"ebbc7598-422a-43ad-ae98-88e57ec80b9c\") " pod="openshift-ovn-kubernetes/ovnkube-node-hdtbk" Jan 27 18:43:04 crc kubenswrapper[4853]: I0127 18:43:04.336748 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/ebbc7598-422a-43ad-ae98-88e57ec80b9c-host-slash\") pod \"ovnkube-node-hdtbk\" (UID: \"ebbc7598-422a-43ad-ae98-88e57ec80b9c\") " pod="openshift-ovn-kubernetes/ovnkube-node-hdtbk" Jan 27 18:43:04 crc kubenswrapper[4853]: I0127 18:43:04.336772 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ebbc7598-422a-43ad-ae98-88e57ec80b9c-host-cni-netd\") pod \"ovnkube-node-hdtbk\" (UID: \"ebbc7598-422a-43ad-ae98-88e57ec80b9c\") " pod="openshift-ovn-kubernetes/ovnkube-node-hdtbk" Jan 27 18:43:04 crc kubenswrapper[4853]: I0127 18:43:04.336799 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/ebbc7598-422a-43ad-ae98-88e57ec80b9c-host-run-netns\") pod \"ovnkube-node-hdtbk\" (UID: \"ebbc7598-422a-43ad-ae98-88e57ec80b9c\") " pod="openshift-ovn-kubernetes/ovnkube-node-hdtbk" Jan 27 18:43:04 crc kubenswrapper[4853]: I0127 18:43:04.336826 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ebbc7598-422a-43ad-ae98-88e57ec80b9c-env-overrides\") pod \"ovnkube-node-hdtbk\" (UID: \"ebbc7598-422a-43ad-ae98-88e57ec80b9c\") " pod="openshift-ovn-kubernetes/ovnkube-node-hdtbk" Jan 27 18:43:04 crc kubenswrapper[4853]: I0127 18:43:04.336876 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cq4vs\" (UniqueName: \"kubernetes.io/projected/ebbc7598-422a-43ad-ae98-88e57ec80b9c-kube-api-access-cq4vs\") pod 
\"ovnkube-node-hdtbk\" (UID: \"ebbc7598-422a-43ad-ae98-88e57ec80b9c\") " pod="openshift-ovn-kubernetes/ovnkube-node-hdtbk" Jan 27 18:43:04 crc kubenswrapper[4853]: I0127 18:43:04.337371 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/ebbc7598-422a-43ad-ae98-88e57ec80b9c-run-ovn\") pod \"ovnkube-node-hdtbk\" (UID: \"ebbc7598-422a-43ad-ae98-88e57ec80b9c\") " pod="openshift-ovn-kubernetes/ovnkube-node-hdtbk" Jan 27 18:43:04 crc kubenswrapper[4853]: I0127 18:43:04.337836 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/ebbc7598-422a-43ad-ae98-88e57ec80b9c-host-cni-bin\") pod \"ovnkube-node-hdtbk\" (UID: \"ebbc7598-422a-43ad-ae98-88e57ec80b9c\") " pod="openshift-ovn-kubernetes/ovnkube-node-hdtbk" Jan 27 18:43:04 crc kubenswrapper[4853]: I0127 18:43:04.337873 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/ebbc7598-422a-43ad-ae98-88e57ec80b9c-host-kubelet\") pod \"ovnkube-node-hdtbk\" (UID: \"ebbc7598-422a-43ad-ae98-88e57ec80b9c\") " pod="openshift-ovn-kubernetes/ovnkube-node-hdtbk" Jan 27 18:43:04 crc kubenswrapper[4853]: I0127 18:43:04.337875 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/ebbc7598-422a-43ad-ae98-88e57ec80b9c-host-run-netns\") pod \"ovnkube-node-hdtbk\" (UID: \"ebbc7598-422a-43ad-ae98-88e57ec80b9c\") " pod="openshift-ovn-kubernetes/ovnkube-node-hdtbk" Jan 27 18:43:04 crc kubenswrapper[4853]: I0127 18:43:04.337893 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/ebbc7598-422a-43ad-ae98-88e57ec80b9c-systemd-units\") pod \"ovnkube-node-hdtbk\" (UID: \"ebbc7598-422a-43ad-ae98-88e57ec80b9c\") " pod="openshift-ovn-kubernetes/ovnkube-node-hdtbk" Jan 27 18:43:04 crc kubenswrapper[4853]: I0127 18:43:04.337918 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/ebbc7598-422a-43ad-ae98-88e57ec80b9c-host-slash\") pod \"ovnkube-node-hdtbk\" (UID: \"ebbc7598-422a-43ad-ae98-88e57ec80b9c\") " pod="openshift-ovn-kubernetes/ovnkube-node-hdtbk" Jan 27 18:43:04 crc kubenswrapper[4853]: I0127 18:43:04.337938 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ebbc7598-422a-43ad-ae98-88e57ec80b9c-host-cni-netd\") pod \"ovnkube-node-hdtbk\" (UID: \"ebbc7598-422a-43ad-ae98-88e57ec80b9c\") " pod="openshift-ovn-kubernetes/ovnkube-node-hdtbk" Jan 27 18:43:04 crc kubenswrapper[4853]: I0127 18:43:04.338154 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ebbc7598-422a-43ad-ae98-88e57ec80b9c-var-lib-openvswitch\") pod \"ovnkube-node-hdtbk\" (UID: \"ebbc7598-422a-43ad-ae98-88e57ec80b9c\") " pod="openshift-ovn-kubernetes/ovnkube-node-hdtbk" Jan 27 18:43:04 crc kubenswrapper[4853]: I0127 18:43:04.338191 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/ebbc7598-422a-43ad-ae98-88e57ec80b9c-node-log\") pod \"ovnkube-node-hdtbk\" (UID: \"ebbc7598-422a-43ad-ae98-88e57ec80b9c\") " pod="openshift-ovn-kubernetes/ovnkube-node-hdtbk" Jan 27 18:43:04 crc kubenswrapper[4853]: I0127 18:43:04.338217 4853 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ebbc7598-422a-43ad-ae98-88e57ec80b9c-etc-openvswitch\") pod \"ovnkube-node-hdtbk\" (UID: \"ebbc7598-422a-43ad-ae98-88e57ec80b9c\") " pod="openshift-ovn-kubernetes/ovnkube-node-hdtbk" Jan 27 18:43:04 crc kubenswrapper[4853]: I0127 18:43:04.338245 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ebbc7598-422a-43ad-ae98-88e57ec80b9c-run-openvswitch\") pod \"ovnkube-node-hdtbk\" (UID: \"ebbc7598-422a-43ad-ae98-88e57ec80b9c\") " pod="openshift-ovn-kubernetes/ovnkube-node-hdtbk" Jan 27 18:43:04 crc kubenswrapper[4853]: I0127 18:43:04.338435 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ebbc7598-422a-43ad-ae98-88e57ec80b9c-env-overrides\") pod \"ovnkube-node-hdtbk\" (UID: \"ebbc7598-422a-43ad-ae98-88e57ec80b9c\") " pod="openshift-ovn-kubernetes/ovnkube-node-hdtbk" Jan 27 18:43:04 crc kubenswrapper[4853]: I0127 18:43:04.338683 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/ebbc7598-422a-43ad-ae98-88e57ec80b9c-ovnkube-script-lib\") pod \"ovnkube-node-hdtbk\" (UID: \"ebbc7598-422a-43ad-ae98-88e57ec80b9c\") " pod="openshift-ovn-kubernetes/ovnkube-node-hdtbk" Jan 27 18:43:04 crc kubenswrapper[4853]: I0127 18:43:04.338816 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/ebbc7598-422a-43ad-ae98-88e57ec80b9c-ovnkube-config\") pod \"ovnkube-node-hdtbk\" (UID: \"ebbc7598-422a-43ad-ae98-88e57ec80b9c\") " pod="openshift-ovn-kubernetes/ovnkube-node-hdtbk" Jan 27 18:43:04 crc kubenswrapper[4853]: I0127 18:43:04.341402 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/ebbc7598-422a-43ad-ae98-88e57ec80b9c-ovn-node-metrics-cert\") pod \"ovnkube-node-hdtbk\" (UID: \"ebbc7598-422a-43ad-ae98-88e57ec80b9c\") " pod="openshift-ovn-kubernetes/ovnkube-node-hdtbk" Jan 27 18:43:04 crc kubenswrapper[4853]: I0127 18:43:04.344156 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:04 crc kubenswrapper[4853]: I0127 18:43:04.344198 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:04 crc kubenswrapper[4853]: I0127 18:43:04.344211 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:04 crc kubenswrapper[4853]: I0127 18:43:04.344229 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:04 crc kubenswrapper[4853]: I0127 18:43:04.344241 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:04Z","lastTransitionTime":"2026-01-27T18:43:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:04 crc kubenswrapper[4853]: I0127 18:43:04.352807 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ght98" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ce95829c-f3fb-493c-bf9a-a3515fe6ddac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:03Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:03Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:03Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttzfq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttzfq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\
\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttzfq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttzfq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttzfq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttzfq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastSt
ate\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttzfq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:43:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ght98\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:04Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:04 crc kubenswrapper[4853]: I0127 18:43:04.361434 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cq4vs\" (UniqueName: \"kubernetes.io/projected/ebbc7598-422a-43ad-ae98-88e57ec80b9c-kube-api-access-cq4vs\") pod \"ovnkube-node-hdtbk\" (UID: \"ebbc7598-422a-43ad-ae98-88e57ec80b9c\") " pod="openshift-ovn-kubernetes/ovnkube-node-hdtbk" Jan 27 18:43:04 crc kubenswrapper[4853]: I0127 18:43:04.370700 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f980f1d965582a58bd18d993850b5bd5f21849c1d1c0275c81fdbb25d549e6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:04Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:04 crc kubenswrapper[4853]: I0127 18:43:04.381576 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5a0b2fe5dbcc81055c001720c8ab69b7bd1989fe60d9ef4063c96c53a3f4536\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:04Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:04 crc kubenswrapper[4853]: I0127 18:43:04.392104 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-2hzcp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c088d143-dd9c-4c77-b9a3-3a0113306f41\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:03Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2zqtv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:43:03Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-2hzcp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:04Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:04 crc kubenswrapper[4853]: I0127 18:43:04.404019 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-6gqj2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b8a89b1e-bef8-4cb7-930c-480d3125778c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:03Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:03Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xbhn6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xbhn6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:43:03Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-6gqj2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:04Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:04 crc kubenswrapper[4853]: I0127 18:43:04.420183 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hdtbk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ebbc7598-422a-43ad-ae98-88e57ec80b9c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cq4vs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cq4vs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cq4vs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":
\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cq4vs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cq4vs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cq4vs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"rea
dOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cq4vs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cq4vs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cq4vs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:43:04Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hdtbk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:04Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:04 crc kubenswrapper[4853]: 
I0127 18:43:04.432996 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c77668e-2627-46e1-b7f5-a99455378d85\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6f38d96da9d500a022fd5b7fbe4019623d08fb9733ba3b1a5f2f107f1901a28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ae32d32392bded0188e0b83c05a33e57dc10be293ddee84b8b12416d0e64fc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd606f868e97e5c0f90517a0de662da2512679e27b0ececbcaa02c9f7e79c4b6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}
,{\\\"containerID\\\":\\\"cri-o://b210c08a6d15dd6d40daabfcc4cb94985e2a3957ffa501b32d5331f3d20f8908\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:42:38Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:04Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:04 crc kubenswrapper[4853]: I0127 18:43:04.446042 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:04Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:04 crc kubenswrapper[4853]: I0127 18:43:04.446599 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:04 crc kubenswrapper[4853]: I0127 18:43:04.446695 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:04 crc kubenswrapper[4853]: I0127 18:43:04.446768 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:04 crc kubenswrapper[4853]: I0127 18:43:04.446861 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:04 crc kubenswrapper[4853]: I0127 18:43:04.446942 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:04Z","lastTransitionTime":"2026-01-27T18:43:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:04 crc kubenswrapper[4853]: I0127 18:43:04.457280 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:04Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:04 crc kubenswrapper[4853]: I0127 18:43:04.469170 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:04Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:04 crc kubenswrapper[4853]: I0127 18:43:04.476737 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-hdtbk" Jan 27 18:43:04 crc kubenswrapper[4853]: I0127 18:43:04.483564 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d7d5a0e2cb909a09c6325d43fed80f9ac231f17c4904fda6da22031d974a50b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7fba84b5abd317487efa9758191c6d4e0e878809711ccecdd2f98c2c7659c890\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:04Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:04 crc kubenswrapper[4853]: W0127 18:43:04.490364 4853 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podebbc7598_422a_43ad_ae98_88e57ec80b9c.slice/crio-8d6f8413d913a60fd3c0220d73f447034f13dd1f2a140277cbf62700d8a164fd WatchSource:0}: Error finding container 8d6f8413d913a60fd3c0220d73f447034f13dd1f2a140277cbf62700d8a164fd: Status 404 returned error can't find the container with id 
8d6f8413d913a60fd3c0220d73f447034f13dd1f2a140277cbf62700d8a164fd Jan 27 18:43:04 crc kubenswrapper[4853]: I0127 18:43:04.498731 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-w4d5n" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dd2c07de-2ac9-4074-9fb0-519cfaf37f69\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9662d5a5a46a3043eeddbb21a4c7da0545beb87a0da7da3e96dfe3b3cf803430\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:43:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8jkvb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\
\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:43:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-w4d5n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:04Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:04 crc kubenswrapper[4853]: I0127 18:43:04.514115 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eccf4b23-863a-490c-ba35-1b03d360e200\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7cfb91267c8977bea938d9584cf8047ae6f6b4bff6a9d74c623f8903cd3416ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58d32f27b39964fe7195a0e25264b28a01c6c29753912edac25d07eef4f37cc7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a941071ee01b243
89cae64dfc6cd851cde6b352121235ebf9e9bec77f8bcfb69\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://213668bb8a0478fe7fba2a64a45dd4aec0dfd6794a29977633b4e0fd1bcbe352\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://213668bb8a0478fe7fba2a64a45dd4aec0dfd6794a29977633b4e0fd1bcbe352\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T18:42:58Z\\\",\\\"message\\\":\\\"le observer\\\\nW0127 18:42:57.781245 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 18:42:57.782660 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 18:42:57.784271 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3951302993/tls.crt::/tmp/serving-cert-3951302993/tls.key\\\\\\\"\\\\nI0127 18:42:58.344426 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 18:42:58.346603 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 18:42:58.346626 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 18:42:58.346650 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 18:42:58.346655 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 18:42:58.351830 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 18:42:58.351854 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 18:42:58.351860 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 18:42:58.351866 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 18:42:58.351870 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 18:42:58.351874 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nI0127 18:42:58.351873 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0127 18:42:58.351879 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0127 18:42:58.354790 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" 
not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:52Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0058055ea5c0242bdb042c45a00203d344383e845de280c81bc7695416563666\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:40Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://00d05692e328aacfc251e3b583e1da6d1f4f677e93d07cec2aaa7aea5c3038cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://00d05692e328aacfc251e3b583e1da6d1f4f677e93d07cec2aaa7aea5c3038cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:42:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:42:38Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:04Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:04 crc kubenswrapper[4853]: I0127 18:43:04.535555 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a5b9c54a-360e-431b-b35a-eea549616487\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e57571b0b75d3ac6eaa28ddbc474ddf47f632c158372a965dfe4c30f54d8a2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2dcf34028a2a1a0d41f57ebced90e711bfa6d0f199a2b12bc2bcd6d0642d10b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://14e9331353060208631cfca86d748339935f8581d50a234f2864eaffdd98ff39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7d4896cb23e4a91356e7fabb5dfcf05e46bf7e
8c34b6bd73d63a16422d015ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a02f84ba42baba137145a54bfa742b12b0f94aaf41801347eb8a803a6acd1b2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15ab34ce94a12a65d8b187ce438f27e37d4b45e0b9ed080c79d6e4633e777658\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://15ab34ce94a12a65d8b187ce438f27e37d4b45e0b9ed080c79d6e4633e777658\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:42:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d972c76164f33d83bd6d211da2b4ecec08c5d88dd4cd474c5d5a8122649ea1cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d972c76164f33d83bd6d211da2b4ecec08c5d88dd4cd474c5d5a8122649ea1cd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:42:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:40Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b9435046e30c8cda89d2ab6f312406bc2cc6db0c03ae6cdea1eb5acf8a804df1\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9435046e30c8cda89d2ab6f312406bc2cc6db0c03ae6cdea1eb5acf8a804df1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:42:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:42:38Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:04Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:04 crc kubenswrapper[4853]: I0127 18:43:04.549506 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:04 crc kubenswrapper[4853]: I0127 18:43:04.549560 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:04 crc kubenswrapper[4853]: I0127 18:43:04.549575 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:04 crc kubenswrapper[4853]: I0127 18:43:04.549594 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:04 crc kubenswrapper[4853]: I0127 18:43:04.549606 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:04Z","lastTransitionTime":"2026-01-27T18:43:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:04 crc kubenswrapper[4853]: I0127 18:43:04.551504 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-l59xt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f9e82bc6-1fab-4815-a64e-2ebbf8b72315\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d6f7b775ee615931d713c3fcf51828673cd8414a08c46c04eb38abd37c58d5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:43:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnhhx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:43:03Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-l59xt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:04Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:04 crc kubenswrapper[4853]: I0127 18:43:04.566958 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ght98" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ce95829c-f3fb-493c-bf9a-a3515fe6ddac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:03Z\\\",\\\"message\\\":\\\"containers with incomplete status: 
[egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:03Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:03Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttzfq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttzfq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttzfq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa9308
9f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttzfq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttzfq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttzfq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttzfq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:43:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ght98\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:04Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:04 crc kubenswrapper[4853]: I0127 18:43:04.581477 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f980f1d965582a58bd18d993850b5bd5f21849c1d1c0275c81fdbb25d549e6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:04Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:04 crc kubenswrapper[4853]: I0127 18:43:04.594376 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5a0b2fe5dbcc81055c001720c8ab69b7bd1989fe60d9ef4063c96c53a3f4536\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:04Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:04 crc kubenswrapper[4853]: I0127 18:43:04.604844 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-2hzcp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c088d143-dd9c-4c77-b9a3-3a0113306f41\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b1b85e99c8381dbf8e86fb211770c49ebd11497f15c73254d6482b45d507654\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:43:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2zqtv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:43:03Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-2hzcp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:04Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:04 crc kubenswrapper[4853]: I0127 18:43:04.617292 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-6gqj2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b8a89b1e-bef8-4cb7-930c-480d3125778c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d886a21d3eb5af71f0dc43a77a71a2d80c11b89013b8b77dd9e680bed9c09a54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:43:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xbhn6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://36ec3ff8ba2e6f89b4c9a8177dc986ce198013a4b5392fc600a191a9d854241a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:43:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xbhn6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:43:03Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-6gqj2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:04Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:04 crc kubenswrapper[4853]: I0127 18:43:04.634956 4853 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hdtbk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ebbc7598-422a-43ad-ae98-88e57ec80b9c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cq4vs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cq4vs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\
\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cq4vs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cq4vs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cq4vs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cq4vs\\\",\\\"readOnly\\\":true,\\\"re
cursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cq4vs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cq4vs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482
919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cq4vs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:43:04Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hdtbk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:04Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:04 crc kubenswrapper[4853]: I0127 18:43:04.651922 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:04 crc kubenswrapper[4853]: I0127 18:43:04.651996 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:04 crc kubenswrapper[4853]: I0127 18:43:04.652012 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:04 crc kubenswrapper[4853]: I0127 18:43:04.652035 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:04 crc kubenswrapper[4853]: I0127 18:43:04.652051 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:04Z","lastTransitionTime":"2026-01-27T18:43:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:43:04 crc kubenswrapper[4853]: I0127 18:43:04.754136 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:04 crc kubenswrapper[4853]: I0127 18:43:04.754180 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:04 crc kubenswrapper[4853]: I0127 18:43:04.754192 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:04 crc kubenswrapper[4853]: I0127 18:43:04.754207 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:04 crc kubenswrapper[4853]: I0127 18:43:04.754217 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:04Z","lastTransitionTime":"2026-01-27T18:43:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:04 crc kubenswrapper[4853]: I0127 18:43:04.843046 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 18:43:04 crc kubenswrapper[4853]: I0127 18:43:04.843253 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 18:43:04 crc kubenswrapper[4853]: E0127 18:43:04.843326 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 18:43:12.843288779 +0000 UTC m=+35.305831762 (durationBeforeRetry 8s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 18:43:04 crc kubenswrapper[4853]: E0127 18:43:04.843377 4853 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 27 18:43:04 crc kubenswrapper[4853]: I0127 18:43:04.843402 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 18:43:04 crc kubenswrapper[4853]: E0127 18:43:04.843445 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-27 18:43:12.843431133 +0000 UTC m=+35.305974176 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 27 18:43:04 crc kubenswrapper[4853]: E0127 18:43:04.843618 4853 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 27 18:43:04 crc kubenswrapper[4853]: E0127 18:43:04.843723 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-27 18:43:12.84370091 +0000 UTC m=+35.306243873 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 27 18:43:04 crc kubenswrapper[4853]: I0127 18:43:04.855955 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:04 crc kubenswrapper[4853]: I0127 18:43:04.855997 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:04 crc kubenswrapper[4853]: I0127 18:43:04.856008 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:04 crc kubenswrapper[4853]: I0127 18:43:04.856025 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:04 crc kubenswrapper[4853]: I0127 18:43:04.856040 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:04Z","lastTransitionTime":"2026-01-27T18:43:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:04 crc kubenswrapper[4853]: I0127 18:43:04.944234 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 18:43:04 crc kubenswrapper[4853]: I0127 18:43:04.944307 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 18:43:04 crc kubenswrapper[4853]: E0127 18:43:04.944445 4853 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 27 18:43:04 crc kubenswrapper[4853]: E0127 18:43:04.944461 4853 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 27 18:43:04 crc kubenswrapper[4853]: E0127 18:43:04.944463 4853 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 27 18:43:04 crc kubenswrapper[4853]: E0127 18:43:04.944511 4853 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 27 18:43:04 crc kubenswrapper[4853]: E0127 18:43:04.944527 4853 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 18:43:04 crc kubenswrapper[4853]: E0127 18:43:04.944471 4853 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 18:43:04 crc kubenswrapper[4853]: E0127 18:43:04.944589 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-27 18:43:12.944570472 +0000 UTC m=+35.407113395 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 18:43:04 crc kubenswrapper[4853]: E0127 18:43:04.944632 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-27 18:43:12.944618053 +0000 UTC m=+35.407160936 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 18:43:04 crc kubenswrapper[4853]: I0127 18:43:04.958473 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:04 crc kubenswrapper[4853]: I0127 18:43:04.958512 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:04 crc kubenswrapper[4853]: I0127 18:43:04.958526 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:04 crc kubenswrapper[4853]: I0127 18:43:04.958543 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:04 crc kubenswrapper[4853]: I0127 18:43:04.958553 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:04Z","lastTransitionTime":"2026-01-27T18:43:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:43:05 crc kubenswrapper[4853]: I0127 18:43:05.061326 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:05 crc kubenswrapper[4853]: I0127 18:43:05.061607 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:05 crc kubenswrapper[4853]: I0127 18:43:05.061616 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:05 crc kubenswrapper[4853]: I0127 18:43:05.061628 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:05 crc kubenswrapper[4853]: I0127 18:43:05.061637 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:05Z","lastTransitionTime":"2026-01-27T18:43:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:05 crc kubenswrapper[4853]: I0127 18:43:05.082562 4853 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-30 06:31:08.744414618 +0000 UTC Jan 27 18:43:05 crc kubenswrapper[4853]: I0127 18:43:05.112049 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 18:43:05 crc kubenswrapper[4853]: I0127 18:43:05.112086 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 18:43:05 crc kubenswrapper[4853]: I0127 18:43:05.112156 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 18:43:05 crc kubenswrapper[4853]: E0127 18:43:05.112241 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 18:43:05 crc kubenswrapper[4853]: E0127 18:43:05.112329 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 18:43:05 crc kubenswrapper[4853]: E0127 18:43:05.112426 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 18:43:05 crc kubenswrapper[4853]: I0127 18:43:05.163439 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:05 crc kubenswrapper[4853]: I0127 18:43:05.163484 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:05 crc kubenswrapper[4853]: I0127 18:43:05.163492 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:05 crc kubenswrapper[4853]: I0127 18:43:05.163507 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:05 crc kubenswrapper[4853]: I0127 18:43:05.163517 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:05Z","lastTransitionTime":"2026-01-27T18:43:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:05 crc kubenswrapper[4853]: I0127 18:43:05.262518 4853 generic.go:334] "Generic (PLEG): container finished" podID="ce95829c-f3fb-493c-bf9a-a3515fe6ddac" containerID="10a6c7c9dd5d82c1b1f5f2d323a2f530c6b2e18362eedbbe74cc8c570732331f" exitCode=0 Jan 27 18:43:05 crc kubenswrapper[4853]: I0127 18:43:05.262594 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-ght98" event={"ID":"ce95829c-f3fb-493c-bf9a-a3515fe6ddac","Type":"ContainerDied","Data":"10a6c7c9dd5d82c1b1f5f2d323a2f530c6b2e18362eedbbe74cc8c570732331f"} Jan 27 18:43:05 crc kubenswrapper[4853]: I0127 18:43:05.264504 4853 generic.go:334] "Generic (PLEG): container finished" podID="ebbc7598-422a-43ad-ae98-88e57ec80b9c" containerID="e81785fa8e88ca85b42840c9f11efd5774320c7b13588e01695e428426ecda4d" exitCode=0 Jan 27 18:43:05 crc kubenswrapper[4853]: I0127 18:43:05.264531 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hdtbk" event={"ID":"ebbc7598-422a-43ad-ae98-88e57ec80b9c","Type":"ContainerDied","Data":"e81785fa8e88ca85b42840c9f11efd5774320c7b13588e01695e428426ecda4d"} Jan 27 18:43:05 crc kubenswrapper[4853]: I0127 18:43:05.264583 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hdtbk" event={"ID":"ebbc7598-422a-43ad-ae98-88e57ec80b9c","Type":"ContainerStarted","Data":"8d6f8413d913a60fd3c0220d73f447034f13dd1f2a140277cbf62700d8a164fd"} Jan 27 18:43:05 crc kubenswrapper[4853]: I0127 18:43:05.265610 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:05 crc kubenswrapper[4853]: I0127 18:43:05.265679 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:05 crc kubenswrapper[4853]: I0127 18:43:05.265706 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:05 crc kubenswrapper[4853]: I0127 18:43:05.265735 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:05 crc kubenswrapper[4853]: I0127 18:43:05.265757 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:05Z","lastTransitionTime":"2026-01-27T18:43:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:05 crc kubenswrapper[4853]: I0127 18:43:05.278629 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5a0b2fe5dbcc81055c001720c8ab69b7bd1989fe60d9ef4063c96c53a3f4536\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:05Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:05 crc kubenswrapper[4853]: I0127 18:43:05.293399 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-2hzcp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c088d143-dd9c-4c77-b9a3-3a0113306f41\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b1b85e99c8381dbf8e86fb211770c49ebd11497f15c73254d6482b45d507654\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:43:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2zqtv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:43:03Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-2hzcp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:05Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:05 crc kubenswrapper[4853]: I0127 18:43:05.308961 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-6gqj2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b8a89b1e-bef8-4cb7-930c-480d3125778c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d886a21d3eb5af71f0dc43a77a71a2d80c11b89013b8b77dd9e680bed9c09a54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:43:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xbhn6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://36ec3ff8ba2e6f89b4c9a8177dc986ce198013a4b5392fc600a191a9d854241a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:43:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xbhn6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:43:03Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-6gqj2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:05Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:05 crc kubenswrapper[4853]: I0127 18:43:05.333844 4853 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hdtbk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ebbc7598-422a-43ad-ae98-88e57ec80b9c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cq4vs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cq4vs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\
\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cq4vs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cq4vs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cq4vs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cq4vs\\\",\\\"readOnly\\\":true,\\\"re
cursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cq4vs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cq4vs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482
919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cq4vs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:43:04Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hdtbk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:05Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:05 crc kubenswrapper[4853]: I0127 18:43:05.350383 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f980f1d965582a58bd18d993850b5bd5f21849c1d1c0275c81fdbb25d549e6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:05Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:05 crc kubenswrapper[4853]: I0127 18:43:05.366506 4853 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c77668e-2627-46e1-b7f5-a99455378d85\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6f38d96da9d500a022fd5b7fbe4019623d08fb9733ba3b1a5f2f107f1901a28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ae32d32392bded0188e0b83c05a33e57dc10be293ddee84b8b12416d0e64fc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd606f868e97e5c0f90517a0de662da2512679e27b0ececbcaa02c9f7e79c4b6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b210c08a6d15dd6d40daabfcc4cb94985e2a3957ffa501b32d
5331f3d20f8908\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:42:38Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:05Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:05 crc kubenswrapper[4853]: I0127 18:43:05.368196 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:05 crc kubenswrapper[4853]: I0127 18:43:05.368230 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:05 crc kubenswrapper[4853]: I0127 18:43:05.368240 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:05 crc kubenswrapper[4853]: I0127 18:43:05.368259 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:05 crc kubenswrapper[4853]: I0127 18:43:05.368271 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:05Z","lastTransitionTime":"2026-01-27T18:43:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:05 crc kubenswrapper[4853]: I0127 18:43:05.381046 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:05Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:05 crc kubenswrapper[4853]: I0127 18:43:05.396926 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:05Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:05 crc kubenswrapper[4853]: I0127 18:43:05.411877 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d7d5a0e2cb909a09c6325d43fed80f9ac231f17c4904fda6da22031d974a50b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7fba84b5abd317487efa9758191c6d4e0e878809711ccecdd2f98c2c7659c890\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imag
eID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:05Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:05 crc kubenswrapper[4853]: I0127 18:43:05.479914 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-w4d5n" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dd2c07de-2ac9-4074-9fb0-519cfaf37f69\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9662d5a5a46a3043eeddbb21a4c7da0545beb87a0da7da3e96dfe3b3cf803430\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:43:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\"
:\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8jkvb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:43:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-w4d5n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:05Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:05 crc kubenswrapper[4853]: I0127 18:43:05.480550 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:05 crc kubenswrapper[4853]: I0127 18:43:05.480622 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:05 crc kubenswrapper[4853]: I0127 18:43:05.480636 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:05 crc kubenswrapper[4853]: I0127 18:43:05.480679 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:05 crc kubenswrapper[4853]: I0127 18:43:05.480697 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:05Z","lastTransitionTime":"2026-01-27T18:43:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:05 crc kubenswrapper[4853]: I0127 18:43:05.500040 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:05Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:05 crc kubenswrapper[4853]: I0127 18:43:05.526427 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a5b9c54a-360e-431b-b35a-eea549616487\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e57571b0b75d3ac6eaa28ddbc474ddf47f632c158372a965dfe4c30f54d8a2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2dcf34028a2a1a0d41f57ebced90e711bfa6d0f199a2b12bc2bcd6d0642d10b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://14e9331353060208631cfca86d748339935f8581d50a234f2864eaffdd98ff39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7d4896cb23e4a91356e7fabb5dfcf05e46bf7e
8c34b6bd73d63a16422d015ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a02f84ba42baba137145a54bfa742b12b0f94aaf41801347eb8a803a6acd1b2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15ab34ce94a12a65d8b187ce438f27e37d4b45e0b9ed080c79d6e4633e777658\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://15ab34ce94a12a65d8b187ce438f27e37d4b45e0b9ed080c79d6e4633e777658\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:42:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d972c76164f33d83bd6d211da2b4ecec08c5d88dd4cd474c5d5a8122649ea1cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d972c76164f33d83bd6d211da2b4ecec08c5d88dd4cd474c5d5a8122649ea1cd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:42:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:40Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b9435046e30c8cda89d2ab6f312406bc2cc6db0c03ae6cdea1eb5acf8a804df1\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9435046e30c8cda89d2ab6f312406bc2cc6db0c03ae6cdea1eb5acf8a804df1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:42:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:42:38Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:05Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:05 crc kubenswrapper[4853]: I0127 18:43:05.539498 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-l59xt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f9e82bc6-1fab-4815-a64e-2ebbf8b72315\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d6f7b775ee615931d713c3fcf51828673cd8414a08c46c04eb38abd37c58d5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:43:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnhhx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\
\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:43:03Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-l59xt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:05Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:05 crc kubenswrapper[4853]: I0127 18:43:05.553421 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ght98" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ce95829c-f3fb-493c-bf9a-a3515fe6ddac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:03Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:03Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttzfq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10a6c7c9dd5d82c1b1f5f2d323a2f530c6b2e18362eedbbe74cc8c570732331f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10a6c7c9dd5d82c1b1f5f2d323a2f530c6b2e18362eedbbe74cc8c570732331f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:43:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttzfq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttzfq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reaso
n\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttzfq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttzfq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttzfq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttzfq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:43:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ght98\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:05Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:05 crc 
kubenswrapper[4853]: I0127 18:43:05.568338 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eccf4b23-863a-490c-ba35-1b03d360e200\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7cfb91267c8977bea938d9584cf8047ae6f6b4bff6a9d74c623f8903cd3416ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58d32f27b39964fe7195a0e25264b28a01c6c29753912edac25d07eef4f37cc7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a941071ee01b24389cae64dfc6cd851cde6b352121235ebf9e9bec77f8bcfb69\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"runnin
g\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://213668bb8a0478fe7fba2a64a45dd4aec0dfd6794a29977633b4e0fd1bcbe352\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://213668bb8a0478fe7fba2a64a45dd4aec0dfd6794a29977633b4e0fd1bcbe352\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T18:42:58Z\\\",\\\"message\\\":\\\"le observer\\\\nW0127 18:42:57.781245 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 18:42:57.782660 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 18:42:57.784271 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3951302993/tls.crt::/tmp/serving-cert-3951302993/tls.key\\\\\\\"\\\\nI0127 18:42:58.344426 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 18:42:58.346603 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 18:42:58.346626 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 18:42:58.346650 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 18:42:58.346655 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 18:42:58.351830 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 18:42:58.351854 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 18:42:58.351860 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 18:42:58.351866 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 18:42:58.351870 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 18:42:58.351874 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nI0127 18:42:58.351873 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0127 18:42:58.351879 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0127 18:42:58.354790 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:52Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0058055ea5c0242bdb042c45a00203d344383e845de280c81bc7695416563666\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:40Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://00d05692e328aacfc251e3b583e1da6d1f4f677e93d07cec2aaa7aea5c3038cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://00d05692e328aacfc251e3b583e1da6d1f4f677e93d07cec2aaa7aea5c3038cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:42:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:42:38Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:05Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:05 crc kubenswrapper[4853]: I0127 18:43:05.582953 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:05 crc kubenswrapper[4853]: I0127 18:43:05.582986 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:05 crc kubenswrapper[4853]: I0127 18:43:05.582995 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:05 crc kubenswrapper[4853]: I0127 18:43:05.583010 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:05 crc kubenswrapper[4853]: I0127 18:43:05.583020 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:05Z","lastTransitionTime":"2026-01-27T18:43:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:43:05 crc kubenswrapper[4853]: I0127 18:43:05.585555 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eccf4b23-863a-490c-ba35-1b03d360e200\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7cfb91267c8977bea938d9584cf8047ae6f6b4bff6a9d74c623f8903cd3416ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58d32f27b39964fe7195a0e25264b28a01c6c29753912edac25d07eef4f37cc7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a941071ee01b24389cae64dfc6cd851cde6b352121235ebf9e9bec77f8bcfb69\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f
7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://213668bb8a0478fe7fba2a64a45dd4aec0dfd6794a29977633b4e0fd1bcbe352\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://213668bb8a0478fe7fba2a64a45dd4aec0dfd6794a29977633b4e0fd1bcbe352\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T18:42:58Z\\\",\\\"message\\\":\\\"le observer\\\\nW0127 18:42:57.781245 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 18:42:57.782660 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 18:42:57.784271 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3951302993/tls.crt::/tmp/serving-cert-3951302993/tls.key\\\\\\\"\\\\nI0127 18:42:58.344426 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 18:42:58.346603 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 18:42:58.346626 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 18:42:58.346650 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 18:42:58.346655 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 18:42:58.351830 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 18:42:58.351854 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 18:42:58.351860 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 18:42:58.351866 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 18:42:58.351870 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 18:42:58.351874 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nI0127 18:42:58.351873 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0127 18:42:58.351879 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0127 18:42:58.354790 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:52Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0058055ea5c0242bdb042c45a00203d344383e845de280c81bc7695416563666\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:40Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://00d05692e328aacfc251e3b583e1da6d1f4f677e93d07cec2aaa7aea5c3038cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://00d05692e328aacfc251e3b583e1da6d1f4f677e93d07cec2aaa7aea5c3038cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:42:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:42:38Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:05Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:05 crc kubenswrapper[4853]: I0127 18:43:05.607208 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a5b9c54a-360e-431b-b35a-eea549616487\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e57571b0b75d3ac6eaa28ddbc474ddf47f632c158372a965dfe4c30f54d8a2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2dcf34028a2a1a0d41f57ebced90e711bfa6d0f199a2b12bc2bcd6d0642d10b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://14e9331353060208631cfca86d748339935f8581d50a234f2864eaffdd98ff39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7d4896cb23e4a91356e7fabb5dfcf05e46bf7e
8c34b6bd73d63a16422d015ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a02f84ba42baba137145a54bfa742b12b0f94aaf41801347eb8a803a6acd1b2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15ab34ce94a12a65d8b187ce438f27e37d4b45e0b9ed080c79d6e4633e777658\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://15ab34ce94a12a65d8b187ce438f27e37d4b45e0b9ed080c79d6e4633e777658\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:42:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d972c76164f33d83bd6d211da2b4ecec08c5d88dd4cd474c5d5a8122649ea1cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d972c76164f33d83bd6d211da2b4ecec08c5d88dd4cd474c5d5a8122649ea1cd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:42:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:40Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b9435046e30c8cda89d2ab6f312406bc2cc6db0c03ae6cdea1eb5acf8a804df1\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9435046e30c8cda89d2ab6f312406bc2cc6db0c03ae6cdea1eb5acf8a804df1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:42:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:42:38Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:05Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:05 crc kubenswrapper[4853]: I0127 18:43:05.619858 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-l59xt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f9e82bc6-1fab-4815-a64e-2ebbf8b72315\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d6f7b775ee615931d713c3fcf51828673cd8414a08c46c04eb38abd37c58d5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:43:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnhhx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\
\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:43:03Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-l59xt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:05Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:05 crc kubenswrapper[4853]: I0127 18:43:05.641514 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ght98" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ce95829c-f3fb-493c-bf9a-a3515fe6ddac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:03Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:03Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttzfq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10a6c7c9dd5d82c1b1f5f2d323a2f530c6b2e18362eedbbe74cc8c570732331f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10a6c7c9dd5d82c1b1f5f2d323a2f530c6b2e18362eedbbe74cc8c570732331f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:43:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttzfq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttzfq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reaso
n\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttzfq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttzfq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttzfq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttzfq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:43:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ght98\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:05Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:05 crc 
kubenswrapper[4853]: I0127 18:43:05.655273 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f980f1d965582a58bd18d993850b5bd5f21849c1d1c0275c81fdbb25d549e6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:05Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:05 crc kubenswrapper[4853]: I0127 18:43:05.670191 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5a0b2fe5dbcc81055c001720c8ab69b7bd1989fe60d9ef4063c96c53a3f4536\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:05Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:05 crc kubenswrapper[4853]: I0127 18:43:05.680400 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-2hzcp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c088d143-dd9c-4c77-b9a3-3a0113306f41\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b1b85e99c8381dbf8e86fb211770c49ebd11497f15c73254d6482b45d507654\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:43:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2zqtv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:43:03Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-2hzcp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:05Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:05 crc kubenswrapper[4853]: I0127 18:43:05.685089 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:05 crc kubenswrapper[4853]: I0127 18:43:05.685244 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:05 crc kubenswrapper[4853]: I0127 18:43:05.685258 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:05 crc kubenswrapper[4853]: I0127 18:43:05.685272 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:05 crc kubenswrapper[4853]: I0127 18:43:05.685652 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:05Z","lastTransitionTime":"2026-01-27T18:43:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:43:05 crc kubenswrapper[4853]: I0127 18:43:05.692100 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-6gqj2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b8a89b1e-bef8-4cb7-930c-480d3125778c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d886a21d3eb5af71f0dc43a77a71a2d80c11b89013b8b77dd9e680bed9c09a54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:43:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xbhn6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://36ec3ff8ba2e6f89b4c9a8177dc986ce198013a4b5392fc600a191a9d854241a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:43:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xbhn6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:43:03Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-6gqj2\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:05Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:05 crc kubenswrapper[4853]: I0127 18:43:05.710954 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hdtbk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ebbc7598-422a-43ad-ae98-88e57ec80b9c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cq4vs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cq4vs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\
\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cq4vs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cq4vs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cq4vs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mo
untPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cq4vs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cq4vs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cq4vs\\\",\\\"readOnl
y\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e81785fa8e88ca85b42840c9f11efd5774320c7b13588e01695e428426ecda4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e81785fa8e88ca85b42840c9f11efd5774320c7b13588e01695e428426ecda4d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:43:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cq4vs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:43:04Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hdtbk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:05Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:05 crc kubenswrapper[4853]: I0127 18:43:05.724986 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c77668e-2627-46e1-b7f5-a99455378d85\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6f38d96da9d500a022fd5b7fbe4019623d08fb9733ba3b1a5f2f107f1901a28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ae32d32392bded0188e0b83c05a33e57dc10be293ddee84b8b12416d0e64fc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd606f868e97e5c0f90517a0de662da2512679e27b0ececbcaa02c9f7e79c4b6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b210c08a6d15dd6d40daabfcc4cb94985e2a3957ffa501b32d5331f3d20f8908\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:42:38Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:05Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:05 crc kubenswrapper[4853]: I0127 18:43:05.740850 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:05Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:05 crc kubenswrapper[4853]: I0127 18:43:05.755798 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:05Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:05 crc kubenswrapper[4853]: I0127 18:43:05.778195 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:05Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:05 crc kubenswrapper[4853]: I0127 18:43:05.787588 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:05 crc kubenswrapper[4853]: I0127 18:43:05.787634 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:05 crc kubenswrapper[4853]: I0127 18:43:05.787645 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:05 crc kubenswrapper[4853]: I0127 18:43:05.787661 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:05 crc kubenswrapper[4853]: I0127 18:43:05.787673 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:05Z","lastTransitionTime":"2026-01-27T18:43:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:05 crc kubenswrapper[4853]: I0127 18:43:05.798049 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d7d5a0e2cb909a09c6325d43fed80f9ac231f17c4904fda6da22031d974a50b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7fba84b5abd317487efa9758191c6d4e0e878809711ccecdd2f98c2c7659c890\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:05Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:05 crc kubenswrapper[4853]: I0127 18:43:05.812274 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-w4d5n" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dd2c07de-2ac9-4074-9fb0-519cfaf37f69\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9662d5a5a46a3043eeddbb21a4c7da0545beb87a0da7da3e96dfe3b3cf803430\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:43:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8jkvb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:43:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-w4d5n\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:05Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:05 crc kubenswrapper[4853]: I0127 18:43:05.889851 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:05 crc kubenswrapper[4853]: I0127 18:43:05.889889 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:05 crc kubenswrapper[4853]: I0127 18:43:05.889897 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:05 crc kubenswrapper[4853]: I0127 18:43:05.889911 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:05 crc kubenswrapper[4853]: I0127 18:43:05.889921 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:05Z","lastTransitionTime":"2026-01-27T18:43:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:43:05 crc kubenswrapper[4853]: I0127 18:43:05.991895 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:05 crc kubenswrapper[4853]: I0127 18:43:05.991931 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:05 crc kubenswrapper[4853]: I0127 18:43:05.991940 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:05 crc kubenswrapper[4853]: I0127 18:43:05.991953 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:05 crc kubenswrapper[4853]: I0127 18:43:05.991963 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:05Z","lastTransitionTime":"2026-01-27T18:43:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:06 crc kubenswrapper[4853]: I0127 18:43:06.082998 4853 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-11 02:41:31.588840902 +0000 UTC Jan 27 18:43:06 crc kubenswrapper[4853]: I0127 18:43:06.094051 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:06 crc kubenswrapper[4853]: I0127 18:43:06.094089 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:06 crc kubenswrapper[4853]: I0127 18:43:06.094099 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:06 crc kubenswrapper[4853]: I0127 18:43:06.094114 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:06 crc kubenswrapper[4853]: I0127 18:43:06.094140 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:06Z","lastTransitionTime":"2026-01-27T18:43:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:43:06 crc kubenswrapper[4853]: I0127 18:43:06.196420 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:06 crc kubenswrapper[4853]: I0127 18:43:06.196466 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:06 crc kubenswrapper[4853]: I0127 18:43:06.196477 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:06 crc kubenswrapper[4853]: I0127 18:43:06.196495 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:06 crc kubenswrapper[4853]: I0127 18:43:06.196509 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:06Z","lastTransitionTime":"2026-01-27T18:43:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:06 crc kubenswrapper[4853]: I0127 18:43:06.270486 4853 generic.go:334] "Generic (PLEG): container finished" podID="ce95829c-f3fb-493c-bf9a-a3515fe6ddac" containerID="b1e8c337bda8a6b6c0848db7c95140385d545fc7dd729f002d21c248e2bbf881" exitCode=0 Jan 27 18:43:06 crc kubenswrapper[4853]: I0127 18:43:06.270540 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-ght98" event={"ID":"ce95829c-f3fb-493c-bf9a-a3515fe6ddac","Type":"ContainerDied","Data":"b1e8c337bda8a6b6c0848db7c95140385d545fc7dd729f002d21c248e2bbf881"} Jan 27 18:43:06 crc kubenswrapper[4853]: I0127 18:43:06.275103 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hdtbk" event={"ID":"ebbc7598-422a-43ad-ae98-88e57ec80b9c","Type":"ContainerStarted","Data":"a7937ea08bd25bed35d9386a8c870c88ff3f58eeec1ba1a2c55bdfa260017f9b"} Jan 27 18:43:06 crc kubenswrapper[4853]: I0127 18:43:06.275160 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hdtbk" event={"ID":"ebbc7598-422a-43ad-ae98-88e57ec80b9c","Type":"ContainerStarted","Data":"3f8095ca05481aa2d17d10ae848c2d052452f3bfa83b6ac23a75d0f59d84a604"} Jan 27 18:43:06 crc kubenswrapper[4853]: I0127 18:43:06.275176 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hdtbk" event={"ID":"ebbc7598-422a-43ad-ae98-88e57ec80b9c","Type":"ContainerStarted","Data":"c8de5b4d8d6553f77b012954fddfcb337c9b25ba98d94ef27831b50f63672377"} Jan 27 18:43:06 crc kubenswrapper[4853]: I0127 18:43:06.275190 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hdtbk" event={"ID":"ebbc7598-422a-43ad-ae98-88e57ec80b9c","Type":"ContainerStarted","Data":"4a5e0da6c76e9510cda57fa243b0a721d160745a63e88a9aa736807af73864d8"} Jan 27 18:43:06 crc kubenswrapper[4853]: I0127 18:43:06.275201 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hdtbk" event={"ID":"ebbc7598-422a-43ad-ae98-88e57ec80b9c","Type":"ContainerStarted","Data":"efa308f95f35833395528dbe46b9e3d8f25800c18126c75d3db793f9c7945d30"} Jan 27 18:43:06 crc kubenswrapper[4853]: I0127 18:43:06.275212 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hdtbk" event={"ID":"ebbc7598-422a-43ad-ae98-88e57ec80b9c","Type":"ContainerStarted","Data":"5a23ced79c532f6fcb0f4efcf743b934f7640deb3a7b1b879032416ee2c9b8d7"} Jan 27 18:43:06 crc kubenswrapper[4853]: I0127 18:43:06.290016 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eccf4b23-863a-490c-ba35-1b03d360e200\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:38Z\\\",\\\"message\\\":\\\"containers with 
unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7cfb91267c8977bea938d9584cf8047ae6f6b4bff6a9d74c623f8903cd3416ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58d32f27b39964fe7195a0e25264b28a01c6c29753912edac25d07eef4f37cc7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a941071ee01b24389cae64dfc6cd851cde6b352121235ebf9e9bec77f8bcfb69\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://213668bb8a0478fe7fba2a64a45dd4aec0dfd6794a29977633b4e0fd1bcbe352\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://213668bb8a0478fe7fba2a64a45dd4aec0dfd6794a29977633b4e0fd1bcbe352\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T18:42:58Z\\\",\\\"message\\\":\\\"le observer\\\\nW0127 18:42:57.781245 1 builder.go:272] unable to get owner 
reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 18:42:57.782660 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 18:42:57.784271 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3951302993/tls.crt::/tmp/serving-cert-3951302993/tls.key\\\\\\\"\\\\nI0127 18:42:58.344426 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 18:42:58.346603 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 18:42:58.346626 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 18:42:58.346650 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 18:42:58.346655 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 18:42:58.351830 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 18:42:58.351854 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 18:42:58.351860 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 18:42:58.351866 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 18:42:58.351870 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 18:42:58.351874 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nI0127 18:42:58.351873 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0127 18:42:58.351879 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0127 18:42:58.354790 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:52Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0058055ea5c0242bdb042c45a00203d344383e845de280c81bc7695416563666\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:40Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://00d05692e328aacfc251e3b583e1da6d1f4f677e93d07cec2aaa7aea5c3038cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://00d05692e328aacfc251e3b583e1da6d1f4f677e93d07cec2aaa7aea5c3038cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:42:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:42:38Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:06Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:06 crc kubenswrapper[4853]: I0127 18:43:06.298667 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:06 crc kubenswrapper[4853]: I0127 18:43:06.298715 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:06 crc kubenswrapper[4853]: I0127 18:43:06.298727 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:06 crc kubenswrapper[4853]: I0127 18:43:06.298745 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:06 crc kubenswrapper[4853]: I0127 18:43:06.298757 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:06Z","lastTransitionTime":"2026-01-27T18:43:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:43:06 crc kubenswrapper[4853]: I0127 18:43:06.324895 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a5b9c54a-360e-431b-b35a-eea549616487\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e57571b0b75d3ac6eaa28ddbc474ddf47f632c158372a965dfe4c30f54d8a2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2dcf34028a2a1a0d41f57ebced90e711bfa6d0f199a2b12bc2bcd6d0642d10b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://14e9331353060208631cfca86d748339935f8581d50a234f2864eaffdd98ff39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":
0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7d4896cb23e4a91356e7fabb5dfcf05e46bf7e8c34b6bd73d63a16422d015ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a02f84ba42baba137145a54bfa742b12b0f94aaf41801347eb8a803a6acd1b2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15ab34ce94a12a65d8b187ce438f27e37d4b45e0b9ed080c79d6e4633e777658\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://15ab34ce94a12a65d8b187ce438f27e37d4b45e0b9ed080c79d6e4633e777658\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:42:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d972c76164f33d83bd6d211da2b4ecec08c5d88dd4cd474c5d5a8122649ea1cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"termi
nated\\\":{\\\"containerID\\\":\\\"cri-o://d972c76164f33d83bd6d211da2b4ecec08c5d88dd4cd474c5d5a8122649ea1cd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:42:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:40Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b9435046e30c8cda89d2ab6f312406bc2cc6db0c03ae6cdea1eb5acf8a804df1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9435046e30c8cda89d2ab6f312406bc2cc6db0c03ae6cdea1eb5acf8a804df1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:42:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:42:38Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:06Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:06 crc kubenswrapper[4853]: I0127 18:43:06.338543 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-l59xt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f9e82bc6-1fab-4815-a64e-2ebbf8b72315\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d6f7b775ee615931d713c3fcf51828673cd8414a08c46c04eb38abd37c58d5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:43:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnhhx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:43:03Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-l59xt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:06Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:06 crc kubenswrapper[4853]: I0127 18:43:06.361965 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ght98" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ce95829c-f3fb-493c-bf9a-a3515fe6ddac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:03Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:03Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:03Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttzfq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10a6c7c9dd5d82c1b1f5f2d323a2f530c6b2e18362eedbbe74cc8c570732331f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10a6c7c9dd5d82c1b1f5f2d323a2f530c6b2e18362eedbbe74cc8c570732331f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:43:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttzfq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b1e8c337bda8a6b6c0848db7c95140385d545fc7dd729f002d21c248e2bbf881\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b1e8c337bda8a6b6c0848db7c95140385d545fc7dd729f002d21c248e2bbf881\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:43:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:43:05Z\\\"}},\\\"volumeMounts\\\":[{\\\
"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttzfq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttzfq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttzfq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttzfq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":
{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttzfq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:43:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ght98\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:06Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:06 crc kubenswrapper[4853]: I0127 18:43:06.381442 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f980f1d965582a58bd18d993850b5bd5f21849c1d1c0275c81fdbb25d549e6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:06Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:06 crc kubenswrapper[4853]: I0127 18:43:06.393845 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5a0b2fe5dbcc81055c001720c8ab69b7bd1989fe60d9ef4063c96c53a3f4536\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:06Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:06 crc kubenswrapper[4853]: I0127 18:43:06.401254 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:06 crc kubenswrapper[4853]: I0127 18:43:06.401282 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:06 crc kubenswrapper[4853]: I0127 18:43:06.401291 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:06 crc kubenswrapper[4853]: I0127 18:43:06.401303 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:06 crc kubenswrapper[4853]: I0127 18:43:06.401794 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:06Z","lastTransitionTime":"2026-01-27T18:43:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:06 crc kubenswrapper[4853]: I0127 18:43:06.404651 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-2hzcp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c088d143-dd9c-4c77-b9a3-3a0113306f41\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b1b85e99c8381dbf8e86fb211770c49ebd11497f15c73254d6482b45d507654\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:43:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2zqtv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:43:03Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-2hzcp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:06Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:06 crc kubenswrapper[4853]: I0127 18:43:06.417587 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-6gqj2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b8a89b1e-bef8-4cb7-930c-480d3125778c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d886a21d3eb5af71f0dc43a77a71a2d80c11b89013b8b77dd9e680bed9c09a54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:43:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xbhn6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://36ec3ff8ba2e6f89b4c9a8177dc986ce198013a4b5392fc600a191a9d854241a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:43:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xbhn6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:43:03Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-6gqj2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:06Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:06 crc kubenswrapper[4853]: I0127 18:43:06.436882 4853 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hdtbk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ebbc7598-422a-43ad-ae98-88e57ec80b9c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cq4vs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cq4vs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",
\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cq4vs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cq4vs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cq4vs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cq4vs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47e
f0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cq4vs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cq4vs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e81785fa8e88ca85b42840c9f11efd5774320c7b13588e01695e428426ecda4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17
b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e81785fa8e88ca85b42840c9f11efd5774320c7b13588e01695e428426ecda4d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:43:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cq4vs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:43:04Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hdtbk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:06Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:06 crc kubenswrapper[4853]: I0127 18:43:06.450040 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c77668e-2627-46e1-b7f5-a99455378d85\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6f38d96da9d500a022fd5b7fbe4019623d08fb9733ba3b1a5f2f107f1901a28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ae32d32392bded0188e0b83c05a33e57dc10be293ddee84b8b12416d0e64fc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay
.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd606f868e97e5c0f90517a0de662da2512679e27b0ececbcaa02c9f7e79c4b6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b210c08a6d15dd6d40daabfcc4cb94985e2a3957ffa501b32d5331f3d20f8908\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:42:38Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:06Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:06 crc kubenswrapper[4853]: I0127 18:43:06.462381 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:06Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:06 crc kubenswrapper[4853]: I0127 18:43:06.474257 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:06Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:06 crc kubenswrapper[4853]: I0127 18:43:06.484894 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:06Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:06 crc kubenswrapper[4853]: I0127 18:43:06.497455 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d7d5a0e2cb909a09c6325d43fed80f9ac231f17c4904fda6da22031d974a50b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7fba84b5abd317487efa9758191c6d4e0e878809711ccecdd2f98c2c7659c890\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:06Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:06 crc kubenswrapper[4853]: I0127 18:43:06.504554 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:06 crc kubenswrapper[4853]: I0127 18:43:06.504603 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:06 crc kubenswrapper[4853]: I0127 18:43:06.504615 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:06 crc kubenswrapper[4853]: I0127 18:43:06.504654 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:06 crc kubenswrapper[4853]: I0127 18:43:06.504667 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:06Z","lastTransitionTime":"2026-01-27T18:43:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:06 crc kubenswrapper[4853]: I0127 18:43:06.512951 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-w4d5n" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dd2c07de-2ac9-4074-9fb0-519cfaf37f69\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9662d5a5a46a3043eeddbb21a4c7da0545beb87a0da7da3e96dfe3b3cf803430\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:43:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8jkvb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:43:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-w4d5n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:06Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:06 crc kubenswrapper[4853]: I0127 18:43:06.607204 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:06 crc kubenswrapper[4853]: I0127 18:43:06.607263 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:06 crc kubenswrapper[4853]: I0127 18:43:06.607274 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:06 crc kubenswrapper[4853]: I0127 18:43:06.607294 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:06 crc kubenswrapper[4853]: I0127 18:43:06.607306 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:06Z","lastTransitionTime":"2026-01-27T18:43:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:43:06 crc kubenswrapper[4853]: I0127 18:43:06.709350 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:06 crc kubenswrapper[4853]: I0127 18:43:06.709387 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:06 crc kubenswrapper[4853]: I0127 18:43:06.709396 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:06 crc kubenswrapper[4853]: I0127 18:43:06.709411 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:06 crc kubenswrapper[4853]: I0127 18:43:06.709422 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:06Z","lastTransitionTime":"2026-01-27T18:43:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:06 crc kubenswrapper[4853]: I0127 18:43:06.811756 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:06 crc kubenswrapper[4853]: I0127 18:43:06.811796 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:06 crc kubenswrapper[4853]: I0127 18:43:06.811808 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:06 crc kubenswrapper[4853]: I0127 18:43:06.811823 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:06 crc kubenswrapper[4853]: I0127 18:43:06.811835 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:06Z","lastTransitionTime":"2026-01-27T18:43:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:43:06 crc kubenswrapper[4853]: I0127 18:43:06.914069 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:06 crc kubenswrapper[4853]: I0127 18:43:06.914108 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:06 crc kubenswrapper[4853]: I0127 18:43:06.914139 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:06 crc kubenswrapper[4853]: I0127 18:43:06.914158 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:06 crc kubenswrapper[4853]: I0127 18:43:06.914169 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:06Z","lastTransitionTime":"2026-01-27T18:43:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:43:07 crc kubenswrapper[4853]: I0127 18:43:07.016843 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:07 crc kubenswrapper[4853]: I0127 18:43:07.016876 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:07 crc kubenswrapper[4853]: I0127 18:43:07.016885 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:07 crc kubenswrapper[4853]: I0127 18:43:07.016898 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:07 crc kubenswrapper[4853]: I0127 18:43:07.016911 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:07Z","lastTransitionTime":"2026-01-27T18:43:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:07 crc kubenswrapper[4853]: I0127 18:43:07.083477 4853 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-13 03:18:40.670833768 +0000 UTC Jan 27 18:43:07 crc kubenswrapper[4853]: I0127 18:43:07.111940 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 18:43:07 crc kubenswrapper[4853]: I0127 18:43:07.111975 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 18:43:07 crc kubenswrapper[4853]: I0127 18:43:07.112078 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 18:43:07 crc kubenswrapper[4853]: E0127 18:43:07.112180 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 18:43:07 crc kubenswrapper[4853]: E0127 18:43:07.112327 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 18:43:07 crc kubenswrapper[4853]: E0127 18:43:07.112527 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 18:43:07 crc kubenswrapper[4853]: I0127 18:43:07.119478 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:07 crc kubenswrapper[4853]: I0127 18:43:07.119514 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:07 crc kubenswrapper[4853]: I0127 18:43:07.119523 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:07 crc kubenswrapper[4853]: I0127 18:43:07.119537 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:07 crc kubenswrapper[4853]: I0127 18:43:07.119554 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:07Z","lastTransitionTime":"2026-01-27T18:43:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:07 crc kubenswrapper[4853]: I0127 18:43:07.222371 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:07 crc kubenswrapper[4853]: I0127 18:43:07.222412 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:07 crc kubenswrapper[4853]: I0127 18:43:07.222421 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:07 crc kubenswrapper[4853]: I0127 18:43:07.222437 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:07 crc kubenswrapper[4853]: I0127 18:43:07.222452 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:07Z","lastTransitionTime":"2026-01-27T18:43:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:43:07 crc kubenswrapper[4853]: I0127 18:43:07.281364 4853 generic.go:334] "Generic (PLEG): container finished" podID="ce95829c-f3fb-493c-bf9a-a3515fe6ddac" containerID="d6f0216b6ea3819db267200bb464f15f2f7e4de1c65df0871d6f7e70ecb54839" exitCode=0 Jan 27 18:43:07 crc kubenswrapper[4853]: I0127 18:43:07.281443 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-ght98" event={"ID":"ce95829c-f3fb-493c-bf9a-a3515fe6ddac","Type":"ContainerDied","Data":"d6f0216b6ea3819db267200bb464f15f2f7e4de1c65df0871d6f7e70ecb54839"} Jan 27 18:43:07 crc kubenswrapper[4853]: I0127 18:43:07.294937 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-6gqj2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b8a89b1e-bef8-4cb7-930c-480d3125778c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d886a21d3eb5af71f0dc43a77a71a2d80c11b89013b8b77dd9e680bed9c09a54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:43:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xbhn6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://36ec3ff8ba2e6f89b4c9a8177dc986ce198013a4b5392fc600a191a9d854241a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:43:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xbhn6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:43:03Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-6gqj2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:07Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:07 crc kubenswrapper[4853]: I0127 18:43:07.322160 4853 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hdtbk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ebbc7598-422a-43ad-ae98-88e57ec80b9c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cq4vs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cq4vs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",
\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cq4vs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cq4vs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cq4vs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cq4vs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47e
f0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cq4vs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cq4vs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e81785fa8e88ca85b42840c9f11efd5774320c7b13588e01695e428426ecda4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17
b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e81785fa8e88ca85b42840c9f11efd5774320c7b13588e01695e428426ecda4d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:43:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cq4vs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:43:04Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hdtbk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:07Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:07 crc kubenswrapper[4853]: I0127 18:43:07.324665 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:07 crc kubenswrapper[4853]: I0127 18:43:07.324735 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:07 crc kubenswrapper[4853]: I0127 18:43:07.324763 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:07 crc kubenswrapper[4853]: I0127 18:43:07.324805 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:07 crc kubenswrapper[4853]: I0127 18:43:07.324827 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:07Z","lastTransitionTime":"2026-01-27T18:43:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:07 crc kubenswrapper[4853]: I0127 18:43:07.338921 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f980f1d965582a58bd18d993850b5bd5f21849c1d1c0275c81fdbb25d549e6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:07Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:07 crc kubenswrapper[4853]: I0127 18:43:07.351408 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5a0b2fe5dbcc81055c001720c8ab69b7bd1989fe60d9ef4063c96c53a3f4536\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:07Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:07 crc kubenswrapper[4853]: I0127 18:43:07.361189 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-2hzcp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c088d143-dd9c-4c77-b9a3-3a0113306f41\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b1b85e99c8381dbf8e86fb211770c49ebd11497f15c73254d6482b45d507654\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:43:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2zqtv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:43:03Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-2hzcp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:07Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:07 crc kubenswrapper[4853]: I0127 18:43:07.372501 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c77668e-2627-46e1-b7f5-a99455378d85\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6f38d96da9d500a022fd5b7fbe4019623d08fb9733ba3b1a5f2f107f1901a28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ae32d32392bded0188e0b83c05a33e57dc10be293ddee84b8b12416d0e64fc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd606f868e97e5c0f90517a0de662da2512679e27b0ececbcaa02c9f7e79c4b6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b210c08a6d15dd6d40daabfcc4cb94985e2a3957ffa501b32d5331f3d20f8908\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:42:38Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:07Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:07 crc kubenswrapper[4853]: I0127 18:43:07.384578 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:07Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:07 crc kubenswrapper[4853]: I0127 18:43:07.398211 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-w4d5n" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dd2c07de-2ac9-4074-9fb0-519cfaf37f69\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9662d5a5a46a3043eeddbb21a4c7da0545beb87a0da7da3e96dfe3b3cf803430\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:43:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountP
ath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8jkvb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:43:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-w4d5n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:07Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:07 crc kubenswrapper[4853]: I0127 18:43:07.412318 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:07Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:07 crc kubenswrapper[4853]: I0127 18:43:07.423985 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:07Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:07 crc kubenswrapper[4853]: I0127 18:43:07.427801 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:07 crc kubenswrapper[4853]: I0127 18:43:07.427840 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:07 crc kubenswrapper[4853]: I0127 18:43:07.427852 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:07 crc kubenswrapper[4853]: I0127 18:43:07.427871 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:07 crc kubenswrapper[4853]: I0127 18:43:07.427884 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:07Z","lastTransitionTime":"2026-01-27T18:43:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:07 crc kubenswrapper[4853]: I0127 18:43:07.436936 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d7d5a0e2cb909a09c6325d43fed80f9ac231f17c4904fda6da22031d974a50b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7fba84b5abd317487efa9758191c6d4e0e878809711ccecdd2f98c2c7659c890\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:07Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:07 crc kubenswrapper[4853]: I0127 18:43:07.446809 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-l59xt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f9e82bc6-1fab-4815-a64e-2ebbf8b72315\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d6f7b775ee615931d713c3fcf51828673cd8414a08c46c04eb38abd37c58d5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:43:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnhhx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:43:03Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-l59xt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:07Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:07 crc kubenswrapper[4853]: I0127 18:43:07.459738 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ght98" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ce95829c-f3fb-493c-bf9a-a3515fe6ddac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:03Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:03Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:03Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttzfq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10a6c7c9dd5d82c1b1f5f2d323a2f530c6b2e18362eedbbe74cc8c570732331f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10a6c7c9dd5d82c1b1f5f2d323a2f530c6b2e18362eedbbe74cc8c570732331f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:43:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttzfq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b1e8c337bda8a6b6c0848db7c95140385d545fc7dd729f002d21c248e2bbf881\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b1e8c337bda8a6b6c0848db7c95140385d545fc7dd729f002d21c248e2bbf881\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:43:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:43:05Z\\\"}},\\\"volumeMounts\\\":[{\\\
"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttzfq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6f0216b6ea3819db267200bb464f15f2f7e4de1c65df0871d6f7e70ecb54839\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d6f0216b6ea3819db267200bb464f15f2f7e4de1c65df0871d6f7e70ecb54839\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:43:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:43:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttzfq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttzfq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serv
iceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttzfq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttzfq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:43:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ght98\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:07Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:07 crc kubenswrapper[4853]: I0127 18:43:07.474445 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eccf4b23-863a-490c-ba35-1b03d360e200\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7cfb91267c8977bea938d9584cf8047ae6f6b4bff6a9d74c623f8903cd3416ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58d32f27b39964fe7195a0e25264b28a01c6c29753912edac25d07eef4f37cc7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a941071ee01b24389cae64dfc6cd851cde6b352121235ebf9e9bec77f8bcfb69\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://213668bb8a0478fe7fba2a64a45dd4aec0dfd6794a29977633b4e0fd1bcbe352\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://213668bb8a0478fe7fba2a64a45dd4aec0dfd6794a29977633b4e0fd1bcbe352\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T18:42:58Z\\\",\\\"message\\\":\\\"le observer\\\\nW0127 18:42:57.781245 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 18:42:57.782660 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 18:42:57.784271 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3951302993/tls.crt::/tmp/serving-cert-3951302993/tls.key\\\\\\\"\\\\nI0127 18:42:58.344426 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 18:42:58.346603 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 18:42:58.346626 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 18:42:58.346650 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 18:42:58.346655 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 18:42:58.351830 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 18:42:58.351854 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 18:42:58.351860 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 18:42:58.351866 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 18:42:58.351870 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 18:42:58.351874 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nI0127 18:42:58.351873 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0127 18:42:58.351879 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0127 18:42:58.354790 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:52Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0058055ea5c0242bdb042c45a00203d344383e845de280c81bc7695416563666\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:40Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://00d05692e328aacfc251e3b583e1da6d1f4f677e93d07cec2aaa7aea5c3038cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://00d05692e328aacfc251e3b583e1da6d1f4f677e93d07cec2aaa7aea5c3038cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:42:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:42:38Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:07Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:07 crc kubenswrapper[4853]: I0127 18:43:07.496596 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a5b9c54a-360e-431b-b35a-eea549616487\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e57571b0b75d3ac6eaa28ddbc474ddf47f632c158372a965dfe4c30f54d8a2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2dcf34028a2a1a0d41f57ebced90e711bfa6d0f199a2b12bc2bcd6d0642d10b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://14e9331353060208631cfca86d748339935f8581d50a234f2864eaffdd98ff39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7d4896cb23e4a91356e7fabb5dfcf05e46bf7e
8c34b6bd73d63a16422d015ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a02f84ba42baba137145a54bfa742b12b0f94aaf41801347eb8a803a6acd1b2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15ab34ce94a12a65d8b187ce438f27e37d4b45e0b9ed080c79d6e4633e777658\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://15ab34ce94a12a65d8b187ce438f27e37d4b45e0b9ed080c79d6e4633e777658\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:42:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d972c76164f33d83bd6d211da2b4ecec08c5d88dd4cd474c5d5a8122649ea1cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d972c76164f33d83bd6d211da2b4ecec08c5d88dd4cd474c5d5a8122649ea1cd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:42:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:40Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b9435046e30c8cda89d2ab6f312406bc2cc6db0c03ae6cdea1eb5acf8a804df1\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9435046e30c8cda89d2ab6f312406bc2cc6db0c03ae6cdea1eb5acf8a804df1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:42:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:42:38Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:07Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:07 crc kubenswrapper[4853]: I0127 18:43:07.530306 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:07 crc kubenswrapper[4853]: I0127 18:43:07.530335 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:07 crc kubenswrapper[4853]: I0127 18:43:07.530343 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:07 crc kubenswrapper[4853]: I0127 18:43:07.530356 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:07 crc kubenswrapper[4853]: I0127 18:43:07.530365 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:07Z","lastTransitionTime":"2026-01-27T18:43:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:07 crc kubenswrapper[4853]: I0127 18:43:07.633154 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:07 crc kubenswrapper[4853]: I0127 18:43:07.633185 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:07 crc kubenswrapper[4853]: I0127 18:43:07.633193 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:07 crc kubenswrapper[4853]: I0127 18:43:07.633206 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:07 crc kubenswrapper[4853]: I0127 18:43:07.633215 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:07Z","lastTransitionTime":"2026-01-27T18:43:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:43:07 crc kubenswrapper[4853]: I0127 18:43:07.736549 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:07 crc kubenswrapper[4853]: I0127 18:43:07.736584 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:07 crc kubenswrapper[4853]: I0127 18:43:07.736597 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:07 crc kubenswrapper[4853]: I0127 18:43:07.736613 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:07 crc kubenswrapper[4853]: I0127 18:43:07.736624 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:07Z","lastTransitionTime":"2026-01-27T18:43:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:43:07 crc kubenswrapper[4853]: I0127 18:43:07.838936 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:07 crc kubenswrapper[4853]: I0127 18:43:07.838971 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:07 crc kubenswrapper[4853]: I0127 18:43:07.838983 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:07 crc kubenswrapper[4853]: I0127 18:43:07.838999 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:07 crc kubenswrapper[4853]: I0127 18:43:07.839011 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:07Z","lastTransitionTime":"2026-01-27T18:43:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:07 crc kubenswrapper[4853]: I0127 18:43:07.936029 4853 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Jan 27 18:43:07 crc kubenswrapper[4853]: I0127 18:43:07.960577 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:07 crc kubenswrapper[4853]: I0127 18:43:07.960660 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:07 crc kubenswrapper[4853]: I0127 18:43:07.960901 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:07 crc kubenswrapper[4853]: I0127 18:43:07.960926 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:07 crc kubenswrapper[4853]: I0127 18:43:07.960948 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:07Z","lastTransitionTime":"2026-01-27T18:43:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:43:08 crc kubenswrapper[4853]: I0127 18:43:08.064529 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:08 crc kubenswrapper[4853]: I0127 18:43:08.065058 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:08 crc kubenswrapper[4853]: I0127 18:43:08.065069 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:08 crc kubenswrapper[4853]: I0127 18:43:08.065085 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:08 crc kubenswrapper[4853]: I0127 18:43:08.065096 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:08Z","lastTransitionTime":"2026-01-27T18:43:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:08 crc kubenswrapper[4853]: I0127 18:43:08.083957 4853 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-15 11:13:21.731538539 +0000 UTC Jan 27 18:43:08 crc kubenswrapper[4853]: I0127 18:43:08.124363 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:08Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:08 crc kubenswrapper[4853]: I0127 18:43:08.136374 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready 
status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:08Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:08 crc kubenswrapper[4853]: I0127 18:43:08.148716 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d7d5a0e2cb909a09c6325d43fed80f9ac231f17c4904fda6da22031d974a50b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7fba84b5abd317487efa9758191c6d4e0e878809711ccecdd2f98c2c7659c890\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:08Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:08 crc kubenswrapper[4853]: I0127 18:43:08.161539 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-w4d5n" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dd2c07de-2ac9-4074-9fb0-519cfaf37f69\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9662d5a5a46a3043eeddbb21a4c7da0545beb87a0da7da3e96dfe3b3cf803430\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:43:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8jkvb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:43:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-w4d5n\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:08Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:08 crc kubenswrapper[4853]: I0127 18:43:08.167920 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:08 crc kubenswrapper[4853]: I0127 18:43:08.167948 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:08 crc kubenswrapper[4853]: I0127 18:43:08.167957 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:08 crc kubenswrapper[4853]: I0127 18:43:08.167970 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:08 crc kubenswrapper[4853]: I0127 18:43:08.167982 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:08Z","lastTransitionTime":"2026-01-27T18:43:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:43:08 crc kubenswrapper[4853]: I0127 18:43:08.175523 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eccf4b23-863a-490c-ba35-1b03d360e200\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7cfb91267c8977bea938d9584cf8047ae6f6b4bff6a9d74c623f8903cd3416ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58d32f27b39964fe7195a0e25264b28a01c6c29753912edac25d07eef4f37cc7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a941071ee01b24389cae64dfc6cd851cde6b352121235ebf9e9bec77f8bcfb69\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://213668bb8a0478fe7fba2a64a45dd4aec0dfd6794a29977633b4e0fd1bcbe352\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://213668bb8a0478fe7fba2a64a45dd4aec0dfd6794a29977633b4e0fd1bcbe352\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T18:42:58Z\\\",\\\"message\\\":\\\"le observer\\\\nW0127 18:42:57.781245 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 18:42:57.782660 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 18:42:57.784271 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3951302993/tls.crt::/tmp/serving-cert-3951302993/tls.key\\\\\\\"\\\\nI0127 18:42:58.344426 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 18:42:58.346603 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 18:42:58.346626 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 18:42:58.346650 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 18:42:58.346655 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 18:42:58.351830 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 18:42:58.351854 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 18:42:58.351860 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 18:42:58.351866 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 18:42:58.351870 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 18:42:58.351874 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nI0127 18:42:58.351873 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0127 18:42:58.351879 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0127 18:42:58.354790 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:52Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0058055ea5c0242bdb042c45a00203d344383e845de280c81bc7695416563666\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:40Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://00d05692e328aacfc251e3b583e1da6d1f4f677e93d07cec2aaa7aea5c3038cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://00d05692e328aacfc251e3b583e1da6d1f4f677e93d07cec2aaa7aea5c3038cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:42:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:42:38Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:08Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:08 crc kubenswrapper[4853]: I0127 18:43:08.196808 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a5b9c54a-360e-431b-b35a-eea549616487\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e57571b0b75d3ac6eaa28ddbc474ddf47f632c158372a965dfe4c30f54d8a2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2dcf34028a2a1a0d41f57ebced90e711bfa6d0f199a2b12bc2bcd6d0642d10b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://14e9331353060208631cfca86d748339935f8581d50a234f2864eaffdd98ff39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7d4896cb23e4a91356e7fabb5dfcf05e46bf7e
8c34b6bd73d63a16422d015ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a02f84ba42baba137145a54bfa742b12b0f94aaf41801347eb8a803a6acd1b2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15ab34ce94a12a65d8b187ce438f27e37d4b45e0b9ed080c79d6e4633e777658\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://15ab34ce94a12a65d8b187ce438f27e37d4b45e0b9ed080c79d6e4633e777658\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:42:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d972c76164f33d83bd6d211da2b4ecec08c5d88dd4cd474c5d5a8122649ea1cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d972c76164f33d83bd6d211da2b4ecec08c5d88dd4cd474c5d5a8122649ea1cd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:42:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:40Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b9435046e30c8cda89d2ab6f312406bc2cc6db0c03ae6cdea1eb5acf8a804df1\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9435046e30c8cda89d2ab6f312406bc2cc6db0c03ae6cdea1eb5acf8a804df1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:42:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:42:38Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:08Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:08 crc kubenswrapper[4853]: I0127 18:43:08.207525 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-l59xt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f9e82bc6-1fab-4815-a64e-2ebbf8b72315\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d6f7b775ee615931d713c3fcf51828673cd8414a08c46c04eb38abd37c58d5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:43:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnhhx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\
\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:43:03Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-l59xt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:08Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:08 crc kubenswrapper[4853]: I0127 18:43:08.220091 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ght98" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ce95829c-f3fb-493c-bf9a-a3515fe6ddac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:03Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:03Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttzfq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10a6c7c9dd5d82c1b1f5f2d323a2f530c6b2e18362eedbbe74cc8c570732331f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10a6c7c9dd5d82c1b1f5f2d323a2f530c6b2e18362eedbbe74cc8c570732331f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:43:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttzfq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b1e8c337bda8a6b6c0848db7c95140385d545fc7dd729f002d21c248e2bbf881\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b1e8c337bda8a6b6c0848db7c95140385d545fc7dd729f002d21c248e2bbf881\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:43:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:43:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-ttzfq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6f0216b6ea3819db267200bb464f15f2f7e4de1c65df0871d6f7e70ecb54839\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d6f0216b6ea3819db267200bb464f15f2f7e4de1c65df0871d6f7e70ecb54839\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:43:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:43:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttzfq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttzfq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttzfq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/
cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttzfq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:43:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ght98\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:08Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:08 crc kubenswrapper[4853]: I0127 18:43:08.230895 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f980f1d965582a58bd18d993850b5bd5f21849c1d1c0275c81fdbb25d549e6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:08Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:08 crc kubenswrapper[4853]: I0127 18:43:08.254928 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5a0b2fe5dbcc81055c001720c8ab69b7bd1989fe60d9ef4063c96c53a3f4536\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:08Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:08 crc kubenswrapper[4853]: I0127 18:43:08.270636 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:08 crc kubenswrapper[4853]: I0127 18:43:08.270677 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:08 crc kubenswrapper[4853]: I0127 18:43:08.270690 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:08 crc kubenswrapper[4853]: I0127 18:43:08.270708 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:08 crc kubenswrapper[4853]: I0127 18:43:08.270722 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:08Z","lastTransitionTime":"2026-01-27T18:43:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:08 crc kubenswrapper[4853]: I0127 18:43:08.279950 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-2hzcp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c088d143-dd9c-4c77-b9a3-3a0113306f41\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b1b85e99c8381dbf8e86fb211770c49ebd11497f15c73254d6482b45d507654\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:43:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2zqtv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:43:03Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-2hzcp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:08Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:08 crc kubenswrapper[4853]: I0127 18:43:08.286934 4853 generic.go:334] "Generic (PLEG): container finished" podID="ce95829c-f3fb-493c-bf9a-a3515fe6ddac" containerID="205b9e4e915af8e2031c48180f4d56d6075e94f8a8ff8b40fe6b46229fb1aa03" exitCode=0 Jan 27 18:43:08 crc kubenswrapper[4853]: I0127 18:43:08.287026 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-ght98" event={"ID":"ce95829c-f3fb-493c-bf9a-a3515fe6ddac","Type":"ContainerDied","Data":"205b9e4e915af8e2031c48180f4d56d6075e94f8a8ff8b40fe6b46229fb1aa03"} Jan 27 18:43:08 crc kubenswrapper[4853]: I0127 18:43:08.296330 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-6gqj2" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b8a89b1e-bef8-4cb7-930c-480d3125778c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d886a21d3eb5af71f0dc43a77a71a2d80c11b89013b8b77dd9e680bed9c09a54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:43:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xbhn6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://36ec3ff8ba2e6f89b4c9a8177dc986ce198013a4b5392fc600a191a9d854241a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:43:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xbhn6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:43:03Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-6gqj2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:08Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:08 crc kubenswrapper[4853]: I0127 18:43:08.314591 4853 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hdtbk" event={"ID":"ebbc7598-422a-43ad-ae98-88e57ec80b9c","Type":"ContainerStarted","Data":"4da02162adae947a3ab62fcbeba04da031f5189c42947da27ec21df5a480b4b5"} Jan 27 18:43:08 crc kubenswrapper[4853]: I0127 18:43:08.328446 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hdtbk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ebbc7598-422a-43ad-ae98-88e57ec80b9c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cq4vs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cq4vs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cq4vs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cq4vs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cq4vs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\
\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cq4vs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cq4vs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cq4vs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled
\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e81785fa8e88ca85b42840c9f11efd5774320c7b13588e01695e428426ecda4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e81785fa8e88ca85b42840c9f11efd5774320c7b13588e01695e428426ecda4d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:43:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cq4vs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:43:04Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hdtbk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:08Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:08 crc kubenswrapper[4853]: I0127 18:43:08.342794 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c77668e-2627-46e1-b7f5-a99455378d85\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6f38d96da9d500a022fd5b7fbe4019623d08fb9733ba3b1a5f2f107f1901a28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ae32d32392bded0188e0b83c05a33e57dc10be293ddee84b8b12416d0e64fc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd606f868e97e5c0f90517a0de662da2512679e27b0ececbcaa02c9f7e79c4b6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b210c08a6d15dd6d40daabfcc4cb94985e2a3957ffa501b32d5331f3d20f8908\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:42:38Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:08Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:08 crc kubenswrapper[4853]: I0127 18:43:08.359208 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:08Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:08 crc kubenswrapper[4853]: I0127 18:43:08.373103 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:08 crc kubenswrapper[4853]: I0127 18:43:08.373160 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:08 crc kubenswrapper[4853]: I0127 18:43:08.373172 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:08 crc kubenswrapper[4853]: I0127 18:43:08.373187 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:08 crc kubenswrapper[4853]: I0127 18:43:08.373201 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:08Z","lastTransitionTime":"2026-01-27T18:43:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:08 crc kubenswrapper[4853]: I0127 18:43:08.373686 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ght98" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ce95829c-f3fb-493c-bf9a-a3515fe6ddac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:03Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:03Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:03Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttzfq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10a6c7c9dd5d82c1b1f5f2d323a2f530c6b2e18362eedbbe74cc8c570732331f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10a6c7c9dd5d82c1b1f5f2d323a2f530c6b2e18362eedbbe74cc8c570732331f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:43:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-ttzfq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b1e8c337bda8a6b6c0848db7c95140385d545fc7dd729f002d21c248e2bbf881\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b1e8c337bda8a6b6c0848db7c95140385d545fc7dd729f002d21c248e2bbf881\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:43:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:43:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttzfq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6f0216b6ea3819db267200bb464f15f2f7e4de1c65df0871d6f7e70ecb54839\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d6f0216b6ea3819db267200bb464f15f2f7e4de1c65df0871d6f7e70ecb54839\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:43:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:43:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttzfq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://205b9e4e915af8e2031c48180f4d56d6075e94f8a8ff8b40fe6b46229fb1aa03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://205b9e4e915af8e2031c48180f4d56d6075e94f8a8ff8b40fe6b46229fb1aa03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:
43:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:43:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttzfq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttzfq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttzfq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:43:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ght98\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:08Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:08 crc kubenswrapper[4853]: I0127 18:43:08.387910 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eccf4b23-863a-490c-ba35-1b03d360e200\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7cfb91267c8977bea938d9584cf8047ae6f6b4bff6a9d74c623f8903cd3416ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58d32f27b39964fe7195a0e25264b28a01c6c29753912edac25d07eef4f37cc7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a941071ee01b24389cae64dfc6cd851cde6b352121235ebf9e9bec77f8bcfb69\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://213668bb8a0478fe7fba2a64a45dd4aec0dfd6794a29977633b4e0fd1bcbe352\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\
\":\\\"cri-o://213668bb8a0478fe7fba2a64a45dd4aec0dfd6794a29977633b4e0fd1bcbe352\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T18:42:58Z\\\",\\\"message\\\":\\\"le observer\\\\nW0127 18:42:57.781245 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 18:42:57.782660 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 18:42:57.784271 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3951302993/tls.crt::/tmp/serving-cert-3951302993/tls.key\\\\\\\"\\\\nI0127 18:42:58.344426 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 18:42:58.346603 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 18:42:58.346626 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 18:42:58.346650 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 18:42:58.346655 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 18:42:58.351830 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 18:42:58.351854 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 18:42:58.351860 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 18:42:58.351866 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 18:42:58.351870 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 18:42:58.351874 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nI0127 18:42:58.351873 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0127 18:42:58.351879 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0127 18:42:58.354790 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:52Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0058055ea5c0242bdb042c45a00203d344383e845de280c81bc7695416563666\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:40Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://00d05692e328aacfc251e3b583e1da6d1f4f677e93d07cec2aaa7aea5c3038cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://00d05692e328aacfc251e3b583e1da6d1f4f677e93d07cec2aaa7aea5c3038cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:42:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:42:38Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:08Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:08 crc kubenswrapper[4853]: I0127 18:43:08.422416 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a5b9c54a-360e-431b-b35a-eea549616487\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e57571b0b75d3ac6eaa28ddbc474ddf47f632c158372a965dfe4c30f54d8a2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2dcf34028a2a1a0d41f57ebced90e711bfa6d0f199a2b12bc2bcd6d0642d10b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://14e9331353060208631cfca86d748339935f8581d50a234f2864eaffdd98ff39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7d4896cb23e4a91356e7fabb5dfcf05e46bf7e
8c34b6bd73d63a16422d015ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a02f84ba42baba137145a54bfa742b12b0f94aaf41801347eb8a803a6acd1b2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15ab34ce94a12a65d8b187ce438f27e37d4b45e0b9ed080c79d6e4633e777658\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://15ab34ce94a12a65d8b187ce438f27e37d4b45e0b9ed080c79d6e4633e777658\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:42:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d972c76164f33d83bd6d211da2b4ecec08c5d88dd4cd474c5d5a8122649ea1cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d972c76164f33d83bd6d211da2b4ecec08c5d88dd4cd474c5d5a8122649ea1cd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:42:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:40Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b9435046e30c8cda89d2ab6f312406bc2cc6db0c03ae6cdea1eb5acf8a804df1\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9435046e30c8cda89d2ab6f312406bc2cc6db0c03ae6cdea1eb5acf8a804df1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:42:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:42:38Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:08Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:08 crc kubenswrapper[4853]: I0127 18:43:08.435168 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-l59xt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f9e82bc6-1fab-4815-a64e-2ebbf8b72315\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d6f7b775ee615931d713c3fcf51828673cd8414a08c46c04eb38abd37c58d5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:43:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnhhx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\
\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:43:03Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-l59xt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:08Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:08 crc kubenswrapper[4853]: I0127 18:43:08.454329 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hdtbk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ebbc7598-422a-43ad-ae98-88e57ec80b9c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cq4vs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cq4vs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cq4vs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-cq4vs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cq4vs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cq4vs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cq4vs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cq4vs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e81785fa8e88ca85b42840c9f11efd5774320c7b13588e01695e428426ecda4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e81785fa8e88ca85b42840c9f11efd5774320c7b13588e01695e428426ecda4d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:43:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cq4vs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:43:04Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hdtbk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:08Z 
is after 2025-08-24T17:21:41Z" Jan 27 18:43:08 crc kubenswrapper[4853]: I0127 18:43:08.471293 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f980f1d965582a58bd18d993850b5bd5f21849c1d1c0275c81fdbb25d549e6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:08Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:08 crc kubenswrapper[4853]: I0127 18:43:08.475844 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:08 crc kubenswrapper[4853]: I0127 18:43:08.475867 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:08 crc kubenswrapper[4853]: I0127 18:43:08.475875 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:08 crc kubenswrapper[4853]: I0127 18:43:08.475888 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:08 crc kubenswrapper[4853]: I0127 18:43:08.475897 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:08Z","lastTransitionTime":"2026-01-27T18:43:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:08 crc kubenswrapper[4853]: I0127 18:43:08.489007 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5a0b2fe5dbcc81055c001720c8ab69b7bd1989fe60d9ef4063c96c53a3f4536\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:08Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:08 crc kubenswrapper[4853]: I0127 18:43:08.499781 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-2hzcp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c088d143-dd9c-4c77-b9a3-3a0113306f41\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b1b85e99c8381dbf8e86fb211770c49ebd11497f15c73254d6482b45d507654\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:43:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2zqtv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:43:03Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-2hzcp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:08Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:08 crc kubenswrapper[4853]: I0127 18:43:08.511054 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-6gqj2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b8a89b1e-bef8-4cb7-930c-480d3125778c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d886a21d3eb5af71f0dc43a77a71a2d80c11b89013b8b77dd9e680bed9c09a54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:43:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xbhn6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://36ec3ff8ba2e6f89b4c9a8177dc986ce198013a4b5392fc600a191a9d854241a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:43:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xbhn6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:43:03Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-6gqj2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:08Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:08 crc kubenswrapper[4853]: I0127 18:43:08.525238 4853 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c77668e-2627-46e1-b7f5-a99455378d85\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6f38d96da9d500a022fd5b7fbe4019623d08fb9733ba3b1a5f2f107f1901a28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ae32d32392bded0188e0b83c05a33e57dc10be293ddee84b8b12416d0e64fc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd606f868e97e5c0f90517a0de662da2512679e27b0ececbcaa02c9f7e79c4b6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b210c08a6d15dd6d40daabfcc4c
b94985e2a3957ffa501b32d5331f3d20f8908\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:42:38Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:08Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:08 crc kubenswrapper[4853]: I0127 18:43:08.537533 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:08Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:08 crc kubenswrapper[4853]: I0127 18:43:08.550280 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:08Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:08 crc kubenswrapper[4853]: I0127 18:43:08.562725 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:08Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:08 crc kubenswrapper[4853]: I0127 18:43:08.577061 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d7d5a0e2cb909a09c6325d43fed80f9ac231f17c4904fda6da22031d974a50b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7fba84b5abd317487efa9758191c6d4e0e878809711ccecdd2f98c2c7659c890\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:08Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:08 crc kubenswrapper[4853]: I0127 18:43:08.579247 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:08 crc kubenswrapper[4853]: I0127 18:43:08.579332 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:08 crc kubenswrapper[4853]: I0127 18:43:08.579356 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:08 crc kubenswrapper[4853]: I0127 18:43:08.579387 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:08 crc kubenswrapper[4853]: I0127 18:43:08.579412 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:08Z","lastTransitionTime":"2026-01-27T18:43:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:08 crc kubenswrapper[4853]: I0127 18:43:08.596034 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-w4d5n" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dd2c07de-2ac9-4074-9fb0-519cfaf37f69\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9662d5a5a46a3043eeddbb21a4c7da0545beb87a0da7da3e96dfe3b3cf803430\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:43:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8jkvb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:43:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-w4d5n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:08Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:08 crc kubenswrapper[4853]: I0127 18:43:08.610334 4853 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 18:43:08 crc kubenswrapper[4853]: I0127 18:43:08.611045 4853 scope.go:117] "RemoveContainer" containerID="213668bb8a0478fe7fba2a64a45dd4aec0dfd6794a29977633b4e0fd1bcbe352" Jan 27 18:43:08 crc kubenswrapper[4853]: I0127 18:43:08.684081 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:08 crc kubenswrapper[4853]: I0127 18:43:08.684115 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:08 crc kubenswrapper[4853]: I0127 18:43:08.684138 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:08 crc kubenswrapper[4853]: I0127 18:43:08.684153 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:08 crc kubenswrapper[4853]: I0127 18:43:08.684163 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:08Z","lastTransitionTime":"2026-01-27T18:43:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:43:08 crc kubenswrapper[4853]: I0127 18:43:08.788049 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:08 crc kubenswrapper[4853]: I0127 18:43:08.788088 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:08 crc kubenswrapper[4853]: I0127 18:43:08.788101 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:08 crc kubenswrapper[4853]: I0127 18:43:08.788120 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:08 crc kubenswrapper[4853]: I0127 18:43:08.788143 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:08Z","lastTransitionTime":"2026-01-27T18:43:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:08 crc kubenswrapper[4853]: I0127 18:43:08.890587 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:08 crc kubenswrapper[4853]: I0127 18:43:08.890636 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:08 crc kubenswrapper[4853]: I0127 18:43:08.890648 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:08 crc kubenswrapper[4853]: I0127 18:43:08.890668 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:08 crc kubenswrapper[4853]: I0127 18:43:08.890681 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:08Z","lastTransitionTime":"2026-01-27T18:43:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:43:08 crc kubenswrapper[4853]: I0127 18:43:08.996055 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:08 crc kubenswrapper[4853]: I0127 18:43:08.996141 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:08 crc kubenswrapper[4853]: I0127 18:43:08.996159 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:08 crc kubenswrapper[4853]: I0127 18:43:08.996183 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:08 crc kubenswrapper[4853]: I0127 18:43:08.996200 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:08Z","lastTransitionTime":"2026-01-27T18:43:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:09 crc kubenswrapper[4853]: I0127 18:43:09.084752 4853 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-05 09:17:20.286979564 +0000 UTC Jan 27 18:43:09 crc kubenswrapper[4853]: I0127 18:43:09.098501 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:09 crc kubenswrapper[4853]: I0127 18:43:09.098525 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:09 crc kubenswrapper[4853]: I0127 18:43:09.098533 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:09 crc kubenswrapper[4853]: I0127 18:43:09.098546 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:09 crc kubenswrapper[4853]: I0127 18:43:09.098555 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:09Z","lastTransitionTime":"2026-01-27T18:43:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:43:09 crc kubenswrapper[4853]: I0127 18:43:09.112278 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 18:43:09 crc kubenswrapper[4853]: I0127 18:43:09.112335 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 18:43:09 crc kubenswrapper[4853]: I0127 18:43:09.112429 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 18:43:09 crc kubenswrapper[4853]: E0127 18:43:09.112624 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 18:43:09 crc kubenswrapper[4853]: E0127 18:43:09.112725 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 18:43:09 crc kubenswrapper[4853]: E0127 18:43:09.112806 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 18:43:09 crc kubenswrapper[4853]: I0127 18:43:09.202200 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:09 crc kubenswrapper[4853]: I0127 18:43:09.202239 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:09 crc kubenswrapper[4853]: I0127 18:43:09.202251 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:09 crc kubenswrapper[4853]: I0127 18:43:09.202273 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:09 crc kubenswrapper[4853]: I0127 18:43:09.202284 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:09Z","lastTransitionTime":"2026-01-27T18:43:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:43:09 crc kubenswrapper[4853]: I0127 18:43:09.304512 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:09 crc kubenswrapper[4853]: I0127 18:43:09.304554 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:09 crc kubenswrapper[4853]: I0127 18:43:09.304565 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:09 crc kubenswrapper[4853]: I0127 18:43:09.304585 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:09 crc kubenswrapper[4853]: I0127 18:43:09.304595 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:09Z","lastTransitionTime":"2026-01-27T18:43:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:09 crc kubenswrapper[4853]: I0127 18:43:09.321607 4853 generic.go:334] "Generic (PLEG): container finished" podID="ce95829c-f3fb-493c-bf9a-a3515fe6ddac" containerID="12c64d2f527e9e9bd0975df15cbeca44c11d215f2591052b030b6afa87600576" exitCode=0 Jan 27 18:43:09 crc kubenswrapper[4853]: I0127 18:43:09.321686 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-ght98" event={"ID":"ce95829c-f3fb-493c-bf9a-a3515fe6ddac","Type":"ContainerDied","Data":"12c64d2f527e9e9bd0975df15cbeca44c11d215f2591052b030b6afa87600576"} Jan 27 18:43:09 crc kubenswrapper[4853]: I0127 18:43:09.325161 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Jan 27 18:43:09 crc kubenswrapper[4853]: I0127 18:43:09.331792 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"ae13abce33d48960f367ee4160e730c0f88cd877bd0d615cecac63d2a35b8cc5"} Jan 27 18:43:09 crc kubenswrapper[4853]: I0127 18:43:09.332242 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 18:43:09 crc kubenswrapper[4853]: I0127 18:43:09.338740 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:09Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:09 crc kubenswrapper[4853]: I0127 18:43:09.351622 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c77668e-2627-46e1-b7f5-a99455378d85\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6f38d96da9d500a022fd5b7fbe4019623d08fb9733ba3b1a5f2f107f1901a28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ae32d32392bded0188e0b83c05a33e57dc10be293ddee84b8b12416d0e64fc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"r
esource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd606f868e97e5c0f90517a0de662da2512679e27b0ececbcaa02c9f7e79c4b6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b210c08a6d15dd6d40daabfcc4cb94985e2a3957ffa501b32d5331f3d20f8908\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:42:38Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:09Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:09 crc kubenswrapper[4853]: I0127 18:43:09.366844 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d7d5a0e2cb909a09c6325d43fed80f9ac231f17c4904fda6da22031d974a50b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7fba84b5abd317487efa9758191c6d4e0e878809711ccecdd2f98c2c7659c890\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:09Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:09 crc kubenswrapper[4853]: I0127 18:43:09.379469 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-w4d5n" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dd2c07de-2ac9-4074-9fb0-519cfaf37f69\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9662d5a5a46a3043eeddbb21a4c7da0545beb87a0da7da3e96dfe3b3cf803430\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:43:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8jkvb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:43:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-w4d5n\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:09Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:09 crc kubenswrapper[4853]: I0127 18:43:09.390811 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:09Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:09 crc kubenswrapper[4853]: I0127 18:43:09.402616 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:09Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:09 crc kubenswrapper[4853]: I0127 18:43:09.406596 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:09 crc kubenswrapper[4853]: I0127 18:43:09.406634 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:09 crc kubenswrapper[4853]: I0127 18:43:09.406646 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:09 crc kubenswrapper[4853]: I0127 18:43:09.406663 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:09 crc kubenswrapper[4853]: I0127 18:43:09.406677 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:09Z","lastTransitionTime":"2026-01-27T18:43:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:09 crc kubenswrapper[4853]: I0127 18:43:09.413183 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-l59xt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f9e82bc6-1fab-4815-a64e-2ebbf8b72315\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d6f7b775ee615931d713c3fcf51828673cd8414a08c46c04eb38abd37c58d5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:43:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnhhx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:43:03Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-l59xt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:09Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:09 crc kubenswrapper[4853]: I0127 18:43:09.426531 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ght98" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ce95829c-f3fb-493c-bf9a-a3515fe6ddac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:03Z\\\",\\\"message\\\":\\\"containers with incomplete status: 
[whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:03Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:03Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttzfq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10a6c7c9dd5d82c1b1f5f2d323a2f530c6b2e18362eedbbe74cc8c570732331f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10a6c7c9dd5d82c1b1f5f2d323a2f530c6b2e18362eedbbe74cc8c570732331f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:43:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttzfq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b1e8c337bda8a6b6c0848db7c95140385d545fc7dd729f002d21c248e2bbf881\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b1e8c337bda8a6b6c0848db7c95140385d545fc7dd729f002d21c248e2bbf881\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:43:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:43:05Z\\\"}},\\\"volumeMounts\\\":[{\\
\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttzfq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6f0216b6ea3819db267200bb464f15f2f7e4de1c65df0871d6f7e70ecb54839\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d6f0216b6ea3819db267200bb464f15f2f7e4de1c65df0871d6f7e70ecb54839\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:43:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:43:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttzfq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://205b9e4e915af8e2031c48180f4d56d6075e94f8a8ff8b40fe6b46229fb1aa03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://205b9e4e915af8e2031c48180f4d56d6075e94f8a8ff8b40fe6b46229fb1aa03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:43:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:43:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttzfq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://12c64d2f527e9e9bd0975df15cbeca44c11d215f2591052b030b6afa87600576\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:
98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://12c64d2f527e9e9bd0975df15cbeca44c11d215f2591052b030b6afa87600576\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:43:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:43:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttzfq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttzfq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:43:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ght98\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:09Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:09 crc kubenswrapper[4853]: I0127 18:43:09.446718 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eccf4b23-863a-490c-ba35-1b03d360e200\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7cfb91267c8977bea938d9584cf8047ae6f6b4bff6a9d74c623f8903cd3416ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58d32f27b39964fe7195a0e25264b28a01c6c29753912edac25d07eef4f37cc7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a941071ee01b24389cae64dfc6cd851cde6b352121235ebf9e9bec77f8bcfb69\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://213668bb8a0478fe7fba2a64a45dd4aec0dfd6794a29977633b4e0fd1bcbe352\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://213668bb8a0478fe7fba2a64a45dd4aec0dfd6794a29977633b4e0fd1bcbe352\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T18:42:58Z\\\",\\\"message\\\":\\\"le observer\\\\nW0127 18:42:57.781245 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 18:42:57.782660 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 18:42:57.784271 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3951302993/tls.crt::/tmp/serving-cert-3951302993/tls.key\\\\\\\"\\\\nI0127 18:42:58.344426 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 18:42:58.346603 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 18:42:58.346626 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 18:42:58.346650 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 18:42:58.346655 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 18:42:58.351830 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 18:42:58.351854 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 18:42:58.351860 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 18:42:58.351866 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 18:42:58.351870 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 18:42:58.351874 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nI0127 18:42:58.351873 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0127 18:42:58.351879 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0127 18:42:58.354790 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:52Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0058055ea5c0242bdb042c45a00203d344383e845de280c81bc7695416563666\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:40Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://00d05692e328aacfc251e3b583e1da6d1f4f677e93d07cec2aaa7aea5c3038cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://00d05692e328aacfc251e3b583e1da6d1f4f677e93d07cec2aaa7aea5c3038cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:42:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:42:38Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:09Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:09 crc kubenswrapper[4853]: I0127 18:43:09.470925 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a5b9c54a-360e-431b-b35a-eea549616487\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e57571b0b75d3ac6eaa28ddbc474ddf47f632c158372a965dfe4c30f54d8a2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2dcf34028a2a1a0d41f57ebced90e711bfa6d0f199a2b12bc2bcd6d0642d10b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://14e9331353060208631cfca86d748339935f8581d50a234f2864eaffdd98ff39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7d4896cb23e4a91356e7fabb5dfcf05e46bf7e
8c34b6bd73d63a16422d015ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a02f84ba42baba137145a54bfa742b12b0f94aaf41801347eb8a803a6acd1b2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15ab34ce94a12a65d8b187ce438f27e37d4b45e0b9ed080c79d6e4633e777658\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://15ab34ce94a12a65d8b187ce438f27e37d4b45e0b9ed080c79d6e4633e777658\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:42:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d972c76164f33d83bd6d211da2b4ecec08c5d88dd4cd474c5d5a8122649ea1cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d972c76164f33d83bd6d211da2b4ecec08c5d88dd4cd474c5d5a8122649ea1cd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:42:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:40Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b9435046e30c8cda89d2ab6f312406bc2cc6db0c03ae6cdea1eb5acf8a804df1\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9435046e30c8cda89d2ab6f312406bc2cc6db0c03ae6cdea1eb5acf8a804df1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:42:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:42:38Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:09Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:09 crc kubenswrapper[4853]: I0127 18:43:09.482176 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-2hzcp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c088d143-dd9c-4c77-b9a3-3a0113306f41\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b1b85e99c8381dbf8e86fb211770c49ebd11497f15c73254d6482b45d507654\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:43:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2zqtv\\\",\\\"readO
nly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:43:03Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-2hzcp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:09Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:09 crc kubenswrapper[4853]: I0127 18:43:09.497893 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-6gqj2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b8a89b1e-bef8-4cb7-930c-480d3125778c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d886a21d3eb5af71f0dc43a77a71a2d80c11b89013b8b77dd9e680bed9c09a54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:43:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xbhn6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://36ec3ff8ba2e6f89b4c9a8177dc986ce198013a4b5392fc600a191a9d854241a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:43:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/
secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xbhn6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:43:03Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-6gqj2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:09Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:09 crc kubenswrapper[4853]: I0127 18:43:09.510427 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:09 crc kubenswrapper[4853]: I0127 18:43:09.510474 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:09 crc kubenswrapper[4853]: I0127 18:43:09.510485 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:09 crc kubenswrapper[4853]: I0127 18:43:09.510502 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:09 crc kubenswrapper[4853]: I0127 18:43:09.510515 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:09Z","lastTransitionTime":"2026-01-27T18:43:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:09 crc kubenswrapper[4853]: I0127 18:43:09.517950 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hdtbk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ebbc7598-422a-43ad-ae98-88e57ec80b9c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cq4vs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cq4vs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":fa
lse,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cq4vs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cq4vs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cq4vs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cq4vs\\\",\\\"readOnly\\
\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cq4vs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cq4vs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e81785fa8e88ca85b42840c9f11efd5774320c7b13588e0169
5e428426ecda4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e81785fa8e88ca85b42840c9f11efd5774320c7b13588e01695e428426ecda4d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:43:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cq4vs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:43:04Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hdtbk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:09Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:09 crc kubenswrapper[4853]: I0127 18:43:09.536634 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f980f1d965582a58bd18d993850b5bd5f21849c1d1c0275c81fdbb25d549e6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:09Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:09 crc kubenswrapper[4853]: I0127 18:43:09.555022 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5a0b2fe5dbcc81055c001720c8ab69b7bd1989fe60d9ef4063c96c53a3f4536\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:09Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:09 crc kubenswrapper[4853]: I0127 18:43:09.568321 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:09Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:09 crc kubenswrapper[4853]: I0127 18:43:09.582643 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c77668e-2627-46e1-b7f5-a99455378d85\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6f38d96da9d500a022fd5b7fbe4019623d08fb9733ba3b1a5f2f107f1901a28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ae32d32392bded0188e0b83c05a33e57dc10be293ddee84b8b12416d0e64fc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba
8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd606f868e97e5c0f90517a0de662da2512679e27b0ececbcaa02c9f7e79c4b6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b210c08a6d15dd6d40daabfcc4cb94985e2a3957ffa501b32d5331f3d20f8908\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:42:38Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:09Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:09 crc kubenswrapper[4853]: I0127 18:43:09.597260 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d7d5a0e2cb909a09c6325d43fed80f9ac231f17c4904fda6da22031d974a50b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7fba84b5abd317487efa9758191c6d4e0e878809711ccecdd2f98c2c7659c890\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:09Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:09 crc kubenswrapper[4853]: I0127 18:43:09.611537 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-w4d5n" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dd2c07de-2ac9-4074-9fb0-519cfaf37f69\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9662d5a5a46a3043eeddbb21a4c7da0545beb87a0da7da3e96dfe3b3cf803430\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:43:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8jkvb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:43:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-w4d5n\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:09Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:09 crc kubenswrapper[4853]: I0127 18:43:09.612823 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:09 crc kubenswrapper[4853]: I0127 18:43:09.612932 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:09 crc kubenswrapper[4853]: I0127 18:43:09.613008 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:09 crc kubenswrapper[4853]: I0127 18:43:09.613090 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:09 crc kubenswrapper[4853]: I0127 18:43:09.613201 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:09Z","lastTransitionTime":"2026-01-27T18:43:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:43:09 crc kubenswrapper[4853]: I0127 18:43:09.624437 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:09Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:09 crc kubenswrapper[4853]: I0127 18:43:09.639363 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:09Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:09 crc kubenswrapper[4853]: I0127 18:43:09.649462 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-l59xt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f9e82bc6-1fab-4815-a64e-2ebbf8b72315\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d6f7b775ee615931d713c3fcf51828673cd8414a08c46c04eb38abd37c58d5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:43:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnhhx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:43:03Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-l59xt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-27T18:43:09Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:09 crc kubenswrapper[4853]: I0127 18:43:09.663831 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ght98" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ce95829c-f3fb-493c-bf9a-a3515fe6ddac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:03Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:03Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:03Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttzfq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10a6c7c9dd5d82c1b1f5f2d323a2f530c6b2e18362eedbbe74cc8c570732331f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10a6c7c9dd5d82c1b1f5f2d323a2f530c6b2e18362eedbbe74cc8c570732331f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:43:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube
-api-access-ttzfq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b1e8c337bda8a6b6c0848db7c95140385d545fc7dd729f002d21c248e2bbf881\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b1e8c337bda8a6b6c0848db7c95140385d545fc7dd729f002d21c248e2bbf881\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:43:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:43:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttzfq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6f0216b6ea3819db267200bb464f15f2f7e4de1c65df0871d6f7e70ecb54839\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d6f0216b6ea3819db267200bb464f15f2f7e4de1c65df0871d6f7e70ecb54839\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:43:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:43:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttzfq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://205b9e4e915af8e2031c48180f4d56d6075e94f8a8ff8b40fe6b46229fb1aa03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://205b9e4e915af8e2031c48180f4d56d6075e94f8a8ff8b40fe6b46229fb1aa03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:43:07Z\\\
",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:43:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttzfq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://12c64d2f527e9e9bd0975df15cbeca44c11d215f2591052b030b6afa87600576\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://12c64d2f527e9e9bd0975df15cbeca44c11d215f2591052b030b6afa87600576\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:43:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:43:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttzfq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttzfq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:43:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ght98\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:09Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:09 crc kubenswrapper[4853]: I0127 18:43:09.677473 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"eccf4b23-863a-490c-ba35-1b03d360e200\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7cfb91267c8977bea938d9584cf8047ae6f6b4bff6a9d74c623f8903cd3416ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58d32f27b39964fe7195a0e25264b28a01c6c29753912edac25d07eef4f37cc7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a941071ee01b24389cae64dfc6cd851cde6b352121235ebf9e9bec77f8bcfb69\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae13abce33d48960f367ee4160e730c0f88cd877bd0d615cecac63d2a35b8cc5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://213668bb8a0478fe7fba2a64a45dd4aec0dfd6794a29977633b4e0fd1bcbe352\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T18:42:58Z\\\",\\\"message\\\":\\\"le observer\\\\nW0127 18:42:57.781245 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 18:42:57.782660 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 18:42:57.784271 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3951302993/tls.crt::/tmp/serving-cert-3951302993/tls.key\\\\\\\"\\\\nI0127 18:42:58.344426 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 18:42:58.346603 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 18:42:58.346626 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 18:42:58.346650 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 18:42:58.346655 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 18:42:58.351830 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 18:42:58.351854 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 18:42:58.351860 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 18:42:58.351866 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 18:42:58.351870 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 18:42:58.351874 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nI0127 18:42:58.351873 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0127 18:42:58.351879 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0127 18:42:58.354790 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:52Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:43:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0058055ea5c0242bdb042c45a00203d344383e845de280c81bc7695416563666\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:40Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://00d05692e328aacfc251e3b583e1da6d1f4f677e93d07cec2aaa7aea5c3038cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://00d05692e328aacfc251e3b583e1da6d1f4f677e93d07cec2aaa7aea5c3038cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:42:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:42:38Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:09Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:09 crc kubenswrapper[4853]: I0127 18:43:09.700266 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a5b9c54a-360e-431b-b35a-eea549616487\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e57571b0b75d3ac6eaa28ddbc474ddf47f632c158372a965dfe4c30f54d8a2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2dcf34028a2a1a0d41f57ebced90e711bfa6d0f199a2b12bc2bcd6d0642d10b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://14e9331353060208631cfca86d748339935f8581d50a234f2864eaffdd98ff39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7d4896cb23e4a91356e7fabb5dfcf05e46bf7e
8c34b6bd73d63a16422d015ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a02f84ba42baba137145a54bfa742b12b0f94aaf41801347eb8a803a6acd1b2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15ab34ce94a12a65d8b187ce438f27e37d4b45e0b9ed080c79d6e4633e777658\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://15ab34ce94a12a65d8b187ce438f27e37d4b45e0b9ed080c79d6e4633e777658\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:42:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d972c76164f33d83bd6d211da2b4ecec08c5d88dd4cd474c5d5a8122649ea1cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d972c76164f33d83bd6d211da2b4ecec08c5d88dd4cd474c5d5a8122649ea1cd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:42:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:40Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b9435046e30c8cda89d2ab6f312406bc2cc6db0c03ae6cdea1eb5acf8a804df1\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9435046e30c8cda89d2ab6f312406bc2cc6db0c03ae6cdea1eb5acf8a804df1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:42:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:42:38Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:09Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:09 crc kubenswrapper[4853]: I0127 18:43:09.712799 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-2hzcp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c088d143-dd9c-4c77-b9a3-3a0113306f41\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b1b85e99c8381dbf8e86fb211770c49ebd11497f15c73254d6482b45d507654\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:43:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2zqtv\\\",\\\"readO
nly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:43:03Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-2hzcp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:09Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:09 crc kubenswrapper[4853]: I0127 18:43:09.715721 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:09 crc kubenswrapper[4853]: I0127 18:43:09.715762 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:09 crc kubenswrapper[4853]: I0127 18:43:09.715777 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:09 crc kubenswrapper[4853]: I0127 18:43:09.715795 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:09 crc kubenswrapper[4853]: I0127 18:43:09.715808 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:09Z","lastTransitionTime":"2026-01-27T18:43:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:09 crc kubenswrapper[4853]: I0127 18:43:09.724432 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-6gqj2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b8a89b1e-bef8-4cb7-930c-480d3125778c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d886a21d3eb5af71f0dc43a77a71a2d80c11b89013b8b77dd9e680bed9c09a54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:43:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xbhn6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://36ec3ff8ba2e6f89b4c9a8177dc986ce198013a4b5392fc600a191a9d854241a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:43:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xbhn6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:43:03Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-6gqj2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:09Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:09 crc kubenswrapper[4853]: I0127 18:43:09.743090 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hdtbk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ebbc7598-422a-43ad-ae98-88e57ec80b9c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cq4vs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cq4vs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cq4vs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cq4vs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cq4vs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"n
ame\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cq4vs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cq4vs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cq4vs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIP
s\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e81785fa8e88ca85b42840c9f11efd5774320c7b13588e01695e428426ecda4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e81785fa8e88ca85b42840c9f11efd5774320c7b13588e01695e428426ecda4d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:43:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cq4vs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:43:04Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hdtbk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:09Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:09 crc kubenswrapper[4853]: I0127 18:43:09.757448 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f980f1d965582a58bd18d993850b5bd5f21849c1d1c0275c81fdbb25d549e6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:09Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:09 crc kubenswrapper[4853]: I0127 18:43:09.769294 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5a0b2fe5dbcc81055c001720c8ab69b7bd1989fe60d9ef4063c96c53a3f4536\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:09Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:09 crc kubenswrapper[4853]: I0127 18:43:09.818555 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:09 crc kubenswrapper[4853]: I0127 18:43:09.818599 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:09 crc kubenswrapper[4853]: I0127 18:43:09.818608 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:09 crc kubenswrapper[4853]: I0127 18:43:09.818625 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:09 crc kubenswrapper[4853]: I0127 18:43:09.818636 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:09Z","lastTransitionTime":"2026-01-27T18:43:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:09 crc kubenswrapper[4853]: I0127 18:43:09.922213 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:09 crc kubenswrapper[4853]: I0127 18:43:09.922248 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:09 crc kubenswrapper[4853]: I0127 18:43:09.922259 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:09 crc kubenswrapper[4853]: I0127 18:43:09.922276 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:09 crc kubenswrapper[4853]: I0127 18:43:09.922288 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:09Z","lastTransitionTime":"2026-01-27T18:43:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:43:10 crc kubenswrapper[4853]: I0127 18:43:10.024430 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:10 crc kubenswrapper[4853]: I0127 18:43:10.024462 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:10 crc kubenswrapper[4853]: I0127 18:43:10.024470 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:10 crc kubenswrapper[4853]: I0127 18:43:10.024483 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:10 crc kubenswrapper[4853]: I0127 18:43:10.024493 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:10Z","lastTransitionTime":"2026-01-27T18:43:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:10 crc kubenswrapper[4853]: I0127 18:43:10.085166 4853 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-03 12:31:09.407192502 +0000 UTC Jan 27 18:43:10 crc kubenswrapper[4853]: I0127 18:43:10.126049 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:10 crc kubenswrapper[4853]: I0127 18:43:10.126090 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:10 crc kubenswrapper[4853]: I0127 18:43:10.126103 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:10 crc kubenswrapper[4853]: I0127 18:43:10.126117 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:10 crc kubenswrapper[4853]: I0127 18:43:10.126144 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:10Z","lastTransitionTime":"2026-01-27T18:43:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:43:10 crc kubenswrapper[4853]: I0127 18:43:10.228433 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:10 crc kubenswrapper[4853]: I0127 18:43:10.228762 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:10 crc kubenswrapper[4853]: I0127 18:43:10.228774 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:10 crc kubenswrapper[4853]: I0127 18:43:10.228790 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:10 crc kubenswrapper[4853]: I0127 18:43:10.228801 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:10Z","lastTransitionTime":"2026-01-27T18:43:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:10 crc kubenswrapper[4853]: I0127 18:43:10.331628 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:10 crc kubenswrapper[4853]: I0127 18:43:10.331675 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:10 crc kubenswrapper[4853]: I0127 18:43:10.331687 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:10 crc kubenswrapper[4853]: I0127 18:43:10.331705 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:10 crc kubenswrapper[4853]: I0127 18:43:10.331717 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:10Z","lastTransitionTime":"2026-01-27T18:43:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:43:10 crc kubenswrapper[4853]: I0127 18:43:10.342440 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hdtbk" event={"ID":"ebbc7598-422a-43ad-ae98-88e57ec80b9c","Type":"ContainerStarted","Data":"474b23d45ca214a859faee68cfad6bf9e641b0e682b3e11f89e6b6994c75a544"} Jan 27 18:43:10 crc kubenswrapper[4853]: I0127 18:43:10.342672 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-hdtbk" Jan 27 18:43:10 crc kubenswrapper[4853]: I0127 18:43:10.347363 4853 generic.go:334] "Generic (PLEG): container finished" podID="ce95829c-f3fb-493c-bf9a-a3515fe6ddac" containerID="3c8f991246bd43ecaff0d08c9fb12a3347d52de48db0ea8f8a674bbdc9945c13" exitCode=0 Jan 27 18:43:10 crc kubenswrapper[4853]: I0127 18:43:10.347430 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-ght98" event={"ID":"ce95829c-f3fb-493c-bf9a-a3515fe6ddac","Type":"ContainerDied","Data":"3c8f991246bd43ecaff0d08c9fb12a3347d52de48db0ea8f8a674bbdc9945c13"} Jan 27 18:43:10 crc kubenswrapper[4853]: I0127 18:43:10.360180 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:10Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:10 crc kubenswrapper[4853]: I0127 18:43:10.376651 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:10Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:10 crc kubenswrapper[4853]: I0127 18:43:10.392444 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d7d5a0e2cb909a09c6325d43fed80f9ac231f17c4904fda6da22031d974a50b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7fba84b5abd317487efa9758191c6d4e0e878809711ccecdd2f98c2c7659c890\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:10Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:10 crc kubenswrapper[4853]: I0127 18:43:10.408106 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-w4d5n" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dd2c07de-2ac9-4074-9fb0-519cfaf37f69\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9662d5a5a46a3043eeddbb21a4c7da0545beb87a0da7da3e96dfe3b3cf803430\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:43:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc
/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8jkvb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:43:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-w4d5n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:10Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:10 crc kubenswrapper[4853]: I0127 18:43:10.421823 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eccf4b23-863a-490c-ba35-1b03d360e200\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7cfb91267c8977bea938d9584cf8047ae6f6b4bff6a9d74c623f8903cd3416ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58d32f27b39964fe7195a0e25264b28a01c6c29753912edac25d07eef4f37cc7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a941071ee01b24389cae64dfc6cd851cde6b352121235ebf9e9bec77f8bcfb69\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae13abce33d48960f367ee4160e730c0f88cd877bd0d615cecac63d2a35b8cc5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://213668bb8a0478fe7fba2a64a45dd4aec0dfd6794a29977633b4e0fd1bcbe352\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T18:42:58Z\\\",\\\"message\\\":\\\"le observer\\\\nW0127 18:42:57.781245 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 18:42:57.782660 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 18:42:57.784271 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3951302993/tls.crt::/tmp/serving-cert-3951302993/tls.key\\\\\\\"\\\\nI0127 18:42:58.344426 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 18:42:58.346603 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 18:42:58.346626 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 18:42:58.346650 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 18:42:58.346655 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 18:42:58.351830 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 18:42:58.351854 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 18:42:58.351860 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 18:42:58.351866 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 18:42:58.351870 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 18:42:58.351874 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nI0127 18:42:58.351873 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0127 18:42:58.351879 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0127 18:42:58.354790 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:52Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:43:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0058055ea5c0242bdb042c45a00203d344383e845de280c81bc7695416563666\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:40Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://00d05692e328aacfc251e3b583e1da6d1f4f677e93d07cec2aaa7aea5c3038cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://00d05692e328aacfc251e3b583e1da6d1f4f677e93d07cec2aaa7aea5c3038cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:42:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:42:38Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:10Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:10 crc kubenswrapper[4853]: I0127 18:43:10.428771 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-hdtbk" Jan 27 18:43:10 crc kubenswrapper[4853]: I0127 18:43:10.438164 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:10 crc kubenswrapper[4853]: I0127 18:43:10.438260 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:10 crc kubenswrapper[4853]: I0127 18:43:10.438273 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:10 crc kubenswrapper[4853]: I0127 18:43:10.438288 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeNotReady" Jan 27 18:43:10 crc kubenswrapper[4853]: I0127 18:43:10.438299 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:10Z","lastTransitionTime":"2026-01-27T18:43:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:43:10 crc kubenswrapper[4853]: I0127 18:43:10.440447 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a5b9c54a-360e-431b-b35a-eea549616487\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e57571b0b75d3ac6eaa28ddbc474ddf47f632c158372a965dfe4c30f54d8a2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2dcf34028a2a1a0d41f57ebced90e711bfa6d0f199a2b12bc2bcd6d0642d10b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://14e9331353060208631cfca86d748339935f8581d50a234f2864eaffdd98ff39\\\",\\\"image\\\":\\\"quay.io/openshi
ft-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7d4896cb23e4a91356e7fabb5dfcf05e46bf7e8c34b6bd73d63a16422d015ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a02f84ba42baba137145a54bfa742b12b0f94aaf41801347eb8a803a6acd1b2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15ab34ce94a12a65d8b187ce438f27e37d4b45e0b9ed080c79d6e4633e777658\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://15ab34ce94a12a65d8b187ce438f27e37d4b45e0b9ed080c79d6e4633e777658\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:42:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d972c76164f33d83bd6d211da2b4ecec08c5d88dd4cd474c5d5a8122649ea1cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5
646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d972c76164f33d83bd6d211da2b4ecec08c5d88dd4cd474c5d5a8122649ea1cd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:42:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:40Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b9435046e30c8cda89d2ab6f312406bc2cc6db0c03ae6cdea1eb5acf8a804df1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9435046e30c8cda89d2ab6f312406bc2cc6db0c03ae6cdea1eb5acf8a804df1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:42:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:42:38Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:10Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:10 crc kubenswrapper[4853]: I0127 18:43:10.453408 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-l59xt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f9e82bc6-1fab-4815-a64e-2ebbf8b72315\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d6f7b775ee615931d713c3fcf51828673cd8414a08c46c04eb38abd37c58d5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:43:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnhhx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:43:03Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-l59xt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:10Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:10 crc kubenswrapper[4853]: I0127 18:43:10.466008 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ght98" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ce95829c-f3fb-493c-bf9a-a3515fe6ddac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:03Z\\\",\\\"message\\\":\\\"containers with incomplete status: 
[whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:03Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:03Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttzfq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10a6c7c9dd5d82c1b1f5f2d323a2f530c6b2e18362eedbbe74cc8c570732331f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10a6c7c9dd5d82c1b1f5f2d323a2f530c6b2e18362eedbbe74cc8c570732331f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:43:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttzfq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b1e8c337bda8a6b6c0848db7c95140385d545fc7dd729f002d21c248e2bbf881\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b1e8c337bda8a6b6c0848db7c95140385d545fc7dd729f002d21c248e2bbf881\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:43:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:43:05Z\\\"}},\\\"volumeMounts\\\":[{\\
\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttzfq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6f0216b6ea3819db267200bb464f15f2f7e4de1c65df0871d6f7e70ecb54839\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d6f0216b6ea3819db267200bb464f15f2f7e4de1c65df0871d6f7e70ecb54839\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:43:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:43:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttzfq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://205b9e4e915af8e2031c48180f4d56d6075e94f8a8ff8b40fe6b46229fb1aa03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://205b9e4e915af8e2031c48180f4d56d6075e94f8a8ff8b40fe6b46229fb1aa03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:43:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:43:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttzfq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://12c64d2f527e9e9bd0975df15cbeca44c11d215f2591052b030b6afa87600576\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:
98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://12c64d2f527e9e9bd0975df15cbeca44c11d215f2591052b030b6afa87600576\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:43:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:43:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttzfq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttzfq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:43:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ght98\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:10Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:10 crc kubenswrapper[4853]: I0127 18:43:10.477039 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f980f1d965582a58bd18d993850b5bd5f21849c1d1c0275c81fdbb25d549e6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:10Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:10 crc kubenswrapper[4853]: I0127 18:43:10.488111 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5a0b2fe5dbcc81055c001720c8ab69b7bd1989fe60d9ef4063c96c53a3f4536\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:10Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:10 crc kubenswrapper[4853]: I0127 18:43:10.497938 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-2hzcp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c088d143-dd9c-4c77-b9a3-3a0113306f41\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b1b85e99c8381dbf8e86fb211770c49ebd11497f15c73254d6482b45d507654\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:43:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2zqtv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:43:03Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-2hzcp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:10Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:10 crc kubenswrapper[4853]: I0127 18:43:10.508641 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-6gqj2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b8a89b1e-bef8-4cb7-930c-480d3125778c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d886a21d3eb5af71f0dc43a77a71a2d80c11b89013b8b77dd9e680bed9c09a54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:43:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xbhn6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://36ec3ff8ba2e6f89b4c9a8177dc986ce198013a4b5392fc600a191a9d854241a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:43:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xbhn6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:43:03Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-6gqj2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:10Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:10 crc kubenswrapper[4853]: I0127 18:43:10.525844 4853 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hdtbk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ebbc7598-422a-43ad-ae98-88e57ec80b9c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a5e0da6c76e9510cda57fa243b0a721d160745a63e88a9aa736807af73864d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:43:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cq4vs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8de5b4d8d6553f77b012954fddfcb337c9b25ba98d94ef27831b50f63672377\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:43:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cq4vs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7937ea08bd25bed35d9386a8c870c88ff3f58eeec1ba1a2c55bdfa260017f9b\\\",\\\"image\\\":\\\"quay.io/openshift-
release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:43:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cq4vs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f8095ca05481aa2d17d10ae848c2d052452f3bfa83b6ac23a75d0f59d84a604\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:43:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cq4vs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://efa308f95f35833395528dbe46b9e3d8f25800c18126c75d3db793f9c7945d30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:43:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cq4vs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a23ced79c532f6fcb0f4efcf743b934f7640deb3a7b1b879032416ee2c9b8d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\
\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:43:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cq4vs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://474b23d45ca214a859faee68cfad6bf9e641b0e682b3e11f89e6b6994c75a544\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:43:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kuberne
tes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cq4vs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4da02162adae947a3ab62fcbeba04da031f5189c42947da27ec21df5a480b4b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:43:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cq4vs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e81785fa8e88ca85b42840c9f11efd5774320c7b13588e01695e428426ecda4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e81785fa8e88ca85b42840c9f11efd5774320c7b13588e01695e428426ecda4d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:43:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cq4vs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:43:04Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hdtbk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:10Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:10 crc kubenswrapper[4853]: I0127 18:43:10.537197 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c77668e-2627-46e1-b7f5-a99455378d85\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6f38d96da9d500a022fd5b7fbe4019623d08fb9733ba3b1a5f2f107f1901a28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ae32d32392bded0188e0b83c05a33e57dc10be293ddee84b8b12416d0e64fc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd606f868e97e5c0f90517a0de662da2512679e27b0ececbcaa02c9f7e79c4b6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b210c08a6d15dd6d40daabfcc4cb94985e2a3957ffa501b32d5331f3d20f8908\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:42:38Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:10Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:10 crc kubenswrapper[4853]: I0127 18:43:10.540209 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:10 crc kubenswrapper[4853]: I0127 18:43:10.540238 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:10 crc kubenswrapper[4853]: I0127 18:43:10.540249 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:10 crc kubenswrapper[4853]: I0127 18:43:10.540263 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:10 crc kubenswrapper[4853]: I0127 18:43:10.540273 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:10Z","lastTransitionTime":"2026-01-27T18:43:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:10 crc kubenswrapper[4853]: I0127 18:43:10.551101 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:10Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:10 crc kubenswrapper[4853]: I0127 18:43:10.566401 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ght98" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ce95829c-f3fb-493c-bf9a-a3515fe6ddac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:03Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttzfq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10a6c7c9dd5d82c1b1f5f2d323a2f530c6b2e18362eedbbe74cc8c570732331f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10a6c7c9dd5d82c1b1f5f2d323a2f530c6b2e18362eedbbe74cc8c570732331f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:43:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttzfq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b1e8c337bda8a6b6c0848db7c95140385d545fc7dd729f002d21c248e2bbf881\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b1e8c337bda8a6b6c0848db7c95140385d545fc7dd729f002d21c248e2bbf881\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:43:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:43:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttzfq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6f0216b6ea3819db267200bb464f15f2f7e4de1c65df0871d6f7e70ecb54839\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d6f0216b6ea3819db267200bb464f15f2f7e4de1c65df0871d6f7e70ecb54839\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:43:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:43:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttzfq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://205b9e4e915af8e2031c48180f4d56d6075e94f8a8ff8b40fe6b46229fb1aa03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://205b9e4e915af8e2031c48180f4d56d6075e94f8a8ff8b40fe6b46229fb1aa03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:43:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:43:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttzfq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://12c64d2f527e9e9bd0975df15cbeca44c11d215f2591052b030b6afa87600576\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://12c
64d2f527e9e9bd0975df15cbeca44c11d215f2591052b030b6afa87600576\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:43:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:43:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttzfq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c8f991246bd43ecaff0d08c9fb12a3347d52de48db0ea8f8a674bbdc9945c13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3c8f991246bd43ecaff0d08c9fb12a3347d52de48db0ea8f8a674bbdc9945c13\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:43:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:43:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttzfq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:43:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ght98\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:10Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:10 crc kubenswrapper[4853]: I0127 18:43:10.576891 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eccf4b23-863a-490c-ba35-1b03d360e200\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7cfb91267c8977bea938d9584cf8047ae6f6b4bff6a9d74c623f8903cd3416ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58d32f27b39964fe7195a0e25264b28a01c6c29753912edac25d07eef4f37cc7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a941071ee01b24389cae64dfc6cd851cde6b352121235ebf9e9bec77f8bcfb69\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae13abce33d48960f367ee4160e730c0f88cd877bd0d615cecac63d2a35b8cc5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://213668bb8a0478fe7fba2a64a45dd4aec0dfd6794a29977633b4e0fd1bcbe352\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T18:42:58Z\\\",\\\"message\\\":\\\"le observer\\\\nW0127 18:42:57.781245 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 18:42:57.782660 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 18:42:57.784271 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3951302993/tls.crt::/tmp/serving-cert-3951302993/tls.key\\\\\\\"\\\\nI0127 18:42:58.344426 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 18:42:58.346603 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 18:42:58.346626 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 18:42:58.346650 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 18:42:58.346655 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 18:42:58.351830 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 18:42:58.351854 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 18:42:58.351860 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 18:42:58.351866 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 18:42:58.351870 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 18:42:58.351874 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nI0127 18:42:58.351873 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0127 18:42:58.351879 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0127 18:42:58.354790 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:52Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:43:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0058055ea5c0242bdb042c45a00203d344383e845de280c81bc7695416563666\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:40Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://00d05692e328aacfc251e3b583e1da6d1f4f677e93d07cec2aaa7aea5c3038cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://00d05692e328aacfc251e3b583e1da6d1f4f677e93d07cec2aaa7aea5c3038cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:42:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:42:38Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:10Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:10 crc kubenswrapper[4853]: I0127 18:43:10.593982 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a5b9c54a-360e-431b-b35a-eea549616487\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e57571b0b75d3ac6eaa28ddbc474ddf47f632c158372a965dfe4c30f54d8a2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2dcf34028a2a1a0d41f57ebced90e711bfa6d0f199a2b12bc2bcd6d0642d10b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://14e9331353060208631cfca86d748339935f8581d50a234f2864eaffdd98ff39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7d4896cb23e4a91356e7fabb5dfcf05e46bf7e
8c34b6bd73d63a16422d015ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a02f84ba42baba137145a54bfa742b12b0f94aaf41801347eb8a803a6acd1b2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15ab34ce94a12a65d8b187ce438f27e37d4b45e0b9ed080c79d6e4633e777658\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://15ab34ce94a12a65d8b187ce438f27e37d4b45e0b9ed080c79d6e4633e777658\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:42:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d972c76164f33d83bd6d211da2b4ecec08c5d88dd4cd474c5d5a8122649ea1cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d972c76164f33d83bd6d211da2b4ecec08c5d88dd4cd474c5d5a8122649ea1cd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:42:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:40Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b9435046e30c8cda89d2ab6f312406bc2cc6db0c03ae6cdea1eb5acf8a804df1\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9435046e30c8cda89d2ab6f312406bc2cc6db0c03ae6cdea1eb5acf8a804df1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:42:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:42:38Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:10Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:10 crc kubenswrapper[4853]: I0127 18:43:10.604131 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-l59xt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f9e82bc6-1fab-4815-a64e-2ebbf8b72315\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d6f7b775ee615931d713c3fcf51828673cd8414a08c46c04eb38abd37c58d5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:43:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnhhx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\
\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:43:03Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-l59xt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:10Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:10 crc kubenswrapper[4853]: I0127 18:43:10.620725 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hdtbk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ebbc7598-422a-43ad-ae98-88e57ec80b9c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a5e0da6c76e9510cda57fa243b0a721d160745a63e88a9aa736807af73864d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:43:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cq4vs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8de5b4d8d6553f77b012954fddfcb337c9b25ba98d94ef27831b50f63672377\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:43:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cq4vs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7937ea08bd25bed35d9386a8c870c88ff3f58eeec1ba1a2c55bdfa260017f9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:43:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cq4vs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f8095ca05481aa2d17d10ae848c2d052452f3bfa83b6ac23a75d0f59d84a604\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:43:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cq4vs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://efa308f95f35833395528dbe46b9e3d8f25800c18126c75d3db793f9c7945d30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:43:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cq4vs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a23ced79c532f6fcb0f4efcf743b934f7640deb3a7b1b879032416ee2c9b8d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:43:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cq4vs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://474b23d45ca214a859faee68cfad6bf9e641b0e6
82b3e11f89e6b6994c75a544\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:43:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cq4vs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4da02162adae947a3ab62fcbeba04da031f5189c42947da27ec21df5a480b4b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:43:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccoun
t\\\",\\\"name\\\":\\\"kube-api-access-cq4vs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e81785fa8e88ca85b42840c9f11efd5774320c7b13588e01695e428426ecda4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e81785fa8e88ca85b42840c9f11efd5774320c7b13588e01695e428426ecda4d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:43:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cq4vs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:43:04Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hdtbk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:10Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:10 crc kubenswrapper[4853]: I0127 18:43:10.635253 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f980f1d965582a58bd18d993850b5bd5f21849c1d1c0275c81fdbb25d549e6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:10Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:10 crc kubenswrapper[4853]: I0127 18:43:10.642737 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:10 crc kubenswrapper[4853]: I0127 18:43:10.642770 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:10 crc kubenswrapper[4853]: I0127 18:43:10.642780 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:10 crc kubenswrapper[4853]: I0127 18:43:10.642793 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:10 crc kubenswrapper[4853]: I0127 18:43:10.642804 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:10Z","lastTransitionTime":"2026-01-27T18:43:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:10 crc kubenswrapper[4853]: I0127 18:43:10.644937 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5a0b2fe5dbcc81055c001720c8ab69b7bd1989fe60d9ef4063c96c53a3f4536\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:10Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:10 crc kubenswrapper[4853]: I0127 18:43:10.653025 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-2hzcp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c088d143-dd9c-4c77-b9a3-3a0113306f41\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b1b85e99c8381dbf8e86fb211770c49ebd11497f15c73254d6482b45d507654\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:43:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2zqtv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:43:03Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-2hzcp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:10Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:10 crc kubenswrapper[4853]: I0127 18:43:10.665655 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-6gqj2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b8a89b1e-bef8-4cb7-930c-480d3125778c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d886a21d3eb5af71f0dc43a77a71a2d80c11b89013b8b77dd9e680bed9c09a54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:43:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xbhn6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://36ec3ff8ba2e6f89b4c9a8177dc986ce198013a4b5392fc600a191a9d854241a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:43:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xbhn6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:43:03Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-6gqj2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:10Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:10 crc kubenswrapper[4853]: I0127 18:43:10.676999 4853 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c77668e-2627-46e1-b7f5-a99455378d85\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6f38d96da9d500a022fd5b7fbe4019623d08fb9733ba3b1a5f2f107f1901a28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ae32d32392bded0188e0b83c05a33e57dc10be293ddee84b8b12416d0e64fc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd606f868e97e5c0f90517a0de662da2512679e27b0ececbcaa02c9f7e79c4b6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b210c08a6d15dd6d40daabfcc4c
b94985e2a3957ffa501b32d5331f3d20f8908\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:42:38Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:10Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:10 crc kubenswrapper[4853]: I0127 18:43:10.689495 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:10Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:10 crc kubenswrapper[4853]: I0127 18:43:10.701896 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:10Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:10 crc kubenswrapper[4853]: I0127 18:43:10.712726 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:10Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:10 crc kubenswrapper[4853]: I0127 18:43:10.724386 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d7d5a0e2cb909a09c6325d43fed80f9ac231f17c4904fda6da22031d974a50b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7fba84b5abd317487efa9758191c6d4e0e878809711ccecdd2f98c2c7659c890\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:10Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:10 crc kubenswrapper[4853]: I0127 18:43:10.734693 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-w4d5n" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dd2c07de-2ac9-4074-9fb0-519cfaf37f69\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9662d5a5a46a3043eeddbb21a4c7da0545beb87a0da7da3e96dfe3b3cf803430\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:43:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc
/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8jkvb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:43:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-w4d5n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:10Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:10 crc kubenswrapper[4853]: I0127 18:43:10.745035 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:10 crc kubenswrapper[4853]: I0127 18:43:10.745065 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:10 crc kubenswrapper[4853]: I0127 18:43:10.745076 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:10 crc kubenswrapper[4853]: I0127 18:43:10.745093 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:10 crc kubenswrapper[4853]: I0127 18:43:10.745105 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:10Z","lastTransitionTime":"2026-01-27T18:43:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:43:10 crc kubenswrapper[4853]: I0127 18:43:10.847159 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:10 crc kubenswrapper[4853]: I0127 18:43:10.847196 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:10 crc kubenswrapper[4853]: I0127 18:43:10.847206 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:10 crc kubenswrapper[4853]: I0127 18:43:10.847221 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:10 crc kubenswrapper[4853]: I0127 18:43:10.847233 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:10Z","lastTransitionTime":"2026-01-27T18:43:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:10 crc kubenswrapper[4853]: I0127 18:43:10.949574 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:10 crc kubenswrapper[4853]: I0127 18:43:10.949615 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:10 crc kubenswrapper[4853]: I0127 18:43:10.949624 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:10 crc kubenswrapper[4853]: I0127 18:43:10.949638 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:10 crc kubenswrapper[4853]: I0127 18:43:10.949650 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:10Z","lastTransitionTime":"2026-01-27T18:43:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:43:11 crc kubenswrapper[4853]: I0127 18:43:11.052942 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:11 crc kubenswrapper[4853]: I0127 18:43:11.052983 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:11 crc kubenswrapper[4853]: I0127 18:43:11.052992 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:11 crc kubenswrapper[4853]: I0127 18:43:11.053009 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:11 crc kubenswrapper[4853]: I0127 18:43:11.053020 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:11Z","lastTransitionTime":"2026-01-27T18:43:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:43:11 crc kubenswrapper[4853]: I0127 18:43:11.086152 4853 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-17 13:20:24.883318657 +0000 UTC Jan 27 18:43:11 crc kubenswrapper[4853]: I0127 18:43:11.112169 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 18:43:11 crc kubenswrapper[4853]: I0127 18:43:11.112241 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 18:43:11 crc kubenswrapper[4853]: I0127 18:43:11.112187 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 18:43:11 crc kubenswrapper[4853]: E0127 18:43:11.112369 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 18:43:11 crc kubenswrapper[4853]: E0127 18:43:11.112478 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 18:43:11 crc kubenswrapper[4853]: E0127 18:43:11.112541 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 18:43:11 crc kubenswrapper[4853]: I0127 18:43:11.156034 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:11 crc kubenswrapper[4853]: I0127 18:43:11.156069 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:11 crc kubenswrapper[4853]: I0127 18:43:11.156077 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:11 crc kubenswrapper[4853]: I0127 18:43:11.156090 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:11 crc kubenswrapper[4853]: I0127 18:43:11.156100 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:11Z","lastTransitionTime":"2026-01-27T18:43:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:11 crc kubenswrapper[4853]: I0127 18:43:11.258261 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:11 crc kubenswrapper[4853]: I0127 18:43:11.258419 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:11 crc kubenswrapper[4853]: I0127 18:43:11.258437 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:11 crc kubenswrapper[4853]: I0127 18:43:11.258457 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:11 crc kubenswrapper[4853]: I0127 18:43:11.258471 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:11Z","lastTransitionTime":"2026-01-27T18:43:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:43:11 crc kubenswrapper[4853]: I0127 18:43:11.355651 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-ght98" event={"ID":"ce95829c-f3fb-493c-bf9a-a3515fe6ddac","Type":"ContainerStarted","Data":"918b69aa50072e0227c96f268fe68b5dfc90acf2b8b93b7fdee73695fc6cbab0"} Jan 27 18:43:11 crc kubenswrapper[4853]: I0127 18:43:11.355713 4853 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 27 18:43:11 crc kubenswrapper[4853]: I0127 18:43:11.356434 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-hdtbk" Jan 27 18:43:11 crc kubenswrapper[4853]: I0127 18:43:11.360838 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:11 crc kubenswrapper[4853]: I0127 18:43:11.360985 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:11 crc kubenswrapper[4853]: I0127 18:43:11.361061 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:11 crc kubenswrapper[4853]: I0127 18:43:11.361177 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:11 crc kubenswrapper[4853]: I0127 18:43:11.361255 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:11Z","lastTransitionTime":"2026-01-27T18:43:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:11 crc kubenswrapper[4853]: I0127 18:43:11.376186 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a5b9c54a-360e-431b-b35a-eea549616487\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e57571b0b75d3ac6eaa28ddbc474ddf47f632c158372a965dfe4c30f54d8a2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2dcf34028a2a1a0d41f57ebced90e711bfa6d0f199a2b12bc2bcd6d0642d10b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://14e9331353060208631cfca86d748339935f8581d50a234f2864eaffdd98ff39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7d4896cb23e4a91356e7fabb5dfcf05e46bf7e8c34b6bd73d63a16422d015ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a02f84ba42baba137145a54bfa742b12b0f94aaf41801347eb8a803a6acd1b2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15ab34ce94a12a65d8b187ce438f27e37d4b45e0b9ed080c79d6e4633e777658\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://15ab34ce94a12a65d8b187ce438f27e37d4b45e0b9ed080c79d6e4633e777658\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:42:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d972c76164f33d83bd6d211da2b4ecec08c5d88dd4cd474c5d5a8122649ea1cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d972c76164f33d83bd6d211da2b4ecec08c5d88dd4cd474c5d5a8122649ea1cd\\\",\\\"exitCode\\\":0,\\\"finished
At\\\":\\\"2026-01-27T18:42:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:40Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b9435046e30c8cda89d2ab6f312406bc2cc6db0c03ae6cdea1eb5acf8a804df1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9435046e30c8cda89d2ab6f312406bc2cc6db0c03ae6cdea1eb5acf8a804df1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:42:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:42:38Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:11Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:11 crc kubenswrapper[4853]: I0127 18:43:11.377404 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-hdtbk" Jan 27 18:43:11 crc kubenswrapper[4853]: I0127 18:43:11.387709 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-l59xt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f9e82bc6-1fab-4815-a64e-2ebbf8b72315\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d6f7b775ee615931d713c3fcf51828673cd8414a08c46c04eb38abd37c58d5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:43:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnhhx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:43:03Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-l59xt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:11Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:11 crc kubenswrapper[4853]: I0127 18:43:11.400594 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ght98" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ce95829c-f3fb-493c-bf9a-a3515fe6ddac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://918b69aa50072e0227c96f268fe68b5dfc90acf2b8b93b7fdee73695fc6cbab0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:43:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttzfq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10a6c7c9dd5d82c1b1f5f2d323a2f530c6b2e18362eedbbe74cc8c570732331f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10a6c7c9dd5d82c1b1f5f2d323a2f530c6b2e18362eedbbe74cc8c570732331f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:43:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttzfq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b1e8c337bda8a6b6c0848db7c95140385d545fc7dd729f002d21c248e2bbf881\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b1e8c337bda8a6b6c0848db7c95140385d545fc7dd729f002d21c248e2bbf881\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:43:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:43:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttzfq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6f0216b6ea3819db267200bb464f15f2f7e4de1c65df0871d6f7e70ecb54839\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d6f0216b6ea3819db267200bb464f15f2f7e4de1c65df0871d6f7e70ecb54839\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:43:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:43:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttzfq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://205b9e4e915af8e2031c48180f4d56d6075e94f8a8ff8b40fe6b46229fb1aa03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://205b9e4e915af8e2031c48180f4d56d6075e94f8a8ff8b40fe6b46229fb1aa03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:43:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:43:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttzfq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://12c64d2f527e9e9bd0975df15cbeca44c11d215f2591052b030b6afa87600576\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://12c64d2f527e9e9bd0975df15cbeca44c11d215f2591052b030b6afa87600576\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:43:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:43:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttzfq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c8f991246bd43ecaff0d08c9fb12a3347d52de48db0ea8f8a674bbdc9945c13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3c8f991246bd43ecaff0d08c9fb12a3347d52de48db0ea8f8a674bbdc9945c13\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:43:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:43:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttzfq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:43:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ght98\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:11Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:11 crc kubenswrapper[4853]: I0127 18:43:11.416264 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"eccf4b23-863a-490c-ba35-1b03d360e200\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7cfb91267c8977bea938d9584cf8047ae6f6b4bff6a9d74c623f8903cd3416ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58d32f27b39964fe7195a0e25264b28a01c6c29753912edac25d07eef4f37cc7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a941071ee01b24389cae64dfc6cd851cde6b352121235ebf9e9bec77f8bcfb69\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae13abce33d48960f367ee4160e730c0f88cd877bd0d615cecac63d2a35b8cc5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://213668bb8a0478fe7fba2a64a45dd4aec0dfd6794a29977633b4e0fd1bcbe352\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T18:42:58Z\\\",\\\"message\\\":\\\"le observer\\\\nW0127 18:42:57.781245 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 18:42:57.782660 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 18:42:57.784271 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3951302993/tls.crt::/tmp/serving-cert-3951302993/tls.key\\\\\\\"\\\\nI0127 18:42:58.344426 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 18:42:58.346603 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 18:42:58.346626 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 18:42:58.346650 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 18:42:58.346655 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 18:42:58.351830 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 18:42:58.351854 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 18:42:58.351860 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 18:42:58.351866 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 18:42:58.351870 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 18:42:58.351874 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nI0127 18:42:58.351873 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0127 18:42:58.351879 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0127 18:42:58.354790 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:52Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:43:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0058055ea5c0242bdb042c45a00203d344383e845de280c81bc7695416563666\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:40Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://00d05692e328aacfc251e3b583e1da6d1f4f677e93d07cec2aaa7aea5c3038cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://00d05692e328aacfc251e3b583e1da6d1f4f677e93d07cec2aaa7aea5c3038cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:42:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:42:38Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:11Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:11 crc kubenswrapper[4853]: I0127 18:43:11.429443 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5a0b2fe5dbcc81055c001720c8ab69b7bd1989fe60d9ef4063c96c53a3f4536\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:11Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:11 crc kubenswrapper[4853]: I0127 18:43:11.442482 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-2hzcp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c088d143-dd9c-4c77-b9a3-3a0113306f41\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b1b85e99c8381dbf8e86fb211770c49ebd11497f15c73254d6482b45d507654\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:43:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2zqtv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:43:03Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-2hzcp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:11Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:11 crc kubenswrapper[4853]: I0127 18:43:11.453265 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-6gqj2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b8a89b1e-bef8-4cb7-930c-480d3125778c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d886a21d3eb5af71f0dc43a77a71a2d80c11b89013b8b77dd9e680bed9c09a54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:43:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xbhn6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://36ec3ff8ba2e6f89b4c9a8177dc986ce198013a4b5392fc600a191a9d854241a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:43:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xbhn6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:43:03Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-6gqj2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:11Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:11 crc kubenswrapper[4853]: I0127 18:43:11.463858 4853 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:11 crc kubenswrapper[4853]: I0127 18:43:11.463900 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:11 crc kubenswrapper[4853]: I0127 18:43:11.463911 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:11 crc kubenswrapper[4853]: I0127 18:43:11.463924 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:11 crc kubenswrapper[4853]: I0127 18:43:11.463935 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:11Z","lastTransitionTime":"2026-01-27T18:43:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:43:11 crc kubenswrapper[4853]: I0127 18:43:11.472492 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hdtbk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ebbc7598-422a-43ad-ae98-88e57ec80b9c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a5e0da6c76e9510cda57fa243b0a721d160745a63e88a9aa736807af73864d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:43:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cq4vs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8de5b4d8d6553f77b012954fddfcb337c9b25ba98d94ef27831b50f63672377\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:43:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cq4vs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7937ea08bd25bed35d9386a8c870c88ff3f58eeec1ba1a2c55bdfa260017f9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:43:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cq4vs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f8095ca05481aa2d17d10ae848c2d052452f3bfa83b6ac23a75d0f59d84a604\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:43:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cq4vs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://efa308f95f35833395528dbe46b9e3d8f25800c18126c75d3db793f9c7945d30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:43:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cq4vs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a23ced79c532f6fcb0f4efcf743b934f7640deb3a7b1b879032416ee2c9b8d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:43:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cq4vs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://474b23d45ca214a859faee68cfad6bf9e641b0e6
82b3e11f89e6b6994c75a544\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:43:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cq4vs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4da02162adae947a3ab62fcbeba04da031f5189c42947da27ec21df5a480b4b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:43:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccoun
t\\\",\\\"name\\\":\\\"kube-api-access-cq4vs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e81785fa8e88ca85b42840c9f11efd5774320c7b13588e01695e428426ecda4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e81785fa8e88ca85b42840c9f11efd5774320c7b13588e01695e428426ecda4d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:43:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cq4vs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:43:04Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hdtbk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:11Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:11 crc kubenswrapper[4853]: I0127 18:43:11.488664 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f980f1d965582a58bd18d993850b5bd5f21849c1d1c0275c81fdbb25d549e6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:11Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:11 crc kubenswrapper[4853]: I0127 18:43:11.501603 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c77668e-2627-46e1-b7f5-a99455378d85\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6f38d96da9d500a022fd5b7fbe4019623d08fb9733ba3b1a5f2f107f1901a28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ae32d32392bded0188e0b83c05a33e57dc10be293ddee84b8b12416d0e64fc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd606f868e97e5c0f90517a0de662da2512679e27b0ececbcaa02c9f7e79c4b6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b210c08a6d15dd6d40daabfcc4cb94985e2a3957ffa501b32d5331f3d20f8908\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:42:38Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:11Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:11 crc kubenswrapper[4853]: I0127 18:43:11.519269 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:11Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:11 crc kubenswrapper[4853]: I0127 18:43:11.533517 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:11Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:11 crc kubenswrapper[4853]: I0127 18:43:11.545746 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d7d5a0e2cb909a09c6325d43fed80f9ac231f17c4904fda6da22031d974a50b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7fba84b5abd317487efa9758191c6d4e0e878809711ccecdd2f98c2c7659c890\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:11Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:11 crc kubenswrapper[4853]: I0127 18:43:11.557325 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-w4d5n" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dd2c07de-2ac9-4074-9fb0-519cfaf37f69\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9662d5a5a46a3043eeddbb21a4c7da0545beb87a0da7da3e96dfe3b3cf803430\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:43:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc
/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8jkvb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:43:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-w4d5n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:11Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:11 crc kubenswrapper[4853]: I0127 18:43:11.566407 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:11 crc kubenswrapper[4853]: I0127 18:43:11.566462 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:11 crc kubenswrapper[4853]: I0127 18:43:11.566476 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:11 crc kubenswrapper[4853]: I0127 18:43:11.566493 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:11 crc kubenswrapper[4853]: I0127 18:43:11.566504 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:11Z","lastTransitionTime":"2026-01-27T18:43:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:11 crc kubenswrapper[4853]: I0127 18:43:11.569600 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:11Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:11 crc kubenswrapper[4853]: I0127 18:43:11.586204 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eccf4b23-863a-490c-ba35-1b03d360e200\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7cfb91267c8977bea938d9584cf8047ae6f6b4bff6a9d74c623f8903cd3416ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58d32f27b39964fe7195a0e25264b28a01c6c29753912edac25d07eef4f37cc7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a941071ee01b24389cae64dfc6cd851cde6b352121235ebf9e9bec77f8bcfb69\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae13abce33d48960f367ee4160e730c0f88cd877bd0d615cecac63d2a35b8cc5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\
\":\\\"cri-o://213668bb8a0478fe7fba2a64a45dd4aec0dfd6794a29977633b4e0fd1bcbe352\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T18:42:58Z\\\",\\\"message\\\":\\\"le observer\\\\nW0127 18:42:57.781245 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 18:42:57.782660 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 18:42:57.784271 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3951302993/tls.crt::/tmp/serving-cert-3951302993/tls.key\\\\\\\"\\\\nI0127 18:42:58.344426 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 18:42:58.346603 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 18:42:58.346626 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 18:42:58.346650 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 18:42:58.346655 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 18:42:58.351830 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 18:42:58.351854 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 18:42:58.351860 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 18:42:58.351866 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 18:42:58.351870 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 18:42:58.351874 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nI0127 18:42:58.351873 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0127 18:42:58.351879 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0127 18:42:58.354790 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:52Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:43:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0058055ea5c0242bdb042c45a00203d344383e845de280c81bc7695416563666\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:40Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://00d05692e328aacfc251e3b583e1da6d1f4f677e93d07cec2aaa7aea5c3038cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://00d05692e328aacfc251e3b583e1da6d1f4f677e93d07cec2aaa7aea5c3038cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:42:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:42:38Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:11Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:11 crc kubenswrapper[4853]: I0127 18:43:11.607858 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a5b9c54a-360e-431b-b35a-eea549616487\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e57571b0b75d3ac6eaa28ddbc474ddf47f632c158372a965dfe4c30f54d8a2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2dcf34028a2a1a0d41f57ebced90e711bfa6d0f199a2b12bc2bcd6d0642d10b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://14e9331353060208631cfca86d748339935f8581d50a234f2864eaffdd98ff39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7d4896cb23e4a91356e7fabb5dfcf05e46bf7e
8c34b6bd73d63a16422d015ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a02f84ba42baba137145a54bfa742b12b0f94aaf41801347eb8a803a6acd1b2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15ab34ce94a12a65d8b187ce438f27e37d4b45e0b9ed080c79d6e4633e777658\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://15ab34ce94a12a65d8b187ce438f27e37d4b45e0b9ed080c79d6e4633e777658\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:42:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d972c76164f33d83bd6d211da2b4ecec08c5d88dd4cd474c5d5a8122649ea1cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d972c76164f33d83bd6d211da2b4ecec08c5d88dd4cd474c5d5a8122649ea1cd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:42:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:40Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b9435046e30c8cda89d2ab6f312406bc2cc6db0c03ae6cdea1eb5acf8a804df1\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9435046e30c8cda89d2ab6f312406bc2cc6db0c03ae6cdea1eb5acf8a804df1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:42:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:42:38Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:11Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:11 crc kubenswrapper[4853]: I0127 18:43:11.619151 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-l59xt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f9e82bc6-1fab-4815-a64e-2ebbf8b72315\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d6f7b775ee615931d713c3fcf51828673cd8414a08c46c04eb38abd37c58d5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:43:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnhhx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\
\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:43:03Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-l59xt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:11Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:11 crc kubenswrapper[4853]: I0127 18:43:11.632048 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ght98" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ce95829c-f3fb-493c-bf9a-a3515fe6ddac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://918b69aa50072e0227c96f268fe68b5dfc90acf2b8b93b7fdee73695fc6cbab0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:43:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttzfq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10a6c7c9dd5d82c1b1f5f2d323a2f530c6b2e18362eedbbe74cc8c570732331f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10a6c7c9dd5d82c1b1f5f2d323a2f530c6b2e18362eedbbe74cc8c570732331f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:43:04Z\\\"}},\\\"volumeMounts\\\"
:[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttzfq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b1e8c337bda8a6b6c0848db7c95140385d545fc7dd729f002d21c248e2bbf881\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b1e8c337bda8a6b6c0848db7c95140385d545fc7dd729f002d21c248e2bbf881\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:43:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:43:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttzfq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6f0216b6ea3819db267200bb464f15f2f7e4de1c65df0871d6f7e70ecb54839\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d6f0216b6ea3819db267200bb464f15f2f7e4de1c65df0871d6f7e70ecb54839\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:43:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:43:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttzfq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://205b9e4e915af8e2031c48180f4d56d6075e94f8a8ff8b40fe6b46229fb1aa03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f
567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://205b9e4e915af8e2031c48180f4d56d6075e94f8a8ff8b40fe6b46229fb1aa03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:43:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:43:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttzfq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://12c64d2f527e9e9bd0975df15cbeca44c11d215f2591052b030b6afa87600576\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://12c64d2f527e9e9bd0975df15cbeca44c11d215f2591052b030b6afa87600576\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:43:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:43:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttzfq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c8f991246bd43ecaff0d08c9fb12a3347d52de48db0ea8f8a674bbdc9945c13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3c8f991246bd43ecaff0d08c9fb12a3347d52de48db0ea8f8a674bbdc9945c13\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:43:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:43:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttzfq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11
\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:43:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ght98\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:11Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:11 crc kubenswrapper[4853]: I0127 18:43:11.644317 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f980f1d965582a58bd18d993850b5bd5f21849c1d1c0275c81fdbb25d549e6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:11Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:11 crc kubenswrapper[4853]: I0127 18:43:11.655136 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5a0b2fe5dbcc81055c001720c8ab69b7bd1989fe60d9ef4063c96c53a3f4536\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:11Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:11 crc kubenswrapper[4853]: I0127 18:43:11.664451 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-2hzcp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c088d143-dd9c-4c77-b9a3-3a0113306f41\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b1b85e99c8381dbf8e86fb211770c49ebd11497f15c73254d6482b45d507654\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:43:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2zqtv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:43:03Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-2hzcp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:11Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:11 crc kubenswrapper[4853]: I0127 18:43:11.668072 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:11 crc kubenswrapper[4853]: I0127 18:43:11.668113 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:11 crc kubenswrapper[4853]: I0127 18:43:11.668141 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:11 crc kubenswrapper[4853]: I0127 18:43:11.668158 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:11 crc kubenswrapper[4853]: I0127 18:43:11.668177 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:11Z","lastTransitionTime":"2026-01-27T18:43:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:43:11 crc kubenswrapper[4853]: I0127 18:43:11.675956 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-6gqj2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b8a89b1e-bef8-4cb7-930c-480d3125778c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d886a21d3eb5af71f0dc43a77a71a2d80c11b89013b8b77dd9e680bed9c09a54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:43:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xbhn6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://36ec3ff8ba2e6f89b4c9a8177dc986ce198013a4b5392fc600a191a9d854241a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:43:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xbhn6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:43:03Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-6gqj2\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:11Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:11 crc kubenswrapper[4853]: I0127 18:43:11.693203 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hdtbk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ebbc7598-422a-43ad-ae98-88e57ec80b9c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a5e0da6c76e9510cda57fa243b0a721d160745a63e88a9aa736807af73864d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:43:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cq4vs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8de5b4d8d6553f77b012954fddfcb337c9b25ba98d94ef27831b50f63672377\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:43:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"r
eadOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cq4vs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7937ea08bd25bed35d9386a8c870c88ff3f58eeec1ba1a2c55bdfa260017f9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:43:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cq4vs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f8095ca05481aa2d17d10ae848c2d052452f3bfa83b6ac23a75d0f59d84a604\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:43:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cq4vs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://efa308f95f35833395528dbe46b9e3d8f25800c18126c75d3db793f9c7945d30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:43:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cq4vs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"
containerID\\\":\\\"cri-o://5a23ced79c532f6fcb0f4efcf743b934f7640deb3a7b1b879032416ee2c9b8d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:43:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cq4vs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://474b23d45ca214a859faee68cfad6bf9e641b0e682b3e11f89e6b6994c75a544\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:43:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-
openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cq4vs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4da02162adae947a3ab62fcbeba04da031f5189c42947da27ec21df5a480b4b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:43:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cq4vs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e81785fa8e88ca85b42840c9f11efd5774320c7b13588e01695e428426ecda4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e81785fa8e88ca85b42840c9f11efd5774320c7b13588e01695e428426ecda4d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:43:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cq4vs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:43:04Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hdtbk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:11Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:11 crc kubenswrapper[4853]: I0127 18:43:11.704896 4853 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c77668e-2627-46e1-b7f5-a99455378d85\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6f38d96da9d500a022fd5b7fbe4019623d08fb9733ba3b1a5f2f107f1901a28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ae32d32392bded0188e0b83c05a33e57dc10be293ddee84b8b12416d0e64fc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd606f868e97e5c0f90517a0de662da2512679e27b0ececbcaa02c9f7e79c4b6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b210c08a6d15dd6d40daabfcc4cb94985e2a3957ffa501b32d
5331f3d20f8908\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:42:38Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:11Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:11 crc kubenswrapper[4853]: I0127 18:43:11.719469 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:11Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:11 crc kubenswrapper[4853]: I0127 18:43:11.733292 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:11Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:11 crc kubenswrapper[4853]: I0127 18:43:11.744373 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:11Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:11 crc kubenswrapper[4853]: I0127 18:43:11.756083 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d7d5a0e2cb909a09c6325d43fed80f9ac231f17c4904fda6da22031d974a50b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7fba84b5abd317487efa9758191c6d4e0e878809711ccecdd2f98c2c7659c890\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:11Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:11 crc kubenswrapper[4853]: I0127 18:43:11.770341 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:11 crc kubenswrapper[4853]: I0127 18:43:11.770373 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:11 crc kubenswrapper[4853]: I0127 18:43:11.770384 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:11 crc kubenswrapper[4853]: I0127 18:43:11.770401 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:11 crc kubenswrapper[4853]: I0127 18:43:11.770412 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:11Z","lastTransitionTime":"2026-01-27T18:43:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:11 crc kubenswrapper[4853]: I0127 18:43:11.798438 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-w4d5n" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dd2c07de-2ac9-4074-9fb0-519cfaf37f69\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9662d5a5a46a3043eeddbb21a4c7da0545beb87a0da7da3e96dfe3b3cf803430\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:43:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8jkvb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:43:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-w4d5n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:11Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:11 crc kubenswrapper[4853]: I0127 18:43:11.873095 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:11 crc kubenswrapper[4853]: I0127 18:43:11.873172 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:11 crc kubenswrapper[4853]: I0127 18:43:11.873185 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:11 crc kubenswrapper[4853]: I0127 18:43:11.873202 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:11 crc kubenswrapper[4853]: I0127 18:43:11.873215 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:11Z","lastTransitionTime":"2026-01-27T18:43:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:43:11 crc kubenswrapper[4853]: I0127 18:43:11.976285 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:11 crc kubenswrapper[4853]: I0127 18:43:11.976368 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:11 crc kubenswrapper[4853]: I0127 18:43:11.976392 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:11 crc kubenswrapper[4853]: I0127 18:43:11.976424 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:11 crc kubenswrapper[4853]: I0127 18:43:11.976447 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:11Z","lastTransitionTime":"2026-01-27T18:43:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:12 crc kubenswrapper[4853]: I0127 18:43:12.078902 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:12 crc kubenswrapper[4853]: I0127 18:43:12.078953 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:12 crc kubenswrapper[4853]: I0127 18:43:12.078972 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:12 crc kubenswrapper[4853]: I0127 18:43:12.078997 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:12 crc kubenswrapper[4853]: I0127 18:43:12.079015 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:12Z","lastTransitionTime":"2026-01-27T18:43:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:43:12 crc kubenswrapper[4853]: I0127 18:43:12.086473 4853 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-20 20:10:28.165659759 +0000 UTC Jan 27 18:43:12 crc kubenswrapper[4853]: I0127 18:43:12.182114 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:12 crc kubenswrapper[4853]: I0127 18:43:12.182172 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:12 crc kubenswrapper[4853]: I0127 18:43:12.182181 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:12 crc kubenswrapper[4853]: I0127 18:43:12.182197 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:12 crc kubenswrapper[4853]: I0127 18:43:12.182208 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:12Z","lastTransitionTime":"2026-01-27T18:43:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:12 crc kubenswrapper[4853]: I0127 18:43:12.286588 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:12 crc kubenswrapper[4853]: I0127 18:43:12.286663 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:12 crc kubenswrapper[4853]: I0127 18:43:12.286675 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:12 crc kubenswrapper[4853]: I0127 18:43:12.286694 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:12 crc kubenswrapper[4853]: I0127 18:43:12.286711 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:12Z","lastTransitionTime":"2026-01-27T18:43:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:43:12 crc kubenswrapper[4853]: I0127 18:43:12.359663 4853 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 27 18:43:12 crc kubenswrapper[4853]: I0127 18:43:12.389594 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:12 crc kubenswrapper[4853]: I0127 18:43:12.389644 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:12 crc kubenswrapper[4853]: I0127 18:43:12.389655 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:12 crc kubenswrapper[4853]: I0127 18:43:12.389671 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:12 crc kubenswrapper[4853]: I0127 18:43:12.389683 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:12Z","lastTransitionTime":"2026-01-27T18:43:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:12 crc kubenswrapper[4853]: I0127 18:43:12.492500 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:12 crc kubenswrapper[4853]: I0127 18:43:12.492560 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:12 crc kubenswrapper[4853]: I0127 18:43:12.492572 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:12 crc kubenswrapper[4853]: I0127 18:43:12.492592 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:12 crc kubenswrapper[4853]: I0127 18:43:12.492604 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:12Z","lastTransitionTime":"2026-01-27T18:43:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:43:12 crc kubenswrapper[4853]: I0127 18:43:12.596523 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:12 crc kubenswrapper[4853]: I0127 18:43:12.596564 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:12 crc kubenswrapper[4853]: I0127 18:43:12.596573 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:12 crc kubenswrapper[4853]: I0127 18:43:12.596586 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:12 crc kubenswrapper[4853]: I0127 18:43:12.596596 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:12Z","lastTransitionTime":"2026-01-27T18:43:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:43:12 crc kubenswrapper[4853]: I0127 18:43:12.699600 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:12 crc kubenswrapper[4853]: I0127 18:43:12.699660 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:12 crc kubenswrapper[4853]: I0127 18:43:12.699669 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:12 crc kubenswrapper[4853]: I0127 18:43:12.699686 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:12 crc kubenswrapper[4853]: I0127 18:43:12.699698 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:12Z","lastTransitionTime":"2026-01-27T18:43:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:12 crc kubenswrapper[4853]: I0127 18:43:12.803922 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:12 crc kubenswrapper[4853]: I0127 18:43:12.803976 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:12 crc kubenswrapper[4853]: I0127 18:43:12.803990 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:12 crc kubenswrapper[4853]: I0127 18:43:12.804026 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:12 crc kubenswrapper[4853]: I0127 18:43:12.804040 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:12Z","lastTransitionTime":"2026-01-27T18:43:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:43:12 crc kubenswrapper[4853]: I0127 18:43:12.907263 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:12 crc kubenswrapper[4853]: I0127 18:43:12.907362 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:12 crc kubenswrapper[4853]: I0127 18:43:12.907392 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:12 crc kubenswrapper[4853]: I0127 18:43:12.907430 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:12 crc kubenswrapper[4853]: I0127 18:43:12.907453 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:12Z","lastTransitionTime":"2026-01-27T18:43:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:43:12 crc kubenswrapper[4853]: I0127 18:43:12.926299 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 18:43:12 crc kubenswrapper[4853]: I0127 18:43:12.926463 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 18:43:12 crc kubenswrapper[4853]: E0127 18:43:12.926563 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-27 18:43:28.926523676 +0000 UTC m=+51.389066599 (durationBeforeRetry 16s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 18:43:12 crc kubenswrapper[4853]: E0127 18:43:12.926635 4853 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 27 18:43:12 crc kubenswrapper[4853]: E0127 18:43:12.926738 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-27 18:43:28.926710071 +0000 UTC m=+51.389252984 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 27 18:43:12 crc kubenswrapper[4853]: I0127 18:43:12.926777 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 18:43:12 crc kubenswrapper[4853]: E0127 18:43:12.926874 4853 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 27 18:43:12 crc kubenswrapper[4853]: E0127 18:43:12.926948 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-27 18:43:28.926929887 +0000 UTC m=+51.389472770 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 27 18:43:13 crc kubenswrapper[4853]: I0127 18:43:13.010998 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:13 crc kubenswrapper[4853]: I0127 18:43:13.011045 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:13 crc kubenswrapper[4853]: I0127 18:43:13.011054 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:13 crc kubenswrapper[4853]: I0127 18:43:13.011077 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:13 crc kubenswrapper[4853]: I0127 18:43:13.011088 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:13Z","lastTransitionTime":"2026-01-27T18:43:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:43:13 crc kubenswrapper[4853]: I0127 18:43:13.027759 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 18:43:13 crc kubenswrapper[4853]: I0127 18:43:13.027803 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 18:43:13 crc kubenswrapper[4853]: E0127 18:43:13.027925 4853 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 27 18:43:13 crc kubenswrapper[4853]: E0127 18:43:13.027940 4853 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 27 18:43:13 crc kubenswrapper[4853]: E0127 18:43:13.027952 4853 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 18:43:13 crc kubenswrapper[4853]: E0127 18:43:13.027948 4853 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 27 18:43:13 crc kubenswrapper[4853]: E0127 18:43:13.027984 4853 projected.go:288] Couldn't get configMap 
openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 27 18:43:13 crc kubenswrapper[4853]: E0127 18:43:13.027997 4853 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 18:43:13 crc kubenswrapper[4853]: E0127 18:43:13.028005 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-27 18:43:29.027991794 +0000 UTC m=+51.490534677 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 18:43:13 crc kubenswrapper[4853]: E0127 18:43:13.028055 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-27 18:43:29.028039196 +0000 UTC m=+51.490582079 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 18:43:13 crc kubenswrapper[4853]: I0127 18:43:13.087574 4853 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-19 21:03:53.761976388 +0000 UTC Jan 27 18:43:13 crc kubenswrapper[4853]: I0127 18:43:13.111950 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 18:43:13 crc kubenswrapper[4853]: I0127 18:43:13.111979 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 18:43:13 crc kubenswrapper[4853]: I0127 18:43:13.112105 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 18:43:13 crc kubenswrapper[4853]: E0127 18:43:13.112160 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 18:43:13 crc kubenswrapper[4853]: E0127 18:43:13.112176 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 18:43:13 crc kubenswrapper[4853]: E0127 18:43:13.112324 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 18:43:13 crc kubenswrapper[4853]: I0127 18:43:13.113391 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:13 crc kubenswrapper[4853]: I0127 18:43:13.113420 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:13 crc kubenswrapper[4853]: I0127 18:43:13.113427 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:13 crc kubenswrapper[4853]: I0127 18:43:13.113439 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:13 crc kubenswrapper[4853]: I0127 18:43:13.113449 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:13Z","lastTransitionTime":"2026-01-27T18:43:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:43:13 crc kubenswrapper[4853]: I0127 18:43:13.215834 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:13 crc kubenswrapper[4853]: I0127 18:43:13.215877 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:13 crc kubenswrapper[4853]: I0127 18:43:13.215887 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:13 crc kubenswrapper[4853]: I0127 18:43:13.215908 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:13 crc kubenswrapper[4853]: I0127 18:43:13.215918 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:13Z","lastTransitionTime":"2026-01-27T18:43:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:13 crc kubenswrapper[4853]: I0127 18:43:13.318513 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:13 crc kubenswrapper[4853]: I0127 18:43:13.318830 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:13 crc kubenswrapper[4853]: I0127 18:43:13.318839 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:13 crc kubenswrapper[4853]: I0127 18:43:13.318853 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:13 crc kubenswrapper[4853]: I0127 18:43:13.318862 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:13Z","lastTransitionTime":"2026-01-27T18:43:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:43:13 crc kubenswrapper[4853]: I0127 18:43:13.363105 4853 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 27 18:43:13 crc kubenswrapper[4853]: I0127 18:43:13.421425 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:13 crc kubenswrapper[4853]: I0127 18:43:13.421465 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:13 crc kubenswrapper[4853]: I0127 18:43:13.421474 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:13 crc kubenswrapper[4853]: I0127 18:43:13.421489 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:13 crc kubenswrapper[4853]: I0127 18:43:13.421499 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:13Z","lastTransitionTime":"2026-01-27T18:43:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:13 crc kubenswrapper[4853]: I0127 18:43:13.523797 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:13 crc kubenswrapper[4853]: I0127 18:43:13.523856 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:13 crc kubenswrapper[4853]: I0127 18:43:13.523878 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:13 crc kubenswrapper[4853]: I0127 18:43:13.523907 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:13 crc kubenswrapper[4853]: I0127 18:43:13.523927 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:13Z","lastTransitionTime":"2026-01-27T18:43:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:43:13 crc kubenswrapper[4853]: I0127 18:43:13.626610 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:13 crc kubenswrapper[4853]: I0127 18:43:13.626669 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:13 crc kubenswrapper[4853]: I0127 18:43:13.626687 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:13 crc kubenswrapper[4853]: I0127 18:43:13.626711 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:13 crc kubenswrapper[4853]: I0127 18:43:13.626727 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:13Z","lastTransitionTime":"2026-01-27T18:43:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:43:13 crc kubenswrapper[4853]: I0127 18:43:13.729474 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:13 crc kubenswrapper[4853]: I0127 18:43:13.729517 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:13 crc kubenswrapper[4853]: I0127 18:43:13.729527 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:13 crc kubenswrapper[4853]: I0127 18:43:13.729544 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:13 crc kubenswrapper[4853]: I0127 18:43:13.729554 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:13Z","lastTransitionTime":"2026-01-27T18:43:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:13 crc kubenswrapper[4853]: I0127 18:43:13.831251 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:13 crc kubenswrapper[4853]: I0127 18:43:13.831294 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:13 crc kubenswrapper[4853]: I0127 18:43:13.831306 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:13 crc kubenswrapper[4853]: I0127 18:43:13.831324 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:13 crc kubenswrapper[4853]: I0127 18:43:13.831336 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:13Z","lastTransitionTime":"2026-01-27T18:43:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:43:13 crc kubenswrapper[4853]: I0127 18:43:13.915661 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:13 crc kubenswrapper[4853]: I0127 18:43:13.915924 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:13 crc kubenswrapper[4853]: I0127 18:43:13.916005 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:13 crc kubenswrapper[4853]: I0127 18:43:13.916117 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:13 crc kubenswrapper[4853]: I0127 18:43:13.916319 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:13Z","lastTransitionTime":"2026-01-27T18:43:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:13 crc kubenswrapper[4853]: E0127 18:43:13.930994 4853 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:43:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:43:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:13Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:43:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:43:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:13Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"10ff71d2-3e1e-470a-b646-0c487d7259d5\\\",\\\"systemUUID\\\":\\\"f3eb2985-2316-42c4-9a73-507610f5aaf9\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:13Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:13 crc kubenswrapper[4853]: I0127 18:43:13.936652 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:13 crc kubenswrapper[4853]: I0127 18:43:13.936704 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 27 18:43:13 crc kubenswrapper[4853]: I0127 18:43:13.936725 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:13 crc kubenswrapper[4853]: I0127 18:43:13.936753 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:13 crc kubenswrapper[4853]: I0127 18:43:13.936775 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:13Z","lastTransitionTime":"2026-01-27T18:43:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:43:13 crc kubenswrapper[4853]: E0127 18:43:13.951605 4853 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:43:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:43:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:13Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:43:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:43:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:13Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"10ff71d2-3e1e-470a-b646-0c487d7259d5\\\",\\\"systemUUID\\\":\\\"f3eb2985-2316-42c4-9a73-507610f5aaf9\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:13Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:13 crc kubenswrapper[4853]: I0127 18:43:13.958331 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:13 crc kubenswrapper[4853]: I0127 18:43:13.958377 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 27 18:43:13 crc kubenswrapper[4853]: I0127 18:43:13.958390 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:13 crc kubenswrapper[4853]: I0127 18:43:13.958410 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:13 crc kubenswrapper[4853]: I0127 18:43:13.958422 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:13Z","lastTransitionTime":"2026-01-27T18:43:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:43:13 crc kubenswrapper[4853]: E0127 18:43:13.974873 4853 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:43:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:43:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:13Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:43:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:43:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:13Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"10ff71d2-3e1e-470a-b646-0c487d7259d5\\\",\\\"systemUUID\\\":\\\"f3eb2985-2316-42c4-9a73-507610f5aaf9\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:13Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:13 crc kubenswrapper[4853]: I0127 18:43:13.978883 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:13 crc kubenswrapper[4853]: I0127 18:43:13.978932 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 27 18:43:13 crc kubenswrapper[4853]: I0127 18:43:13.978945 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:13 crc kubenswrapper[4853]: I0127 18:43:13.979045 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:13 crc kubenswrapper[4853]: I0127 18:43:13.979061 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:13Z","lastTransitionTime":"2026-01-27T18:43:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:43:13 crc kubenswrapper[4853]: E0127 18:43:13.994675 4853 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:43:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:43:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:13Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:43:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:43:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:13Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"10ff71d2-3e1e-470a-b646-0c487d7259d5\\\",\\\"systemUUID\\\":\\\"f3eb2985-2316-42c4-9a73-507610f5aaf9\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:13Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:13 crc kubenswrapper[4853]: I0127 18:43:13.998767 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:13 crc kubenswrapper[4853]: I0127 18:43:13.998814 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 27 18:43:13 crc kubenswrapper[4853]: I0127 18:43:13.998822 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:13 crc kubenswrapper[4853]: I0127 18:43:13.998836 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:13 crc kubenswrapper[4853]: I0127 18:43:13.998846 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:13Z","lastTransitionTime":"2026-01-27T18:43:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:43:14 crc kubenswrapper[4853]: E0127 18:43:14.012795 4853 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:43:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:43:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:13Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:43:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:43:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:13Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"10ff71d2-3e1e-470a-b646-0c487d7259d5\\\",\\\"systemUUID\\\":\\\"f3eb2985-2316-42c4-9a73-507610f5aaf9\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:14Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:14 crc kubenswrapper[4853]: E0127 18:43:14.012948 4853 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 27 18:43:14 crc kubenswrapper[4853]: I0127 18:43:14.014646 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 27 18:43:14 crc kubenswrapper[4853]: I0127 18:43:14.014683 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:14 crc kubenswrapper[4853]: I0127 18:43:14.014697 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:14 crc kubenswrapper[4853]: I0127 18:43:14.014714 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:14 crc kubenswrapper[4853]: I0127 18:43:14.014726 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:14Z","lastTransitionTime":"2026-01-27T18:43:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:43:14 crc kubenswrapper[4853]: I0127 18:43:14.088646 4853 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-04 16:09:07.092225368 +0000 UTC Jan 27 18:43:14 crc kubenswrapper[4853]: I0127 18:43:14.117479 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:14 crc kubenswrapper[4853]: I0127 18:43:14.117521 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:14 crc kubenswrapper[4853]: I0127 18:43:14.117537 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:14 crc kubenswrapper[4853]: I0127 18:43:14.117552 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:14 crc kubenswrapper[4853]: I0127 18:43:14.117562 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:14Z","lastTransitionTime":"2026-01-27T18:43:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:14 crc kubenswrapper[4853]: I0127 18:43:14.222460 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:14 crc kubenswrapper[4853]: I0127 18:43:14.222511 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:14 crc kubenswrapper[4853]: I0127 18:43:14.222522 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:14 crc kubenswrapper[4853]: I0127 18:43:14.222541 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:14 crc kubenswrapper[4853]: I0127 18:43:14.222553 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:14Z","lastTransitionTime":"2026-01-27T18:43:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:43:14 crc kubenswrapper[4853]: I0127 18:43:14.325529 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:14 crc kubenswrapper[4853]: I0127 18:43:14.325615 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:14 crc kubenswrapper[4853]: I0127 18:43:14.325634 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:14 crc kubenswrapper[4853]: I0127 18:43:14.325660 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:14 crc kubenswrapper[4853]: I0127 18:43:14.325678 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:14Z","lastTransitionTime":"2026-01-27T18:43:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:14 crc kubenswrapper[4853]: I0127 18:43:14.368286 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-hdtbk_ebbc7598-422a-43ad-ae98-88e57ec80b9c/ovnkube-controller/0.log" Jan 27 18:43:14 crc kubenswrapper[4853]: I0127 18:43:14.371380 4853 generic.go:334] "Generic (PLEG): container finished" podID="ebbc7598-422a-43ad-ae98-88e57ec80b9c" containerID="474b23d45ca214a859faee68cfad6bf9e641b0e682b3e11f89e6b6994c75a544" exitCode=1 Jan 27 18:43:14 crc kubenswrapper[4853]: I0127 18:43:14.371432 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hdtbk" event={"ID":"ebbc7598-422a-43ad-ae98-88e57ec80b9c","Type":"ContainerDied","Data":"474b23d45ca214a859faee68cfad6bf9e641b0e682b3e11f89e6b6994c75a544"} Jan 27 18:43:14 crc kubenswrapper[4853]: I0127 18:43:14.372192 4853 scope.go:117] "RemoveContainer" containerID="474b23d45ca214a859faee68cfad6bf9e641b0e682b3e11f89e6b6994c75a544" Jan 27 18:43:14 crc kubenswrapper[4853]: I0127 18:43:14.387028 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:14Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:14 crc kubenswrapper[4853]: I0127 18:43:14.398660 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:14Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:14 crc kubenswrapper[4853]: I0127 18:43:14.413553 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d7d5a0e2cb909a09c6325d43fed80f9ac231f17c4904fda6da22031d974a50b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7fba84b5abd317487efa9758191c6d4e0e878809711ccecdd2f98c2c7659c890\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:14Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:14 crc kubenswrapper[4853]: I0127 18:43:14.428480 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:14 crc kubenswrapper[4853]: I0127 18:43:14.428510 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:14 crc kubenswrapper[4853]: I0127 18:43:14.428522 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:14 crc kubenswrapper[4853]: I0127 18:43:14.428538 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:14 crc kubenswrapper[4853]: I0127 18:43:14.428553 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:14Z","lastTransitionTime":"2026-01-27T18:43:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:14 crc kubenswrapper[4853]: I0127 18:43:14.428898 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-w4d5n" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dd2c07de-2ac9-4074-9fb0-519cfaf37f69\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9662d5a5a46a3043eeddbb21a4c7da0545beb87a0da7da3e96dfe3b3cf803430\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:43:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8jkvb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:43:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-w4d5n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:14Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:14 crc kubenswrapper[4853]: I0127 18:43:14.443325 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eccf4b23-863a-490c-ba35-1b03d360e200\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7cfb91267c8977bea938d9584cf8047ae6f6b4bff6a9d74c623f8903cd3416ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58d32f27b39964fe7195a0e25264b28a01c6c29753912edac25d07eef4f37cc7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a941071ee01b24389cae64dfc6cd851cde6b3521212
35ebf9e9bec77f8bcfb69\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae13abce33d48960f367ee4160e730c0f88cd877bd0d615cecac63d2a35b8cc5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://213668bb8a0478fe7fba2a64a45dd4aec0dfd6794a29977633b4e0fd1bcbe352\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T18:42:58Z\\\",\\\"message\\\":\\\"le observer\\\\nW0127 18:42:57.781245 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 18:42:57.782660 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 18:42:57.784271 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3951302993/tls.crt::/tmp/serving-cert-3951302993/tls.key\\\\\\\"\\\\nI0127 18:42:58.344426 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 18:42:58.346603 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 18:42:58.346626 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 18:42:58.346650 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 18:42:58.346655 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 18:42:58.351830 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 18:42:58.351854 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 18:42:58.351860 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 18:42:58.351866 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 18:42:58.351870 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 18:42:58.351874 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nI0127 18:42:58.351873 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0127 18:42:58.351879 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0127 18:42:58.354790 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:52Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:43:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0058055ea5c0242bdb042c45a00203d344383e845de280c81bc7695416563666\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:40Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://00d05692e328aacfc251e3b583e1da6d1f4f677e93d07cec2aaa7aea5c3038cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://00d05692e328aacfc251e3b583e1da6d1f4f677e93d07cec2aaa7aea5c3038cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:42:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:42:38Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:14Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:14 crc kubenswrapper[4853]: I0127 18:43:14.463059 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a5b9c54a-360e-431b-b35a-eea549616487\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e57571b0b75d3ac6eaa28ddbc474ddf47f632c158372a965dfe4c30f54d8a2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2dcf34028a2a1a0d41f57ebced90e711bfa6d0f199a2b12bc2bcd6d0642d10b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://14e9331353060208631cfca86d748339935f8581d50a234f2864eaffdd98ff39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7d4896cb23e4a91356e7fabb5dfcf05e46bf7e
8c34b6bd73d63a16422d015ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a02f84ba42baba137145a54bfa742b12b0f94aaf41801347eb8a803a6acd1b2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15ab34ce94a12a65d8b187ce438f27e37d4b45e0b9ed080c79d6e4633e777658\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://15ab34ce94a12a65d8b187ce438f27e37d4b45e0b9ed080c79d6e4633e777658\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:42:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d972c76164f33d83bd6d211da2b4ecec08c5d88dd4cd474c5d5a8122649ea1cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d972c76164f33d83bd6d211da2b4ecec08c5d88dd4cd474c5d5a8122649ea1cd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:42:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:40Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b9435046e30c8cda89d2ab6f312406bc2cc6db0c03ae6cdea1eb5acf8a804df1\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9435046e30c8cda89d2ab6f312406bc2cc6db0c03ae6cdea1eb5acf8a804df1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:42:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:42:38Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:14Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:14 crc kubenswrapper[4853]: I0127 18:43:14.473915 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-l59xt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f9e82bc6-1fab-4815-a64e-2ebbf8b72315\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d6f7b775ee615931d713c3fcf51828673cd8414a08c46c04eb38abd37c58d5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:43:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnhhx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\
\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:43:03Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-l59xt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:14Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:14 crc kubenswrapper[4853]: I0127 18:43:14.488417 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ght98" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ce95829c-f3fb-493c-bf9a-a3515fe6ddac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://918b69aa50072e0227c96f268fe68b5dfc90acf2b8b93b7fdee73695fc6cbab0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:43:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttzfq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10a6c7c9dd5d82c1b1f5f2d323a2f530c6b2e18362eedbbe74cc8c570732331f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10a6c7c9dd5d82c1b1f5f2d323a2f530c6b2e18362eedbbe74cc8c570732331f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:43:04Z\\\"}},\\\"volumeMounts\\\"
:[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttzfq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b1e8c337bda8a6b6c0848db7c95140385d545fc7dd729f002d21c248e2bbf881\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b1e8c337bda8a6b6c0848db7c95140385d545fc7dd729f002d21c248e2bbf881\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:43:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:43:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttzfq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6f0216b6ea3819db267200bb464f15f2f7e4de1c65df0871d6f7e70ecb54839\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d6f0216b6ea3819db267200bb464f15f2f7e4de1c65df0871d6f7e70ecb54839\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:43:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:43:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttzfq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://205b9e4e915af8e2031c48180f4d56d6075e94f8a8ff8b40fe6b46229fb1aa03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f
567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://205b9e4e915af8e2031c48180f4d56d6075e94f8a8ff8b40fe6b46229fb1aa03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:43:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:43:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttzfq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://12c64d2f527e9e9bd0975df15cbeca44c11d215f2591052b030b6afa87600576\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://12c64d2f527e9e9bd0975df15cbeca44c11d215f2591052b030b6afa87600576\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:43:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:43:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttzfq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c8f991246bd43ecaff0d08c9fb12a3347d52de48db0ea8f8a674bbdc9945c13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3c8f991246bd43ecaff0d08c9fb12a3347d52de48db0ea8f8a674bbdc9945c13\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:43:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:43:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttzfq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11
\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:43:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ght98\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:14Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:14 crc kubenswrapper[4853]: I0127 18:43:14.503976 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f980f1d965582a58bd18d993850b5bd5f21849c1d1c0275c81fdbb25d549e6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:14Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:14 crc kubenswrapper[4853]: I0127 18:43:14.523878 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5a0b2fe5dbcc81055c001720c8ab69b7bd1989fe60d9ef4063c96c53a3f4536\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:14Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:14 crc kubenswrapper[4853]: I0127 18:43:14.531015 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:14 crc kubenswrapper[4853]: I0127 18:43:14.531047 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:14 crc kubenswrapper[4853]: I0127 18:43:14.531056 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:14 crc kubenswrapper[4853]: I0127 18:43:14.531070 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:14 crc kubenswrapper[4853]: I0127 18:43:14.531082 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:14Z","lastTransitionTime":"2026-01-27T18:43:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Jan 27 18:43:14 crc kubenswrapper[4853]: I0127 18:43:14.538177 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-2hzcp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c088d143-dd9c-4c77-b9a3-3a0113306f41\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b1b85e99c8381dbf8e86fb211770c49ebd11497f15c73254d6482b45d507654\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:43:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2zqtv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:43:03Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-2hzcp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:14Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:14 crc kubenswrapper[4853]: I0127 18:43:14.552082 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-6gqj2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b8a89b1e-bef8-4cb7-930c-480d3125778c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d886a21d3eb5af71f0dc43a77a71a2d80c11b89013b8b77dd9e680bed9c09a54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:43:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xbhn6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://36ec3ff8ba2e6f89b4c9a8177dc986ce198013a4b5392fc600a191a9d854241a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:43:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xbhn6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:43:03Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-6gqj2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:14Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:14 crc kubenswrapper[4853]: I0127 18:43:14.574340 4853 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hdtbk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ebbc7598-422a-43ad-ae98-88e57ec80b9c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a5e0da6c76e9510cda57fa243b0a721d160745a63e88a9aa736807af73864d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:43:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cq4vs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8de5b4d8d6553f77b012954fddfcb337c9b25ba98d94ef27831b50f63672377\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:43:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cq4vs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7937ea08bd25bed35d9386a8c870c88ff3f58eeec1ba1a2c55bdfa260017f9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:43:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cq4vs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f8095ca05481aa2d17d10ae848c2d052452f3bfa83b6ac23a75d0f59d84a604\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:43:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cq4vs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://efa308f95f35833395528dbe46b9e3d8f25800c18126c75d3db793f9c7945d30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:43:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cq4vs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a23ced79c532f6fcb0f4efcf743b934f7640deb3a7b1b879032416ee2c9b8d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\
\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:43:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cq4vs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://474b23d45ca214a859faee68cfad6bf9e641b0e682b3e11f89e6b6994c75a544\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://474b23d45ca214a859faee68cfad6bf9e641b0e682b3e11f89e6b6994c75a544\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T18:43:13Z\\\",\\\"message\\\":\\\"pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI0127 18:43:13.342490 6202 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0127 18:43:13.342789 6202 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0127 18:43:13.342924 6202 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0127 18:43:13.343244 6202 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0127 18:43:13.343274 6202 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0127 18:43:13.343282 6202 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0127 18:43:13.343329 6202 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0127 18:43:13.343344 6202 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0127 18:43:13.343333 6202 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0127 18:43:13.343357 6202 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0127 18:43:13.343387 6202 factory.go:656] Stopping watch factory\\\\nI0127 18:43:13.343409 6202 ovnkube.go:599] Stopped ovnkube\\\\nI0127 
18:43:1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T18:43:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cq4vs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4da02162adae947a3ab62fcbeba04da031f5189c42947da27ec21df5a480b4b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:43:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cq4vs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e81785fa8e88ca85b42840c9f11efd5774320c7b13588e01695e428426ecda4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0
d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e81785fa8e88ca85b42840c9f11efd5774320c7b13588e01695e428426ecda4d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:43:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cq4vs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:43:04Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hdtbk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:14Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:14 crc kubenswrapper[4853]: I0127 18:43:14.586929 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c77668e-2627-46e1-b7f5-a99455378d85\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6f38d96da9d500a022fd5b7fbe4019623d08fb9733ba3b1a5f2f107f1901a28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ae32d32392bded0188e0b83c05a33e57dc10be293ddee84b8b12416d0e64fc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b
89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd606f868e97e5c0f90517a0de662da2512679e27b0ececbcaa02c9f7e79c4b6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b210c08a6d15dd6d40daabfcc4cb94985e2a3957ffa501b32d5331f3d20f8908\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:42:38Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:14Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:14 crc kubenswrapper[4853]: I0127 18:43:14.600096 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:14Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:14 crc kubenswrapper[4853]: I0127 18:43:14.633581 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:14 crc kubenswrapper[4853]: I0127 18:43:14.633622 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:14 crc kubenswrapper[4853]: I0127 18:43:14.633631 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:14 crc kubenswrapper[4853]: I0127 18:43:14.633646 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:14 crc kubenswrapper[4853]: I0127 18:43:14.633657 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:14Z","lastTransitionTime":"2026-01-27T18:43:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:14 crc kubenswrapper[4853]: I0127 18:43:14.735632 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:14 crc kubenswrapper[4853]: I0127 18:43:14.735664 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:14 crc kubenswrapper[4853]: I0127 18:43:14.735673 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:14 crc kubenswrapper[4853]: I0127 18:43:14.735686 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:14 crc kubenswrapper[4853]: I0127 18:43:14.735696 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:14Z","lastTransitionTime":"2026-01-27T18:43:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:43:14 crc kubenswrapper[4853]: I0127 18:43:14.837544 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:14 crc kubenswrapper[4853]: I0127 18:43:14.837586 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:14 crc kubenswrapper[4853]: I0127 18:43:14.837598 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:14 crc kubenswrapper[4853]: I0127 18:43:14.837613 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:14 crc kubenswrapper[4853]: I0127 18:43:14.837625 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:14Z","lastTransitionTime":"2026-01-27T18:43:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:43:14 crc kubenswrapper[4853]: I0127 18:43:14.939665 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:14 crc kubenswrapper[4853]: I0127 18:43:14.939703 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:14 crc kubenswrapper[4853]: I0127 18:43:14.939714 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:14 crc kubenswrapper[4853]: I0127 18:43:14.939730 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:14 crc kubenswrapper[4853]: I0127 18:43:14.939740 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:14Z","lastTransitionTime":"2026-01-27T18:43:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:15 crc kubenswrapper[4853]: I0127 18:43:15.041347 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:15 crc kubenswrapper[4853]: I0127 18:43:15.041395 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:15 crc kubenswrapper[4853]: I0127 18:43:15.041410 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:15 crc kubenswrapper[4853]: I0127 18:43:15.041428 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:15 crc kubenswrapper[4853]: I0127 18:43:15.041442 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:15Z","lastTransitionTime":"2026-01-27T18:43:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:43:15 crc kubenswrapper[4853]: I0127 18:43:15.088926 4853 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-18 06:49:47.49968843 +0000 UTC Jan 27 18:43:15 crc kubenswrapper[4853]: I0127 18:43:15.112042 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 18:43:15 crc kubenswrapper[4853]: I0127 18:43:15.112077 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 18:43:15 crc kubenswrapper[4853]: E0127 18:43:15.112198 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 18:43:15 crc kubenswrapper[4853]: I0127 18:43:15.112137 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 18:43:15 crc kubenswrapper[4853]: E0127 18:43:15.112365 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 18:43:15 crc kubenswrapper[4853]: E0127 18:43:15.112519 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 18:43:15 crc kubenswrapper[4853]: I0127 18:43:15.144037 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:15 crc kubenswrapper[4853]: I0127 18:43:15.144075 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:15 crc kubenswrapper[4853]: I0127 18:43:15.144086 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:15 crc kubenswrapper[4853]: I0127 18:43:15.144103 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:15 crc kubenswrapper[4853]: I0127 18:43:15.144136 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:15Z","lastTransitionTime":"2026-01-27T18:43:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:43:15 crc kubenswrapper[4853]: I0127 18:43:15.245805 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:15 crc kubenswrapper[4853]: I0127 18:43:15.245841 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:15 crc kubenswrapper[4853]: I0127 18:43:15.245852 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:15 crc kubenswrapper[4853]: I0127 18:43:15.245868 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:15 crc kubenswrapper[4853]: I0127 18:43:15.245880 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:15Z","lastTransitionTime":"2026-01-27T18:43:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:15 crc kubenswrapper[4853]: I0127 18:43:15.354086 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:15 crc kubenswrapper[4853]: I0127 18:43:15.354135 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:15 crc kubenswrapper[4853]: I0127 18:43:15.354143 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:15 crc kubenswrapper[4853]: I0127 18:43:15.354157 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:15 crc kubenswrapper[4853]: I0127 18:43:15.354167 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:15Z","lastTransitionTime":"2026-01-27T18:43:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:43:15 crc kubenswrapper[4853]: I0127 18:43:15.375733 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-hdtbk_ebbc7598-422a-43ad-ae98-88e57ec80b9c/ovnkube-controller/0.log" Jan 27 18:43:15 crc kubenswrapper[4853]: I0127 18:43:15.378039 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hdtbk" event={"ID":"ebbc7598-422a-43ad-ae98-88e57ec80b9c","Type":"ContainerStarted","Data":"67f2a4c435a7d4fb4616583e4e2c87238448540b0df9b0733e31e1e67b375982"} Jan 27 18:43:15 crc kubenswrapper[4853]: I0127 18:43:15.378168 4853 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 27 18:43:15 crc kubenswrapper[4853]: I0127 18:43:15.391890 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eccf4b23-863a-490c-ba35-1b03d360e200\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7cfb91267c8977bea938d9584cf8047ae6f6b4bff6a9d74c623f8903cd3416ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58d32f27b39964fe7195a0e25264b28a01c6c29753912edac25d07eef4f37cc7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a941071ee01b24389cae64dfc6cd851cde6b352121235ebf9e9bec77f8bcfb69\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae13abce33d48960f367ee4160e730c0f88cd877bd0d615cecac63d2a35b8cc5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://213668bb8a0478fe7fba2a64a45dd4aec0dfd6794a29977633b4e0fd1bcbe352\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T18:42:58Z\\\",\\\"message\\\":\\\"le observer\\\\nW0127 18:42:57.781245 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 18:42:57.782660 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 18:42:57.784271 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3951302993/tls.crt::/tmp/serving-cert-3951302993/tls.key\\\\\\\"\\\\nI0127 18:42:58.344426 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 18:42:58.346603 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 18:42:58.346626 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 18:42:58.346650 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 18:42:58.346655 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 18:42:58.351830 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 18:42:58.351854 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 18:42:58.351860 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 18:42:58.351866 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 18:42:58.351870 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 18:42:58.351874 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nI0127 18:42:58.351873 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0127 18:42:58.351879 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0127 18:42:58.354790 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:52Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:43:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0058055ea5c0242bdb042c45a00203d344383e845de280c81bc7695416563666\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:40Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://00d05692e328aacfc251e3b583e1da6d1f4f677e93d07cec2aaa7aea5c3038cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://00d05692e328aacfc251e3b583e1da6d1f4f677e93d07cec2aaa7aea5c3038cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:42:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:42:38Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:15Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:15 crc kubenswrapper[4853]: I0127 18:43:15.413006 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a5b9c54a-360e-431b-b35a-eea549616487\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e57571b0b75d3ac6eaa28ddbc474ddf47f632c158372a965dfe4c30f54d8a2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2dcf34028a2a1a0d41f57ebced90e711bfa6d0f199a2b12bc2bcd6d0642d10b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://14e9331353060208631cfca86d748339935f8581d50a234f2864eaffdd98ff39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7d4896cb23e4a91356e7fabb5dfcf05e46bf7e
8c34b6bd73d63a16422d015ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a02f84ba42baba137145a54bfa742b12b0f94aaf41801347eb8a803a6acd1b2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15ab34ce94a12a65d8b187ce438f27e37d4b45e0b9ed080c79d6e4633e777658\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://15ab34ce94a12a65d8b187ce438f27e37d4b45e0b9ed080c79d6e4633e777658\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:42:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d972c76164f33d83bd6d211da2b4ecec08c5d88dd4cd474c5d5a8122649ea1cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d972c76164f33d83bd6d211da2b4ecec08c5d88dd4cd474c5d5a8122649ea1cd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:42:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:40Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b9435046e30c8cda89d2ab6f312406bc2cc6db0c03ae6cdea1eb5acf8a804df1\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9435046e30c8cda89d2ab6f312406bc2cc6db0c03ae6cdea1eb5acf8a804df1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:42:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:42:38Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:15Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:15 crc kubenswrapper[4853]: I0127 18:43:15.422645 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-l59xt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f9e82bc6-1fab-4815-a64e-2ebbf8b72315\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d6f7b775ee615931d713c3fcf51828673cd8414a08c46c04eb38abd37c58d5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:43:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnhhx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\
\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:43:03Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-l59xt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:15Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:15 crc kubenswrapper[4853]: I0127 18:43:15.436443 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ght98" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ce95829c-f3fb-493c-bf9a-a3515fe6ddac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://918b69aa50072e0227c96f268fe68b5dfc90acf2b8b93b7fdee73695fc6cbab0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:43:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttzfq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10a6c7c9dd5d82c1b1f5f2d323a2f530c6b2e18362eedbbe74cc8c570732331f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10a6c7c9dd5d82c1b1f5f2d323a2f530c6b2e18362eedbbe74cc8c570732331f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:43:04Z\\\"}},\\\"volumeMounts\\\"
:[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttzfq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b1e8c337bda8a6b6c0848db7c95140385d545fc7dd729f002d21c248e2bbf881\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b1e8c337bda8a6b6c0848db7c95140385d545fc7dd729f002d21c248e2bbf881\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:43:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:43:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttzfq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6f0216b6ea3819db267200bb464f15f2f7e4de1c65df0871d6f7e70ecb54839\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d6f0216b6ea3819db267200bb464f15f2f7e4de1c65df0871d6f7e70ecb54839\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:43:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:43:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttzfq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://205b9e4e915af8e2031c48180f4d56d6075e94f8a8ff8b40fe6b46229fb1aa03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f
567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://205b9e4e915af8e2031c48180f4d56d6075e94f8a8ff8b40fe6b46229fb1aa03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:43:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:43:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttzfq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://12c64d2f527e9e9bd0975df15cbeca44c11d215f2591052b030b6afa87600576\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://12c64d2f527e9e9bd0975df15cbeca44c11d215f2591052b030b6afa87600576\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:43:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:43:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttzfq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c8f991246bd43ecaff0d08c9fb12a3347d52de48db0ea8f8a674bbdc9945c13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3c8f991246bd43ecaff0d08c9fb12a3347d52de48db0ea8f8a674bbdc9945c13\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:43:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:43:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttzfq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11
\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:43:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ght98\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:15Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:15 crc kubenswrapper[4853]: I0127 18:43:15.448638 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f980f1d965582a58bd18d993850b5bd5f21849c1d1c0275c81fdbb25d549e6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:15Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:15 crc kubenswrapper[4853]: I0127 18:43:15.455803 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:15 crc kubenswrapper[4853]: I0127 18:43:15.455837 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:15 crc kubenswrapper[4853]: I0127 18:43:15.455846 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:15 crc kubenswrapper[4853]: I0127 18:43:15.455859 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:15 crc kubenswrapper[4853]: I0127 18:43:15.455869 4853 setters.go:603] "Node became 
not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:15Z","lastTransitionTime":"2026-01-27T18:43:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:43:15 crc kubenswrapper[4853]: I0127 18:43:15.460143 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5a0b2fe5dbcc81055c001720c8ab69b7bd1989fe60d9ef4063c96c53a3f4536\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:15Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:15 crc kubenswrapper[4853]: I0127 18:43:15.468361 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-2hzcp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c088d143-dd9c-4c77-b9a3-3a0113306f41\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b1b85e99c8381dbf8e86fb211770c49ebd11497f15c73254d6482b45d507654\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:43:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2zqtv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:43:03Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-2hzcp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:15Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:15 crc kubenswrapper[4853]: I0127 18:43:15.478108 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-6gqj2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b8a89b1e-bef8-4cb7-930c-480d3125778c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d886a21d3eb5af71f0dc43a77a71a2d80c11b89013b8b77dd9e680bed9c09a54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:43:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xbhn6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://36ec3ff8ba2e6f89b4c9a8177dc986ce198013a4b5392fc600a191a9d854241a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:43:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xbhn6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:43:03Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-6gqj2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:15Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:15 crc kubenswrapper[4853]: I0127 18:43:15.493938 4853 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hdtbk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ebbc7598-422a-43ad-ae98-88e57ec80b9c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a5e0da6c76e9510cda57fa243b0a721d160745a63e88a9aa736807af73864d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:43:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cq4vs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8de5b4d8d6553f77b012954fddfcb337c9b25ba98d94ef27831b50f63672377\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:43:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cq4vs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7937ea08bd25bed35d9386a8c870c88ff3f58eeec1ba1a2c55bdfa260017f9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:43:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cq4vs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f8095ca05481aa2d17d10ae848c2d052452f3bfa83b6ac23a75d0f59d84a604\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:43:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cq4vs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://efa308f95f35833395528dbe46b9e3d8f25800c18126c75d3db793f9c7945d30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:43:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cq4vs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a23ced79c532f6fcb0f4efcf743b934f7640deb3a7b1b879032416ee2c9b8d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\
\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:43:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cq4vs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://67f2a4c435a7d4fb4616583e4e2c87238448540b0df9b0733e31e1e67b375982\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://474b23d45ca214a859faee68cfad6bf9e641b0e682b3e11f89e6b6994c75a544\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T18:43:13Z\\\",\\\"message\\\":\\\"pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI0127 18:43:13.342490 6202 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0127 18:43:13.342789 6202 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0127 18:43:13.342924 6202 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0127 18:43:13.343244 6202 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0127 18:43:13.343274 6202 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0127 18:43:13.343282 6202 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0127 18:43:13.343329 6202 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0127 18:43:13.343344 6202 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0127 18:43:13.343333 6202 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0127 18:43:13.343357 6202 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0127 18:43:13.343387 6202 factory.go:656] Stopping watch factory\\\\nI0127 18:43:13.343409 6202 ovnkube.go:599] Stopped ovnkube\\\\nI0127 
18:43:1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T18:43:10Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:43:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cq4vs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4da02162adae947a3ab62fcbeba04da031f5189c42947da27ec21df5a480b4b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:43:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cq4vs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\
\"containerID\\\":\\\"cri-o://e81785fa8e88ca85b42840c9f11efd5774320c7b13588e01695e428426ecda4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e81785fa8e88ca85b42840c9f11efd5774320c7b13588e01695e428426ecda4d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:43:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cq4vs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:43:04Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hdtbk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:15Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:15 crc kubenswrapper[4853]: I0127 18:43:15.504394 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c77668e-2627-46e1-b7f5-a99455378d85\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6f38d96da9d500a022fd5b7fbe4019623d08fb9733ba3b1a5f2f107f1901a28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ae32d32392bded0188e0b83c05a33e57dc10be293ddee84b8b12416d0e64fc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd606f868e97e5c0f90517a0de662da2512679e27b0ececbcaa02c9f7e79c4b6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b210c08a6d15dd6d40daabfcc4cb94985e2a3957ffa501b32d5331f3d20f8908\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:42:38Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:15Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:15 crc kubenswrapper[4853]: I0127 18:43:15.515998 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:15Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:15 crc kubenswrapper[4853]: I0127 18:43:15.529704 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:15Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:15 crc kubenswrapper[4853]: I0127 18:43:15.543296 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:15Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:15 crc kubenswrapper[4853]: I0127 18:43:15.556549 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d7d5a0e2cb909a09c6325d43fed80f9ac231f17c4904fda6da22031d974a50b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7fba84b5abd317487efa9758191c6d4e0e878809711ccecdd2f98c2c7659c890\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:15Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:15 crc kubenswrapper[4853]: I0127 18:43:15.557927 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:15 crc kubenswrapper[4853]: I0127 18:43:15.557968 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:15 crc kubenswrapper[4853]: I0127 18:43:15.557977 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:15 crc kubenswrapper[4853]: I0127 18:43:15.557992 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:15 crc kubenswrapper[4853]: I0127 18:43:15.558003 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:15Z","lastTransitionTime":"2026-01-27T18:43:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:15 crc kubenswrapper[4853]: I0127 18:43:15.571112 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-w4d5n" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dd2c07de-2ac9-4074-9fb0-519cfaf37f69\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9662d5a5a46a3043eeddbb21a4c7da0545beb87a0da7da3e96dfe3b3cf803430\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:43:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8jkvb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:43:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-w4d5n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:15Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:15 crc kubenswrapper[4853]: I0127 18:43:15.660342 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:15 crc kubenswrapper[4853]: I0127 18:43:15.660377 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:15 crc kubenswrapper[4853]: I0127 18:43:15.660388 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:15 crc kubenswrapper[4853]: I0127 18:43:15.660403 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:15 crc kubenswrapper[4853]: I0127 18:43:15.660416 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:15Z","lastTransitionTime":"2026-01-27T18:43:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:43:15 crc kubenswrapper[4853]: I0127 18:43:15.762657 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:15 crc kubenswrapper[4853]: I0127 18:43:15.762697 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:15 crc kubenswrapper[4853]: I0127 18:43:15.762705 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:15 crc kubenswrapper[4853]: I0127 18:43:15.762720 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:15 crc kubenswrapper[4853]: I0127 18:43:15.762731 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:15Z","lastTransitionTime":"2026-01-27T18:43:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:15 crc kubenswrapper[4853]: I0127 18:43:15.865060 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:15 crc kubenswrapper[4853]: I0127 18:43:15.865174 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:15 crc kubenswrapper[4853]: I0127 18:43:15.865193 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:15 crc kubenswrapper[4853]: I0127 18:43:15.865217 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:15 crc kubenswrapper[4853]: I0127 18:43:15.865235 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:15Z","lastTransitionTime":"2026-01-27T18:43:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:43:15 crc kubenswrapper[4853]: I0127 18:43:15.968469 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:15 crc kubenswrapper[4853]: I0127 18:43:15.968569 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:15 crc kubenswrapper[4853]: I0127 18:43:15.968587 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:15 crc kubenswrapper[4853]: I0127 18:43:15.968611 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:15 crc kubenswrapper[4853]: I0127 18:43:15.968630 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:15Z","lastTransitionTime":"2026-01-27T18:43:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:43:16 crc kubenswrapper[4853]: I0127 18:43:16.070966 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:16 crc kubenswrapper[4853]: I0127 18:43:16.071020 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:16 crc kubenswrapper[4853]: I0127 18:43:16.071037 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:16 crc kubenswrapper[4853]: I0127 18:43:16.071064 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:16 crc kubenswrapper[4853]: I0127 18:43:16.071083 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:16Z","lastTransitionTime":"2026-01-27T18:43:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:16 crc kubenswrapper[4853]: I0127 18:43:16.090044 4853 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-15 09:23:12.199188104 +0000 UTC Jan 27 18:43:16 crc kubenswrapper[4853]: I0127 18:43:16.173470 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:16 crc kubenswrapper[4853]: I0127 18:43:16.173509 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:16 crc kubenswrapper[4853]: I0127 18:43:16.173520 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:16 crc kubenswrapper[4853]: I0127 18:43:16.173537 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:16 crc kubenswrapper[4853]: I0127 18:43:16.173581 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:16Z","lastTransitionTime":"2026-01-27T18:43:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:43:16 crc kubenswrapper[4853]: I0127 18:43:16.276451 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:16 crc kubenswrapper[4853]: I0127 18:43:16.276526 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:16 crc kubenswrapper[4853]: I0127 18:43:16.276549 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:16 crc kubenswrapper[4853]: I0127 18:43:16.276573 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:16 crc kubenswrapper[4853]: I0127 18:43:16.276597 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:16Z","lastTransitionTime":"2026-01-27T18:43:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:16 crc kubenswrapper[4853]: I0127 18:43:16.380064 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:16 crc kubenswrapper[4853]: I0127 18:43:16.380103 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:16 crc kubenswrapper[4853]: I0127 18:43:16.380152 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:16 crc kubenswrapper[4853]: I0127 18:43:16.380177 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:16 crc kubenswrapper[4853]: I0127 18:43:16.380192 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:16Z","lastTransitionTime":"2026-01-27T18:43:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:43:16 crc kubenswrapper[4853]: I0127 18:43:16.384402 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-hdtbk_ebbc7598-422a-43ad-ae98-88e57ec80b9c/ovnkube-controller/1.log" Jan 27 18:43:16 crc kubenswrapper[4853]: I0127 18:43:16.385458 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-hdtbk_ebbc7598-422a-43ad-ae98-88e57ec80b9c/ovnkube-controller/0.log" Jan 27 18:43:16 crc kubenswrapper[4853]: I0127 18:43:16.392861 4853 generic.go:334] "Generic (PLEG): container finished" podID="ebbc7598-422a-43ad-ae98-88e57ec80b9c" containerID="67f2a4c435a7d4fb4616583e4e2c87238448540b0df9b0733e31e1e67b375982" exitCode=1 Jan 27 18:43:16 crc kubenswrapper[4853]: I0127 18:43:16.392920 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hdtbk" event={"ID":"ebbc7598-422a-43ad-ae98-88e57ec80b9c","Type":"ContainerDied","Data":"67f2a4c435a7d4fb4616583e4e2c87238448540b0df9b0733e31e1e67b375982"} Jan 27 18:43:16 crc kubenswrapper[4853]: I0127 18:43:16.392963 4853 scope.go:117] "RemoveContainer" containerID="474b23d45ca214a859faee68cfad6bf9e641b0e682b3e11f89e6b6994c75a544" Jan 27 18:43:16 crc kubenswrapper[4853]: I0127 18:43:16.397080 4853 scope.go:117] "RemoveContainer" containerID="67f2a4c435a7d4fb4616583e4e2c87238448540b0df9b0733e31e1e67b375982" Jan 27 18:43:16 crc kubenswrapper[4853]: E0127 18:43:16.397630 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-hdtbk_openshift-ovn-kubernetes(ebbc7598-422a-43ad-ae98-88e57ec80b9c)\"" pod="openshift-ovn-kubernetes/ovnkube-node-hdtbk" podUID="ebbc7598-422a-43ad-ae98-88e57ec80b9c" Jan 27 18:43:16 crc kubenswrapper[4853]: I0127 18:43:16.421756 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:16Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:16 crc kubenswrapper[4853]: I0127 18:43:16.440329 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7x9tl"] Jan 27 18:43:16 crc kubenswrapper[4853]: I0127 18:43:16.441205 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7x9tl" Jan 27 18:43:16 crc kubenswrapper[4853]: I0127 18:43:16.444981 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Jan 27 18:43:16 crc kubenswrapper[4853]: I0127 18:43:16.445028 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Jan 27 18:43:16 crc kubenswrapper[4853]: I0127 18:43:16.446409 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:16Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:16 crc kubenswrapper[4853]: I0127 18:43:16.459034 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d7d5a0e2cb909a09c6325d43fed80f9ac231f17c4904fda6da22031d974a50b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7fba84b5abd317487efa9758191c6d4e0e878809711ccecdd2f98c2c7659c890\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:16Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:16 crc kubenswrapper[4853]: I0127 18:43:16.465469 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/99e56db9-b380-4e21-9810-2dfa1517d5ac-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-7x9tl\" (UID: \"99e56db9-b380-4e21-9810-2dfa1517d5ac\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7x9tl" Jan 27 18:43:16 crc kubenswrapper[4853]: I0127 18:43:16.465556 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sbc8r\" (UniqueName: \"kubernetes.io/projected/99e56db9-b380-4e21-9810-2dfa1517d5ac-kube-api-access-sbc8r\") pod \"ovnkube-control-plane-749d76644c-7x9tl\" (UID: \"99e56db9-b380-4e21-9810-2dfa1517d5ac\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7x9tl" Jan 27 18:43:16 crc kubenswrapper[4853]: I0127 18:43:16.465591 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/99e56db9-b380-4e21-9810-2dfa1517d5ac-env-overrides\") pod \"ovnkube-control-plane-749d76644c-7x9tl\" (UID: \"99e56db9-b380-4e21-9810-2dfa1517d5ac\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7x9tl" Jan 27 18:43:16 crc kubenswrapper[4853]: I0127 18:43:16.465623 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/99e56db9-b380-4e21-9810-2dfa1517d5ac-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-7x9tl\" (UID: \"99e56db9-b380-4e21-9810-2dfa1517d5ac\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7x9tl" Jan 27 18:43:16 crc kubenswrapper[4853]: I0127 18:43:16.477005 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-w4d5n" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dd2c07de-2ac9-4074-9fb0-519cfaf37f69\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9662d5a5a46a3043eeddbb21a4c7da0545beb87a0da7da3e96dfe3b3cf803430\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:43:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8jkvb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:43:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-w4d5n\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:16Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:16 crc kubenswrapper[4853]: I0127 18:43:16.482858 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:16 crc kubenswrapper[4853]: I0127 18:43:16.482904 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:16 crc kubenswrapper[4853]: I0127 18:43:16.482919 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:16 crc kubenswrapper[4853]: I0127 18:43:16.482942 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:16 crc kubenswrapper[4853]: I0127 18:43:16.482959 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:16Z","lastTransitionTime":"2026-01-27T18:43:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:43:16 crc kubenswrapper[4853]: I0127 18:43:16.491941 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eccf4b23-863a-490c-ba35-1b03d360e200\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7cfb91267c8977bea938d9584cf8047ae6f6b4bff6a9d74c623f8903cd3416ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58d32f27b39964fe7195a0e25264b28a01c6c29753912edac25d07eef4f37cc7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a941071ee01b24389cae64dfc6cd851cde6b352121235ebf9e9bec77f8bcfb69\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae13abce33d48960f367ee4160e730c0f88cd877bd0d615cecac63d2a35b8cc5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://213668bb8a0478fe7fba2a64a45dd4aec0dfd6794a29977633b4e0fd1bcbe352\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T18:42:58Z\\\",\\\"message\\\":\\\"le observer\\\\nW0127 18:42:57.781245 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 18:42:57.782660 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 18:42:57.784271 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3951302993/tls.crt::/tmp/serving-cert-3951302993/tls.key\\\\\\\"\\\\nI0127 18:42:58.344426 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 18:42:58.346603 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 18:42:58.346626 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 18:42:58.346650 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 18:42:58.346655 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 18:42:58.351830 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 18:42:58.351854 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 18:42:58.351860 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 18:42:58.351866 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 18:42:58.351870 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 18:42:58.351874 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nI0127 18:42:58.351873 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0127 18:42:58.351879 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0127 18:42:58.354790 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:52Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:43:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0058055ea5c0242bdb042c45a00203d344383e845de280c81bc7695416563666\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:40Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://00d05692e328aacfc251e3b583e1da6d1f4f677e93d07cec2aaa7aea5c3038cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://00d05692e328aacfc251e3b583e1da6d1f4f677e93d07cec2aaa7aea5c3038cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:42:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:42:38Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:16Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:16 crc kubenswrapper[4853]: I0127 18:43:16.513945 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a5b9c54a-360e-431b-b35a-eea549616487\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e57571b0b75d3ac6eaa28ddbc474ddf47f632c158372a965dfe4c30f54d8a2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2dcf34028a2a1a0d41f57ebced90e711bfa6d0f199a2b12bc2bcd6d0642d10b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://14e9331353060208631cfca86d748339935f8581d50a234f2864eaffdd98ff39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7d4896cb23e4a91356e7fabb5dfcf05e46bf7e
8c34b6bd73d63a16422d015ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a02f84ba42baba137145a54bfa742b12b0f94aaf41801347eb8a803a6acd1b2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15ab34ce94a12a65d8b187ce438f27e37d4b45e0b9ed080c79d6e4633e777658\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://15ab34ce94a12a65d8b187ce438f27e37d4b45e0b9ed080c79d6e4633e777658\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:42:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d972c76164f33d83bd6d211da2b4ecec08c5d88dd4cd474c5d5a8122649ea1cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d972c76164f33d83bd6d211da2b4ecec08c5d88dd4cd474c5d5a8122649ea1cd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:42:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:40Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b9435046e30c8cda89d2ab6f312406bc2cc6db0c03ae6cdea1eb5acf8a804df1\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9435046e30c8cda89d2ab6f312406bc2cc6db0c03ae6cdea1eb5acf8a804df1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:42:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:42:38Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:16Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:16 crc kubenswrapper[4853]: I0127 18:43:16.523456 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-l59xt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f9e82bc6-1fab-4815-a64e-2ebbf8b72315\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d6f7b775ee615931d713c3fcf51828673cd8414a08c46c04eb38abd37c58d5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:43:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnhhx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\
\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:43:03Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-l59xt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:16Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:16 crc kubenswrapper[4853]: I0127 18:43:16.535537 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ght98" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ce95829c-f3fb-493c-bf9a-a3515fe6ddac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://918b69aa50072e0227c96f268fe68b5dfc90acf2b8b93b7fdee73695fc6cbab0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:43:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttzfq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10a6c7c9dd5d82c1b1f5f2d323a2f530c6b2e18362eedbbe74cc8c570732331f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10a6c7c9dd5d82c1b1f5f2d323a2f530c6b2e18362eedbbe74cc8c570732331f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:43:04Z\\\"}},\\\"volumeMounts\\\"
:[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttzfq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b1e8c337bda8a6b6c0848db7c95140385d545fc7dd729f002d21c248e2bbf881\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b1e8c337bda8a6b6c0848db7c95140385d545fc7dd729f002d21c248e2bbf881\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:43:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:43:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttzfq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6f0216b6ea3819db267200bb464f15f2f7e4de1c65df0871d6f7e70ecb54839\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d6f0216b6ea3819db267200bb464f15f2f7e4de1c65df0871d6f7e70ecb54839\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:43:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:43:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttzfq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://205b9e4e915af8e2031c48180f4d56d6075e94f8a8ff8b40fe6b46229fb1aa03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f
567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://205b9e4e915af8e2031c48180f4d56d6075e94f8a8ff8b40fe6b46229fb1aa03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:43:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:43:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttzfq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://12c64d2f527e9e9bd0975df15cbeca44c11d215f2591052b030b6afa87600576\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://12c64d2f527e9e9bd0975df15cbeca44c11d215f2591052b030b6afa87600576\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:43:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:43:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttzfq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c8f991246bd43ecaff0d08c9fb12a3347d52de48db0ea8f8a674bbdc9945c13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3c8f991246bd43ecaff0d08c9fb12a3347d52de48db0ea8f8a674bbdc9945c13\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:43:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:43:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttzfq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11
\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:43:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ght98\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:16Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:16 crc kubenswrapper[4853]: I0127 18:43:16.546682 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f980f1d965582a58bd18d993850b5bd5f21849c1d1c0275c81fdbb25d549e6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:16Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:16 crc kubenswrapper[4853]: I0127 18:43:16.558026 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5a0b2fe5dbcc81055c001720c8ab69b7bd1989fe60d9ef4063c96c53a3f4536\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:16Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:16 crc kubenswrapper[4853]: I0127 18:43:16.566564 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/99e56db9-b380-4e21-9810-2dfa1517d5ac-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-7x9tl\" (UID: \"99e56db9-b380-4e21-9810-2dfa1517d5ac\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7x9tl" Jan 27 18:43:16 crc kubenswrapper[4853]: I0127 18:43:16.566684 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-2hzcp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c088d143-dd9c-4c77-b9a3-3a0113306f41\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b1b85e99c8381dbf8e86fb211770c49ebd11497f15c73254d6482b45d507654\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:43:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2zqtv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:43:03Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-2hzcp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:16Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:16 crc kubenswrapper[4853]: I0127 18:43:16.566815 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/99e56db9-b380-4e21-9810-2dfa1517d5ac-env-overrides\") pod \"ovnkube-control-plane-749d76644c-7x9tl\" (UID: \"99e56db9-b380-4e21-9810-2dfa1517d5ac\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7x9tl" Jan 27 18:43:16 crc kubenswrapper[4853]: I0127 18:43:16.566907 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sbc8r\" (UniqueName: \"kubernetes.io/projected/99e56db9-b380-4e21-9810-2dfa1517d5ac-kube-api-access-sbc8r\") pod \"ovnkube-control-plane-749d76644c-7x9tl\" (UID: \"99e56db9-b380-4e21-9810-2dfa1517d5ac\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7x9tl" Jan 27 18:43:16 crc kubenswrapper[4853]: I0127 18:43:16.567002 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/99e56db9-b380-4e21-9810-2dfa1517d5ac-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-7x9tl\" (UID: \"99e56db9-b380-4e21-9810-2dfa1517d5ac\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7x9tl" Jan 27 18:43:16 crc kubenswrapper[4853]: I0127 18:43:16.567191 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/99e56db9-b380-4e21-9810-2dfa1517d5ac-env-overrides\") pod \"ovnkube-control-plane-749d76644c-7x9tl\" (UID: \"99e56db9-b380-4e21-9810-2dfa1517d5ac\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7x9tl" Jan 27 18:43:16 crc kubenswrapper[4853]: I0127 18:43:16.567611 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/99e56db9-b380-4e21-9810-2dfa1517d5ac-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-7x9tl\" (UID: \"99e56db9-b380-4e21-9810-2dfa1517d5ac\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7x9tl" Jan 27 18:43:16 crc kubenswrapper[4853]: I0127 18:43:16.574808 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/99e56db9-b380-4e21-9810-2dfa1517d5ac-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-7x9tl\" (UID: \"99e56db9-b380-4e21-9810-2dfa1517d5ac\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7x9tl" Jan 27 18:43:16 crc kubenswrapper[4853]: I0127 18:43:16.579175 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-6gqj2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b8a89b1e-bef8-4cb7-930c-480d3125778c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d886a21d3eb5af71f0dc43a77a71a2d80c11b89013b8b77dd9e680bed9c09a54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:43:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xbhn6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://36ec3ff8ba2e6f89b4c9a8177dc986ce198013a4b5392fc600a191a9d854241a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:43:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xbhn6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:43:03Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-6gqj2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:16Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:16 crc kubenswrapper[4853]: I0127 18:43:16.584502 4853 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-sbc8r\" (UniqueName: \"kubernetes.io/projected/99e56db9-b380-4e21-9810-2dfa1517d5ac-kube-api-access-sbc8r\") pod \"ovnkube-control-plane-749d76644c-7x9tl\" (UID: \"99e56db9-b380-4e21-9810-2dfa1517d5ac\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7x9tl" Jan 27 18:43:16 crc kubenswrapper[4853]: I0127 18:43:16.584693 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:16 crc kubenswrapper[4853]: I0127 18:43:16.584729 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:16 crc kubenswrapper[4853]: I0127 18:43:16.584743 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:16 crc kubenswrapper[4853]: I0127 18:43:16.584759 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:16 crc kubenswrapper[4853]: I0127 18:43:16.584772 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:16Z","lastTransitionTime":"2026-01-27T18:43:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:43:16 crc kubenswrapper[4853]: I0127 18:43:16.598719 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hdtbk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ebbc7598-422a-43ad-ae98-88e57ec80b9c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a5e0da6c76e9510cda57fa243b0a721d160745a63e88a9aa736807af73864d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:43:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cq4vs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8de5b4d8d6553f77b012954fddfcb337c9b25ba98d94ef27831b50f63672377\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:43:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cq4vs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7937ea08bd25bed35d9386a8c870c88ff3f58eeec1ba1a2c55bdfa260017f9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:43:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cq4vs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f8095ca05481aa2d17d10ae848c2d052452f3bfa83b6ac23a75d0f59d84a604\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:43:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cq4vs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://efa308f95f35833395528dbe46b9e3d8f25800c18126c75d3db793f9c7945d30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:43:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cq4vs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a23ced79c532f6fcb0f4efcf743b934f7640deb3a7b1b879032416ee2c9b8d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:43:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cq4vs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://67f2a4c435a7d4fb4616583e4e2c87238448540b
0df9b0733e31e1e67b375982\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://474b23d45ca214a859faee68cfad6bf9e641b0e682b3e11f89e6b6994c75a544\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T18:43:13Z\\\",\\\"message\\\":\\\"pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI0127 18:43:13.342490 6202 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0127 18:43:13.342789 6202 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0127 18:43:13.342924 6202 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0127 18:43:13.343244 6202 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0127 18:43:13.343274 6202 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0127 18:43:13.343282 6202 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0127 18:43:13.343329 6202 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0127 18:43:13.343344 6202 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0127 18:43:13.343333 6202 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0127 18:43:13.343357 6202 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0127 18:43:13.343387 6202 factory.go:656] Stopping watch factory\\\\nI0127 18:43:13.343409 6202 ovnkube.go:599] Stopped ovnkube\\\\nI0127 18:43:1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T18:43:10Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://67f2a4c435a7d4fb4616583e4e2c87238448540b0df9b0733e31e1e67b375982\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T18:43:15Z\\\",\\\"message\\\":\\\"ork/v1/apis/informers/externalversions/factory.go:140\\\\nI0127 18:43:15.183193 6344 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0127 18:43:15.183213 6344 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0127 18:43:15.183246 6344 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0127 18:43:15.183257 6344 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0127 18:43:15.183276 6344 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0127 18:43:15.183289 6344 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0127 18:43:15.183292 6344 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0127 18:43:15.183317 6344 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0127 18:43:15.183335 6344 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0127 18:43:15.183339 6344 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0127 18:43:15.183358 6344 factory.go:656] Stopping watch factory\\\\nI0127 18:43:15.183371 6344 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0127 18:43:15.183373 6344 ovnkube.go:599] Stopped ovnkube\\\\nI0127 
18:43:15.183372 6344 handler.go:208] Removed *v1.Node event handler 2\\\\nI0127 18:43:15.183379 6344 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI01\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T18:43:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cq4vs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4da02162adae947a3ab62fcbeba04da031f5189c42947da27ec21df5a480b4b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:43:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cq4vs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-
o://e81785fa8e88ca85b42840c9f11efd5774320c7b13588e01695e428426ecda4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e81785fa8e88ca85b42840c9f11efd5774320c7b13588e01695e428426ecda4d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:43:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cq4vs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:43:04Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hdtbk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:16Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:16 crc kubenswrapper[4853]: I0127 18:43:16.612168 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c77668e-2627-46e1-b7f5-a99455378d85\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6f38d96da9d500a022fd5b7fbe4019623d08fb9733ba3b1a5f2f107f1901a28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ae32d32392bded0188e0b83c05a33e57dc10be293ddee84b8b12416d0e64fc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd606f868e97e5c0f90517a0de662da2512679e27b0ececbcaa02c9f7e79c4b6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b210c08a6d15dd6d40daabfcc4cb94985e2a3957ffa501b32d5331f3d20f8908\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:42:38Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:16Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:16 crc kubenswrapper[4853]: I0127 18:43:16.624610 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:16Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:16 crc kubenswrapper[4853]: I0127 18:43:16.641319 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f980f1d965582a58bd18d993850b5bd5f21849c1d1c0275c81fdbb25d549e6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:16Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:16 crc kubenswrapper[4853]: I0127 18:43:16.653877 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5a0b2fe5dbcc81055c001720c8ab69b7bd1989fe60d9ef4063c96c53a3f4536\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:16Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:16 crc kubenswrapper[4853]: I0127 18:43:16.664956 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-2hzcp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c088d143-dd9c-4c77-b9a3-3a0113306f41\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b1b85e99c8381dbf8e86fb211770c49ebd11497f15c73254d6482b45d507654\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:43:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2zqtv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:43:03Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-2hzcp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:16Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:16 crc kubenswrapper[4853]: I0127 18:43:16.675910 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-6gqj2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b8a89b1e-bef8-4cb7-930c-480d3125778c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d886a21d3eb5af71f0dc43a77a71a2d80c11b89013b8b77dd9e680bed9c09a54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:43:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xbhn6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://36ec3ff8ba2e6f89b4c9a8177dc986ce198013a4b5392fc600a191a9d854241a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:43:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xbhn6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:43:03Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-6gqj2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:16Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:16 crc kubenswrapper[4853]: I0127 18:43:16.687411 4853 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:16 crc kubenswrapper[4853]: I0127 18:43:16.687467 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:16 crc kubenswrapper[4853]: I0127 18:43:16.687483 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:16 crc kubenswrapper[4853]: I0127 18:43:16.687505 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:16 crc kubenswrapper[4853]: I0127 18:43:16.687521 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:16Z","lastTransitionTime":"2026-01-27T18:43:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:43:16 crc kubenswrapper[4853]: I0127 18:43:16.699790 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hdtbk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ebbc7598-422a-43ad-ae98-88e57ec80b9c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a5e0da6c76e9510cda57fa243b0a721d160745a63e88a9aa736807af73864d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:43:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cq4vs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8de5b4d8d6553f77b012954fddfcb337c9b25ba98d94ef27831b50f63672377\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:43:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cq4vs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7937ea08bd25bed35d9386a8c870c88ff3f58eeec1ba1a2c55bdfa260017f9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:43:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cq4vs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f8095ca05481aa2d17d10ae848c2d052452f3bfa83b6ac23a75d0f59d84a604\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:43:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cq4vs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://efa308f95f35833395528dbe46b9e3d8f25800c18126c75d3db793f9c7945d30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:43:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cq4vs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a23ced79c532f6fcb0f4efcf743b934f7640deb3a7b1b879032416ee2c9b8d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:43:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cq4vs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://67f2a4c435a7d4fb4616583e4e2c87238448540b
0df9b0733e31e1e67b375982\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://474b23d45ca214a859faee68cfad6bf9e641b0e682b3e11f89e6b6994c75a544\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T18:43:13Z\\\",\\\"message\\\":\\\"pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI0127 18:43:13.342490 6202 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0127 18:43:13.342789 6202 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0127 18:43:13.342924 6202 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0127 18:43:13.343244 6202 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0127 18:43:13.343274 6202 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0127 18:43:13.343282 6202 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0127 18:43:13.343329 6202 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0127 18:43:13.343344 6202 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0127 18:43:13.343333 6202 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0127 18:43:13.343357 6202 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0127 18:43:13.343387 6202 factory.go:656] Stopping watch factory\\\\nI0127 18:43:13.343409 6202 ovnkube.go:599] Stopped ovnkube\\\\nI0127 18:43:1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T18:43:10Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://67f2a4c435a7d4fb4616583e4e2c87238448540b0df9b0733e31e1e67b375982\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T18:43:15Z\\\",\\\"message\\\":\\\"ork/v1/apis/informers/externalversions/factory.go:140\\\\nI0127 18:43:15.183193 6344 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0127 18:43:15.183213 6344 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0127 18:43:15.183246 6344 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0127 18:43:15.183257 6344 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0127 18:43:15.183276 6344 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0127 18:43:15.183289 6344 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0127 18:43:15.183292 6344 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0127 18:43:15.183317 6344 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0127 18:43:15.183335 6344 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0127 18:43:15.183339 6344 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0127 18:43:15.183358 6344 factory.go:656] Stopping watch factory\\\\nI0127 18:43:15.183371 6344 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0127 18:43:15.183373 6344 ovnkube.go:599] Stopped ovnkube\\\\nI0127 
18:43:15.183372 6344 handler.go:208] Removed *v1.Node event handler 2\\\\nI0127 18:43:15.183379 6344 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI01\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T18:43:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cq4vs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4da02162adae947a3ab62fcbeba04da031f5189c42947da27ec21df5a480b4b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:43:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cq4vs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-
o://e81785fa8e88ca85b42840c9f11efd5774320c7b13588e01695e428426ecda4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e81785fa8e88ca85b42840c9f11efd5774320c7b13588e01695e428426ecda4d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:43:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cq4vs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:43:04Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hdtbk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:16Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:16 crc kubenswrapper[4853]: I0127 18:43:16.715062 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c77668e-2627-46e1-b7f5-a99455378d85\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6f38d96da9d500a022fd5b7fbe4019623d08fb9733ba3b1a5f2f107f1901a28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ae32d32392bded0188e0b83c05a33e57dc10be293ddee84b8b12416d0e64fc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd606f868e97e5c0f90517a0de662da2512679e27b0ececbcaa02c9f7e79c4b6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b210c08a6d15dd6d40daabfcc4cb94985e2a3957ffa501b32d5331f3d20f8908\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:42:38Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:16Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:16 crc kubenswrapper[4853]: I0127 18:43:16.729518 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:16Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:16 crc kubenswrapper[4853]: I0127 18:43:16.740943 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:16Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:16 crc kubenswrapper[4853]: I0127 18:43:16.752934 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:16Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:16 crc kubenswrapper[4853]: I0127 18:43:16.758218 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7x9tl" Jan 27 18:43:16 crc kubenswrapper[4853]: I0127 18:43:16.767761 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d7d5a0e2cb909a09c6325d43fed80f9ac231f17c4904fda6da22031d974a50b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7fba84b5abd317487efa9758191c6d4e0e878809711ccecdd2f98c2c7659c890\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\"
:\\\"2026-01-27T18:42:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:16Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:16 crc kubenswrapper[4853]: W0127 18:43:16.770465 4853 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod99e56db9_b380_4e21_9810_2dfa1517d5ac.slice/crio-89c01581623c7e8e86e150350bacf99a7f403b1c875cfaa4096e53fc17b9c7d6 WatchSource:0}: Error finding container 89c01581623c7e8e86e150350bacf99a7f403b1c875cfaa4096e53fc17b9c7d6: Status 404 returned error can't find the container with id 89c01581623c7e8e86e150350bacf99a7f403b1c875cfaa4096e53fc17b9c7d6 Jan 27 18:43:16 crc kubenswrapper[4853]: I0127 18:43:16.786563 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-w4d5n" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dd2c07de-2ac9-4074-9fb0-519cfaf37f69\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9662d5a5a46a3043eeddbb21a4c7da0545beb87a0da7da3e96dfe3b3cf803430\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:43:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni
/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8jkvb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:43:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-w4d5n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:16Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:16 crc kubenswrapper[4853]: I0127 18:43:16.790522 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:16 crc kubenswrapper[4853]: I0127 18:43:16.790550 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:16 crc kubenswrapper[4853]: I0127 18:43:16.790560 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:16 crc kubenswrapper[4853]: I0127 18:43:16.790577 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:16 crc kubenswrapper[4853]: I0127 18:43:16.790590 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:16Z","lastTransitionTime":"2026-01-27T18:43:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:16 crc kubenswrapper[4853]: I0127 18:43:16.802274 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7x9tl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"99e56db9-b380-4e21-9810-2dfa1517d5ac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sbc8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sbc8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:43:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-7x9tl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: 
current time 2026-01-27T18:43:16Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:16 crc kubenswrapper[4853]: I0127 18:43:16.817525 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eccf4b23-863a-490c-ba35-1b03d360e200\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7cfb91267c8977bea938d9584cf8047ae6f6b4bff6a9d74c623f8903cd3416ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58d32f27b39964fe7195a0e25264b28a01c6c29753912edac25d07eef4f37cc7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a941071ee01b24389cae64dfc6cd851cde6b352121235ebf9e9bec77f8bcfb69\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\
"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae13abce33d48960f367ee4160e730c0f88cd877bd0d615cecac63d2a35b8cc5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://213668bb8a0478fe7fba2a64a45dd4aec0dfd6794a29977633b4e0fd1bcbe352\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T18:42:58Z\\\",\\\"message\\\":\\\"le observer\\\\nW0127 18:42:57.781245 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 18:42:57.782660 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 18:42:57.784271 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3951302993/tls.crt::/tmp/serving-cert-3951302993/tls.key\\\\\\\"\\\\nI0127 18:42:58.344426 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 18:42:58.346603 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 18:42:58.346626 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 18:42:58.346650 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 18:42:58.346655 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 18:42:58.351830 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 18:42:58.351854 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 18:42:58.351860 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 18:42:58.351866 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 18:42:58.351870 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 18:42:58.351874 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nI0127 18:42:58.351873 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0127 18:42:58.351879 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0127 18:42:58.354790 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:52Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:43:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0058055ea5c0242bdb042c45a00203d344383e845de280c81bc7695416563666\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:40Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://00d05692e328aacfc251e3b583e1da6d1f4f677e93d07cec2aaa7aea5c3038cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://00d05692e328aacfc251e3b583e1da6d1f4f677e93d07cec2aaa7aea5c3038cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:42:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:42:38Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:16Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:16 crc kubenswrapper[4853]: I0127 18:43:16.841020 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a5b9c54a-360e-431b-b35a-eea549616487\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e57571b0b75d3ac6eaa28ddbc474ddf47f632c158372a965dfe4c30f54d8a2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2dcf34028a2a1a0d41f57ebced90e711bfa6d0f199a2b12bc2bcd6d0642d10b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://14e9331353060208631cfca86d748339935f8581d50a234f2864eaffdd98ff39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7d4896cb23e4a91356e7fabb5dfcf05e46bf7e
8c34b6bd73d63a16422d015ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a02f84ba42baba137145a54bfa742b12b0f94aaf41801347eb8a803a6acd1b2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15ab34ce94a12a65d8b187ce438f27e37d4b45e0b9ed080c79d6e4633e777658\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://15ab34ce94a12a65d8b187ce438f27e37d4b45e0b9ed080c79d6e4633e777658\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:42:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d972c76164f33d83bd6d211da2b4ecec08c5d88dd4cd474c5d5a8122649ea1cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d972c76164f33d83bd6d211da2b4ecec08c5d88dd4cd474c5d5a8122649ea1cd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:42:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:40Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b9435046e30c8cda89d2ab6f312406bc2cc6db0c03ae6cdea1eb5acf8a804df1\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9435046e30c8cda89d2ab6f312406bc2cc6db0c03ae6cdea1eb5acf8a804df1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:42:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:42:38Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:16Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:16 crc kubenswrapper[4853]: I0127 18:43:16.855709 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-l59xt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f9e82bc6-1fab-4815-a64e-2ebbf8b72315\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d6f7b775ee615931d713c3fcf51828673cd8414a08c46c04eb38abd37c58d5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:43:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnhhx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\
\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:43:03Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-l59xt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:16Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:16 crc kubenswrapper[4853]: I0127 18:43:16.871902 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ght98" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ce95829c-f3fb-493c-bf9a-a3515fe6ddac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://918b69aa50072e0227c96f268fe68b5dfc90acf2b8b93b7fdee73695fc6cbab0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:43:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttzfq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10a6c7c9dd5d82c1b1f5f2d323a2f530c6b2e18362eedbbe74cc8c570732331f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10a6c7c9dd5d82c1b1f5f2d323a2f530c6b2e18362eedbbe74cc8c570732331f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:43:04Z\\\"}},\\\"volumeMounts\\\"
:[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttzfq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b1e8c337bda8a6b6c0848db7c95140385d545fc7dd729f002d21c248e2bbf881\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b1e8c337bda8a6b6c0848db7c95140385d545fc7dd729f002d21c248e2bbf881\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:43:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:43:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttzfq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6f0216b6ea3819db267200bb464f15f2f7e4de1c65df0871d6f7e70ecb54839\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d6f0216b6ea3819db267200bb464f15f2f7e4de1c65df0871d6f7e70ecb54839\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:43:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:43:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttzfq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://205b9e4e915af8e2031c48180f4d56d6075e94f8a8ff8b40fe6b46229fb1aa03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f
567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://205b9e4e915af8e2031c48180f4d56d6075e94f8a8ff8b40fe6b46229fb1aa03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:43:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:43:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttzfq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://12c64d2f527e9e9bd0975df15cbeca44c11d215f2591052b030b6afa87600576\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://12c64d2f527e9e9bd0975df15cbeca44c11d215f2591052b030b6afa87600576\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:43:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:43:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttzfq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c8f991246bd43ecaff0d08c9fb12a3347d52de48db0ea8f8a674bbdc9945c13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3c8f991246bd43ecaff0d08c9fb12a3347d52de48db0ea8f8a674bbdc9945c13\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:43:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:43:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttzfq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11
\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:43:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ght98\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:16Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:16 crc kubenswrapper[4853]: I0127 18:43:16.893913 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:16 crc kubenswrapper[4853]: I0127 18:43:16.893952 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:16 crc kubenswrapper[4853]: I0127 18:43:16.893962 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:16 crc kubenswrapper[4853]: I0127 18:43:16.893977 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:16 crc kubenswrapper[4853]: I0127 18:43:16.893987 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:16Z","lastTransitionTime":"2026-01-27T18:43:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:43:16 crc kubenswrapper[4853]: I0127 18:43:16.997732 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:16 crc kubenswrapper[4853]: I0127 18:43:16.998021 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:16 crc kubenswrapper[4853]: I0127 18:43:16.998032 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:16 crc kubenswrapper[4853]: I0127 18:43:16.998047 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:16 crc kubenswrapper[4853]: I0127 18:43:16.998059 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:16Z","lastTransitionTime":"2026-01-27T18:43:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:17 crc kubenswrapper[4853]: I0127 18:43:17.090982 4853 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-12 12:22:53.444038863 +0000 UTC Jan 27 18:43:17 crc kubenswrapper[4853]: I0127 18:43:17.100554 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:17 crc kubenswrapper[4853]: I0127 18:43:17.100581 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:17 crc kubenswrapper[4853]: I0127 18:43:17.100589 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:17 crc kubenswrapper[4853]: I0127 18:43:17.100602 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:17 crc kubenswrapper[4853]: I0127 18:43:17.100612 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:17Z","lastTransitionTime":"2026-01-27T18:43:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:43:17 crc kubenswrapper[4853]: I0127 18:43:17.112258 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 18:43:17 crc kubenswrapper[4853]: E0127 18:43:17.112384 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 18:43:17 crc kubenswrapper[4853]: I0127 18:43:17.112501 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 18:43:17 crc kubenswrapper[4853]: I0127 18:43:17.112551 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 18:43:17 crc kubenswrapper[4853]: E0127 18:43:17.112565 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 18:43:17 crc kubenswrapper[4853]: E0127 18:43:17.112671 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 18:43:17 crc kubenswrapper[4853]: I0127 18:43:17.203441 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:17 crc kubenswrapper[4853]: I0127 18:43:17.203475 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:17 crc kubenswrapper[4853]: I0127 18:43:17.203485 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:17 crc kubenswrapper[4853]: I0127 18:43:17.203499 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:17 crc kubenswrapper[4853]: I0127 18:43:17.203510 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:17Z","lastTransitionTime":"2026-01-27T18:43:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:43:17 crc kubenswrapper[4853]: I0127 18:43:17.305438 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:17 crc kubenswrapper[4853]: I0127 18:43:17.305479 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:17 crc kubenswrapper[4853]: I0127 18:43:17.305487 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:17 crc kubenswrapper[4853]: I0127 18:43:17.305500 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:17 crc kubenswrapper[4853]: I0127 18:43:17.305511 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:17Z","lastTransitionTime":"2026-01-27T18:43:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:17 crc kubenswrapper[4853]: I0127 18:43:17.397449 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-hdtbk_ebbc7598-422a-43ad-ae98-88e57ec80b9c/ovnkube-controller/1.log" Jan 27 18:43:17 crc kubenswrapper[4853]: I0127 18:43:17.400469 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7x9tl" event={"ID":"99e56db9-b380-4e21-9810-2dfa1517d5ac","Type":"ContainerStarted","Data":"ec4d041ed140516bb311297a6618188794b4950b0199a05f3a028215b75b2dfd"} Jan 27 18:43:17 crc kubenswrapper[4853]: I0127 18:43:17.400505 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7x9tl" event={"ID":"99e56db9-b380-4e21-9810-2dfa1517d5ac","Type":"ContainerStarted","Data":"94b4164fce297bdb91f8d062c22463931e45e9194e17e4102f568e6f04c08680"} Jan 27 18:43:17 crc kubenswrapper[4853]: I0127 18:43:17.400515 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7x9tl" event={"ID":"99e56db9-b380-4e21-9810-2dfa1517d5ac","Type":"ContainerStarted","Data":"89c01581623c7e8e86e150350bacf99a7f403b1c875cfaa4096e53fc17b9c7d6"} Jan 27 18:43:17 crc kubenswrapper[4853]: I0127 18:43:17.408523 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:17 crc kubenswrapper[4853]: I0127 18:43:17.408550 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:17 crc kubenswrapper[4853]: I0127 18:43:17.408579 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:17 crc kubenswrapper[4853]: I0127 18:43:17.408595 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:17 crc kubenswrapper[4853]: I0127 18:43:17.408605 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:17Z","lastTransitionTime":"2026-01-27T18:43:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:17 crc kubenswrapper[4853]: I0127 18:43:17.416387 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ght98" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ce95829c-f3fb-493c-bf9a-a3515fe6ddac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://918b69aa50072e0227c96f268fe68b5dfc90acf2b8b93b7fdee73695fc6cbab0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:43:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttzfq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10a6c7c9dd5d82c1b1f5f2d323a2f530c6b2e18362eedbbe74cc8c570732331f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10a6c7c9dd5d82c1b1f5f2d323a2f530c6b2e18362eedbbe74cc8c570732331f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:43:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttzfq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b1e8c337bda8a6b6c0848db7c95140385d545fc7dd729f002d21c248e2bbf881\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b1e8c337bda8a6b6c0848db7c95140385d545fc7dd729f002d21c248e2bbf881\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:43:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:43:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttzfq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6f0216b6ea3819db267200bb464f15f2f7e4de1c65df0871d6f7e70ecb54839\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d6f0216b6ea3819db267200bb464f15f2f7e4de1c65df0871d6f7e70ecb54839\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:43:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:43:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttzfq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://205b9e4e915af8e2031c48180f4d56d6075e94f8a8ff8b40fe6b46229fb1aa03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://205b9e4e915af8e2031c48180f4d56d6075e94f8a8ff8b40fe6b46229fb1aa03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:43:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:43:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttzfq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://12c64d2f527e9e9bd0975df15cbeca44c11d215f2591052b030b6afa87600576\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://12c64d2f527e9e9bd0975df15cbeca44c11d215f2591052b030b6afa87600576\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:43:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:43:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttzfq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c8f991246bd43ecaff0d08c9fb12a3347d52de48db0ea8f8a674bbdc9945c13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3c8f991246bd43ecaff0d08c9fb12a3347d52de48db0ea8f8a674bbdc9945c13\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:43:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:43:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttzfq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:43:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ght98\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:17Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:17 crc kubenswrapper[4853]: I0127 18:43:17.429660 4853 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eccf4b23-863a-490c-ba35-1b03d360e200\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7cfb91267c8977bea938d9584cf8047ae6f6b4bff6a9d74c623f8903cd3416ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58d32f27b39964fe7195a0e25264b28a01c6c29753912edac25d07eef4f37cc7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a941071ee01b24389cae64dfc6cd851cde6b352121235ebf9e9bec77f8bcfb69\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc
/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae13abce33d48960f367ee4160e730c0f88cd877bd0d615cecac63d2a35b8cc5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://213668bb8a0478fe7fba2a64a45dd4aec0dfd6794a29977633b4e0fd1bcbe352\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T18:42:58Z\\\",\\\"message\\\":\\\"le observer\\\\nW0127 18:42:57.781245 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 18:42:57.782660 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 18:42:57.784271 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3951302993/tls.crt::/tmp/serving-cert-3951302993/tls.key\\\\\\\"\\\\nI0127 18:42:58.344426 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 18:42:58.346603 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 18:42:58.346626 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 18:42:58.346650 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 18:42:58.346655 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 18:42:58.351830 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 18:42:58.351854 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 18:42:58.351860 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 18:42:58.351866 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 18:42:58.351870 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 18:42:58.351874 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nI0127 18:42:58.351873 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0127 18:42:58.351879 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0127 18:42:58.354790 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:52Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:43:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0058055ea5c0242bdb042c45a00203d344383e845de280c81bc7695416563666\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:40Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://00d05692e328aacfc251e3b583e1da6d1f4f677e93d07cec2aaa7aea5c3038cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://00d05692e328aacfc251e3b583e1da6d1f4f677e93d07cec2aaa7aea5c3038cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:42:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:42:38Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:17Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:17 crc kubenswrapper[4853]: I0127 18:43:17.448385 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a5b9c54a-360e-431b-b35a-eea549616487\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e57571b0b75d3ac6eaa28ddbc474ddf47f632c158372a965dfe4c30f54d8a2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2dcf34028a2a1a0d41f57ebced90e711bfa6d0f199a2b12bc2bcd6d0642d10b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://14e9331353060208631cfca86d748339935f8581d50a234f2864eaffdd98ff39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7d4896cb23e4a91356e7fabb5dfcf05e46bf7e
8c34b6bd73d63a16422d015ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a02f84ba42baba137145a54bfa742b12b0f94aaf41801347eb8a803a6acd1b2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15ab34ce94a12a65d8b187ce438f27e37d4b45e0b9ed080c79d6e4633e777658\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://15ab34ce94a12a65d8b187ce438f27e37d4b45e0b9ed080c79d6e4633e777658\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:42:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d972c76164f33d83bd6d211da2b4ecec08c5d88dd4cd474c5d5a8122649ea1cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d972c76164f33d83bd6d211da2b4ecec08c5d88dd4cd474c5d5a8122649ea1cd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:42:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:40Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b9435046e30c8cda89d2ab6f312406bc2cc6db0c03ae6cdea1eb5acf8a804df1\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9435046e30c8cda89d2ab6f312406bc2cc6db0c03ae6cdea1eb5acf8a804df1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:42:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:42:38Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:17Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:17 crc kubenswrapper[4853]: I0127 18:43:17.457596 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-l59xt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f9e82bc6-1fab-4815-a64e-2ebbf8b72315\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d6f7b775ee615931d713c3fcf51828673cd8414a08c46c04eb38abd37c58d5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:43:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnhhx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\
\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:43:03Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-l59xt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:17Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:17 crc kubenswrapper[4853]: I0127 18:43:17.474402 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hdtbk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ebbc7598-422a-43ad-ae98-88e57ec80b9c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a5e0da6c76e9510cda57fa243b0a721d160745a63e88a9aa736807af73864d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:43:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cq4vs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8de5b4d8d6553f77b012954fddfcb337c9b25ba98d94ef27831b50f63672377\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:43:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cq4vs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7937ea08bd25bed35d9386a8c870c88ff3f58eeec1ba1a2c55bdfa260017f9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:43:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cq4vs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f8095ca05481aa2d17d10ae848c2d052452f3bfa83b6ac23a75d0f59d84a604\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:43:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cq4vs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://efa308f95f35833395528dbe46b9e3d8f25800c18126c75d3db793f9c7945d30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:43:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cq4vs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a23ced79c532f6fcb0f4efcf743b934f7640deb3a7b1b879032416ee2c9b8d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:43:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cq4vs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://67f2a4c435a7d4fb4616583e4e2c87238448540b
0df9b0733e31e1e67b375982\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://474b23d45ca214a859faee68cfad6bf9e641b0e682b3e11f89e6b6994c75a544\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T18:43:13Z\\\",\\\"message\\\":\\\"pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI0127 18:43:13.342490 6202 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0127 18:43:13.342789 6202 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0127 18:43:13.342924 6202 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0127 18:43:13.343244 6202 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0127 18:43:13.343274 6202 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0127 18:43:13.343282 6202 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0127 18:43:13.343329 6202 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0127 18:43:13.343344 6202 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0127 18:43:13.343333 6202 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0127 18:43:13.343357 6202 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0127 18:43:13.343387 6202 factory.go:656] Stopping watch factory\\\\nI0127 18:43:13.343409 6202 ovnkube.go:599] Stopped ovnkube\\\\nI0127 18:43:1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T18:43:10Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://67f2a4c435a7d4fb4616583e4e2c87238448540b0df9b0733e31e1e67b375982\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T18:43:15Z\\\",\\\"message\\\":\\\"ork/v1/apis/informers/externalversions/factory.go:140\\\\nI0127 18:43:15.183193 6344 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0127 18:43:15.183213 6344 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0127 18:43:15.183246 6344 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0127 18:43:15.183257 6344 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0127 18:43:15.183276 6344 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0127 18:43:15.183289 6344 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0127 18:43:15.183292 6344 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0127 18:43:15.183317 6344 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0127 18:43:15.183335 6344 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0127 18:43:15.183339 6344 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0127 18:43:15.183358 6344 factory.go:656] Stopping watch factory\\\\nI0127 18:43:15.183371 6344 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0127 18:43:15.183373 6344 ovnkube.go:599] Stopped ovnkube\\\\nI0127 
18:43:15.183372 6344 handler.go:208] Removed *v1.Node event handler 2\\\\nI0127 18:43:15.183379 6344 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI01\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T18:43:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cq4vs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4da02162adae947a3ab62fcbeba04da031f5189c42947da27ec21df5a480b4b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:43:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cq4vs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-
o://e81785fa8e88ca85b42840c9f11efd5774320c7b13588e01695e428426ecda4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e81785fa8e88ca85b42840c9f11efd5774320c7b13588e01695e428426ecda4d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:43:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cq4vs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:43:04Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hdtbk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:17Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:17 crc kubenswrapper[4853]: I0127 18:43:17.490005 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f980f1d965582a58bd18d993850b5bd5f21849c1d1c0275c81fdbb25d549e6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:17Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:17 crc kubenswrapper[4853]: I0127 18:43:17.500842 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5a0b2fe5dbcc81055c001720c8ab69b7bd1989fe60d9ef4063c96c53a3f4536\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:17Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:17 crc kubenswrapper[4853]: I0127 18:43:17.511473 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:17 crc kubenswrapper[4853]: I0127 18:43:17.511507 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:17 crc kubenswrapper[4853]: I0127 18:43:17.511518 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:17 crc kubenswrapper[4853]: I0127 18:43:17.511531 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:17 crc kubenswrapper[4853]: I0127 18:43:17.511540 4853 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:17Z","lastTransitionTime":"2026-01-27T18:43:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:43:17 crc kubenswrapper[4853]: I0127 18:43:17.512135 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-2hzcp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c088d143-dd9c-4c77-b9a3-3a0113306f41\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b1b85e99c8381dbf8e86fb211770c49ebd11497f15c73254d6482b45d507654\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:43:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2zqtv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:43:03Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-2hzcp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:17Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:17 crc kubenswrapper[4853]: I0127 18:43:17.521981 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-wdzg4"] Jan 27 18:43:17 crc kubenswrapper[4853]: I0127 18:43:17.522585 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-wdzg4" Jan 27 18:43:17 crc kubenswrapper[4853]: E0127 18:43:17.522650 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-wdzg4" podUID="29407244-fbfe-4d37-a33e-7d59df1c22fd" Jan 27 18:43:17 crc kubenswrapper[4853]: I0127 18:43:17.526809 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-6gqj2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b8a89b1e-bef8-4cb7-930c-480d3125778c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d886a21d3eb5af71f0dc43a77a71a2d80c11b89013b8b77dd9e680bed9c09a54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:43:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xbhn6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://36ec3ff8ba2e6f89b4c9a8177dc986ce198013a4b5392fc600a191a9d854241a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:43:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xbhn6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"h
ostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:43:03Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-6gqj2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:17Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:17 crc kubenswrapper[4853]: I0127 18:43:17.537677 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c77668e-2627-46e1-b7f5-a99455378d85\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6f38d96da9d500a022fd5b7fbe4019623d08fb9733ba3b1a5f2f107f1901a28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ae32d32392bded0188e0b83c05a33e57dc10be293ddee84b8b12416d0e64fc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd606f868e97e5c0f90517a0de662da2512679e27b0ececbcaa02c9f7e79c4b6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-
manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b210c08a6d15dd6d40daabfcc4cb94985e2a3957ffa501b32d5331f3d20f8908\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:42:38Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:17Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:17 crc kubenswrapper[4853]: I0127 18:43:17.553329 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:17Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:17 crc kubenswrapper[4853]: I0127 18:43:17.563181 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7x9tl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"99e56db9-b380-4e21-9810-2dfa1517d5ac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://94b4164fce297bdb91f8d062c22463931e45e9194e17e4102f568e6f04c08680\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:43:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sbc8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly
\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec4d041ed140516bb311297a6618188794b4950b0199a05f3a028215b75b2dfd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:43:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sbc8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:43:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-7x9tl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:17Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:17 crc kubenswrapper[4853]: I0127 18:43:17.573177 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:17Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:17 crc kubenswrapper[4853]: I0127 18:43:17.575822 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/29407244-fbfe-4d37-a33e-7d59df1c22fd-metrics-certs\") pod \"network-metrics-daemon-wdzg4\" (UID: \"29407244-fbfe-4d37-a33e-7d59df1c22fd\") " pod="openshift-multus/network-metrics-daemon-wdzg4" Jan 27 18:43:17 crc kubenswrapper[4853]: I0127 18:43:17.575908 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hgqf7\" (UniqueName: \"kubernetes.io/projected/29407244-fbfe-4d37-a33e-7d59df1c22fd-kube-api-access-hgqf7\") pod \"network-metrics-daemon-wdzg4\" (UID: \"29407244-fbfe-4d37-a33e-7d59df1c22fd\") " pod="openshift-multus/network-metrics-daemon-wdzg4" Jan 27 18:43:17 crc kubenswrapper[4853]: I0127 18:43:17.585022 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:17Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:17 crc kubenswrapper[4853]: I0127 18:43:17.598534 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d7d5a0e2cb909a09c6325d43fed80f9ac231f17c4904fda6da22031d974a50b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7fba84b5abd317487efa9758191c6d4e0e878809711ccecdd2f98c2c7659c890\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:17Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:17 crc kubenswrapper[4853]: I0127 18:43:17.611908 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-w4d5n" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dd2c07de-2ac9-4074-9fb0-519cfaf37f69\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9662d5a5a46a3043eeddbb21a4c7da0545beb87a0da7da3e96dfe3b3cf803430\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:43:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc
/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8jkvb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:43:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-w4d5n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:17Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:17 crc kubenswrapper[4853]: I0127 18:43:17.613414 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:17 crc kubenswrapper[4853]: I0127 18:43:17.613448 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:17 crc kubenswrapper[4853]: I0127 18:43:17.613461 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:17 crc kubenswrapper[4853]: I0127 18:43:17.613476 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:17 crc kubenswrapper[4853]: I0127 18:43:17.613488 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:17Z","lastTransitionTime":"2026-01-27T18:43:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:17 crc kubenswrapper[4853]: I0127 18:43:17.633585 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eccf4b23-863a-490c-ba35-1b03d360e200\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7cfb91267c8977bea938d9584cf8047ae6f6b4bff6a9d74c623f8903cd3416ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58d32f27b39964fe7195a0e25264b28a01c6c29753912edac25d07eef4f37cc7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a941071ee01b24389cae64dfc6cd851cde6b352121235ebf9e9bec77f8bcfb69\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae13abce33d48960f367ee4160e730c0f88cd877bd0d615cecac63d2a35b8cc5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://213668bb8a0478fe7fba2a64a45dd4aec0dfd6794a29977633b4e0fd1bcbe352\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T18:42:58Z\\\",\\\"message\\\":\\\"le observer\\\\nW0127 18:42:57.781245 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 18:42:57.782660 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 18:42:57.784271 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3951302993/tls.crt::/tmp/serving-cert-3951302993/tls.key\\\\\\\"\\\\nI0127 18:42:58.344426 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 18:42:58.346603 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 18:42:58.346626 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 18:42:58.346650 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 18:42:58.346655 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 18:42:58.351830 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 18:42:58.351854 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 18:42:58.351860 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 18:42:58.351866 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 18:42:58.351870 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 18:42:58.351874 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nI0127 18:42:58.351873 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0127 18:42:58.351879 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0127 18:42:58.354790 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:52Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:43:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0058055ea5c0242bdb042c45a00203d344383e845de280c81bc7695416563666\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:40Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://00d05692e328aacfc251e3b583e1da6d1f4f677e93d07cec2aaa7aea5c3038cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://00d05692e328aacfc251e3b583e1da6d1f4f677e93d07cec2aaa7aea5c3038cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:42:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:42:38Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:17Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:17 crc kubenswrapper[4853]: I0127 18:43:17.656025 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a5b9c54a-360e-431b-b35a-eea549616487\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e57571b0b75d3ac6eaa28ddbc474ddf47f632c158372a965dfe4c30f54d8a2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2dcf34028a2a1a0d41f57ebced90e711bfa6d0f199a2b12bc2bcd6d0642d10b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://14e9331353060208631cfca86d748339935f8581d50a234f2864eaffdd98ff39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7d4896cb23e4a91356e7fabb5dfcf05e46bf7e
8c34b6bd73d63a16422d015ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a02f84ba42baba137145a54bfa742b12b0f94aaf41801347eb8a803a6acd1b2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15ab34ce94a12a65d8b187ce438f27e37d4b45e0b9ed080c79d6e4633e777658\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://15ab34ce94a12a65d8b187ce438f27e37d4b45e0b9ed080c79d6e4633e777658\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:42:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d972c76164f33d83bd6d211da2b4ecec08c5d88dd4cd474c5d5a8122649ea1cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d972c76164f33d83bd6d211da2b4ecec08c5d88dd4cd474c5d5a8122649ea1cd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:42:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:40Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b9435046e30c8cda89d2ab6f312406bc2cc6db0c03ae6cdea1eb5acf8a804df1\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9435046e30c8cda89d2ab6f312406bc2cc6db0c03ae6cdea1eb5acf8a804df1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:42:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:42:38Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:17Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:17 crc kubenswrapper[4853]: I0127 18:43:17.665734 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-l59xt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f9e82bc6-1fab-4815-a64e-2ebbf8b72315\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d6f7b775ee615931d713c3fcf51828673cd8414a08c46c04eb38abd37c58d5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:43:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnhhx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\
\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:43:03Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-l59xt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:17Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:17 crc kubenswrapper[4853]: I0127 18:43:17.676914 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/29407244-fbfe-4d37-a33e-7d59df1c22fd-metrics-certs\") pod \"network-metrics-daemon-wdzg4\" (UID: \"29407244-fbfe-4d37-a33e-7d59df1c22fd\") " pod="openshift-multus/network-metrics-daemon-wdzg4" Jan 27 18:43:17 crc kubenswrapper[4853]: I0127 18:43:17.676985 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hgqf7\" (UniqueName: \"kubernetes.io/projected/29407244-fbfe-4d37-a33e-7d59df1c22fd-kube-api-access-hgqf7\") pod \"network-metrics-daemon-wdzg4\" (UID: \"29407244-fbfe-4d37-a33e-7d59df1c22fd\") " pod="openshift-multus/network-metrics-daemon-wdzg4" Jan 27 18:43:17 crc kubenswrapper[4853]: E0127 18:43:17.677130 4853 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 27 18:43:17 crc kubenswrapper[4853]: E0127 18:43:17.677209 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/29407244-fbfe-4d37-a33e-7d59df1c22fd-metrics-certs podName:29407244-fbfe-4d37-a33e-7d59df1c22fd nodeName:}" failed. No retries permitted until 2026-01-27 18:43:18.17718931 +0000 UTC m=+40.639732193 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/29407244-fbfe-4d37-a33e-7d59df1c22fd-metrics-certs") pod "network-metrics-daemon-wdzg4" (UID: "29407244-fbfe-4d37-a33e-7d59df1c22fd") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 27 18:43:17 crc kubenswrapper[4853]: I0127 18:43:17.678709 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ght98" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ce95829c-f3fb-493c-bf9a-a3515fe6ddac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://918b69aa50072e0227c96f268fe68b5dfc90acf2b8b93b7fdee73695fc6cbab0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:43:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttzfq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10a6c7c9dd5d82c1b1f5f2d323a2f530c6b2e18362eedbbe74cc8c570732331f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10a6c7c9dd5d82c1b1f5f2d323a2f530c6b2e18362eedbbe74cc8c570732331f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:43:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttzfq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b1e8c337bda8a6b6c0848db7c95140385d545fc7dd729f002d21c248e2bbf881\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b1e8c337bda8a6b6c0848db7c95140385d545fc7dd729f002d21c248e2bbf881\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:43:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:43:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttzfq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6f0216b6ea3819db267200bb464f15f2f7e4de1c65df0871d6f7e70ecb54839\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d6f0216b6ea3819db267200bb464f15f2f7e4de1c65df0871d6f7e70ecb54839\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:43:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:43:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttzfq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://205b9e4e915af8e2031c48180f4d56d6075e94f8a8ff8b40fe6b46229fb1aa03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://205b9e4e915af8e2031c48180f4d56d6075e94f8a8ff8b40fe6b46229fb1aa03
\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:43:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:43:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttzfq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://12c64d2f527e9e9bd0975df15cbeca44c11d215f2591052b030b6afa87600576\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://12c64d2f527e9e9bd0975df15cbeca44c11d215f2591052b030b6afa87600576\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:43:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:43:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttzfq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c8f991246bd43ecaff0d08c9fb12a3347d52de48db0ea8f8a674bbdc9945c13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3c8f991246bd43ecaff0d08c9fb12a3347d52de48db0ea8f8a674bbdc9945c13\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:43:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:43:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttzfq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:43:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ght98\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:17Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:17 crc kubenswrapper[4853]: I0127 18:43:17.690011 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f980f1d965582a58bd18d993850b5bd5f21849c1d1c0275c81fdbb25d549e6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:17Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:17 crc kubenswrapper[4853]: I0127 18:43:17.694161 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hgqf7\" (UniqueName: \"kubernetes.io/projected/29407244-fbfe-4d37-a33e-7d59df1c22fd-kube-api-access-hgqf7\") pod \"network-metrics-daemon-wdzg4\" (UID: \"29407244-fbfe-4d37-a33e-7d59df1c22fd\") " pod="openshift-multus/network-metrics-daemon-wdzg4" Jan 27 18:43:17 crc kubenswrapper[4853]: I0127 18:43:17.699696 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5a0b2fe5dbcc81055c001720c8ab69b7bd1989fe60d9ef4063c96c53a3f4536\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:17Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:17 crc kubenswrapper[4853]: I0127 18:43:17.707726 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-2hzcp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c088d143-dd9c-4c77-b9a3-3a0113306f41\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b1b85e99c8381dbf8e86fb211770c49ebd11497f15c73254d6482b45d507654\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:43:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2zqtv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:43:03Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-2hzcp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:17Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:17 crc kubenswrapper[4853]: I0127 18:43:17.715020 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:17 crc kubenswrapper[4853]: I0127 18:43:17.715055 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:17 crc kubenswrapper[4853]: I0127 18:43:17.715066 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:17 crc kubenswrapper[4853]: I0127 18:43:17.715084 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:17 crc kubenswrapper[4853]: I0127 18:43:17.715095 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:17Z","lastTransitionTime":"2026-01-27T18:43:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:43:17 crc kubenswrapper[4853]: I0127 18:43:17.717072 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-6gqj2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b8a89b1e-bef8-4cb7-930c-480d3125778c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d886a21d3eb5af71f0dc43a77a71a2d80c11b89013b8b77dd9e680bed9c09a54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:43:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xbhn6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://36ec3ff8ba2e6f89b4c9a8177dc986ce198013a4b5392fc600a191a9d854241a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:43:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xbhn6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:43:03Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-6gqj2\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:17Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:17 crc kubenswrapper[4853]: I0127 18:43:17.733440 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hdtbk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ebbc7598-422a-43ad-ae98-88e57ec80b9c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a5e0da6c76e9510cda57fa243b0a721d160745a63e88a9aa736807af73864d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:43:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cq4vs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8de5b4d8d6553f77b012954fddfcb337c9b25ba98d94ef27831b50f63672377\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:43:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"r
eadOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cq4vs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7937ea08bd25bed35d9386a8c870c88ff3f58eeec1ba1a2c55bdfa260017f9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:43:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cq4vs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f8095ca05481aa2d17d10ae848c2d052452f3bfa83b6ac23a75d0f59d84a604\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:43:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cq4vs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://efa308f95f35833395528dbe46b9e3d8f25800c18126c75d3db793f9c7945d30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:43:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cq4vs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"
containerID\\\":\\\"cri-o://5a23ced79c532f6fcb0f4efcf743b934f7640deb3a7b1b879032416ee2c9b8d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:43:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cq4vs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://67f2a4c435a7d4fb4616583e4e2c87238448540b0df9b0733e31e1e67b375982\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://474b23d45ca214a859faee68cfad6bf9e641b0e682b3e11f89e6b6994c75a544\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T18:43:13Z\\\",\\\"message\\\":\\\"pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI0127 18:43:13.342490 6202 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0127 18:43:13.342789 6202 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0127 18:43:13.342924 6202 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0127 18:43:13.343244 6202 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0127 18:43:13.343274 6202 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0127 18:43:13.343282 6202 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0127 18:43:13.343329 6202 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0127 18:43:13.343344 6202 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0127 18:43:13.343333 6202 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0127 18:43:13.343357 6202 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0127 18:43:13.343387 6202 factory.go:656] Stopping watch factory\\\\nI0127 18:43:13.343409 6202 ovnkube.go:599] Stopped ovnkube\\\\nI0127 
18:43:1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T18:43:10Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://67f2a4c435a7d4fb4616583e4e2c87238448540b0df9b0733e31e1e67b375982\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T18:43:15Z\\\",\\\"message\\\":\\\"ork/v1/apis/informers/externalversions/factory.go:140\\\\nI0127 18:43:15.183193 6344 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0127 18:43:15.183213 6344 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0127 18:43:15.183246 6344 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0127 18:43:15.183257 6344 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0127 18:43:15.183276 6344 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0127 18:43:15.183289 6344 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0127 18:43:15.183292 6344 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0127 18:43:15.183317 6344 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0127 18:43:15.183335 6344 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0127 18:43:15.183339 6344 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0127 18:43:15.183358 6344 factory.go:656] Stopping watch factory\\\\nI0127 18:43:15.183371 6344 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0127 18:43:15.183373 6344 ovnkube.go:599] Stopped ovnkube\\\\nI0127 18:43:15.183372 6344 handler.go:208] Removed *v1.Node event handler 2\\\\nI0127 18:43:15.183379 6344 handler.go:208] Removed *v1.NetworkPolicy event handler 
4\\\\nI01\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T18:43:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cq4vs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4da02162adae947a3ab62fcbeba04da031f5189c42947da27ec21df5a480b4b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:43:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cq4vs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e81785fa8e88ca85b42840c9f11efd5774320c7b13588e01695e428426ecda4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47e
f0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e81785fa8e88ca85b42840c9f11efd5774320c7b13588e01695e428426ecda4d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:43:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cq4vs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:43:04Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hdtbk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:17Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:17 crc kubenswrapper[4853]: I0127 18:43:17.743726 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-wdzg4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"29407244-fbfe-4d37-a33e-7d59df1c22fd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgqf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgqf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:43:17Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-wdzg4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:17Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:17 crc kubenswrapper[4853]: I0127 18:43:17.754594 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c77668e-2627-46e1-b7f5-a99455378d85\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6f38d96da9d500a022fd5b7fbe4019623d08fb9733ba3b1a5f2f107f1901a28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ae32d32392bded0188e0b83c05a33e57dc10be293ddee84b8b12416d0e64fc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd606f868e97e5c0f90517a0de662da2512679e27b0ececbcaa02c9f7e79c4b6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b210c08a6d15dd6d40daabfcc4cb94985e2a3957ffa501b32d5331f3d20f8908\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:42:38Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:17Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:17 crc kubenswrapper[4853]: I0127 18:43:17.765298 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:17Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:17 crc kubenswrapper[4853]: I0127 18:43:17.777366 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:17Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:17 crc kubenswrapper[4853]: I0127 18:43:17.789276 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:17Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:17 crc kubenswrapper[4853]: I0127 18:43:17.802944 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d7d5a0e2cb909a09c6325d43fed80f9ac231f17c4904fda6da22031d974a50b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7fba84b5abd317487efa9758191c6d4e0e878809711ccecdd2f98c2c7659c890\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:17Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:17 crc kubenswrapper[4853]: I0127 18:43:17.817003 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-w4d5n" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dd2c07de-2ac9-4074-9fb0-519cfaf37f69\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9662d5a5a46a3043eeddbb21a4c7da0545beb87a0da7da3e96dfe3b3cf803430\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:43:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc
/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8jkvb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:43:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-w4d5n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:17Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:17 crc kubenswrapper[4853]: I0127 18:43:17.818223 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:17 crc kubenswrapper[4853]: I0127 18:43:17.818265 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:17 crc kubenswrapper[4853]: I0127 18:43:17.818278 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:17 crc kubenswrapper[4853]: I0127 18:43:17.818297 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:17 crc kubenswrapper[4853]: I0127 18:43:17.818313 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:17Z","lastTransitionTime":"2026-01-27T18:43:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:17 crc kubenswrapper[4853]: I0127 18:43:17.829946 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7x9tl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"99e56db9-b380-4e21-9810-2dfa1517d5ac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://94b4164fce297bdb91f8d062c22463931e45e9194e17e4102f568e6f04c08680\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:43:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sbc8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec4d041ed140516bb311297a6618188794b4950b0199a05f3a028215b75b2dfd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:43:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sbc8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:43:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-7x9tl\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:17Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:17 crc kubenswrapper[4853]: I0127 18:43:17.921648 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:17 crc kubenswrapper[4853]: I0127 18:43:17.921678 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:17 crc kubenswrapper[4853]: I0127 18:43:17.921687 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:17 crc kubenswrapper[4853]: I0127 18:43:17.921700 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:17 crc kubenswrapper[4853]: I0127 18:43:17.921711 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:17Z","lastTransitionTime":"2026-01-27T18:43:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:43:18 crc kubenswrapper[4853]: I0127 18:43:18.024691 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:18 crc kubenswrapper[4853]: I0127 18:43:18.024756 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:18 crc kubenswrapper[4853]: I0127 18:43:18.024779 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:18 crc kubenswrapper[4853]: I0127 18:43:18.024808 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:18 crc kubenswrapper[4853]: I0127 18:43:18.024830 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:18Z","lastTransitionTime":"2026-01-27T18:43:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:18 crc kubenswrapper[4853]: I0127 18:43:18.091991 4853 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-14 02:28:01.15848626 +0000 UTC Jan 27 18:43:18 crc kubenswrapper[4853]: I0127 18:43:18.126853 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:18 crc kubenswrapper[4853]: I0127 18:43:18.126897 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:18 crc kubenswrapper[4853]: I0127 18:43:18.126910 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:18 crc kubenswrapper[4853]: I0127 18:43:18.126927 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:18 crc kubenswrapper[4853]: I0127 18:43:18.126941 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:18Z","lastTransitionTime":"2026-01-27T18:43:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:43:18 crc kubenswrapper[4853]: I0127 18:43:18.132158 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d7d5a0e2cb909a09c6325d43fed80f9ac231f17c4904fda6da22031d974a50b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7fba84b5abd317487efa9758191c6d4e0e878809711ccecdd2f98c2c7659c890\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32f
a41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:18Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:18 crc kubenswrapper[4853]: I0127 18:43:18.149976 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-w4d5n" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dd2c07de-2ac9-4074-9fb0-519cfaf37f69\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9662d5a5a46a3043eeddbb21a4c7da0545beb87a0da7da3e96dfe3b3cf803430\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:43:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"}
,{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8jkvb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:43:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-w4d5n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:18Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:18 crc kubenswrapper[4853]: I0127 18:43:18.181739 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/29407244-fbfe-4d37-a33e-7d59df1c22fd-metrics-certs\") pod \"network-metrics-daemon-wdzg4\" (UID: \"29407244-fbfe-4d37-a33e-7d59df1c22fd\") " pod="openshift-multus/network-metrics-daemon-wdzg4" Jan 27 18:43:18 crc kubenswrapper[4853]: E0127 18:43:18.181953 4853 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 27 18:43:18 crc kubenswrapper[4853]: E0127 18:43:18.182030 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/29407244-fbfe-4d37-a33e-7d59df1c22fd-metrics-certs podName:29407244-fbfe-4d37-a33e-7d59df1c22fd nodeName:}" failed. No retries permitted until 2026-01-27 18:43:19.182005402 +0000 UTC m=+41.644548295 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/29407244-fbfe-4d37-a33e-7d59df1c22fd-metrics-certs") pod "network-metrics-daemon-wdzg4" (UID: "29407244-fbfe-4d37-a33e-7d59df1c22fd") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 27 18:43:18 crc kubenswrapper[4853]: I0127 18:43:18.205819 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7x9tl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"99e56db9-b380-4e21-9810-2dfa1517d5ac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://94b4164fce297bdb91f8d062c22463931e45e9194e17e4102f568e6f04c08680\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:43:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sbc8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec4d041ed140516bb311297a6618188794b4950b0199a05f3a028215b75b2dfd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:43:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sbc8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:43:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-7x9tl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:18Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:18 crc kubenswrapper[4853]: I0127 18:43:18.224715 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:18Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:18 crc kubenswrapper[4853]: I0127 18:43:18.228510 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:18 crc kubenswrapper[4853]: I0127 18:43:18.228558 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:18 crc kubenswrapper[4853]: I0127 18:43:18.228567 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:18 crc kubenswrapper[4853]: I0127 18:43:18.228581 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:18 crc kubenswrapper[4853]: I0127 18:43:18.228591 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:18Z","lastTransitionTime":"2026-01-27T18:43:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:18 crc kubenswrapper[4853]: I0127 18:43:18.238113 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:18Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:18 crc kubenswrapper[4853]: I0127 18:43:18.248971 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-l59xt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f9e82bc6-1fab-4815-a64e-2ebbf8b72315\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d6f7b775ee615931d713c3fcf51828673cd8414a08c46c04eb38abd37c58d5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:43:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnhhx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:43:03Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-l59xt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:18Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:18 crc kubenswrapper[4853]: I0127 18:43:18.265700 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ght98" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ce95829c-f3fb-493c-bf9a-a3515fe6ddac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://918b69aa50072e0227c96f268fe68b5dfc90acf2b8b93b7fdee73695fc6cbab0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:43:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttzfq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10a6c7c9dd5d82c1b1f5f2d323a2f530c6b2e18362eedbbe74cc8c570732331f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10a6c7c9dd5d82c1b1f5f2d323a2f530c6b2e18362eedbbe74cc8c570732331f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:43:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttzfq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b1e8c337bda8a6b6c0848db7c95140385d545fc7dd729f002d21c248e2bbf881\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b1e8c337bda8a6b6c0848db7c95140385d545fc7dd729f002d21c248e2bbf881\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:43:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:43:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttzfq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6f0216b6ea3819db267200bb464f15f2f7e4de1c65df0871d6f7e70ecb54839\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d6f0216b6ea3819db267200bb464f15f2f7e4de1c65df0871d6f7e70ecb54839\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:43:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:43:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttzfq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://205b9e4e915af8e2031c48180f4d56d6075e94f8a8ff8b40fe6b46229fb1aa03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://205b9e4e915af8e2031c48180f4d56d6075e94f8a8ff8b40fe6b46229fb1aa03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:43:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:43:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttzfq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://12c64d2f527e9e9bd0975df15cbeca44c11d215f2591052b030b6afa87600576\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://12c64d2f527e9e9bd0975df15cbeca44c11d215f2591052b030b6afa87600576\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:43:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:43:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttzfq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c8f991246bd43ecaff0d08c9fb12a3347d52de48db0ea8f8a674bbdc9945c13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3c8f991246bd43ecaff0d08c9fb12a3347d52de48db0ea8f8a674bbdc9945c13\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:43:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:43:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttzfq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:43:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ght98\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:18Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:18 crc kubenswrapper[4853]: I0127 18:43:18.291838 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"eccf4b23-863a-490c-ba35-1b03d360e200\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7cfb91267c8977bea938d9584cf8047ae6f6b4bff6a9d74c623f8903cd3416ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58d32f27b39964fe7195a0e25264b28a01c6c29753912edac25d07eef4f37cc7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a941071ee01b24389cae64dfc6cd851cde6b352121235ebf9e9bec77f8bcfb69\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae13abce33d48960f367ee4160e730c0f88cd877bd0d615cecac63d2a35b8cc5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://213668bb8a0478fe7fba2a64a45dd4aec0dfd6794a29977633b4e0fd1bcbe352\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T18:42:58Z\\\",\\\"message\\\":\\\"le observer\\\\nW0127 18:42:57.781245 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 18:42:57.782660 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 18:42:57.784271 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3951302993/tls.crt::/tmp/serving-cert-3951302993/tls.key\\\\\\\"\\\\nI0127 18:42:58.344426 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 18:42:58.346603 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 18:42:58.346626 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 18:42:58.346650 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 18:42:58.346655 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 18:42:58.351830 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 18:42:58.351854 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 18:42:58.351860 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 18:42:58.351866 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 18:42:58.351870 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 18:42:58.351874 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nI0127 18:42:58.351873 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0127 18:42:58.351879 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0127 18:42:58.354790 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:52Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:43:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0058055ea5c0242bdb042c45a00203d344383e845de280c81bc7695416563666\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:40Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://00d05692e328aacfc251e3b583e1da6d1f4f677e93d07cec2aaa7aea5c3038cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://00d05692e328aacfc251e3b583e1da6d1f4f677e93d07cec2aaa7aea5c3038cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:42:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:42:38Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:18Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:18 crc kubenswrapper[4853]: I0127 18:43:18.312298 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a5b9c54a-360e-431b-b35a-eea549616487\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e57571b0b75d3ac6eaa28ddbc474ddf47f632c158372a965dfe4c30f54d8a2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2dcf34028a2a1a0d41f57ebced90e711bfa6d0f199a2b12bc2bcd6d0642d10b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://14e9331353060208631cfca86d748339935f8581d50a234f2864eaffdd98ff39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7d4896cb23e4a91356e7fabb5dfcf05e46bf7e
8c34b6bd73d63a16422d015ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a02f84ba42baba137145a54bfa742b12b0f94aaf41801347eb8a803a6acd1b2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15ab34ce94a12a65d8b187ce438f27e37d4b45e0b9ed080c79d6e4633e777658\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://15ab34ce94a12a65d8b187ce438f27e37d4b45e0b9ed080c79d6e4633e777658\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:42:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d972c76164f33d83bd6d211da2b4ecec08c5d88dd4cd474c5d5a8122649ea1cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d972c76164f33d83bd6d211da2b4ecec08c5d88dd4cd474c5d5a8122649ea1cd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:42:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:40Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b9435046e30c8cda89d2ab6f312406bc2cc6db0c03ae6cdea1eb5acf8a804df1\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9435046e30c8cda89d2ab6f312406bc2cc6db0c03ae6cdea1eb5acf8a804df1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:42:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:42:38Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:18Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:18 crc kubenswrapper[4853]: I0127 18:43:18.323217 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-2hzcp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c088d143-dd9c-4c77-b9a3-3a0113306f41\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b1b85e99c8381dbf8e86fb211770c49ebd11497f15c73254d6482b45d507654\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:43:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2zqtv\\\",\\\"readO
nly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:43:03Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-2hzcp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:18Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:18 crc kubenswrapper[4853]: I0127 18:43:18.330710 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:18 crc kubenswrapper[4853]: I0127 18:43:18.330739 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:18 crc kubenswrapper[4853]: I0127 18:43:18.330754 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:18 crc kubenswrapper[4853]: I0127 18:43:18.330767 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:18 crc kubenswrapper[4853]: I0127 18:43:18.330777 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:18Z","lastTransitionTime":"2026-01-27T18:43:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:18 crc kubenswrapper[4853]: I0127 18:43:18.333820 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-6gqj2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b8a89b1e-bef8-4cb7-930c-480d3125778c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d886a21d3eb5af71f0dc43a77a71a2d80c11b89013b8b77dd9e680bed9c09a54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:43:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xbhn6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://36ec3ff8ba2e6f89b4c9a8177dc986ce198013a4b5392fc600a191a9d854241a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:43:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xbhn6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:43:03Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-6gqj2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:18Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:18 crc kubenswrapper[4853]: I0127 18:43:18.349357 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hdtbk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ebbc7598-422a-43ad-ae98-88e57ec80b9c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a5e0da6c76e9510cda57fa243b0a721d160745a63e88a9aa736807af73864d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:43:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cq4vs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8de5b4d8d6553f77b012954fddfcb337c9b25ba98d94ef27831b50f63672377\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:43:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kuber
netes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cq4vs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7937ea08bd25bed35d9386a8c870c88ff3f58eeec1ba1a2c55bdfa260017f9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:43:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cq4vs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f8095ca05481aa2d17d10ae848c2d052452f3bfa83b6ac23a75d0f59d84a604\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:43:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cq4vs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://efa308f95f35833395528dbe46b9e3d8f25800c18126c75d3db793f9c7945d30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:43:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cq4vs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a23ced79c532f6fcb0f4efcf743b934f7640deb3a7b1b879032416ee2c9b8d7\\\",\\\"image
\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:43:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cq4vs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://67f2a4c435a7d4fb4616583e4e2c87238448540b0df9b0733e31e1e67b375982\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://474b23d45ca214a859faee68cfad6bf9e641b0e682b3e11f89e6b6994c75a544\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T18:43:13Z\\\",\\\"message\\\":\\\"pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI0127 18:43:13.342490 6202 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0127 18:43:13.342789 6202 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0127 18:43:13.342924 6202 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0127 18:43:13.343244 6202 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0127 18:43:13.343274 6202 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0127 18:43:13.343282 6202 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0127 18:43:13.343329 6202 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0127 18:43:13.343344 6202 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0127 18:43:13.343333 6202 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0127 18:43:13.343357 6202 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0127 18:43:13.343387 6202 factory.go:656] Stopping watch factory\\\\nI0127 18:43:13.343409 6202 ovnkube.go:599] Stopped ovnkube\\\\nI0127 
18:43:1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T18:43:10Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://67f2a4c435a7d4fb4616583e4e2c87238448540b0df9b0733e31e1e67b375982\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T18:43:15Z\\\",\\\"message\\\":\\\"ork/v1/apis/informers/externalversions/factory.go:140\\\\nI0127 18:43:15.183193 6344 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0127 18:43:15.183213 6344 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0127 18:43:15.183246 6344 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0127 18:43:15.183257 6344 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0127 18:43:15.183276 6344 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0127 18:43:15.183289 6344 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0127 18:43:15.183292 6344 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0127 18:43:15.183317 6344 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0127 18:43:15.183335 6344 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0127 18:43:15.183339 6344 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0127 18:43:15.183358 6344 factory.go:656] Stopping watch factory\\\\nI0127 18:43:15.183371 6344 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0127 18:43:15.183373 6344 ovnkube.go:599] Stopped ovnkube\\\\nI0127 18:43:15.183372 6344 handler.go:208] Removed *v1.Node event handler 2\\\\nI0127 18:43:15.183379 6344 handler.go:208] Removed *v1.NetworkPolicy event handler 
4\\\\nI01\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T18:43:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cq4vs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4da02162adae947a3ab62fcbeba04da031f5189c42947da27ec21df5a480b4b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:43:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cq4vs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e81785fa8e88ca85b42840c9f11efd5774320c7b13588e01695e428426ecda4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47e
f0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e81785fa8e88ca85b42840c9f11efd5774320c7b13588e01695e428426ecda4d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:43:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cq4vs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:43:04Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hdtbk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:18Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:18 crc kubenswrapper[4853]: I0127 18:43:18.358431 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-wdzg4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"29407244-fbfe-4d37-a33e-7d59df1c22fd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgqf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgqf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:43:17Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-wdzg4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:18Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:18 crc kubenswrapper[4853]: I0127 18:43:18.369041 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f980f1d965582a58bd18d993850b5bd5f21849c1d1c0275c81fdbb25d549e6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:18Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:18 crc kubenswrapper[4853]: I0127 18:43:18.380629 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5a0b2fe5dbcc81055c001720c8ab69b7bd1989fe60d9ef4063c96c53a3f4536\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:18Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:18 crc kubenswrapper[4853]: I0127 18:43:18.394174 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:18Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:18 crc kubenswrapper[4853]: I0127 18:43:18.409317 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c77668e-2627-46e1-b7f5-a99455378d85\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6f38d96da9d500a022fd5b7fbe4019623d08fb9733ba3b1a5f2f107f1901a28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ae32d32392bded0188e0b83c05a33e57dc10be293ddee84b8b12416d0e64fc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"r
esource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd606f868e97e5c0f90517a0de662da2512679e27b0ececbcaa02c9f7e79c4b6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b210c08a6d15dd6d40daabfcc4cb94985e2a3957ffa501b32d5331f3d20f8908\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:42:38Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:18Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:18 crc kubenswrapper[4853]: I0127 18:43:18.432904 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:18 crc kubenswrapper[4853]: I0127 18:43:18.432947 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:18 crc kubenswrapper[4853]: I0127 18:43:18.432956 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:18 crc kubenswrapper[4853]: I0127 18:43:18.432968 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:18 crc kubenswrapper[4853]: I0127 18:43:18.432977 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:18Z","lastTransitionTime":"2026-01-27T18:43:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:43:18 crc kubenswrapper[4853]: I0127 18:43:18.535223 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:18 crc kubenswrapper[4853]: I0127 18:43:18.535508 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:18 crc kubenswrapper[4853]: I0127 18:43:18.535634 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:18 crc kubenswrapper[4853]: I0127 18:43:18.535785 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:18 crc kubenswrapper[4853]: I0127 18:43:18.535896 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:18Z","lastTransitionTime":"2026-01-27T18:43:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:43:18 crc kubenswrapper[4853]: I0127 18:43:18.638966 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:18 crc kubenswrapper[4853]: I0127 18:43:18.639206 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:18 crc kubenswrapper[4853]: I0127 18:43:18.639313 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:18 crc kubenswrapper[4853]: I0127 18:43:18.639376 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:18 crc kubenswrapper[4853]: I0127 18:43:18.639430 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:18Z","lastTransitionTime":"2026-01-27T18:43:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:18 crc kubenswrapper[4853]: I0127 18:43:18.742412 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:18 crc kubenswrapper[4853]: I0127 18:43:18.742460 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:18 crc kubenswrapper[4853]: I0127 18:43:18.742472 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:18 crc kubenswrapper[4853]: I0127 18:43:18.742490 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:18 crc kubenswrapper[4853]: I0127 18:43:18.742504 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:18Z","lastTransitionTime":"2026-01-27T18:43:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:43:18 crc kubenswrapper[4853]: I0127 18:43:18.845210 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:18 crc kubenswrapper[4853]: I0127 18:43:18.845270 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:18 crc kubenswrapper[4853]: I0127 18:43:18.845289 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:18 crc kubenswrapper[4853]: I0127 18:43:18.845312 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:18 crc kubenswrapper[4853]: I0127 18:43:18.845329 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:18Z","lastTransitionTime":"2026-01-27T18:43:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:43:18 crc kubenswrapper[4853]: I0127 18:43:18.947350 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:18 crc kubenswrapper[4853]: I0127 18:43:18.947393 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:18 crc kubenswrapper[4853]: I0127 18:43:18.947404 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:18 crc kubenswrapper[4853]: I0127 18:43:18.947419 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:18 crc kubenswrapper[4853]: I0127 18:43:18.947428 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:18Z","lastTransitionTime":"2026-01-27T18:43:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:19 crc kubenswrapper[4853]: I0127 18:43:19.050954 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:19 crc kubenswrapper[4853]: I0127 18:43:19.051218 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:19 crc kubenswrapper[4853]: I0127 18:43:19.051303 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:19 crc kubenswrapper[4853]: I0127 18:43:19.051382 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:19 crc kubenswrapper[4853]: I0127 18:43:19.051511 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:19Z","lastTransitionTime":"2026-01-27T18:43:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:43:19 crc kubenswrapper[4853]: I0127 18:43:19.092768 4853 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-05 20:21:21.909122902 +0000 UTC Jan 27 18:43:19 crc kubenswrapper[4853]: I0127 18:43:19.112280 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 18:43:19 crc kubenswrapper[4853]: I0127 18:43:19.112281 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 18:43:19 crc kubenswrapper[4853]: I0127 18:43:19.112292 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wdzg4" Jan 27 18:43:19 crc kubenswrapper[4853]: E0127 18:43:19.112947 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 18:43:19 crc kubenswrapper[4853]: I0127 18:43:19.112380 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 18:43:19 crc kubenswrapper[4853]: E0127 18:43:19.113163 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-wdzg4" podUID="29407244-fbfe-4d37-a33e-7d59df1c22fd" Jan 27 18:43:19 crc kubenswrapper[4853]: E0127 18:43:19.113279 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 18:43:19 crc kubenswrapper[4853]: E0127 18:43:19.113617 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 18:43:19 crc kubenswrapper[4853]: I0127 18:43:19.155853 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:19 crc kubenswrapper[4853]: I0127 18:43:19.155919 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:19 crc kubenswrapper[4853]: I0127 18:43:19.155942 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:19 crc kubenswrapper[4853]: I0127 18:43:19.155970 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:19 crc kubenswrapper[4853]: I0127 18:43:19.155992 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:19Z","lastTransitionTime":"2026-01-27T18:43:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:43:19 crc kubenswrapper[4853]: I0127 18:43:19.191865 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/29407244-fbfe-4d37-a33e-7d59df1c22fd-metrics-certs\") pod \"network-metrics-daemon-wdzg4\" (UID: \"29407244-fbfe-4d37-a33e-7d59df1c22fd\") " pod="openshift-multus/network-metrics-daemon-wdzg4" Jan 27 18:43:19 crc kubenswrapper[4853]: E0127 18:43:19.192200 4853 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 27 18:43:19 crc kubenswrapper[4853]: E0127 18:43:19.192278 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/29407244-fbfe-4d37-a33e-7d59df1c22fd-metrics-certs podName:29407244-fbfe-4d37-a33e-7d59df1c22fd nodeName:}" failed. No retries permitted until 2026-01-27 18:43:21.192255395 +0000 UTC m=+43.654798308 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/29407244-fbfe-4d37-a33e-7d59df1c22fd-metrics-certs") pod "network-metrics-daemon-wdzg4" (UID: "29407244-fbfe-4d37-a33e-7d59df1c22fd") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 27 18:43:19 crc kubenswrapper[4853]: I0127 18:43:19.258717 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:19 crc kubenswrapper[4853]: I0127 18:43:19.258750 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:19 crc kubenswrapper[4853]: I0127 18:43:19.258758 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:19 crc kubenswrapper[4853]: I0127 18:43:19.258776 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:19 crc kubenswrapper[4853]: I0127 18:43:19.258786 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:19Z","lastTransitionTime":"2026-01-27T18:43:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:43:19 crc kubenswrapper[4853]: I0127 18:43:19.362397 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:19 crc kubenswrapper[4853]: I0127 18:43:19.362469 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:19 crc kubenswrapper[4853]: I0127 18:43:19.362487 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:19 crc kubenswrapper[4853]: I0127 18:43:19.362513 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:19 crc kubenswrapper[4853]: I0127 18:43:19.362530 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:19Z","lastTransitionTime":"2026-01-27T18:43:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:19 crc kubenswrapper[4853]: I0127 18:43:19.465326 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:19 crc kubenswrapper[4853]: I0127 18:43:19.465398 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:19 crc kubenswrapper[4853]: I0127 18:43:19.465415 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:19 crc kubenswrapper[4853]: I0127 18:43:19.465440 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:19 crc kubenswrapper[4853]: I0127 18:43:19.465462 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:19Z","lastTransitionTime":"2026-01-27T18:43:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:43:19 crc kubenswrapper[4853]: I0127 18:43:19.568598 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:19 crc kubenswrapper[4853]: I0127 18:43:19.568642 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:19 crc kubenswrapper[4853]: I0127 18:43:19.568654 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:19 crc kubenswrapper[4853]: I0127 18:43:19.568674 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:19 crc kubenswrapper[4853]: I0127 18:43:19.568690 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:19Z","lastTransitionTime":"2026-01-27T18:43:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:43:19 crc kubenswrapper[4853]: I0127 18:43:19.671459 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:19 crc kubenswrapper[4853]: I0127 18:43:19.671492 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:19 crc kubenswrapper[4853]: I0127 18:43:19.671502 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:19 crc kubenswrapper[4853]: I0127 18:43:19.671519 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:19 crc kubenswrapper[4853]: I0127 18:43:19.671532 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:19Z","lastTransitionTime":"2026-01-27T18:43:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:19 crc kubenswrapper[4853]: I0127 18:43:19.774104 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:19 crc kubenswrapper[4853]: I0127 18:43:19.774366 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:19 crc kubenswrapper[4853]: I0127 18:43:19.774545 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:19 crc kubenswrapper[4853]: I0127 18:43:19.774761 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:19 crc kubenswrapper[4853]: I0127 18:43:19.774977 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:19Z","lastTransitionTime":"2026-01-27T18:43:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:43:19 crc kubenswrapper[4853]: I0127 18:43:19.878272 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:19 crc kubenswrapper[4853]: I0127 18:43:19.878331 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:19 crc kubenswrapper[4853]: I0127 18:43:19.878353 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:19 crc kubenswrapper[4853]: I0127 18:43:19.878383 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:19 crc kubenswrapper[4853]: I0127 18:43:19.878406 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:19Z","lastTransitionTime":"2026-01-27T18:43:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:43:19 crc kubenswrapper[4853]: I0127 18:43:19.981259 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:19 crc kubenswrapper[4853]: I0127 18:43:19.981325 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:19 crc kubenswrapper[4853]: I0127 18:43:19.981355 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:19 crc kubenswrapper[4853]: I0127 18:43:19.981376 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:19 crc kubenswrapper[4853]: I0127 18:43:19.981416 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:19Z","lastTransitionTime":"2026-01-27T18:43:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:20 crc kubenswrapper[4853]: I0127 18:43:20.084521 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:20 crc kubenswrapper[4853]: I0127 18:43:20.084840 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:20 crc kubenswrapper[4853]: I0127 18:43:20.085030 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:20 crc kubenswrapper[4853]: I0127 18:43:20.085254 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:20 crc kubenswrapper[4853]: I0127 18:43:20.085436 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:20Z","lastTransitionTime":"2026-01-27T18:43:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:43:20 crc kubenswrapper[4853]: I0127 18:43:20.093820 4853 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-13 07:40:30.535697912 +0000 UTC Jan 27 18:43:20 crc kubenswrapper[4853]: I0127 18:43:20.187774 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:20 crc kubenswrapper[4853]: I0127 18:43:20.187990 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:20 crc kubenswrapper[4853]: I0127 18:43:20.188051 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:20 crc kubenswrapper[4853]: I0127 18:43:20.188109 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:20 crc kubenswrapper[4853]: I0127 18:43:20.188195 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:20Z","lastTransitionTime":"2026-01-27T18:43:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:20 crc kubenswrapper[4853]: I0127 18:43:20.290741 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:20 crc kubenswrapper[4853]: I0127 18:43:20.291026 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:20 crc kubenswrapper[4853]: I0127 18:43:20.291094 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:20 crc kubenswrapper[4853]: I0127 18:43:20.291185 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:20 crc kubenswrapper[4853]: I0127 18:43:20.291253 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:20Z","lastTransitionTime":"2026-01-27T18:43:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:43:20 crc kubenswrapper[4853]: I0127 18:43:20.393370 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:20 crc kubenswrapper[4853]: I0127 18:43:20.393429 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:20 crc kubenswrapper[4853]: I0127 18:43:20.393466 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:20 crc kubenswrapper[4853]: I0127 18:43:20.393507 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:20 crc kubenswrapper[4853]: I0127 18:43:20.393534 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:20Z","lastTransitionTime":"2026-01-27T18:43:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:43:20 crc kubenswrapper[4853]: I0127 18:43:20.496495 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:20 crc kubenswrapper[4853]: I0127 18:43:20.496529 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:20 crc kubenswrapper[4853]: I0127 18:43:20.496540 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:20 crc kubenswrapper[4853]: I0127 18:43:20.496556 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:20 crc kubenswrapper[4853]: I0127 18:43:20.496568 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:20Z","lastTransitionTime":"2026-01-27T18:43:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:20 crc kubenswrapper[4853]: I0127 18:43:20.599432 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:20 crc kubenswrapper[4853]: I0127 18:43:20.599461 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:20 crc kubenswrapper[4853]: I0127 18:43:20.599468 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:20 crc kubenswrapper[4853]: I0127 18:43:20.599480 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:20 crc kubenswrapper[4853]: I0127 18:43:20.599489 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:20Z","lastTransitionTime":"2026-01-27T18:43:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:43:20 crc kubenswrapper[4853]: I0127 18:43:20.701971 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:20 crc kubenswrapper[4853]: I0127 18:43:20.702005 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:20 crc kubenswrapper[4853]: I0127 18:43:20.702041 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:20 crc kubenswrapper[4853]: I0127 18:43:20.702061 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:20 crc kubenswrapper[4853]: I0127 18:43:20.702073 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:20Z","lastTransitionTime":"2026-01-27T18:43:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:43:20 crc kubenswrapper[4853]: I0127 18:43:20.804603 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:20 crc kubenswrapper[4853]: I0127 18:43:20.804655 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:20 crc kubenswrapper[4853]: I0127 18:43:20.804667 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:20 crc kubenswrapper[4853]: I0127 18:43:20.804684 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:20 crc kubenswrapper[4853]: I0127 18:43:20.804696 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:20Z","lastTransitionTime":"2026-01-27T18:43:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:20 crc kubenswrapper[4853]: I0127 18:43:20.907199 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:20 crc kubenswrapper[4853]: I0127 18:43:20.907248 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:20 crc kubenswrapper[4853]: I0127 18:43:20.907260 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:20 crc kubenswrapper[4853]: I0127 18:43:20.907277 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:20 crc kubenswrapper[4853]: I0127 18:43:20.907290 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:20Z","lastTransitionTime":"2026-01-27T18:43:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:43:21 crc kubenswrapper[4853]: I0127 18:43:21.009626 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:21 crc kubenswrapper[4853]: I0127 18:43:21.009889 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:21 crc kubenswrapper[4853]: I0127 18:43:21.010014 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:21 crc kubenswrapper[4853]: I0127 18:43:21.010176 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:21 crc kubenswrapper[4853]: I0127 18:43:21.010282 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:21Z","lastTransitionTime":"2026-01-27T18:43:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:43:21 crc kubenswrapper[4853]: I0127 18:43:21.094245 4853 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-19 06:11:20.205063594 +0000 UTC Jan 27 18:43:21 crc kubenswrapper[4853]: I0127 18:43:21.112018 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wdzg4" Jan 27 18:43:21 crc kubenswrapper[4853]: E0127 18:43:21.112433 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-wdzg4" podUID="29407244-fbfe-4d37-a33e-7d59df1c22fd" Jan 27 18:43:21 crc kubenswrapper[4853]: I0127 18:43:21.112471 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 18:43:21 crc kubenswrapper[4853]: E0127 18:43:21.112615 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 18:43:21 crc kubenswrapper[4853]: I0127 18:43:21.112718 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 18:43:21 crc kubenswrapper[4853]: E0127 18:43:21.112831 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 18:43:21 crc kubenswrapper[4853]: I0127 18:43:21.112949 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 18:43:21 crc kubenswrapper[4853]: E0127 18:43:21.113085 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 18:43:21 crc kubenswrapper[4853]: I0127 18:43:21.113995 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:21 crc kubenswrapper[4853]: I0127 18:43:21.114073 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:21 crc kubenswrapper[4853]: I0127 18:43:21.114098 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:21 crc kubenswrapper[4853]: I0127 18:43:21.114166 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:21 crc kubenswrapper[4853]: I0127 18:43:21.114196 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:21Z","lastTransitionTime":"2026-01-27T18:43:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:21 crc kubenswrapper[4853]: E0127 18:43:21.213844 4853 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 27 18:43:21 crc kubenswrapper[4853]: E0127 18:43:21.213920 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/29407244-fbfe-4d37-a33e-7d59df1c22fd-metrics-certs podName:29407244-fbfe-4d37-a33e-7d59df1c22fd nodeName:}" failed. No retries permitted until 2026-01-27 18:43:25.213903323 +0000 UTC m=+47.676446226 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/29407244-fbfe-4d37-a33e-7d59df1c22fd-metrics-certs") pod "network-metrics-daemon-wdzg4" (UID: "29407244-fbfe-4d37-a33e-7d59df1c22fd") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 27 18:43:21 crc kubenswrapper[4853]: I0127 18:43:21.213954 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/29407244-fbfe-4d37-a33e-7d59df1c22fd-metrics-certs\") pod \"network-metrics-daemon-wdzg4\" (UID: \"29407244-fbfe-4d37-a33e-7d59df1c22fd\") " pod="openshift-multus/network-metrics-daemon-wdzg4" Jan 27 18:43:21 crc kubenswrapper[4853]: I0127 18:43:21.216106 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:21 crc kubenswrapper[4853]: I0127 18:43:21.216218 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:21 crc kubenswrapper[4853]: I0127 18:43:21.216241 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:21 crc kubenswrapper[4853]: I0127 18:43:21.216263 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:21 crc kubenswrapper[4853]: I0127 18:43:21.216279 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:21Z","lastTransitionTime":"2026-01-27T18:43:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:21 crc kubenswrapper[4853]: I0127 18:43:21.319145 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:21 crc kubenswrapper[4853]: I0127 18:43:21.319178 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:21 crc kubenswrapper[4853]: I0127 18:43:21.319190 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:21 crc kubenswrapper[4853]: I0127 18:43:21.319210 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:21 crc kubenswrapper[4853]: I0127 18:43:21.319239 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:21Z","lastTransitionTime":"2026-01-27T18:43:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:43:21 crc kubenswrapper[4853]: I0127 18:43:21.421307 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:21 crc kubenswrapper[4853]: I0127 18:43:21.421607 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:21 crc kubenswrapper[4853]: I0127 18:43:21.421750 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:21 crc kubenswrapper[4853]: I0127 18:43:21.421893 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:21 crc kubenswrapper[4853]: I0127 18:43:21.422072 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:21Z","lastTransitionTime":"2026-01-27T18:43:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:43:21 crc kubenswrapper[4853]: I0127 18:43:21.525836 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:21 crc kubenswrapper[4853]: I0127 18:43:21.526219 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:21 crc kubenswrapper[4853]: I0127 18:43:21.526363 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:21 crc kubenswrapper[4853]: I0127 18:43:21.526515 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:21 crc kubenswrapper[4853]: I0127 18:43:21.526726 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:21Z","lastTransitionTime":"2026-01-27T18:43:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:21 crc kubenswrapper[4853]: I0127 18:43:21.629306 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:21 crc kubenswrapper[4853]: I0127 18:43:21.629352 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:21 crc kubenswrapper[4853]: I0127 18:43:21.629361 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:21 crc kubenswrapper[4853]: I0127 18:43:21.629373 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:21 crc kubenswrapper[4853]: I0127 18:43:21.629382 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:21Z","lastTransitionTime":"2026-01-27T18:43:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:43:21 crc kubenswrapper[4853]: I0127 18:43:21.731833 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:21 crc kubenswrapper[4853]: I0127 18:43:21.732148 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:21 crc kubenswrapper[4853]: I0127 18:43:21.732280 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:21 crc kubenswrapper[4853]: I0127 18:43:21.732417 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:21 crc kubenswrapper[4853]: I0127 18:43:21.732549 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:21Z","lastTransitionTime":"2026-01-27T18:43:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:43:21 crc kubenswrapper[4853]: I0127 18:43:21.835579 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:21 crc kubenswrapper[4853]: I0127 18:43:21.835610 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:21 crc kubenswrapper[4853]: I0127 18:43:21.835638 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:21 crc kubenswrapper[4853]: I0127 18:43:21.835655 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:21 crc kubenswrapper[4853]: I0127 18:43:21.835667 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:21Z","lastTransitionTime":"2026-01-27T18:43:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:21 crc kubenswrapper[4853]: I0127 18:43:21.938038 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:21 crc kubenswrapper[4853]: I0127 18:43:21.938087 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:21 crc kubenswrapper[4853]: I0127 18:43:21.938101 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:21 crc kubenswrapper[4853]: I0127 18:43:21.938148 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:21 crc kubenswrapper[4853]: I0127 18:43:21.938167 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:21Z","lastTransitionTime":"2026-01-27T18:43:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:43:22 crc kubenswrapper[4853]: I0127 18:43:22.041229 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:22 crc kubenswrapper[4853]: I0127 18:43:22.041523 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:22 crc kubenswrapper[4853]: I0127 18:43:22.041647 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:22 crc kubenswrapper[4853]: I0127 18:43:22.041813 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:22 crc kubenswrapper[4853]: I0127 18:43:22.041975 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:22Z","lastTransitionTime":"2026-01-27T18:43:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:22 crc kubenswrapper[4853]: I0127 18:43:22.095366 4853 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-04 07:58:43.653649754 +0000 UTC Jan 27 18:43:22 crc kubenswrapper[4853]: I0127 18:43:22.143871 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:22 crc kubenswrapper[4853]: I0127 18:43:22.143938 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:22 crc kubenswrapper[4853]: I0127 18:43:22.143948 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:22 crc kubenswrapper[4853]: I0127 18:43:22.143979 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:22 crc kubenswrapper[4853]: I0127 18:43:22.143991 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:22Z","lastTransitionTime":"2026-01-27T18:43:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:43:22 crc kubenswrapper[4853]: I0127 18:43:22.246889 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:22 crc kubenswrapper[4853]: I0127 18:43:22.246942 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:22 crc kubenswrapper[4853]: I0127 18:43:22.246952 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:22 crc kubenswrapper[4853]: I0127 18:43:22.246973 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:22 crc kubenswrapper[4853]: I0127 18:43:22.246984 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:22Z","lastTransitionTime":"2026-01-27T18:43:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:22 crc kubenswrapper[4853]: I0127 18:43:22.350172 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:22 crc kubenswrapper[4853]: I0127 18:43:22.350275 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:22 crc kubenswrapper[4853]: I0127 18:43:22.350373 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:22 crc kubenswrapper[4853]: I0127 18:43:22.350403 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:22 crc kubenswrapper[4853]: I0127 18:43:22.350454 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:22Z","lastTransitionTime":"2026-01-27T18:43:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:43:22 crc kubenswrapper[4853]: I0127 18:43:22.453737 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:22 crc kubenswrapper[4853]: I0127 18:43:22.453784 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:22 crc kubenswrapper[4853]: I0127 18:43:22.453795 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:22 crc kubenswrapper[4853]: I0127 18:43:22.453810 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:22 crc kubenswrapper[4853]: I0127 18:43:22.453822 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:22Z","lastTransitionTime":"2026-01-27T18:43:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:43:22 crc kubenswrapper[4853]: I0127 18:43:22.556731 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:22 crc kubenswrapper[4853]: I0127 18:43:22.556823 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:22 crc kubenswrapper[4853]: I0127 18:43:22.557031 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:22 crc kubenswrapper[4853]: I0127 18:43:22.557067 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:22 crc kubenswrapper[4853]: I0127 18:43:22.557092 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:22Z","lastTransitionTime":"2026-01-27T18:43:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:22 crc kubenswrapper[4853]: I0127 18:43:22.660283 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:22 crc kubenswrapper[4853]: I0127 18:43:22.660362 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:22 crc kubenswrapper[4853]: I0127 18:43:22.660385 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:22 crc kubenswrapper[4853]: I0127 18:43:22.660416 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:22 crc kubenswrapper[4853]: I0127 18:43:22.660438 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:22Z","lastTransitionTime":"2026-01-27T18:43:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:43:22 crc kubenswrapper[4853]: I0127 18:43:22.763767 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:22 crc kubenswrapper[4853]: I0127 18:43:22.763825 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:22 crc kubenswrapper[4853]: I0127 18:43:22.763846 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:22 crc kubenswrapper[4853]: I0127 18:43:22.763870 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:22 crc kubenswrapper[4853]: I0127 18:43:22.763889 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:22Z","lastTransitionTime":"2026-01-27T18:43:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:43:22 crc kubenswrapper[4853]: I0127 18:43:22.866046 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:22 crc kubenswrapper[4853]: I0127 18:43:22.866431 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:22 crc kubenswrapper[4853]: I0127 18:43:22.866442 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:22 crc kubenswrapper[4853]: I0127 18:43:22.866456 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:22 crc kubenswrapper[4853]: I0127 18:43:22.866465 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:22Z","lastTransitionTime":"2026-01-27T18:43:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:22 crc kubenswrapper[4853]: I0127 18:43:22.969587 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:22 crc kubenswrapper[4853]: I0127 18:43:22.969671 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:22 crc kubenswrapper[4853]: I0127 18:43:22.969693 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:22 crc kubenswrapper[4853]: I0127 18:43:22.969723 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:22 crc kubenswrapper[4853]: I0127 18:43:22.969749 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:22Z","lastTransitionTime":"2026-01-27T18:43:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:43:23 crc kubenswrapper[4853]: I0127 18:43:23.072646 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:23 crc kubenswrapper[4853]: I0127 18:43:23.072899 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:23 crc kubenswrapper[4853]: I0127 18:43:23.072981 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:23 crc kubenswrapper[4853]: I0127 18:43:23.073101 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:23 crc kubenswrapper[4853]: I0127 18:43:23.073204 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:23Z","lastTransitionTime":"2026-01-27T18:43:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:43:23 crc kubenswrapper[4853]: I0127 18:43:23.096214 4853 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-06 06:45:52.07485216 +0000 UTC Jan 27 18:43:23 crc kubenswrapper[4853]: I0127 18:43:23.112160 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 18:43:23 crc kubenswrapper[4853]: I0127 18:43:23.112220 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 18:43:23 crc kubenswrapper[4853]: E0127 18:43:23.112319 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 18:43:23 crc kubenswrapper[4853]: I0127 18:43:23.112354 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 18:43:23 crc kubenswrapper[4853]: E0127 18:43:23.112486 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 18:43:23 crc kubenswrapper[4853]: E0127 18:43:23.112590 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 18:43:23 crc kubenswrapper[4853]: I0127 18:43:23.112891 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wdzg4" Jan 27 18:43:23 crc kubenswrapper[4853]: E0127 18:43:23.113233 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-wdzg4" podUID="29407244-fbfe-4d37-a33e-7d59df1c22fd" Jan 27 18:43:23 crc kubenswrapper[4853]: I0127 18:43:23.176494 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:23 crc kubenswrapper[4853]: I0127 18:43:23.176832 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:23 crc kubenswrapper[4853]: I0127 18:43:23.176990 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:23 crc kubenswrapper[4853]: I0127 18:43:23.177251 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:23 crc kubenswrapper[4853]: I0127 18:43:23.177433 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:23Z","lastTransitionTime":"2026-01-27T18:43:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:23 crc kubenswrapper[4853]: I0127 18:43:23.281801 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:23 crc kubenswrapper[4853]: I0127 18:43:23.281864 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:23 crc kubenswrapper[4853]: I0127 18:43:23.281885 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:23 crc kubenswrapper[4853]: I0127 18:43:23.281913 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:23 crc kubenswrapper[4853]: I0127 18:43:23.281937 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:23Z","lastTransitionTime":"2026-01-27T18:43:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:43:23 crc kubenswrapper[4853]: I0127 18:43:23.385100 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:23 crc kubenswrapper[4853]: I0127 18:43:23.385196 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:23 crc kubenswrapper[4853]: I0127 18:43:23.385219 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:23 crc kubenswrapper[4853]: I0127 18:43:23.385255 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:23 crc kubenswrapper[4853]: I0127 18:43:23.385277 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:23Z","lastTransitionTime":"2026-01-27T18:43:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:43:23 crc kubenswrapper[4853]: I0127 18:43:23.488108 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:23 crc kubenswrapper[4853]: I0127 18:43:23.488219 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:23 crc kubenswrapper[4853]: I0127 18:43:23.488240 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:23 crc kubenswrapper[4853]: I0127 18:43:23.488269 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:23 crc kubenswrapper[4853]: I0127 18:43:23.488290 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:23Z","lastTransitionTime":"2026-01-27T18:43:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:23 crc kubenswrapper[4853]: I0127 18:43:23.591282 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:23 crc kubenswrapper[4853]: I0127 18:43:23.591384 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:23 crc kubenswrapper[4853]: I0127 18:43:23.591405 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:23 crc kubenswrapper[4853]: I0127 18:43:23.591437 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:23 crc kubenswrapper[4853]: I0127 18:43:23.591462 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:23Z","lastTransitionTime":"2026-01-27T18:43:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:43:23 crc kubenswrapper[4853]: I0127 18:43:23.694903 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:23 crc kubenswrapper[4853]: I0127 18:43:23.694957 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:23 crc kubenswrapper[4853]: I0127 18:43:23.694969 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:23 crc kubenswrapper[4853]: I0127 18:43:23.694988 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:23 crc kubenswrapper[4853]: I0127 18:43:23.695003 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:23Z","lastTransitionTime":"2026-01-27T18:43:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:43:23 crc kubenswrapper[4853]: I0127 18:43:23.797986 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:23 crc kubenswrapper[4853]: I0127 18:43:23.798045 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:23 crc kubenswrapper[4853]: I0127 18:43:23.798063 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:23 crc kubenswrapper[4853]: I0127 18:43:23.798086 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:23 crc kubenswrapper[4853]: I0127 18:43:23.798105 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:23Z","lastTransitionTime":"2026-01-27T18:43:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:23 crc kubenswrapper[4853]: I0127 18:43:23.900723 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:23 crc kubenswrapper[4853]: I0127 18:43:23.900786 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:23 crc kubenswrapper[4853]: I0127 18:43:23.900803 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:23 crc kubenswrapper[4853]: I0127 18:43:23.900825 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:23 crc kubenswrapper[4853]: I0127 18:43:23.900843 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:23Z","lastTransitionTime":"2026-01-27T18:43:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:43:24 crc kubenswrapper[4853]: I0127 18:43:24.003731 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:24 crc kubenswrapper[4853]: I0127 18:43:24.003811 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:24 crc kubenswrapper[4853]: I0127 18:43:24.003836 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:24 crc kubenswrapper[4853]: I0127 18:43:24.003865 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:24 crc kubenswrapper[4853]: I0127 18:43:24.003888 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:24Z","lastTransitionTime":"2026-01-27T18:43:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:24 crc kubenswrapper[4853]: I0127 18:43:24.096384 4853 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-19 12:10:02.611891192 +0000 UTC Jan 27 18:43:24 crc kubenswrapper[4853]: I0127 18:43:24.106213 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:24 crc kubenswrapper[4853]: I0127 18:43:24.106260 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:24 crc kubenswrapper[4853]: I0127 18:43:24.106277 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:24 crc kubenswrapper[4853]: I0127 18:43:24.106298 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:24 crc kubenswrapper[4853]: I0127 18:43:24.106312 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:24Z","lastTransitionTime":"2026-01-27T18:43:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:43:24 crc kubenswrapper[4853]: I0127 18:43:24.142780 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:24 crc kubenswrapper[4853]: I0127 18:43:24.142840 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:24 crc kubenswrapper[4853]: I0127 18:43:24.142857 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:24 crc kubenswrapper[4853]: I0127 18:43:24.142882 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:24 crc kubenswrapper[4853]: I0127 18:43:24.142898 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:24Z","lastTransitionTime":"2026-01-27T18:43:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:24 crc kubenswrapper[4853]: E0127 18:43:24.158795 4853 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:43:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:43:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:24Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:43:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:43:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:24Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"10ff71d2-3e1e-470a-b646-0c487d7259d5\\\",\\\"systemUUID\\\":\\\"f3eb2985-2316-42c4-9a73-507610f5aaf9\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:24Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:24 crc kubenswrapper[4853]: I0127 18:43:24.163745 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:24 crc kubenswrapper[4853]: I0127 18:43:24.163798 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 27 18:43:24 crc kubenswrapper[4853]: I0127 18:43:24.163811 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:24 crc kubenswrapper[4853]: I0127 18:43:24.163829 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:24 crc kubenswrapper[4853]: I0127 18:43:24.163841 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:24Z","lastTransitionTime":"2026-01-27T18:43:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:43:24 crc kubenswrapper[4853]: E0127 18:43:24.182518 4853 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:43:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:43:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:24Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:43:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:43:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:24Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"10ff71d2-3e1e-470a-b646-0c487d7259d5\\\",\\\"systemUUID\\\":\\\"f3eb2985-2316-42c4-9a73-507610f5aaf9\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:24Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:24 crc kubenswrapper[4853]: I0127 18:43:24.188212 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:24 crc kubenswrapper[4853]: I0127 18:43:24.188266 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 27 18:43:24 crc kubenswrapper[4853]: I0127 18:43:24.188283 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:24 crc kubenswrapper[4853]: I0127 18:43:24.188307 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:24 crc kubenswrapper[4853]: I0127 18:43:24.188325 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:24Z","lastTransitionTime":"2026-01-27T18:43:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:43:24 crc kubenswrapper[4853]: E0127 18:43:24.208627 4853 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:43:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:43:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:24Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:43:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:43:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:24Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"10ff71d2-3e1e-470a-b646-0c487d7259d5\\\",\\\"systemUUID\\\":\\\"f3eb2985-2316-42c4-9a73-507610f5aaf9\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:24Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:24 crc kubenswrapper[4853]: I0127 18:43:24.213758 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:24 crc kubenswrapper[4853]: I0127 18:43:24.213811 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 27 18:43:24 crc kubenswrapper[4853]: I0127 18:43:24.213829 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:24 crc kubenswrapper[4853]: I0127 18:43:24.213854 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:24 crc kubenswrapper[4853]: I0127 18:43:24.213872 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:24Z","lastTransitionTime":"2026-01-27T18:43:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:43:24 crc kubenswrapper[4853]: E0127 18:43:24.229849 4853 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:43:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:43:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:24Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:43:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:43:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:24Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"10ff71d2-3e1e-470a-b646-0c487d7259d5\\\",\\\"systemUUID\\\":\\\"f3eb2985-2316-42c4-9a73-507610f5aaf9\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:24Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:24 crc kubenswrapper[4853]: I0127 18:43:24.233928 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:24 crc kubenswrapper[4853]: I0127 18:43:24.233963 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 27 18:43:24 crc kubenswrapper[4853]: I0127 18:43:24.233977 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:24 crc kubenswrapper[4853]: I0127 18:43:24.233994 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:24 crc kubenswrapper[4853]: I0127 18:43:24.234007 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:24Z","lastTransitionTime":"2026-01-27T18:43:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:43:24 crc kubenswrapper[4853]: E0127 18:43:24.247886 4853 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:43:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:43:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:24Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:43:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:43:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:24Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[ ... image list identical to the previous patch attempt, elided ... ],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"10ff71d2-3e1e-470a-b646-0c487d7259d5\\\",\\\"systemUUID\\\":\\\"f3eb2985-2316-42c4-9a73-507610f5aaf9\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:24Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:24 crc kubenswrapper[4853]: E0127 18:43:24.248014 4853 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 27 18:43:24 crc kubenswrapper[4853]: I0127 18:43:24.249662 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 27 18:43:24 crc kubenswrapper[4853]: I0127 18:43:24.249715 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:24 crc kubenswrapper[4853]: I0127 18:43:24.249724 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:24 crc kubenswrapper[4853]: I0127 18:43:24.249740 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:24 crc kubenswrapper[4853]: I0127 18:43:24.249751 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:24Z","lastTransitionTime":"2026-01-27T18:43:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:43:24 crc kubenswrapper[4853]: I0127 18:43:24.352437 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:24 crc kubenswrapper[4853]: I0127 18:43:24.352480 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:24 crc kubenswrapper[4853]: I0127 18:43:24.352488 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:24 crc kubenswrapper[4853]: I0127 18:43:24.352503 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:24 crc kubenswrapper[4853]: I0127 18:43:24.352513 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:24Z","lastTransitionTime":"2026-01-27T18:43:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:43:24 crc kubenswrapper[4853]: I0127 18:43:24.455675 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:24 crc kubenswrapper[4853]: I0127 18:43:24.455735 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:24 crc kubenswrapper[4853]: I0127 18:43:24.455750 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:24 crc kubenswrapper[4853]: I0127 18:43:24.455774 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:24 crc kubenswrapper[4853]: I0127 18:43:24.455790 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:24Z","lastTransitionTime":"2026-01-27T18:43:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:24 crc kubenswrapper[4853]: I0127 18:43:24.558778 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:24 crc kubenswrapper[4853]: I0127 18:43:24.558807 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:24 crc kubenswrapper[4853]: I0127 18:43:24.558815 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:24 crc kubenswrapper[4853]: I0127 18:43:24.558828 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:24 crc kubenswrapper[4853]: I0127 18:43:24.558839 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:24Z","lastTransitionTime":"2026-01-27T18:43:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:43:24 crc kubenswrapper[4853]: I0127 18:43:24.662661 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:24 crc kubenswrapper[4853]: I0127 18:43:24.662735 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:24 crc kubenswrapper[4853]: I0127 18:43:24.662749 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:24 crc kubenswrapper[4853]: I0127 18:43:24.662769 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:24 crc kubenswrapper[4853]: I0127 18:43:24.662785 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:24Z","lastTransitionTime":"2026-01-27T18:43:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:43:24 crc kubenswrapper[4853]: I0127 18:43:24.766544 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:24 crc kubenswrapper[4853]: I0127 18:43:24.766626 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:24 crc kubenswrapper[4853]: I0127 18:43:24.766647 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:24 crc kubenswrapper[4853]: I0127 18:43:24.766676 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:24 crc kubenswrapper[4853]: I0127 18:43:24.766702 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:24Z","lastTransitionTime":"2026-01-27T18:43:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:24 crc kubenswrapper[4853]: I0127 18:43:24.870462 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:24 crc kubenswrapper[4853]: I0127 18:43:24.870543 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:24 crc kubenswrapper[4853]: I0127 18:43:24.870563 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:24 crc kubenswrapper[4853]: I0127 18:43:24.870592 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:24 crc kubenswrapper[4853]: I0127 18:43:24.870614 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:24Z","lastTransitionTime":"2026-01-27T18:43:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:43:24 crc kubenswrapper[4853]: I0127 18:43:24.975079 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:24 crc kubenswrapper[4853]: I0127 18:43:24.975161 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:24 crc kubenswrapper[4853]: I0127 18:43:24.975172 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:24 crc kubenswrapper[4853]: I0127 18:43:24.975189 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:24 crc kubenswrapper[4853]: I0127 18:43:24.975206 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:24Z","lastTransitionTime":"2026-01-27T18:43:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:43:25 crc kubenswrapper[4853]: I0127 18:43:25.078537 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:25 crc kubenswrapper[4853]: I0127 18:43:25.078592 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:25 crc kubenswrapper[4853]: I0127 18:43:25.078606 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:25 crc kubenswrapper[4853]: I0127 18:43:25.078625 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:25 crc kubenswrapper[4853]: I0127 18:43:25.078641 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:25Z","lastTransitionTime":"2026-01-27T18:43:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:25 crc kubenswrapper[4853]: I0127 18:43:25.097307 4853 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-21 17:18:00.454651281 +0000 UTC Jan 27 18:43:25 crc kubenswrapper[4853]: I0127 18:43:25.111797 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 18:43:25 crc kubenswrapper[4853]: I0127 18:43:25.111861 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 18:43:25 crc kubenswrapper[4853]: I0127 18:43:25.111861 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wdzg4" Jan 27 18:43:25 crc kubenswrapper[4853]: I0127 18:43:25.111861 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 18:43:25 crc kubenswrapper[4853]: E0127 18:43:25.111986 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 18:43:25 crc kubenswrapper[4853]: E0127 18:43:25.112077 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-wdzg4" podUID="29407244-fbfe-4d37-a33e-7d59df1c22fd" Jan 27 18:43:25 crc kubenswrapper[4853]: E0127 18:43:25.112143 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 18:43:25 crc kubenswrapper[4853]: E0127 18:43:25.112211 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 18:43:25 crc kubenswrapper[4853]: I0127 18:43:25.181399 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:25 crc kubenswrapper[4853]: I0127 18:43:25.181478 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:25 crc kubenswrapper[4853]: I0127 18:43:25.181507 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:25 crc kubenswrapper[4853]: I0127 18:43:25.181542 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:25 crc kubenswrapper[4853]: I0127 18:43:25.181573 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:25Z","lastTransitionTime":"2026-01-27T18:43:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:43:25 crc kubenswrapper[4853]: I0127 18:43:25.260939 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/29407244-fbfe-4d37-a33e-7d59df1c22fd-metrics-certs\") pod \"network-metrics-daemon-wdzg4\" (UID: \"29407244-fbfe-4d37-a33e-7d59df1c22fd\") " pod="openshift-multus/network-metrics-daemon-wdzg4" Jan 27 18:43:25 crc kubenswrapper[4853]: E0127 18:43:25.261340 4853 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 27 18:43:25 crc kubenswrapper[4853]: E0127 18:43:25.261402 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/29407244-fbfe-4d37-a33e-7d59df1c22fd-metrics-certs podName:29407244-fbfe-4d37-a33e-7d59df1c22fd nodeName:}" failed. No retries permitted until 2026-01-27 18:43:33.261385949 +0000 UTC m=+55.723928832 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/29407244-fbfe-4d37-a33e-7d59df1c22fd-metrics-certs") pod "network-metrics-daemon-wdzg4" (UID: "29407244-fbfe-4d37-a33e-7d59df1c22fd") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 27 18:43:25 crc kubenswrapper[4853]: I0127 18:43:25.284806 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:25 crc kubenswrapper[4853]: I0127 18:43:25.284877 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:25 crc kubenswrapper[4853]: I0127 18:43:25.284889 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:25 crc kubenswrapper[4853]: I0127 18:43:25.284915 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:25 crc kubenswrapper[4853]: I0127 18:43:25.284930 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:25Z","lastTransitionTime":"2026-01-27T18:43:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:43:25 crc kubenswrapper[4853]: I0127 18:43:25.388651 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:25 crc kubenswrapper[4853]: I0127 18:43:25.388730 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:25 crc kubenswrapper[4853]: I0127 18:43:25.388754 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:25 crc kubenswrapper[4853]: I0127 18:43:25.388790 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:25 crc kubenswrapper[4853]: I0127 18:43:25.388812 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:25Z","lastTransitionTime":"2026-01-27T18:43:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:25 crc kubenswrapper[4853]: I0127 18:43:25.491790 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:25 crc kubenswrapper[4853]: I0127 18:43:25.491871 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:25 crc kubenswrapper[4853]: I0127 18:43:25.491894 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:25 crc kubenswrapper[4853]: I0127 18:43:25.491928 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:25 crc kubenswrapper[4853]: I0127 18:43:25.491954 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:25Z","lastTransitionTime":"2026-01-27T18:43:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:43:25 crc kubenswrapper[4853]: I0127 18:43:25.595009 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:25 crc kubenswrapper[4853]: I0127 18:43:25.595081 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:25 crc kubenswrapper[4853]: I0127 18:43:25.595105 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:25 crc kubenswrapper[4853]: I0127 18:43:25.595178 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:25 crc kubenswrapper[4853]: I0127 18:43:25.595204 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:25Z","lastTransitionTime":"2026-01-27T18:43:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:43:25 crc kubenswrapper[4853]: I0127 18:43:25.698224 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:25 crc kubenswrapper[4853]: I0127 18:43:25.698280 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:25 crc kubenswrapper[4853]: I0127 18:43:25.698293 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:25 crc kubenswrapper[4853]: I0127 18:43:25.698313 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:25 crc kubenswrapper[4853]: I0127 18:43:25.698326 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:25Z","lastTransitionTime":"2026-01-27T18:43:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:25 crc kubenswrapper[4853]: I0127 18:43:25.801694 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:25 crc kubenswrapper[4853]: I0127 18:43:25.801753 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:25 crc kubenswrapper[4853]: I0127 18:43:25.801772 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:25 crc kubenswrapper[4853]: I0127 18:43:25.801793 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:25 crc kubenswrapper[4853]: I0127 18:43:25.801806 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:25Z","lastTransitionTime":"2026-01-27T18:43:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:43:25 crc kubenswrapper[4853]: I0127 18:43:25.905029 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:25 crc kubenswrapper[4853]: I0127 18:43:25.905083 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:25 crc kubenswrapper[4853]: I0127 18:43:25.905093 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:25 crc kubenswrapper[4853]: I0127 18:43:25.905110 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:25 crc kubenswrapper[4853]: I0127 18:43:25.905154 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:25Z","lastTransitionTime":"2026-01-27T18:43:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:43:26 crc kubenswrapper[4853]: I0127 18:43:26.008577 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:26 crc kubenswrapper[4853]: I0127 18:43:26.008628 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:26 crc kubenswrapper[4853]: I0127 18:43:26.008639 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:26 crc kubenswrapper[4853]: I0127 18:43:26.008656 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:26 crc kubenswrapper[4853]: I0127 18:43:26.008668 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:26Z","lastTransitionTime":"2026-01-27T18:43:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:26 crc kubenswrapper[4853]: I0127 18:43:26.098615 4853 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-07 11:53:47.98143734 +0000 UTC Jan 27 18:43:26 crc kubenswrapper[4853]: I0127 18:43:26.111798 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:26 crc kubenswrapper[4853]: I0127 18:43:26.111849 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:26 crc kubenswrapper[4853]: I0127 18:43:26.111860 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:26 crc kubenswrapper[4853]: I0127 18:43:26.111910 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:26 crc kubenswrapper[4853]: I0127 18:43:26.111921 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:26Z","lastTransitionTime":"2026-01-27T18:43:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:43:26 crc kubenswrapper[4853]: I0127 18:43:26.215059 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:26 crc kubenswrapper[4853]: I0127 18:43:26.215187 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:26 crc kubenswrapper[4853]: I0127 18:43:26.215209 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:26 crc kubenswrapper[4853]: I0127 18:43:26.215241 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:26 crc kubenswrapper[4853]: I0127 18:43:26.215261 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:26Z","lastTransitionTime":"2026-01-27T18:43:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:26 crc kubenswrapper[4853]: I0127 18:43:26.317829 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:26 crc kubenswrapper[4853]: I0127 18:43:26.318355 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:26 crc kubenswrapper[4853]: I0127 18:43:26.318719 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:26 crc kubenswrapper[4853]: I0127 18:43:26.318825 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:26 crc kubenswrapper[4853]: I0127 18:43:26.318912 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:26Z","lastTransitionTime":"2026-01-27T18:43:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:43:26 crc kubenswrapper[4853]: I0127 18:43:26.376592 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 18:43:26 crc kubenswrapper[4853]: I0127 18:43:26.393639 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eccf4b23-863a-490c-ba35-1b03d360e200\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7cfb91267c8977bea938d9584cf8047ae6f6b4bff6a9d74c623f8903cd3416ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58d32f27b39964fe7195a0e25264b28a01c6c29753912edac25d07eef4f37cc7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-op
erator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a941071ee01b24389cae64dfc6cd851cde6b352121235ebf9e9bec77f8bcfb69\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae13abce33d48960f367ee4160e730c0f88cd877bd0d615cecac63d2a35b8cc5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://213668bb8a0478fe7fba2a64a45dd4aec0dfd6794a29977633b4e0fd1bcbe352\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T18:42:58Z\\\",\\\"message\\\":\\\"le observer\\\\nW0127 18:42:57.781245 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 18:42:57.782660 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 18:42:57.784271 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3951302993/tls.crt::/tmp/serving-cert-3951302993/tls.key\\\\\\\"\\\\nI0127 18:42:58.344426 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 18:42:58.346603 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 18:42:58.346626 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 18:42:58.346650 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 18:42:58.346655 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 18:42:58.351830 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 18:42:58.351854 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 18:42:58.351860 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 18:42:58.351866 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 18:42:58.351870 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 18:42:58.351874 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nI0127 18:42:58.351873 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0127 18:42:58.351879 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0127 18:42:58.354790 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:52Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:43:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0058055ea5c0242bdb042c45a00203d344383e845de280c81bc7695416563666\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:40Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://00d05692e328aacfc251e3b583e1da6d1f4f677e93d07cec2aaa7aea5c3038cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://00d05692e328aacfc251e3b583e1da6d1f4f677e93d07cec2aaa7aea5c3038cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:42:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:42:38Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:26Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:26 crc kubenswrapper[4853]: I0127 18:43:26.418535 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a5b9c54a-360e-431b-b35a-eea549616487\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e57571b0b75d3ac6eaa28ddbc474ddf47f632c158372a965dfe4c30f54d8a2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2dcf34028a2a1a0d41f57ebced90e711bfa6d0f199a2b12bc2bcd6d0642d10b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://14e9331353060208631cfca86d748339935f8581d50a234f2864eaffdd98ff39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7d4896cb23e4a91356e7fabb5dfcf05e46bf7e
8c34b6bd73d63a16422d015ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a02f84ba42baba137145a54bfa742b12b0f94aaf41801347eb8a803a6acd1b2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15ab34ce94a12a65d8b187ce438f27e37d4b45e0b9ed080c79d6e4633e777658\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://15ab34ce94a12a65d8b187ce438f27e37d4b45e0b9ed080c79d6e4633e777658\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:42:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d972c76164f33d83bd6d211da2b4ecec08c5d88dd4cd474c5d5a8122649ea1cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d972c76164f33d83bd6d211da2b4ecec08c5d88dd4cd474c5d5a8122649ea1cd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:42:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:40Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b9435046e30c8cda89d2ab6f312406bc2cc6db0c03ae6cdea1eb5acf8a804df1\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9435046e30c8cda89d2ab6f312406bc2cc6db0c03ae6cdea1eb5acf8a804df1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:42:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:42:38Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:26Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:26 crc kubenswrapper[4853]: I0127 18:43:26.425804 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:26 crc kubenswrapper[4853]: I0127 18:43:26.425875 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:26 crc kubenswrapper[4853]: I0127 18:43:26.425894 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:26 crc kubenswrapper[4853]: I0127 18:43:26.425921 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:26 crc kubenswrapper[4853]: I0127 18:43:26.425940 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:26Z","lastTransitionTime":"2026-01-27T18:43:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:26 crc kubenswrapper[4853]: I0127 18:43:26.433698 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-l59xt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f9e82bc6-1fab-4815-a64e-2ebbf8b72315\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d6f7b775ee615931d713c3fcf51828673cd8414a08c46c04eb38abd37c58d5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:43:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnhhx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:43:03Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-l59xt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:26Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:26 crc kubenswrapper[4853]: I0127 18:43:26.452610 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ght98" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ce95829c-f3fb-493c-bf9a-a3515fe6ddac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://918b69aa50072e0227c96f268fe68b5dfc90acf2b8b93b7fdee73695fc6cbab0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:43:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttzfq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10a6c7c9dd5d82c1b1f5f2d323a2f530c6b2e18362eedbbe74cc8c570732331f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10a6c7c9dd5d82c1b1f5f2d323a2f530c6b2e18362eedbbe74cc8c570732331f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:43:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttzfq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b1e8c337bda8a6b6c0848db7c95140385d545fc7dd729f002d21c248e2bbf881\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b1e8c337bda8a6b6c0848db7c95140385d545fc7dd729f002d21c248e2bbf881\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:43:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:43:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttzfq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6f0216b6ea3819db267200bb464f15f2f7e4de1c65df0871d6f7e70ecb54839\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d6f0216b6ea3819db267200bb464f15f2f7e4de1c65df0871d6f7e70ecb54839\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:43:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:43:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttzfq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://205b9e4e915af8e2031c48180f4d56d6075e94f8a8ff8b40fe6b46229fb1aa03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://205b9e4e915af8e2031c48180f4d56d6075e94f8a8ff8b40fe6b46229fb1aa03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:43:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:43:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttzfq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://12c64d2f527e9e9bd0975df15cbeca44c11d215f2591052b030b6afa87600576\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://12c64d2f527e9e9bd0975df15cbeca44c11d215f2591052b030b6afa87600576\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:43:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:43:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttzfq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c8f991246bd43ecaff0d08c9fb12a3347d52de48db0ea8f8a674bbdc9945c13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3c8f991246bd43ecaff0d08c9fb12a3347d52de48db0ea8f8a674bbdc9945c13\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:43:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:43:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttzfq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:43:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ght98\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:26Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:26 crc kubenswrapper[4853]: I0127 18:43:26.467529 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-wdzg4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"29407244-fbfe-4d37-a33e-7d59df1c22fd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgqf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgqf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:43:17Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-wdzg4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:26Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:26 crc kubenswrapper[4853]: I0127 18:43:26.489216 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f980f1d965582a58bd18d993850b5bd5f21849c1d1c0275c81fdbb25d549e6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:26Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:26 crc kubenswrapper[4853]: I0127 18:43:26.505545 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5a0b2fe5dbcc81055c001720c8ab69b7bd1989fe60d9ef4063c96c53a3f4536\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:26Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:26 crc kubenswrapper[4853]: I0127 18:43:26.519563 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-2hzcp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c088d143-dd9c-4c77-b9a3-3a0113306f41\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b1b85e99c8381dbf8e86fb211770c49ebd11497f15c73254d6482b45d507654\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:43:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2zqtv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:43:03Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-2hzcp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:26Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:26 crc kubenswrapper[4853]: I0127 18:43:26.528902 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:26 crc kubenswrapper[4853]: I0127 18:43:26.528957 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:26 crc kubenswrapper[4853]: I0127 18:43:26.528965 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:26 crc kubenswrapper[4853]: I0127 18:43:26.528981 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:26 crc kubenswrapper[4853]: I0127 18:43:26.529016 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:26Z","lastTransitionTime":"2026-01-27T18:43:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:43:26 crc kubenswrapper[4853]: I0127 18:43:26.533694 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-6gqj2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b8a89b1e-bef8-4cb7-930c-480d3125778c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d886a21d3eb5af71f0dc43a77a71a2d80c11b89013b8b77dd9e680bed9c09a54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:43:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xbhn6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://36ec3ff8ba2e6f89b4c9a8177dc986ce198013a4b5392fc600a191a9d854241a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:43:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xbhn6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:43:03Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-6gqj2\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:26Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:26 crc kubenswrapper[4853]: I0127 18:43:26.565011 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hdtbk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ebbc7598-422a-43ad-ae98-88e57ec80b9c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a5e0da6c76e9510cda57fa243b0a721d160745a63e88a9aa736807af73864d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:43:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cq4vs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8de5b4d8d6553f77b012954fddfcb337c9b25ba98d94ef27831b50f63672377\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:43:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"r
eadOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cq4vs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7937ea08bd25bed35d9386a8c870c88ff3f58eeec1ba1a2c55bdfa260017f9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:43:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cq4vs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f8095ca05481aa2d17d10ae848c2d052452f3bfa83b6ac23a75d0f59d84a604\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:43:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cq4vs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://efa308f95f35833395528dbe46b9e3d8f25800c18126c75d3db793f9c7945d30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:43:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cq4vs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"
containerID\\\":\\\"cri-o://5a23ced79c532f6fcb0f4efcf743b934f7640deb3a7b1b879032416ee2c9b8d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:43:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cq4vs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://67f2a4c435a7d4fb4616583e4e2c87238448540b0df9b0733e31e1e67b375982\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://474b23d45ca214a859faee68cfad6bf9e641b0e682b3e11f89e6b6994c75a544\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T18:43:13Z\\\",\\\"message\\\":\\\"pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI0127 18:43:13.342490 6202 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0127 18:43:13.342789 6202 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0127 18:43:13.342924 6202 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0127 18:43:13.343244 6202 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0127 18:43:13.343274 6202 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0127 18:43:13.343282 6202 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0127 18:43:13.343329 6202 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0127 18:43:13.343344 6202 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0127 18:43:13.343333 6202 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0127 18:43:13.343357 6202 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0127 18:43:13.343387 6202 factory.go:656] Stopping watch factory\\\\nI0127 18:43:13.343409 6202 ovnkube.go:599] Stopped ovnkube\\\\nI0127 
18:43:1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T18:43:10Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://67f2a4c435a7d4fb4616583e4e2c87238448540b0df9b0733e31e1e67b375982\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T18:43:15Z\\\",\\\"message\\\":\\\"ork/v1/apis/informers/externalversions/factory.go:140\\\\nI0127 18:43:15.183193 6344 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0127 18:43:15.183213 6344 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0127 18:43:15.183246 6344 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0127 18:43:15.183257 6344 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0127 18:43:15.183276 6344 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0127 18:43:15.183289 6344 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0127 18:43:15.183292 6344 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0127 18:43:15.183317 6344 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0127 18:43:15.183335 6344 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0127 18:43:15.183339 6344 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0127 18:43:15.183358 6344 factory.go:656] Stopping watch factory\\\\nI0127 18:43:15.183371 6344 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0127 18:43:15.183373 6344 ovnkube.go:599] Stopped ovnkube\\\\nI0127 18:43:15.183372 6344 handler.go:208] Removed *v1.Node event handler 2\\\\nI0127 18:43:15.183379 6344 handler.go:208] Removed *v1.NetworkPolicy event handler 
4\\\\nI01\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T18:43:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cq4vs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4da02162adae947a3ab62fcbeba04da031f5189c42947da27ec21df5a480b4b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:43:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cq4vs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e81785fa8e88ca85b42840c9f11efd5774320c7b13588e01695e428426ecda4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47e
f0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e81785fa8e88ca85b42840c9f11efd5774320c7b13588e01695e428426ecda4d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:43:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cq4vs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:43:04Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hdtbk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:26Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:26 crc kubenswrapper[4853]: I0127 18:43:26.578907 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c77668e-2627-46e1-b7f5-a99455378d85\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6f38d96da9d500a022fd5b7fbe4019623d08fb9733ba3b1a5f2f107f1901a28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ae32d32392bded0188e0b83c05a33e57dc10be293ddee84b8b12416d0e64fc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d3472024
3b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd606f868e97e5c0f90517a0de662da2512679e27b0ececbcaa02c9f7e79c4b6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b210c08a6d15dd6d40daabfcc4cb94985e2a3957ffa501b32d5331f3d20f8908\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:42:38Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:26Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:26 crc kubenswrapper[4853]: I0127 18:43:26.594017 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:26Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:26 crc kubenswrapper[4853]: I0127 18:43:26.611377 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:26Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:26 crc kubenswrapper[4853]: I0127 18:43:26.624537 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:26Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:26 crc kubenswrapper[4853]: I0127 18:43:26.631436 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:26 crc kubenswrapper[4853]: I0127 18:43:26.631474 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:26 crc kubenswrapper[4853]: I0127 18:43:26.631509 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:26 crc kubenswrapper[4853]: I0127 18:43:26.631530 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:26 crc kubenswrapper[4853]: I0127 18:43:26.631542 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:26Z","lastTransitionTime":"2026-01-27T18:43:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:26 crc kubenswrapper[4853]: I0127 18:43:26.639697 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d7d5a0e2cb909a09c6325d43fed80f9ac231f17c4904fda6da22031d974a50b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7fba84b5abd317487efa9758191c6d4e0e878809711ccecdd2f98c2c7659c890\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:26Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:26 crc kubenswrapper[4853]: I0127 18:43:26.652527 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-w4d5n" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dd2c07de-2ac9-4074-9fb0-519cfaf37f69\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9662d5a5a46a3043eeddbb21a4c7da0545beb87a0da7da3e96dfe3b3cf803430\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:43:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8jkvb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:43:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-w4d5n\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:26Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:26 crc kubenswrapper[4853]: I0127 18:43:26.665407 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7x9tl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"99e56db9-b380-4e21-9810-2dfa1517d5ac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://94b4164fce297bdb91f8d062c22463931e45e9194e17e4102f568e6f04c08680\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:43:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sbc8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec4d041ed140516bb311297a6618188794b4950b0199a05f3a028215b75b2dfd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:43:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sbc8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\
\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:43:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-7x9tl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:26Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:26 crc kubenswrapper[4853]: I0127 18:43:26.734270 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:26 crc kubenswrapper[4853]: I0127 18:43:26.734328 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:26 crc kubenswrapper[4853]: I0127 18:43:26.734347 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:26 crc kubenswrapper[4853]: I0127 18:43:26.734370 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:26 crc kubenswrapper[4853]: I0127 18:43:26.734387 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:26Z","lastTransitionTime":"2026-01-27T18:43:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:43:26 crc kubenswrapper[4853]: I0127 18:43:26.837192 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:26 crc kubenswrapper[4853]: I0127 18:43:26.837262 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:26 crc kubenswrapper[4853]: I0127 18:43:26.837301 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:26 crc kubenswrapper[4853]: I0127 18:43:26.837319 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:26 crc kubenswrapper[4853]: I0127 18:43:26.837331 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:26Z","lastTransitionTime":"2026-01-27T18:43:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:26 crc kubenswrapper[4853]: I0127 18:43:26.940340 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:26 crc kubenswrapper[4853]: I0127 18:43:26.940374 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:26 crc kubenswrapper[4853]: I0127 18:43:26.940386 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:26 crc kubenswrapper[4853]: I0127 18:43:26.940404 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:26 crc kubenswrapper[4853]: I0127 18:43:26.940415 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:26Z","lastTransitionTime":"2026-01-27T18:43:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:43:27 crc kubenswrapper[4853]: I0127 18:43:27.043280 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:27 crc kubenswrapper[4853]: I0127 18:43:27.043328 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:27 crc kubenswrapper[4853]: I0127 18:43:27.043340 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:27 crc kubenswrapper[4853]: I0127 18:43:27.043355 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:27 crc kubenswrapper[4853]: I0127 18:43:27.043364 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:27Z","lastTransitionTime":"2026-01-27T18:43:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:43:27 crc kubenswrapper[4853]: I0127 18:43:27.099315 4853 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-15 06:34:46.29580883 +0000 UTC Jan 27 18:43:27 crc kubenswrapper[4853]: I0127 18:43:27.111832 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wdzg4" Jan 27 18:43:27 crc kubenswrapper[4853]: I0127 18:43:27.111876 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 18:43:27 crc kubenswrapper[4853]: I0127 18:43:27.111876 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 18:43:27 crc kubenswrapper[4853]: I0127 18:43:27.111912 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 18:43:27 crc kubenswrapper[4853]: E0127 18:43:27.112012 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-wdzg4" podUID="29407244-fbfe-4d37-a33e-7d59df1c22fd" Jan 27 18:43:27 crc kubenswrapper[4853]: E0127 18:43:27.112268 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 18:43:27 crc kubenswrapper[4853]: E0127 18:43:27.112402 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 18:43:27 crc kubenswrapper[4853]: E0127 18:43:27.112490 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 18:43:27 crc kubenswrapper[4853]: I0127 18:43:27.146032 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:27 crc kubenswrapper[4853]: I0127 18:43:27.146070 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:27 crc kubenswrapper[4853]: I0127 18:43:27.146078 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:27 crc kubenswrapper[4853]: I0127 18:43:27.146092 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:27 crc kubenswrapper[4853]: I0127 18:43:27.146101 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:27Z","lastTransitionTime":"2026-01-27T18:43:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:27 crc kubenswrapper[4853]: I0127 18:43:27.249570 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:27 crc kubenswrapper[4853]: I0127 18:43:27.249654 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:27 crc kubenswrapper[4853]: I0127 18:43:27.249675 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:27 crc kubenswrapper[4853]: I0127 18:43:27.249708 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:27 crc kubenswrapper[4853]: I0127 18:43:27.249731 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:27Z","lastTransitionTime":"2026-01-27T18:43:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:43:27 crc kubenswrapper[4853]: I0127 18:43:27.352429 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:27 crc kubenswrapper[4853]: I0127 18:43:27.352486 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:27 crc kubenswrapper[4853]: I0127 18:43:27.352499 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:27 crc kubenswrapper[4853]: I0127 18:43:27.352524 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:27 crc kubenswrapper[4853]: I0127 18:43:27.352540 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:27Z","lastTransitionTime":"2026-01-27T18:43:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:43:27 crc kubenswrapper[4853]: I0127 18:43:27.455208 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:27 crc kubenswrapper[4853]: I0127 18:43:27.455295 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:27 crc kubenswrapper[4853]: I0127 18:43:27.455318 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:27 crc kubenswrapper[4853]: I0127 18:43:27.455350 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:27 crc kubenswrapper[4853]: I0127 18:43:27.455370 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:27Z","lastTransitionTime":"2026-01-27T18:43:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:27 crc kubenswrapper[4853]: I0127 18:43:27.558968 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:27 crc kubenswrapper[4853]: I0127 18:43:27.559047 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:27 crc kubenswrapper[4853]: I0127 18:43:27.559064 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:27 crc kubenswrapper[4853]: I0127 18:43:27.559095 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:27 crc kubenswrapper[4853]: I0127 18:43:27.559115 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:27Z","lastTransitionTime":"2026-01-27T18:43:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:43:27 crc kubenswrapper[4853]: I0127 18:43:27.662157 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:27 crc kubenswrapper[4853]: I0127 18:43:27.662243 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:27 crc kubenswrapper[4853]: I0127 18:43:27.662257 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:27 crc kubenswrapper[4853]: I0127 18:43:27.662282 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:27 crc kubenswrapper[4853]: I0127 18:43:27.662297 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:27Z","lastTransitionTime":"2026-01-27T18:43:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:43:27 crc kubenswrapper[4853]: I0127 18:43:27.765144 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:27 crc kubenswrapper[4853]: I0127 18:43:27.765202 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:27 crc kubenswrapper[4853]: I0127 18:43:27.765214 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:27 crc kubenswrapper[4853]: I0127 18:43:27.765236 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:27 crc kubenswrapper[4853]: I0127 18:43:27.765250 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:27Z","lastTransitionTime":"2026-01-27T18:43:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:27 crc kubenswrapper[4853]: I0127 18:43:27.869356 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:27 crc kubenswrapper[4853]: I0127 18:43:27.869434 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:27 crc kubenswrapper[4853]: I0127 18:43:27.869456 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:27 crc kubenswrapper[4853]: I0127 18:43:27.869484 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:27 crc kubenswrapper[4853]: I0127 18:43:27.869504 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:27Z","lastTransitionTime":"2026-01-27T18:43:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:43:27 crc kubenswrapper[4853]: I0127 18:43:27.974046 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:27 crc kubenswrapper[4853]: I0127 18:43:27.974132 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:27 crc kubenswrapper[4853]: I0127 18:43:27.974146 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:27 crc kubenswrapper[4853]: I0127 18:43:27.974166 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:27 crc kubenswrapper[4853]: I0127 18:43:27.974187 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:27Z","lastTransitionTime":"2026-01-27T18:43:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:43:28 crc kubenswrapper[4853]: I0127 18:43:28.077139 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:28 crc kubenswrapper[4853]: I0127 18:43:28.077189 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:28 crc kubenswrapper[4853]: I0127 18:43:28.077200 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:28 crc kubenswrapper[4853]: I0127 18:43:28.077216 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:28 crc kubenswrapper[4853]: I0127 18:43:28.077227 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:28Z","lastTransitionTime":"2026-01-27T18:43:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:28 crc kubenswrapper[4853]: I0127 18:43:28.099824 4853 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-02 23:35:59.097635367 +0000 UTC Jan 27 18:43:28 crc kubenswrapper[4853]: I0127 18:43:28.113601 4853 scope.go:117] "RemoveContainer" containerID="67f2a4c435a7d4fb4616583e4e2c87238448540b0df9b0733e31e1e67b375982" Jan 27 18:43:28 crc kubenswrapper[4853]: I0127 18:43:28.125280 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:28Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:28 crc kubenswrapper[4853]: I0127 18:43:28.143419 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d7d5a0e2cb909a09c6325d43fed80f9ac231f17c4904fda6da22031d974a50b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7fba84b5abd317487efa9758191c6d4e0e878809711ccecdd2f98c2c7659c890\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:28Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:28 crc kubenswrapper[4853]: I0127 18:43:28.157917 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-w4d5n" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dd2c07de-2ac9-4074-9fb0-519cfaf37f69\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9662d5a5a46a3043eeddbb21a4c7da0545beb87a0da7da3e96dfe3b3cf803430\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:43:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc
/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8jkvb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:43:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-w4d5n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:28Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:28 crc kubenswrapper[4853]: I0127 18:43:28.168098 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7x9tl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"99e56db9-b380-4e21-9810-2dfa1517d5ac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://94b4164fce297bdb91f8d062c22463931e45e9194e17e4102f568e6f04c08680\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:43:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sbc8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec4d041ed140516bb311297a6618188794b4950b0199a05f3a028215b75b2dfd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:43:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sbc8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:43:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-7x9tl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:28Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:28 crc kubenswrapper[4853]: I0127 18:43:28.177959 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:28Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:28 crc kubenswrapper[4853]: I0127 18:43:28.181130 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:28 crc kubenswrapper[4853]: I0127 18:43:28.181161 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:28 crc kubenswrapper[4853]: I0127 18:43:28.181170 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:28 crc kubenswrapper[4853]: I0127 18:43:28.181380 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:28 crc kubenswrapper[4853]: I0127 18:43:28.181389 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:28Z","lastTransitionTime":"2026-01-27T18:43:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:28 crc kubenswrapper[4853]: I0127 18:43:28.201430 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a5b9c54a-360e-431b-b35a-eea549616487\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e57571b0b75d3ac6eaa28ddbc474ddf47f632c158372a965dfe4c30f54d8a2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2dcf34028a2a1a0d41f57ebced90e711bfa6d0f199a2b12bc2bcd6d0642d10b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://14e9331353060208631cfca86d748339935f8581d50a234f2864eaffdd98ff39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7d4896cb23e4a91356e7fabb5dfcf05e46bf7e8c34b6bd73d63a16422d015ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a02f84ba42baba137145a54bfa742b12b0f94aaf41801347eb8a803a6acd1b2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15ab34ce94a12a65d8b187ce438f27e37d4b45e0b9ed080c79d6e4633e777658\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://15ab34ce94a12a65d8b187ce438f27e37d4b45e0b9ed080c79d6e4633e777658\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:42:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d972c76164f33d83bd6d211da2b4ecec08c5d88dd4cd474c5d5a8122649ea1cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d972c76164f33d83bd6d211da2b4ecec08c5d88dd4cd474c5d5a8122649ea1cd\\\",\\\"exitCode\\\":0,\\\"finished
At\\\":\\\"2026-01-27T18:42:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:40Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b9435046e30c8cda89d2ab6f312406bc2cc6db0c03ae6cdea1eb5acf8a804df1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9435046e30c8cda89d2ab6f312406bc2cc6db0c03ae6cdea1eb5acf8a804df1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:42:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:42:38Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:28Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:28 crc kubenswrapper[4853]: I0127 18:43:28.212220 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-l59xt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f9e82bc6-1fab-4815-a64e-2ebbf8b72315\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d6f7b775ee615931d713c3fcf51828673cd8414a08c46c04eb38abd37c58d5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:43:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnhhx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:43:03Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-l59xt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:28Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:28 crc kubenswrapper[4853]: I0127 18:43:28.226863 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ght98" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ce95829c-f3fb-493c-bf9a-a3515fe6ddac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://918b69aa50072e0227c96f268fe68b5dfc90acf2b8b93b7fdee73695fc6cbab0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:43:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttzfq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10a6c7c9dd5d82c1b1f5f2d323a2f530c6b2e18362eedbbe74cc8c570732331f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10a6c7c9dd5d82c1b1f5f2d323a2f530c6b2e18362eedbbe74cc8c570732331f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:43:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttzfq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b1e8c337bda8a6b6c0848db7c95140385d545fc7dd729f002d21c248e2bbf881\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b1e8c337bda8a6b6c0848db7c95140385d545fc7dd729f002d21c248e2bbf881\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:43:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:43:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttzfq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6f0216b6ea3819db267200bb464f15f2f7e4de1c65df0871d6f7e70ecb54839\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d6f0216b6ea3819db267200bb464f15f2f7e4de1c65df0871d6f7e70ecb54839\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:43:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:43:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttzfq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://205b9e4e915af8e2031c48180f4d56d6075e94f8a8ff8b40fe6b46229fb1aa03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://205b9e4e915af8e2031c48180f4d56d6075e94f8a8ff8b40fe6b46229fb1aa03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:43:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:43:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttzfq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://12c64d2f527e9e9bd0975df15cbeca44c11d215f2591052b030b6afa87600576\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://12c64d2f527e9e9bd0975df15cbeca44c11d215f2591052b030b6afa87600576\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:43:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:43:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttzfq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c8f991246bd43ecaff0d08c9fb12a3347d52de48db0ea8f8a674bbdc9945c13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3c8f991246bd43ecaff0d08c9fb12a3347d52de48db0ea8f8a674bbdc9945c13\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:43:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:43:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttzfq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:43:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ght98\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:28Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:28 crc kubenswrapper[4853]: I0127 18:43:28.243195 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"eccf4b23-863a-490c-ba35-1b03d360e200\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7cfb91267c8977bea938d9584cf8047ae6f6b4bff6a9d74c623f8903cd3416ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58d32f27b39964fe7195a0e25264b28a01c6c29753912edac25d07eef4f37cc7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a941071ee01b24389cae64dfc6cd851cde6b352121235ebf9e9bec77f8bcfb69\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae13abce33d48960f367ee4160e730c0f88cd877bd0d615cecac63d2a35b8cc5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://213668bb8a0478fe7fba2a64a45dd4aec0dfd6794a29977633b4e0fd1bcbe352\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T18:42:58Z\\\",\\\"message\\\":\\\"le observer\\\\nW0127 18:42:57.781245 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 18:42:57.782660 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 18:42:57.784271 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3951302993/tls.crt::/tmp/serving-cert-3951302993/tls.key\\\\\\\"\\\\nI0127 18:42:58.344426 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 18:42:58.346603 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 18:42:58.346626 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 18:42:58.346650 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 18:42:58.346655 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 18:42:58.351830 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 18:42:58.351854 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 18:42:58.351860 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 18:42:58.351866 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 18:42:58.351870 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 18:42:58.351874 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nI0127 18:42:58.351873 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0127 18:42:58.351879 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0127 18:42:58.354790 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:52Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:43:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0058055ea5c0242bdb042c45a00203d344383e845de280c81bc7695416563666\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:40Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://00d05692e328aacfc251e3b583e1da6d1f4f677e93d07cec2aaa7aea5c3038cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://00d05692e328aacfc251e3b583e1da6d1f4f677e93d07cec2aaa7aea5c3038cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:42:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:42:38Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:28Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:28 crc kubenswrapper[4853]: I0127 18:43:28.254074 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5a0b2fe5dbcc81055c001720c8ab69b7bd1989fe60d9ef4063c96c53a3f4536\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:28Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:28 crc kubenswrapper[4853]: I0127 18:43:28.264331 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-2hzcp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c088d143-dd9c-4c77-b9a3-3a0113306f41\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b1b85e99c8381dbf8e86fb211770c49ebd11497f15c73254d6482b45d507654\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:43:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2zqtv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:43:03Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-2hzcp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:28Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:28 crc kubenswrapper[4853]: I0127 18:43:28.274693 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-6gqj2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b8a89b1e-bef8-4cb7-930c-480d3125778c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d886a21d3eb5af71f0dc43a77a71a2d80c11b89013b8b77dd9e680bed9c09a54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:43:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xbhn6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://36ec3ff8ba2e6f89b4c9a8177dc986ce198013a4b5392fc600a191a9d854241a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:43:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xbhn6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:43:03Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-6gqj2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:28Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:28 crc kubenswrapper[4853]: I0127 18:43:28.283561 4853 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:28 crc kubenswrapper[4853]: I0127 18:43:28.283597 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:28 crc kubenswrapper[4853]: I0127 18:43:28.283607 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:28 crc kubenswrapper[4853]: I0127 18:43:28.283637 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:28 crc kubenswrapper[4853]: I0127 18:43:28.283647 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:28Z","lastTransitionTime":"2026-01-27T18:43:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:43:28 crc kubenswrapper[4853]: I0127 18:43:28.292697 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hdtbk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ebbc7598-422a-43ad-ae98-88e57ec80b9c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a5e0da6c76e9510cda57fa243b0a721d160745a63e88a9aa736807af73864d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:43:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cq4vs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8de5b4d8d6553f77b012954fddfcb337c9b25ba98d94ef27831b50f63672377\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:43:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cq4vs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7937ea08bd25bed35d9386a8c870c88ff3f58eeec1ba1a2c55bdfa260017f9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:43:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cq4vs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f8095ca05481aa2d17d10ae848c2d052452f3bfa83b6ac23a75d0f59d84a604\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:43:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cq4vs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://efa308f95f35833395528dbe46b9e3d8f25800c18126c75d3db793f9c7945d30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:43:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cq4vs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a23ced79c532f6fcb0f4efcf743b934f7640deb3a7b1b879032416ee2c9b8d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:43:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cq4vs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://67f2a4c435a7d4fb4616583e4e2c87238448540b
0df9b0733e31e1e67b375982\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://474b23d45ca214a859faee68cfad6bf9e641b0e682b3e11f89e6b6994c75a544\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T18:43:13Z\\\",\\\"message\\\":\\\"pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI0127 18:43:13.342490 6202 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0127 18:43:13.342789 6202 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0127 18:43:13.342924 6202 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0127 18:43:13.343244 6202 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0127 18:43:13.343274 6202 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0127 18:43:13.343282 6202 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0127 18:43:13.343329 6202 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0127 18:43:13.343344 6202 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0127 18:43:13.343333 6202 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0127 18:43:13.343357 6202 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0127 18:43:13.343387 6202 factory.go:656] Stopping watch factory\\\\nI0127 18:43:13.343409 6202 ovnkube.go:599] Stopped ovnkube\\\\nI0127 18:43:1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T18:43:10Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://67f2a4c435a7d4fb4616583e4e2c87238448540b0df9b0733e31e1e67b375982\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T18:43:15Z\\\",\\\"message\\\":\\\"ork/v1/apis/informers/externalversions/factory.go:140\\\\nI0127 18:43:15.183193 6344 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0127 18:43:15.183213 6344 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0127 18:43:15.183246 6344 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0127 18:43:15.183257 6344 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0127 18:43:15.183276 6344 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0127 18:43:15.183289 6344 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0127 18:43:15.183292 6344 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0127 18:43:15.183317 6344 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0127 18:43:15.183335 6344 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0127 18:43:15.183339 6344 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0127 18:43:15.183358 6344 factory.go:656] Stopping watch factory\\\\nI0127 18:43:15.183371 6344 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0127 18:43:15.183373 6344 ovnkube.go:599] Stopped ovnkube\\\\nI0127 
18:43:15.183372 6344 handler.go:208] Removed *v1.Node event handler 2\\\\nI0127 18:43:15.183379 6344 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI01\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T18:43:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cq4vs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4da02162adae947a3ab62fcbeba04da031f5189c42947da27ec21df5a480b4b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:43:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cq4vs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-
o://e81785fa8e88ca85b42840c9f11efd5774320c7b13588e01695e428426ecda4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e81785fa8e88ca85b42840c9f11efd5774320c7b13588e01695e428426ecda4d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:43:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cq4vs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:43:04Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hdtbk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:28Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:28 crc kubenswrapper[4853]: I0127 18:43:28.306277 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-wdzg4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"29407244-fbfe-4d37-a33e-7d59df1c22fd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgqf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgqf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:43:17Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-wdzg4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:28Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:28 crc kubenswrapper[4853]: I0127 18:43:28.319056 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f980f1d965582a58bd18d993850b5bd5f21849c1d1c0275c81fdbb25d549e6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:28Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:28 crc kubenswrapper[4853]: I0127 18:43:28.333863 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c77668e-2627-46e1-b7f5-a99455378d85\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6f38d96da9d500a022fd5b7fbe4019623d08fb9733ba3b1a5f2f107f1901a28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ae32d32392bded0188e0b83c05a33e57dc10be293ddee84b8b12416d0e64fc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd606f868e97e5c0f90517a0de662da2512679e27b0ececbcaa02c9f7e79c4b6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b210c08a6d15dd6d40daabfcc4cb94985e2a3957ffa501b32d5331f3d20f8908\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:42:38Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:28Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:28 crc kubenswrapper[4853]: I0127 18:43:28.346491 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:28Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:28 crc kubenswrapper[4853]: I0127 18:43:28.360447 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ght98" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ce95829c-f3fb-493c-bf9a-a3515fe6ddac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://918b69aa50072e0227c96f268fe68b5dfc90acf2b8b93b7fdee73695fc6cbab0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:43:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttzfq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10a6c7c9dd5d82c1b1f5f2d323a2f530c6b2e18362eedbbe74cc8c570732331f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"start
ed\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10a6c7c9dd5d82c1b1f5f2d323a2f530c6b2e18362eedbbe74cc8c570732331f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:43:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttzfq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b1e8c337bda8a6b6c0848db7c95140385d545fc7dd729f002d21c248e2bbf881\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b1e8c337bda8a6b6c0848db7c95140385d545fc7dd729f002d21c248e2bbf881\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:43:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:43:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttzfq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6f0216b6ea3819db267200bb464f15f2f7e4de1c65df0871d6f7e70ecb54839\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d6f0216b6ea3819db267200bb464f15f2f7e4de1c65df0871d6f7e70ecb54839\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:43:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:43:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttzfq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},
{\\\"containerID\\\":\\\"cri-o://205b9e4e915af8e2031c48180f4d56d6075e94f8a8ff8b40fe6b46229fb1aa03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://205b9e4e915af8e2031c48180f4d56d6075e94f8a8ff8b40fe6b46229fb1aa03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:43:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:43:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttzfq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://12c64d2f527e9e9bd0975df15cbeca44c11d215f2591052b030b6afa87600576\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://12c64d2f527e9e9bd0975df15cbeca44c11d215f2591052b030b6afa87600576\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:43:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:43:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttzfq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c8f991246bd43ecaff0d08c9fb12a3347d52de48db0ea8f8a674bbdc9945c13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3c8f991246bd43ecaff0d08c9fb12a3347d52de48db0ea8f8a674bbdc9945c13\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:43:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:43:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"
system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttzfq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:43:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ght98\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:28Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:28 crc kubenswrapper[4853]: I0127 18:43:28.372933 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eccf4b23-863a-490c-ba35-1b03d360e200\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7cfb91267c8977bea938d9584cf8047ae6f6b4bff6a9d74c623f8903cd3416ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58d32f27b39964fe7195a0e25264b28a01c6c29753912edac25d07eef4f37cc7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a941071ee01b24389cae
64dfc6cd851cde6b352121235ebf9e9bec77f8bcfb69\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae13abce33d48960f367ee4160e730c0f88cd877bd0d615cecac63d2a35b8cc5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://213668bb8a0478fe7fba2a64a45dd4aec0dfd6794a29977633b4e0fd1bcbe352\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T18:42:58Z\\\",\\\"message\\\":\\\"le observer\\\\nW0127 18:42:57.781245 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 18:42:57.782660 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 18:42:57.784271 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3951302993/tls.crt::/tmp/serving-cert-3951302993/tls.key\\\\\\\"\\\\nI0127 18:42:58.344426 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 18:42:58.346603 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 18:42:58.346626 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 18:42:58.346650 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 18:42:58.346655 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 18:42:58.351830 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 18:42:58.351854 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 18:42:58.351860 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 18:42:58.351866 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 18:42:58.351870 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 18:42:58.351874 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nI0127 18:42:58.351873 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0127 18:42:58.351879 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0127 18:42:58.354790 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:52Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:43:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0058055ea5c0242bdb042c45a00203d344383e845de280c81bc7695416563666\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:40Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://00d05692e328aacfc251e3b583e1da6d1f4f677e93d07cec2aaa7aea5c3038cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://00d05692e328aacfc251e3b583e1da6d1f4f677e93d07cec2aaa7aea5c3038cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:42:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:42:38Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:28Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:28 crc kubenswrapper[4853]: I0127 18:43:28.386296 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:28 crc kubenswrapper[4853]: I0127 18:43:28.386335 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:28 crc kubenswrapper[4853]: I0127 18:43:28.386344 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:28 crc kubenswrapper[4853]: I0127 18:43:28.386359 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:28 crc kubenswrapper[4853]: I0127 18:43:28.386368 4853 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:28Z","lastTransitionTime":"2026-01-27T18:43:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:43:28 crc kubenswrapper[4853]: I0127 18:43:28.398632 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a5b9c54a-360e-431b-b35a-eea549616487\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e57571b0b75d3ac6eaa28ddbc474ddf47f632c158372a965dfe4c30f54d8a2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2dcf34028a2a1a0d41f57ebced90e711bfa6d0f199a2b12bc2bcd6d0642d10b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://14e9331353060208631cfca86d748339935f8581d50a234f2864eaffdd98ff39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/open
shift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7d4896cb23e4a91356e7fabb5dfcf05e46bf7e8c34b6bd73d63a16422d015ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a02f84ba42baba137145a54bfa742b12b0f94aaf41801347eb8a803a6acd1b2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15ab34ce94a12a65d8b187ce438f27e37d4b45e0b9ed080c79d6e4633e777658\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://15ab34ce94a12a65d8b187ce438f27e37d4b45e0b9ed080c79d6e4633e777658\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:42:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d972c76164f33d83bd6d211da2b4ecec08c5d88dd4cd474c5d5a8122649ea1cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd
6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d972c76164f33d83bd6d211da2b4ecec08c5d88dd4cd474c5d5a8122649ea1cd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:42:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:40Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b9435046e30c8cda89d2ab6f312406bc2cc6db0c03ae6cdea1eb5acf8a804df1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9435046e30c8cda89d2ab6f312406bc2cc6db0c03ae6cdea1eb5acf8a804df1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:42:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:42:38Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:28Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:28 crc kubenswrapper[4853]: I0127 18:43:28.410208 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-l59xt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f9e82bc6-1fab-4815-a64e-2ebbf8b72315\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d6f7b775ee615931d713c3fcf51828673cd8414a08c46c04eb38abd37c58d5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:43:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnhhx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:43:03Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-l59xt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:28Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:28 crc kubenswrapper[4853]: I0127 18:43:28.427870 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hdtbk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ebbc7598-422a-43ad-ae98-88e57ec80b9c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a5e0da6c76e9510cda57fa243b0a721d160745a63e88a9aa736807af73864d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:43:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cq4vs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8de5b4d8d6553f77b012954fddfcb337c9b25ba98d94ef27831b50f63672377\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:43:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cq4vs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7937ea08bd25bed35d9386a8c870c88ff3f58eeec1ba1a2c55bdfa260017f9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:43:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-cq4vs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f8095ca05481aa2d17d10ae848c2d052452f3bfa83b6ac23a75d0f59d84a604\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:43:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cq4vs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://efa308f95f35833395528dbe46b9e3d8f25800c18126c75d3db793f9c7945d30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:43:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cq4vs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a23ced79c532f6fcb0f4efcf743b934f7640deb3a7b1b879032416ee2c9b8d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:43:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\
"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cq4vs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://67f2a4c435a7d4fb4616583e4e2c87238448540b0df9b0733e31e1e67b375982\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://67f2a4c435a7d4fb4616583e4e2c87238448540b0df9b0733e31e1e67b375982\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T18:43:15Z\\\",\\\"message\\\":\\\"ork/v1/apis/informers/externalversions/factory.go:140\\\\nI0127 18:43:15.183193 6344 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0127 18:43:15.183213 6344 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0127 18:43:15.183246 6344 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0127 18:43:15.183257 6344 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0127 18:43:15.183276 6344 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0127 18:43:15.183289 6344 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0127 18:43:15.183292 6344 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0127 18:43:15.183317 6344 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0127 18:43:15.183335 6344 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0127 18:43:15.183339 6344 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0127 18:43:15.183358 6344 factory.go:656] Stopping watch factory\\\\nI0127 18:43:15.183371 6344 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0127 18:43:15.183373 6344 ovnkube.go:599] Stopped ovnkube\\\\nI0127 18:43:15.183372 6344 handler.go:208] Removed *v1.Node event handler 2\\\\nI0127 18:43:15.183379 6344 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI01\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T18:43:14Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-hdtbk_openshift-ovn-kubernetes(ebbc7598-422a-43ad-ae98-88e57ec80b9c)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cq4vs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4da02162adae947a3ab62fcbeba04da031f5189c42947da27ec21df5a480b4b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:43:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cq4vs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e81785fa8e88ca85b42840c9f11efd5774320c7b13588e01695e428426ecda4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e81785fa8e88ca85b42840c9f11efd5774320c7b13588e01695e428426ecda4d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:43:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cq4vs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:43:04Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hdtbk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:28Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:28 crc kubenswrapper[4853]: I0127 18:43:28.439479 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-wdzg4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"29407244-fbfe-4d37-a33e-7d59df1c22fd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgqf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgqf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:43:17Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-wdzg4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:28Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:28 crc kubenswrapper[4853]: I0127 18:43:28.448862 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-hdtbk_ebbc7598-422a-43ad-ae98-88e57ec80b9c/ovnkube-controller/1.log" Jan 27 18:43:28 crc kubenswrapper[4853]: I0127 18:43:28.451768 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f980f1d965582a58bd18d993850b5bd5f21849c1d1c0275c81fdbb25d549e6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:28Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:28 crc kubenswrapper[4853]: I0127 18:43:28.454158 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hdtbk" event={"ID":"ebbc7598-422a-43ad-ae98-88e57ec80b9c","Type":"ContainerStarted","Data":"9ef638d607e6dd3da7728f611dbadcc220fbecf3e6d8c85d5911cfd1ebe2cac0"} Jan 27 18:43:28 crc kubenswrapper[4853]: I0127 18:43:28.454395 4853 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 27 18:43:28 crc kubenswrapper[4853]: I0127 18:43:28.464186 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5a0b2fe5dbcc81055c001720c8ab69b7bd1989fe60d9ef4063c96c53a3f4536\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:28Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:28 crc kubenswrapper[4853]: I0127 18:43:28.473361 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-2hzcp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c088d143-dd9c-4c77-b9a3-3a0113306f41\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b1b85e99c8381dbf8e86fb211770c49ebd11497f15c73254d6482b45d507654\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:43:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2zqtv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:43:03Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-2hzcp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:28Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:28 crc kubenswrapper[4853]: I0127 18:43:28.482011 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-6gqj2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b8a89b1e-bef8-4cb7-930c-480d3125778c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d886a21d3eb5af71f0dc43a77a71a2d80c11b89013b8b77dd9e680bed9c09a54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:43:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xbhn6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://36ec3ff8ba2e6f89b4c9a8177dc986ce198013a4b5392fc600a191a9d854241a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:43:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xbhn6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:43:03Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-6gqj2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:28Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:28 crc kubenswrapper[4853]: I0127 18:43:28.488721 4853 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:28 crc kubenswrapper[4853]: I0127 18:43:28.488752 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:28 crc kubenswrapper[4853]: I0127 18:43:28.488761 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:28 crc kubenswrapper[4853]: I0127 18:43:28.488775 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:28 crc kubenswrapper[4853]: I0127 18:43:28.488785 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:28Z","lastTransitionTime":"2026-01-27T18:43:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:43:28 crc kubenswrapper[4853]: I0127 18:43:28.492535 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c77668e-2627-46e1-b7f5-a99455378d85\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6f38d96da9d500a022fd5b7fbe4019623d08fb9733ba3b1a5f2f107f1901a28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ae32d32392bded0188e0b83c05a33e57dc10be293ddee84b8b12416d0e64fc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\
\\"startedAt\\\":\\\"2026-01-27T18:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd606f868e97e5c0f90517a0de662da2512679e27b0ececbcaa02c9f7e79c4b6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b210c08a6d15dd6d40daabfcc4cb94985e2a3957ffa501b32d5331f3d20f8908\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:42:38Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:28Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:28 crc kubenswrapper[4853]: I0127 18:43:28.503343 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:28Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:28 crc kubenswrapper[4853]: I0127 18:43:28.516087 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7x9tl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"99e56db9-b380-4e21-9810-2dfa1517d5ac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://94b4164fce297bdb91f8d062c22463931e45e9194e17e4102f568e6f04c08680\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:43:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sbc8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec4d041ed140516bb311297a6618188794b4950b0199a05f3a028215b75b2dfd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:43:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sbc8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:43:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-7x9tl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:28Z is after 2025-08-24T17:21:41Z" Jan 27 
18:43:28 crc kubenswrapper[4853]: I0127 18:43:28.528618 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:28Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:28 crc kubenswrapper[4853]: I0127 18:43:28.542401 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:28Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:28 crc kubenswrapper[4853]: I0127 18:43:28.556899 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d7d5a0e2cb909a09c6325d43fed80f9ac231f17c4904fda6da22031d974a50b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7fba84b5abd317487efa9758191c6d4e0e878809711ccecdd2f98c2c7659c890\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imag
eID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:28Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:28 crc kubenswrapper[4853]: I0127 18:43:28.569641 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-w4d5n" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dd2c07de-2ac9-4074-9fb0-519cfaf37f69\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9662d5a5a46a3043eeddbb21a4c7da0545beb87a0da7da3e96dfe3b3cf803430\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:43:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\"
:\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8jkvb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:43:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-w4d5n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:28Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:28 crc kubenswrapper[4853]: I0127 18:43:28.589516 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"eccf4b23-863a-490c-ba35-1b03d360e200\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7cfb91267c8977bea938d9584cf8047ae6f6b4bff6a9d74c623f8903cd3416ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58d32f27b39964fe7195a0e25264b28a01c6c29753912edac25d07eef4f37cc7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a941071ee01b24389cae64dfc6cd851cde6b352121235ebf9e9bec77f8bcfb69\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae13abce33d48960f367ee4160e730c0f88cd877bd0d615cecac63d2a35b8cc5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://213668bb8a0478fe7fba2a64a45dd4aec0dfd6794a29977633b4e0fd1bcbe352\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T18:42:58Z\\\",\\\"message\\\":\\\"le observer\\\\nW0127 18:42:57.781245 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 18:42:57.782660 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 18:42:57.784271 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3951302993/tls.crt::/tmp/serving-cert-3951302993/tls.key\\\\\\\"\\\\nI0127 18:42:58.344426 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 18:42:58.346603 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 18:42:58.346626 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 18:42:58.346650 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 18:42:58.346655 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 18:42:58.351830 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 18:42:58.351854 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 18:42:58.351860 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 18:42:58.351866 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 18:42:58.351870 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 18:42:58.351874 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nI0127 18:42:58.351873 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0127 18:42:58.351879 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0127 18:42:58.354790 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:52Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:43:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0058055ea5c0242bdb042c45a00203d344383e845de280c81bc7695416563666\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:40Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://00d05692e328aacfc251e3b583e1da6d1f4f677e93d07cec2aaa7aea5c3038cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://00d05692e328aacfc251e3b583e1da6d1f4f677e93d07cec2aaa7aea5c3038cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:42:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:42:38Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:28Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:28 crc kubenswrapper[4853]: I0127 18:43:28.590794 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:28 crc kubenswrapper[4853]: I0127 18:43:28.590821 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:28 crc kubenswrapper[4853]: I0127 18:43:28.590832 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:28 crc kubenswrapper[4853]: I0127 18:43:28.590847 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:28 crc kubenswrapper[4853]: I0127 18:43:28.590858 4853 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:28Z","lastTransitionTime":"2026-01-27T18:43:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:43:28 crc kubenswrapper[4853]: I0127 18:43:28.612968 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a5b9c54a-360e-431b-b35a-eea549616487\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e57571b0b75d3ac6eaa28ddbc474ddf47f632c158372a965dfe4c30f54d8a2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2dcf34028a2a1a0d41f57ebced90e711bfa6d0f199a2b12bc2bcd6d0642d10b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://14e9331353060208631cfca86d748339935f8581d50a234f2864eaffdd98ff39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/open
shift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7d4896cb23e4a91356e7fabb5dfcf05e46bf7e8c34b6bd73d63a16422d015ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a02f84ba42baba137145a54bfa742b12b0f94aaf41801347eb8a803a6acd1b2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15ab34ce94a12a65d8b187ce438f27e37d4b45e0b9ed080c79d6e4633e777658\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://15ab34ce94a12a65d8b187ce438f27e37d4b45e0b9ed080c79d6e4633e777658\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:42:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d972c76164f33d83bd6d211da2b4ecec08c5d88dd4cd474c5d5a8122649ea1cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd
6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d972c76164f33d83bd6d211da2b4ecec08c5d88dd4cd474c5d5a8122649ea1cd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:42:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:40Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b9435046e30c8cda89d2ab6f312406bc2cc6db0c03ae6cdea1eb5acf8a804df1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9435046e30c8cda89d2ab6f312406bc2cc6db0c03ae6cdea1eb5acf8a804df1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:42:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:42:38Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:28Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:28 crc kubenswrapper[4853]: I0127 18:43:28.631571 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-l59xt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f9e82bc6-1fab-4815-a64e-2ebbf8b72315\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d6f7b775ee615931d713c3fcf51828673cd8414a08c46c04eb38abd37c58d5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:43:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnhhx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:43:03Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-l59xt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:28Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:28 crc kubenswrapper[4853]: I0127 18:43:28.647387 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ght98" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ce95829c-f3fb-493c-bf9a-a3515fe6ddac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://918b69aa50072e0227c96f268fe68b5dfc90acf2b8b93b7fdee73695fc6cbab0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:43:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttzfq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10a6c7c9dd5d82c1b1f5f2d323a2f530c6b2e18362eedbbe74cc8c570732331f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10a6c7c9dd5d82c1b1f5f2d323a2f530c6b2e18362eedbbe74cc8c570732331f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:43:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttzfq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b1e8c337bda8a6b6c0848db7c95140385d545fc7dd729f002d21c248e2bbf881\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b1e8c337bda8a6b6c0848db7c95140385d545fc7dd729f002d21c248e2bbf881\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:43:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:43:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttzfq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6f0216b6ea3819db267200bb464f15f2f7e4de1c65df0871d6f7e70ecb54839\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d6f0216b6ea3819db267200bb464f15f2f7e4de1c65df0871d6f7e70ecb54839\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:43:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:43:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttzfq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://205b9e4e915af8e2031c48180f4d56d6075e94f8a8ff8b40fe6b46229fb1aa03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://205b9e4e915af8e2031c48180f4d56d6075e94f8a8ff8b40fe6b46229fb1aa03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:43:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:43:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttzfq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://12c64d2f527e9e9bd0975df15cbeca44c11d215f2591052b030b6afa87600576\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://12c64d2f527e9e9bd0975df15cbeca44c11d215f2591052b030b6afa87600576\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:43:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:43:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttzfq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c8f991246bd43ecaff0d08c9fb12a3347d52de48db0ea8f8a674bbdc9945c13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3c8f991246bd43ecaff0d08c9fb12a3347d52de48db0ea8f8a674bbdc9945c13\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:43:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:43:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttzfq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:43:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ght98\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:28Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:28 crc kubenswrapper[4853]: I0127 18:43:28.661900 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f980f1d965582a58bd18d993850b5bd5f21849c1d1c0275c81fdbb25d549e6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:28Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:28 crc kubenswrapper[4853]: I0127 18:43:28.672973 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5a0b2fe5dbcc81055c001720c8ab69b7bd1989fe60d9ef4063c96c53a3f4536\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:28Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:28 crc kubenswrapper[4853]: I0127 18:43:28.683546 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-2hzcp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c088d143-dd9c-4c77-b9a3-3a0113306f41\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b1b85e99c8381dbf8e86fb211770c49ebd11497f15c73254d6482b45d507654\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:43:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2zqtv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:43:03Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-2hzcp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:28Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:28 crc kubenswrapper[4853]: I0127 18:43:28.693412 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:28 crc kubenswrapper[4853]: I0127 18:43:28.693618 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:28 crc kubenswrapper[4853]: I0127 18:43:28.693685 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:28 crc kubenswrapper[4853]: I0127 18:43:28.693750 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:28 crc kubenswrapper[4853]: I0127 18:43:28.693805 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:28Z","lastTransitionTime":"2026-01-27T18:43:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:43:28 crc kubenswrapper[4853]: I0127 18:43:28.695861 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-6gqj2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b8a89b1e-bef8-4cb7-930c-480d3125778c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d886a21d3eb5af71f0dc43a77a71a2d80c11b89013b8b77dd9e680bed9c09a54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:43:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xbhn6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://36ec3ff8ba2e6f89b4c9a8177dc986ce198013a4b5392fc600a191a9d854241a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:43:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xbhn6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:43:03Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-6gqj2\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:28Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:28 crc kubenswrapper[4853]: I0127 18:43:28.715283 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hdtbk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ebbc7598-422a-43ad-ae98-88e57ec80b9c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a5e0da6c76e9510cda57fa243b0a721d160745a63e88a9aa736807af73864d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:43:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cq4vs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8de5b4d8d6553f77b012954fddfcb337c9b25ba98d94ef27831b50f63672377\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:43:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"r
eadOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cq4vs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7937ea08bd25bed35d9386a8c870c88ff3f58eeec1ba1a2c55bdfa260017f9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:43:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cq4vs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f8095ca05481aa2d17d10ae848c2d052452f3bfa83b6ac23a75d0f59d84a604\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:43:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cq4vs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://efa308f95f35833395528dbe46b9e3d8f25800c18126c75d3db793f9c7945d30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:43:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cq4vs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"
containerID\\\":\\\"cri-o://5a23ced79c532f6fcb0f4efcf743b934f7640deb3a7b1b879032416ee2c9b8d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:43:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cq4vs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ef638d607e6dd3da7728f611dbadcc220fbecf3e6d8c85d5911cfd1ebe2cac0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://67f2a4c435a7d4fb4616583e4e2c87238448540b0df9b0733e31e1e67b375982\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T18:43:15Z\\\",\\\"message\\\":\\\"ork/v1/apis/informers/externalversions/factory.go:140\\\\nI0127 18:43:15.183193 6344 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0127 18:43:15.183213 6344 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0127 18:43:15.183246 6344 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0127 18:43:15.183257 6344 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0127 18:43:15.183276 6344 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0127 18:43:15.183289 6344 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0127 18:43:15.183292 6344 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0127 18:43:15.183317 6344 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0127 18:43:15.183335 6344 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0127 18:43:15.183339 6344 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0127 18:43:15.183358 6344 factory.go:656] Stopping watch factory\\\\nI0127 18:43:15.183371 6344 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0127 18:43:15.183373 6344 ovnkube.go:599] Stopped ovnkube\\\\nI0127 18:43:15.183372 6344 handler.go:208] Removed *v1.Node event handler 2\\\\nI0127 18:43:15.183379 6344 handler.go:208] Removed *v1.NetworkPolicy event handler 
4\\\\nI01\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T18:43:14Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:43:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cq4vs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4da02162adae947a3ab62fcbeba04da031f5189c42947da27ec21df5a480b4b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:43:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cq4vs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{
\\\"containerID\\\":\\\"cri-o://e81785fa8e88ca85b42840c9f11efd5774320c7b13588e01695e428426ecda4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e81785fa8e88ca85b42840c9f11efd5774320c7b13588e01695e428426ecda4d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:43:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cq4vs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:43:04Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hdtbk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:28Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:28 crc kubenswrapper[4853]: I0127 18:43:28.725838 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-wdzg4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"29407244-fbfe-4d37-a33e-7d59df1c22fd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgqf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgqf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:43:17Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-wdzg4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:28Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:28 crc kubenswrapper[4853]: I0127 18:43:28.738390 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c77668e-2627-46e1-b7f5-a99455378d85\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6f38d96da9d500a022fd5b7fbe4019623d08fb9733ba3b1a5f2f107f1901a28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ae32d32392bded0188e0b83c05a33e57dc10be293ddee84b8b12416d0e64fc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd606f868e97e5c0f90517a0de662da2512679e27b0ececbcaa02c9f7e79c4b6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b210c08a6d15dd6d40daabfcc4cb94985e2a3957ffa501b32d5331f3d20f8908\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:42:38Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:28Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:28 crc kubenswrapper[4853]: I0127 18:43:28.750311 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:28Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:28 crc kubenswrapper[4853]: I0127 18:43:28.765654 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:28Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:28 crc kubenswrapper[4853]: I0127 18:43:28.788487 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:28Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:28 crc kubenswrapper[4853]: I0127 18:43:28.796333 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:28 crc kubenswrapper[4853]: I0127 18:43:28.796367 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:28 crc kubenswrapper[4853]: I0127 18:43:28.796378 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:28 crc kubenswrapper[4853]: I0127 18:43:28.796394 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:28 crc kubenswrapper[4853]: I0127 18:43:28.796405 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:28Z","lastTransitionTime":"2026-01-27T18:43:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:28 crc kubenswrapper[4853]: I0127 18:43:28.800252 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d7d5a0e2cb909a09c6325d43fed80f9ac231f17c4904fda6da22031d974a50b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7fba84b5abd317487efa9758191c6d4e0e878809711ccecdd2f98c2c7659c890\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:28Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:28 crc kubenswrapper[4853]: I0127 18:43:28.812708 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-w4d5n" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dd2c07de-2ac9-4074-9fb0-519cfaf37f69\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9662d5a5a46a3043eeddbb21a4c7da0545beb87a0da7da3e96dfe3b3cf803430\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:43:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8jkvb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:43:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-w4d5n\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:28Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:28 crc kubenswrapper[4853]: I0127 18:43:28.823438 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7x9tl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"99e56db9-b380-4e21-9810-2dfa1517d5ac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://94b4164fce297bdb91f8d062c22463931e45e9194e17e4102f568e6f04c08680\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:43:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sbc8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec4d041ed140516bb311297a6618188794b4950b0199a05f3a028215b75b2dfd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:43:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sbc8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\
\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:43:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-7x9tl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:28Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:28 crc kubenswrapper[4853]: I0127 18:43:28.898353 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:28 crc kubenswrapper[4853]: I0127 18:43:28.898392 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:28 crc kubenswrapper[4853]: I0127 18:43:28.898402 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:28 crc kubenswrapper[4853]: I0127 18:43:28.898418 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:28 crc kubenswrapper[4853]: I0127 18:43:28.898431 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:28Z","lastTransitionTime":"2026-01-27T18:43:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:43:29 crc kubenswrapper[4853]: I0127 18:43:29.000305 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:29 crc kubenswrapper[4853]: I0127 18:43:29.000352 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:29 crc kubenswrapper[4853]: I0127 18:43:29.000366 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:29 crc kubenswrapper[4853]: I0127 18:43:29.000384 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:29 crc kubenswrapper[4853]: I0127 18:43:29.000399 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:29Z","lastTransitionTime":"2026-01-27T18:43:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:29 crc kubenswrapper[4853]: I0127 18:43:29.004578 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 18:43:29 crc kubenswrapper[4853]: I0127 18:43:29.004698 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 18:43:29 crc kubenswrapper[4853]: I0127 18:43:29.004766 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 18:43:29 crc kubenswrapper[4853]: E0127 18:43:29.004859 4853 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 27 18:43:29 crc kubenswrapper[4853]: E0127 18:43:29.004864 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 18:44:01.004803221 +0000 UTC m=+83.467346104 (durationBeforeRetry 32s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 18:43:29 crc kubenswrapper[4853]: E0127 18:43:29.004911 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-27 18:44:01.004900104 +0000 UTC m=+83.467442987 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 27 18:43:29 crc kubenswrapper[4853]: E0127 18:43:29.005074 4853 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 27 18:43:29 crc kubenswrapper[4853]: E0127 18:43:29.005229 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-27 18:44:01.005208542 +0000 UTC m=+83.467751425 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 27 18:43:29 crc kubenswrapper[4853]: I0127 18:43:29.026002 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 27 18:43:29 crc kubenswrapper[4853]: I0127 18:43:29.038067 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Jan 27 18:43:29 crc kubenswrapper[4853]: I0127 18:43:29.040658 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7x9tl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"99e56db9-b380-4e21-9810-2dfa1517d5ac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://94b4164fce297bdb91f8d062c22463931e45e9194e17e4102f568e6f04c08680\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:43:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sbc8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec4d041ed140516bb311297a6618188794b4950b0199a05f3a028215b75b2dfd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:43:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sbc8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:43:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-7x9tl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:29Z is after 2025-08-24T17:21:41Z" Jan 27 
18:43:29 crc kubenswrapper[4853]: I0127 18:43:29.054637 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:29Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:29 crc kubenswrapper[4853]: I0127 18:43:29.067674 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:29Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:29 crc kubenswrapper[4853]: I0127 18:43:29.080215 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d7d5a0e2cb909a09c6325d43fed80f9ac231f17c4904fda6da22031d974a50b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7fba84b5abd317487efa9758191c6d4e0e878809711ccecdd2f98c2c7659c890\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imag
eID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:29Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:29 crc kubenswrapper[4853]: I0127 18:43:29.094334 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-w4d5n" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dd2c07de-2ac9-4074-9fb0-519cfaf37f69\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9662d5a5a46a3043eeddbb21a4c7da0545beb87a0da7da3e96dfe3b3cf803430\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:43:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\"
:\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8jkvb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:43:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-w4d5n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:29Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:29 crc kubenswrapper[4853]: I0127 18:43:29.100008 4853 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-18 07:53:56.300682651 +0000 UTC Jan 27 18:43:29 crc kubenswrapper[4853]: I0127 18:43:29.102682 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:29 crc kubenswrapper[4853]: I0127 18:43:29.102738 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:29 crc kubenswrapper[4853]: I0127 18:43:29.102752 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:29 crc kubenswrapper[4853]: I0127 18:43:29.102769 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:29 crc kubenswrapper[4853]: I0127 18:43:29.102780 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:29Z","lastTransitionTime":"2026-01-27T18:43:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:29 crc kubenswrapper[4853]: I0127 18:43:29.106424 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 18:43:29 crc kubenswrapper[4853]: I0127 18:43:29.106485 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 18:43:29 crc kubenswrapper[4853]: E0127 18:43:29.106656 4853 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 27 18:43:29 crc kubenswrapper[4853]: E0127 18:43:29.106684 4853 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 27 18:43:29 crc kubenswrapper[4853]: E0127 18:43:29.106700 4853 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 18:43:29 crc kubenswrapper[4853]: E0127 18:43:29.106750 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-27 18:44:01.106732152 +0000 UTC m=+83.569275035 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 18:43:29 crc kubenswrapper[4853]: E0127 18:43:29.107035 4853 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 27 18:43:29 crc kubenswrapper[4853]: E0127 18:43:29.107136 4853 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 27 18:43:29 crc kubenswrapper[4853]: E0127 18:43:29.107215 4853 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 18:43:29 crc kubenswrapper[4853]: E0127 18:43:29.107406 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-27 18:44:01.10737605 +0000 UTC m=+83.569918933 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 18:43:29 crc kubenswrapper[4853]: I0127 18:43:29.111771 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ght98" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ce95829c-f3fb-493c-bf9a-a3515fe6ddac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://918b69aa50072e0227c96f268fe68b5dfc90acf2b8b93b7fdee73695fc6cbab0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:43:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttzfq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10a6c7c9dd5d82c1b1f5f2d323a2f530c6b2e18362eedbbe74cc8c570732331f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10a6c7c9dd5d82c1b1f5f2d323a2f530c6b2e18362eedbbe74cc8c570732331f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:43:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttzfq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b1e8c337bda8a6b6c0848db7c95140385d545fc7dd729f002d21c248e2bbf881\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b1e8c337bda8a6b6c0848db7c95140385d545fc7dd729f002d21c248e2bbf881\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:43:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:43:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttzfq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6f0216b6ea3819db267200bb464f15f2f7e4de1c65df0871d6f7e70ecb54839\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d6f0216b6ea3819db267200bb464f15f2f7e4de1c65df0871d6f7e70ecb54839\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:43:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:43:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttzfq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://205b9e4e915af8e2031c48180f4d56d6075e94f8a8ff8b40fe6b46229fb1aa03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://205b9e4e915af8e2031c48180f4d56d6075e94f8a8ff8b40fe6b46229fb1aa03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:43:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:43:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttzfq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://12c64d2f527e9e9bd0975df15cbeca44c11d215f2591052b030b6afa87600576\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://12c64d2f527e9e9bd0975df15cbeca44c11d215f2591052b030b6afa87600576\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:43:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:43:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttzfq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c8f991246bd43ecaff0d08c9fb12a3347d52de48db0ea8f8a674bbdc9945c13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3c8f991246bd43ecaff0d08c9fb12a3347d52de48db0ea8f8a674bbdc9945c13\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:43:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:43:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttzfq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:43:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ght98\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:29Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:29 crc kubenswrapper[4853]: I0127 18:43:29.112396 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 18:43:29 crc kubenswrapper[4853]: I0127 18:43:29.112433 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-wdzg4" Jan 27 18:43:29 crc kubenswrapper[4853]: I0127 18:43:29.112432 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 18:43:29 crc kubenswrapper[4853]: I0127 18:43:29.112398 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 18:43:29 crc kubenswrapper[4853]: E0127 18:43:29.112505 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 18:43:29 crc kubenswrapper[4853]: E0127 18:43:29.112574 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-wdzg4" podUID="29407244-fbfe-4d37-a33e-7d59df1c22fd" Jan 27 18:43:29 crc kubenswrapper[4853]: E0127 18:43:29.112657 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 18:43:29 crc kubenswrapper[4853]: E0127 18:43:29.112747 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 18:43:29 crc kubenswrapper[4853]: I0127 18:43:29.125366 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eccf4b23-863a-490c-ba35-1b03d360e200\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7cfb91267c8977bea938d9584cf8047ae6f6b4bff6a9d74c623f8903cd3416ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58d32f27b39964fe7195a0e25264b28a01c6c29753912edac25d07eef4f37cc7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a941071ee01b24389cae64dfc6cd851cde6b352121235ebf9e9bec77f8bcfb69\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static
-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae13abce33d48960f367ee4160e730c0f88cd877bd0d615cecac63d2a35b8cc5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://213668bb8a0478fe7fba2a64a45dd4aec0dfd6794a29977633b4e0fd1bcbe352\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T18:42:58Z\\\",\\\"message\\\":\\\"le observer\\\\nW0127 18:42:57.781245 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 18:42:57.782660 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 18:42:57.784271 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3951302993/tls.crt::/tmp/serving-cert-3951302993/tls.key\\\\\\\"\\\\nI0127 18:42:58.344426 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 18:42:58.346603 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 18:42:58.346626 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 18:42:58.346650 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 18:42:58.346655 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 18:42:58.351830 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 18:42:58.351854 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 18:42:58.351860 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 18:42:58.351866 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 18:42:58.351870 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 18:42:58.351874 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nI0127 18:42:58.351873 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0127 18:42:58.351879 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0127 18:42:58.354790 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:52Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:43:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0058055ea5c0242bdb042c45a00203d344383e845de280c81bc7695416563666\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:40Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://00d05692e328aacfc251e3b583e1da6d1f4f677e93d07cec2aaa7aea5c3038cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://00d05692e328aacfc251e3b583e1da6d1f4f677e93d07cec2aaa7aea5c3038cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:42:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:42:38Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:29Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:29 crc kubenswrapper[4853]: I0127 18:43:29.153749 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a5b9c54a-360e-431b-b35a-eea549616487\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e57571b0b75d3ac6eaa28ddbc474ddf47f632c158372a965dfe4c30f54d8a2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2dcf34028a2a1a0d41f57ebced90e711bfa6d0f199a2b12bc2bcd6d0642d10b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://14e9331353060208631cfca86d748339935f8581d50a234f2864eaffdd98ff39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7d4896cb23e4a91356e7fabb5dfcf05e46bf7e
8c34b6bd73d63a16422d015ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a02f84ba42baba137145a54bfa742b12b0f94aaf41801347eb8a803a6acd1b2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15ab34ce94a12a65d8b187ce438f27e37d4b45e0b9ed080c79d6e4633e777658\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://15ab34ce94a12a65d8b187ce438f27e37d4b45e0b9ed080c79d6e4633e777658\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:42:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d972c76164f33d83bd6d211da2b4ecec08c5d88dd4cd474c5d5a8122649ea1cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d972c76164f33d83bd6d211da2b4ecec08c5d88dd4cd474c5d5a8122649ea1cd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:42:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:40Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b9435046e30c8cda89d2ab6f312406bc2cc6db0c03ae6cdea1eb5acf8a804df1\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9435046e30c8cda89d2ab6f312406bc2cc6db0c03ae6cdea1eb5acf8a804df1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:42:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:42:38Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:29Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:29 crc kubenswrapper[4853]: I0127 18:43:29.168244 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-l59xt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f9e82bc6-1fab-4815-a64e-2ebbf8b72315\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d6f7b775ee615931d713c3fcf51828673cd8414a08c46c04eb38abd37c58d5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:43:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnhhx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\
\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:43:03Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-l59xt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:29Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:29 crc kubenswrapper[4853]: I0127 18:43:29.194098 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hdtbk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ebbc7598-422a-43ad-ae98-88e57ec80b9c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a5e0da6c76e9510cda57fa243b0a721d160745a63e88a9aa736807af73864d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:43:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cq4vs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8de5b4d8d6553f77b012954fddfcb337c9b25ba98d94ef27831b50f63672377\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:43:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cq4vs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7937ea08bd25bed35d9386a8c870c88ff3f58eeec1ba1a2c55bdfa260017f9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:43:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cq4vs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f8095ca05481aa2d17d10ae848c2d052452f3bfa83b6ac23a75d0f59d84a604\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:43:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cq4vs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://efa308f95f35833395528dbe46b9e3d8f25800c18126c75d3db793f9c7945d30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:43:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cq4vs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a23ced79c532f6fcb0f4efcf743b934f7640deb3a7b1b879032416ee2c9b8d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:43:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cq4vs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ef638d607e6dd3da7728f611dbadcc220fbecf3
e6d8c85d5911cfd1ebe2cac0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://67f2a4c435a7d4fb4616583e4e2c87238448540b0df9b0733e31e1e67b375982\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T18:43:15Z\\\",\\\"message\\\":\\\"ork/v1/apis/informers/externalversions/factory.go:140\\\\nI0127 18:43:15.183193 6344 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0127 18:43:15.183213 6344 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0127 18:43:15.183246 6344 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0127 18:43:15.183257 6344 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0127 18:43:15.183276 6344 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0127 18:43:15.183289 6344 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0127 18:43:15.183292 6344 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0127 18:43:15.183317 6344 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0127 18:43:15.183335 6344 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0127 18:43:15.183339 6344 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0127 18:43:15.183358 6344 factory.go:656] Stopping watch factory\\\\nI0127 18:43:15.183371 6344 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0127 18:43:15.183373 6344 ovnkube.go:599] Stopped ovnkube\\\\nI0127 18:43:15.183372 6344 handler.go:208] Removed *v1.Node event handler 2\\\\nI0127 18:43:15.183379 6344 handler.go:208] Removed *v1.NetworkPolicy event handler 
4\\\\nI01\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T18:43:14Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:43:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cq4vs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4da02162adae947a3ab62fcbeba04da031f5189c42947da27ec21df5a480b4b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:43:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cq4vs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{
\\\"containerID\\\":\\\"cri-o://e81785fa8e88ca85b42840c9f11efd5774320c7b13588e01695e428426ecda4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e81785fa8e88ca85b42840c9f11efd5774320c7b13588e01695e428426ecda4d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:43:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cq4vs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:43:04Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hdtbk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:29Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:29 crc kubenswrapper[4853]: I0127 18:43:29.205335 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:29 crc kubenswrapper[4853]: I0127 18:43:29.205421 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:29 crc kubenswrapper[4853]: I0127 18:43:29.205439 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:29 crc kubenswrapper[4853]: I0127 18:43:29.205461 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:29 crc kubenswrapper[4853]: I0127 18:43:29.205498 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:29Z","lastTransitionTime":"2026-01-27T18:43:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:29 crc kubenswrapper[4853]: I0127 18:43:29.206561 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-wdzg4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"29407244-fbfe-4d37-a33e-7d59df1c22fd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgqf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgqf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:43:17Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-wdzg4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:29Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:29 crc kubenswrapper[4853]: I0127 18:43:29.222451 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f980f1d965582a58bd18d993850b5bd5f21849c1d1c0275c81fdbb25d549e6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:29Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:29 crc kubenswrapper[4853]: I0127 18:43:29.238594 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5a0b2fe5dbcc81055c001720c8ab69b7bd1989fe60d9ef4063c96c53a3f4536\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:29Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:29 crc kubenswrapper[4853]: I0127 18:43:29.250929 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-2hzcp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c088d143-dd9c-4c77-b9a3-3a0113306f41\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b1b85e99c8381dbf8e86fb211770c49ebd11497f15c73254d6482b45d507654\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:43:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2zqtv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:43:03Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-2hzcp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:29Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:29 crc kubenswrapper[4853]: I0127 18:43:29.264642 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-6gqj2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b8a89b1e-bef8-4cb7-930c-480d3125778c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d886a21d3eb5af71f0dc43a77a71a2d80c11b89013b8b77dd9e680bed9c09a54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:43:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xbhn6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://36ec3ff8ba2e6f89b4c9a8177dc986ce198013a4b5392fc600a191a9d854241a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:43:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xbhn6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:43:03Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-6gqj2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:29Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:29 crc kubenswrapper[4853]: I0127 18:43:29.276341 4853 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c77668e-2627-46e1-b7f5-a99455378d85\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6f38d96da9d500a022fd5b7fbe4019623d08fb9733ba3b1a5f2f107f1901a28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ae32d32392bded0188e0b83c05a33e57dc10be293ddee84b8b12416d0e64fc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd606f868e97e5c0f90517a0de662da2512679e27b0ececbcaa02c9f7e79c4b6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b210c08a6d15dd6d40daabfcc4c
b94985e2a3957ffa501b32d5331f3d20f8908\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:42:38Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:29Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:29 crc kubenswrapper[4853]: I0127 18:43:29.285442 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-hdtbk" Jan 27 18:43:29 crc kubenswrapper[4853]: I0127 18:43:29.289998 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:29Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:29 crc kubenswrapper[4853]: I0127 18:43:29.307641 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:29 crc kubenswrapper[4853]: I0127 18:43:29.307817 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:29 crc kubenswrapper[4853]: I0127 18:43:29.307921 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:29 crc kubenswrapper[4853]: I0127 18:43:29.308060 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:29 crc kubenswrapper[4853]: I0127 18:43:29.308648 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:29Z","lastTransitionTime":"2026-01-27T18:43:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:43:29 crc kubenswrapper[4853]: I0127 18:43:29.410865 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:29 crc kubenswrapper[4853]: I0127 18:43:29.411163 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:29 crc kubenswrapper[4853]: I0127 18:43:29.411257 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:29 crc kubenswrapper[4853]: I0127 18:43:29.411344 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:29 crc kubenswrapper[4853]: I0127 18:43:29.411455 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:29Z","lastTransitionTime":"2026-01-27T18:43:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:29 crc kubenswrapper[4853]: I0127 18:43:29.457345 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-hdtbk_ebbc7598-422a-43ad-ae98-88e57ec80b9c/ovnkube-controller/2.log" Jan 27 18:43:29 crc kubenswrapper[4853]: I0127 18:43:29.458013 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-hdtbk_ebbc7598-422a-43ad-ae98-88e57ec80b9c/ovnkube-controller/1.log" Jan 27 18:43:29 crc kubenswrapper[4853]: I0127 18:43:29.460442 4853 generic.go:334] "Generic (PLEG): container finished" podID="ebbc7598-422a-43ad-ae98-88e57ec80b9c" containerID="9ef638d607e6dd3da7728f611dbadcc220fbecf3e6d8c85d5911cfd1ebe2cac0" exitCode=1 Jan 27 18:43:29 crc kubenswrapper[4853]: I0127 18:43:29.461675 4853 scope.go:117] "RemoveContainer" containerID="9ef638d607e6dd3da7728f611dbadcc220fbecf3e6d8c85d5911cfd1ebe2cac0" Jan 27 18:43:29 crc kubenswrapper[4853]: I0127 18:43:29.461807 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hdtbk" event={"ID":"ebbc7598-422a-43ad-ae98-88e57ec80b9c","Type":"ContainerDied","Data":"9ef638d607e6dd3da7728f611dbadcc220fbecf3e6d8c85d5911cfd1ebe2cac0"} Jan 27 18:43:29 crc kubenswrapper[4853]: E0127 18:43:29.461829 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-hdtbk_openshift-ovn-kubernetes(ebbc7598-422a-43ad-ae98-88e57ec80b9c)\"" pod="openshift-ovn-kubernetes/ovnkube-node-hdtbk" podUID="ebbc7598-422a-43ad-ae98-88e57ec80b9c" Jan 27 18:43:29 crc kubenswrapper[4853]: I0127 18:43:29.461968 4853 scope.go:117] "RemoveContainer" containerID="67f2a4c435a7d4fb4616583e4e2c87238448540b0df9b0733e31e1e67b375982" Jan 27 18:43:29 crc kubenswrapper[4853]: I0127 18:43:29.476853 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"eccf4b23-863a-490c-ba35-1b03d360e200\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7cfb91267c8977bea938d9584cf8047ae6f6b4bff6a9d74c623f8903cd3416ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58d32f27b39964fe7195a0e25264b28a01c6c29753912edac25d07eef4f37cc7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a941071ee01b24389cae64dfc6cd851cde6b352121235ebf9e9bec77f8bcfb69\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae13abce33d48960f367ee4160e730c0f88cd877bd0d615cecac63d2a35b8cc5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://213668bb8a0478fe7fba2a64a45dd4aec0dfd6794a29977633b4e0fd1bcbe352\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T18:42:58Z\\\",\\\"message\\\":\\\"le observer\\\\nW0127 18:42:57.781245 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 18:42:57.782660 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 18:42:57.784271 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3951302993/tls.crt::/tmp/serving-cert-3951302993/tls.key\\\\\\\"\\\\nI0127 18:42:58.344426 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 18:42:58.346603 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 18:42:58.346626 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 18:42:58.346650 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 18:42:58.346655 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 18:42:58.351830 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 18:42:58.351854 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 18:42:58.351860 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 18:42:58.351866 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 18:42:58.351870 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 18:42:58.351874 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nI0127 18:42:58.351873 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0127 18:42:58.351879 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0127 18:42:58.354790 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:52Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:43:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0058055ea5c0242bdb042c45a00203d344383e845de280c81bc7695416563666\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:40Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://00d05692e328aacfc251e3b583e1da6d1f4f677e93d07cec2aaa7aea5c3038cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://00d05692e328aacfc251e3b583e1da6d1f4f677e93d07cec2aaa7aea5c3038cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:42:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:42:38Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:29Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:29 crc kubenswrapper[4853]: I0127 18:43:29.496558 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a5b9c54a-360e-431b-b35a-eea549616487\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e57571b0b75d3ac6eaa28ddbc474ddf47f632c158372a965dfe4c30f54d8a2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2dcf34028a2a1a0d41f57ebced90e711bfa6d0f199a2b12bc2bcd6d0642d10b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://14e9331353060208631cfca86d748339935f8581d50a234f2864eaffdd98ff39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7d4896cb23e4a91356e7fabb5dfcf05e46bf7e
8c34b6bd73d63a16422d015ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a02f84ba42baba137145a54bfa742b12b0f94aaf41801347eb8a803a6acd1b2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15ab34ce94a12a65d8b187ce438f27e37d4b45e0b9ed080c79d6e4633e777658\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://15ab34ce94a12a65d8b187ce438f27e37d4b45e0b9ed080c79d6e4633e777658\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:42:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d972c76164f33d83bd6d211da2b4ecec08c5d88dd4cd474c5d5a8122649ea1cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d972c76164f33d83bd6d211da2b4ecec08c5d88dd4cd474c5d5a8122649ea1cd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:42:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:40Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b9435046e30c8cda89d2ab6f312406bc2cc6db0c03ae6cdea1eb5acf8a804df1\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9435046e30c8cda89d2ab6f312406bc2cc6db0c03ae6cdea1eb5acf8a804df1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:42:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:42:38Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:29Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:29 crc kubenswrapper[4853]: I0127 18:43:29.507896 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3fa5b5cf-a6bd-4726-aa83-4ae7fa257dd2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d34d7b284bf0d4da5b618f3afc8546d8de1c57118035eca06d1d8d53afd59503\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://60ef6ab0f3537b63366418829fb851bf2b21df5c3509f1e6ea61a3ba0530f537\\\",\\\"image\\\":\\\"quay.io/openshift-
release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ab7681c5d4c9e9e1e003ecff21e3a39e40164693ef6b8fcdded71650dcff4ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9f4494451f75c64fbdca006455d3ce09b14f45939b855d782629e25af517ed0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9f4494451f75c64fbdca006455d3ce09b14f45939b855d782629e25af517ed0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:42:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:39Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:42:38Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:29Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:29 crc kubenswrapper[4853]: I0127 18:43:29.513349 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:29 crc kubenswrapper[4853]: I0127 18:43:29.513386 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:29 crc kubenswrapper[4853]: I0127 18:43:29.513398 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:29 crc kubenswrapper[4853]: I0127 
18:43:29.513416 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:29 crc kubenswrapper[4853]: I0127 18:43:29.513429 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:29Z","lastTransitionTime":"2026-01-27T18:43:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:43:29 crc kubenswrapper[4853]: I0127 18:43:29.517867 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-l59xt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f9e82bc6-1fab-4815-a64e-2ebbf8b72315\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d6f7b775ee615931d713c3fcf51828673cd8414a08c46c04eb38abd37c58d5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:43:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnhhx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:43:03Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-l59xt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:29Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:29 crc kubenswrapper[4853]: I0127 18:43:29.532168 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ght98" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ce95829c-f3fb-493c-bf9a-a3515fe6ddac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://918b69aa50072e0227c96f268fe68b5dfc90acf2b8b93b7fdee73695fc6cbab0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:43:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttzfq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10a6c7c9dd5d82c1b1f5f2d323a2f530c6b2e18362eedbbe74cc8c570732331f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10a6c7c9dd5d82c1b1f5f2d323a2f530c6b2e18362eedbbe74cc8c570732331f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:43:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttzfq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b1e8c337bda8a6b6c0848db7c95140385d545fc7dd729f002d21c248e2bbf881\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b1e8c337bda8a6b6c0848db7c95140385d545fc7dd729f002d21c248e2bbf881\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:43:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:43:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttzfq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6f0216b6ea3819db267200bb464f15f2f7e4de1c65df0871d6f7e70ecb54839\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d6f0216b6ea3819db267200bb464f15f2f7e4de1c65df0871d6f7e70ecb54839\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:43:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:43:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttzfq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://205b9e4e915af8e2031c48180f4d56d6075e94f8a8ff8b40fe6b46229fb1aa03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://205b9e4e915af8e2031c48180f4d56d6075e94f8a8ff8b40fe6b46229fb1aa03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:43:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:43:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttzfq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://12c64d2f527e9e9bd0975df15cbeca44c11d215f2591052b030b6afa87600576\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://12c64d2f527e9e9bd0975df15cbeca44c11d215f2591052b030b6afa87600576\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:43:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:43:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttzfq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c8f991246bd43ecaff0d08c9fb12a3347d52de48db0ea8f8a674bbdc9945c13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3c8f991246bd43ecaff0d08c9fb12a3347d52de48db0ea8f8a674bbdc9945c13\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:43:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:43:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttzfq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:43:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ght98\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:29Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:29 crc kubenswrapper[4853]: I0127 18:43:29.544269 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f980f1d965582a58bd18d993850b5bd5f21849c1d1c0275c81fdbb25d549e6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:29Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:29 crc kubenswrapper[4853]: I0127 18:43:29.553878 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5a0b2fe5dbcc81055c001720c8ab69b7bd1989fe60d9ef4063c96c53a3f4536\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:29Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:29 crc kubenswrapper[4853]: I0127 18:43:29.561946 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-2hzcp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c088d143-dd9c-4c77-b9a3-3a0113306f41\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b1b85e99c8381dbf8e86fb211770c49ebd11497f15c73254d6482b45d507654\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:43:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2zqtv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:43:03Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-2hzcp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:29Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:29 crc kubenswrapper[4853]: I0127 18:43:29.571222 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-6gqj2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b8a89b1e-bef8-4cb7-930c-480d3125778c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d886a21d3eb5af71f0dc43a77a71a2d80c11b89013b8b77dd9e680bed9c09a54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:43:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xbhn6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://36ec3ff8ba2e6f89b4c9a8177dc986ce198013a4b5392fc600a191a9d854241a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:43:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xbhn6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:43:03Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-6gqj2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:29Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:29 crc kubenswrapper[4853]: I0127 18:43:29.586684 4853 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hdtbk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ebbc7598-422a-43ad-ae98-88e57ec80b9c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a5e0da6c76e9510cda57fa243b0a721d160745a63e88a9aa736807af73864d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:43:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cq4vs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8de5b4d8d6553f77b012954fddfcb337c9b25ba98d94ef27831b50f63672377\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:43:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cq4vs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7937ea08bd25bed35d9386a8c870c88ff3f58eeec1ba1a2c55bdfa260017f9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:43:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cq4vs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f8095ca05481aa2d17d10ae848c2d052452f3bfa83b6ac23a75d0f59d84a604\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:43:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cq4vs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://efa308f95f35833395528dbe46b9e3d8f25800c18126c75d3db793f9c7945d30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:43:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cq4vs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a23ced79c532f6fcb0f4efcf743b934f7640deb3a7b1b879032416ee2c9b8d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\
\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:43:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cq4vs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ef638d607e6dd3da7728f611dbadcc220fbecf3e6d8c85d5911cfd1ebe2cac0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://67f2a4c435a7d4fb4616583e4e2c87238448540b0df9b0733e31e1e67b375982\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T18:43:15Z\\\",\\\"message\\\":\\\"ork/v1/apis/informers/externalversions/factory.go:140\\\\nI0127 18:43:15.183193 6344 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0127 18:43:15.183213 6344 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0127 18:43:15.183246 6344 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0127 18:43:15.183257 6344 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0127 18:43:15.183276 6344 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0127 18:43:15.183289 6344 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0127 18:43:15.183292 6344 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0127 18:43:15.183317 6344 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0127 18:43:15.183335 6344 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0127 18:43:15.183339 6344 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0127 18:43:15.183358 6344 factory.go:656] Stopping watch factory\\\\nI0127 18:43:15.183371 6344 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0127 18:43:15.183373 6344 ovnkube.go:599] Stopped ovnkube\\\\nI0127 18:43:15.183372 6344 handler.go:208] Removed *v1.Node event handler 2\\\\nI0127 18:43:15.183379 6344 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI01\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T18:43:14Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ef638d607e6dd3da7728f611dbadcc220fbecf3e6d8c85d5911cfd1ebe2cac0\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T18:43:28Z\\\",\\\"message\\\":\\\"work controller: failed to set node crc annotations: Internal error 
occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:28Z is after 2025-08-24T17:21:41Z]\\\\nI0127 18:43:28.981598 6566 model_client.go:382] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-ingress-operator/metrics]} name:Service_openshift-ingress-operator/metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.244:9393:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {d8772e82-b0a4-4596-87d3-3d517c13344b}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0127 18:43:28.981636 6566 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-ingress-operator/metrics]} name:Service_openshift-ingress-operator/metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T18:43:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cq4vs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4da02162adae947a3ab62fcbeba04da031f5189c
42947da27ec21df5a480b4b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:43:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cq4vs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e81785fa8e88ca85b42840c9f11efd5774320c7b13588e01695e428426ecda4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e81785fa8e88ca85b42840c9f11efd5774320c7b13588e01695e428426ecda4d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:43:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cq4vs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:43:04Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hdtbk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:29Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:29 crc kubenswrapper[4853]: I0127 18:43:29.600642 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-wdzg4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"29407244-fbfe-4d37-a33e-7d59df1c22fd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgqf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgqf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:43:17Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-wdzg4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:29Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:29 crc kubenswrapper[4853]: I0127 18:43:29.617813 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c77668e-2627-46e1-b7f5-a99455378d85\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6f38d96da9d500a022fd5b7fbe4019623d08fb9733ba3b1a5f2f107f1901a28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ae32d32392bded0188e0b83c05a33e57dc10be293ddee84b8b12416d0e64fc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd606f868e97e5c0f90517a0de662da2512679e27b0ececbcaa02c9f7e79c4b6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b210c08a6d15dd6d40daabfcc4cb94985e2a3957ffa501b32d5331f3d20f8908\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:42:38Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:29Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:29 crc kubenswrapper[4853]: I0127 18:43:29.619144 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:29 crc kubenswrapper[4853]: I0127 18:43:29.619235 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:29 crc kubenswrapper[4853]: I0127 18:43:29.619395 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:29 crc kubenswrapper[4853]: I0127 18:43:29.620141 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:29 crc kubenswrapper[4853]: I0127 18:43:29.620239 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:29Z","lastTransitionTime":"2026-01-27T18:43:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
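
Every status-patch failure above dies on the same x509 error: the node clock (2026-01-27) is well past the webhook serving certificate's notAfter (2025-08-24T17:21:41Z), the classic signature of a CRC/OpenShift VM resumed long after its embedded certificates expired. A minimal Python sketch for pulling that skew out of lines like these — the regex is an assumption about this exact kubelet message wording, not a stable format guarantee:

    import re
    from datetime import datetime

    # Matches the tail of the x509 verification errors shown above.
    ERR = re.compile(
        r"current time (\d{4}-\d{2}-\d{2}T\d{2}:\d{2}:\d{2}Z)"
        r" is after (\d{4}-\d{2}-\d{2}T\d{2}:\d{2}:\d{2}Z)"
    )

    def cert_skew(line):
        """Return (now, not_after, skew) from an x509 expiry message, or None."""
        m = ERR.search(line)
        if not m:
            return None
        fmt = "%Y-%m-%dT%H:%M:%SZ"
        now, not_after = [datetime.strptime(t, fmt) for t in m.groups()]
        return now, not_after, now - not_after

For the entries above this reports a skew of roughly 156 days, so no amount of kubelet retrying will help until the certificates are rotated or the clock is corrected.
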
Has your network provider started?"} Jan 27 18:43:29 crc kubenswrapper[4853]: I0127 18:43:29.630583 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:29Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:29 crc kubenswrapper[4853]: I0127 18:43:29.641569 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:29Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:29 crc kubenswrapper[4853]: I0127 18:43:29.653894 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
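
The patch bodies themselves are ordinary JSON Merge/strategic-merge payloads, just multiply escaped by the journal's quoting. A sketch for recovering one and listing its unready containers; the peel-until-it-parses loop is an assumption about how many quoting layers this collector adds, not a fixed rule:

    import json

    def unescape(raw, max_layers=4):
        # Peel quoting layers until the payload parses as JSON; the layer
        # count depends on the collector, hence the bounded loop.
        for _ in range(max_layers):
            try:
                json.loads(raw)
                return raw
            except ValueError:
                raw = raw.encode().decode("unicode_escape")
        raise ValueError("could not recover JSON from payload")

    def unready(patch):
        """patch: the text between the outer quotes of 'failed to patch status'."""
        status = json.loads(unescape(patch)).get("status", {})
        return [c["name"] for c in status.get("containerStatuses", [])
                if not c.get("ready")]

Against the network-check-source entry above this should report ['check-endpoints']; against the ovnkube-node entry, ['ovnkube-controller'].
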
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:29Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:29 crc kubenswrapper[4853]: I0127 18:43:29.666675 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d7d5a0e2cb909a09c6325d43fed80f9ac231f17c4904fda6da22031d974a50b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7fba84b5abd317487efa9758191c6d4e0e878809711ccecdd2f98c2c7659c890\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:29Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:29 crc kubenswrapper[4853]: I0127 18:43:29.678868 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-w4d5n" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dd2c07de-2ac9-4074-9fb0-519cfaf37f69\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9662d5a5a46a3043eeddbb21a4c7da0545beb87a0da7da3e96dfe3b3cf803430\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:43:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc
/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8jkvb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:43:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-w4d5n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:29Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:29 crc kubenswrapper[4853]: I0127 18:43:29.690044 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7x9tl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"99e56db9-b380-4e21-9810-2dfa1517d5ac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://94b4164fce297bdb91f8d062c22463931e45e9194e17e4102f568e6f04c08680\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:43:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sbc8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec4d041ed140516bb311297a6618188794b4950b0199a05f3a028215b75b2dfd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:43:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sbc8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:43:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-7x9tl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:29Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:29 crc kubenswrapper[4853]: I0127 18:43:29.721955 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:29 crc kubenswrapper[4853]: I0127 18:43:29.721996 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:29 crc kubenswrapper[4853]: I0127 18:43:29.722011 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:29 crc kubenswrapper[4853]: I0127 18:43:29.722026 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:29 crc kubenswrapper[4853]: I0127 18:43:29.722036 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:29Z","lastTransitionTime":"2026-01-27T18:43:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
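
With this many pods failing inside the same second, it is worth confirming they all trace to the one webhook rather than to anything pod-specific. A sketch that tallies the failures per pod from a saved journal dump (kubelet.log is a placeholder file name):

    import re
    from collections import Counter

    # The pattern copies the structured key=value form of the log lines above.
    pat = re.compile(r'Failed to update status for pod" pod="([^"]+)"')

    with open("kubelet.log") as f:
        counts = Counter(m.group(1) for line in f for m in pat.finditer(line))

    for pod, n in counts.most_common():
        print(n, pod)

In this capture every entry points at the same pod.network-node-identity.openshift.io webhook, so the fix is that webhook's serving certificate, not any individual workload.
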
Has your network provider started?"} Jan 27 18:43:29 crc kubenswrapper[4853]: I0127 18:43:29.824257 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:29 crc kubenswrapper[4853]: I0127 18:43:29.824465 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:29 crc kubenswrapper[4853]: I0127 18:43:29.824560 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:29 crc kubenswrapper[4853]: I0127 18:43:29.824670 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:29 crc kubenswrapper[4853]: I0127 18:43:29.824833 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:29Z","lastTransitionTime":"2026-01-27T18:43:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:43:29 crc kubenswrapper[4853]: I0127 18:43:29.927388 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:29 crc kubenswrapper[4853]: I0127 18:43:29.927593 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:29 crc kubenswrapper[4853]: I0127 18:43:29.927688 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:29 crc kubenswrapper[4853]: I0127 18:43:29.927788 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:29 crc kubenswrapper[4853]: I0127 18:43:29.927894 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:29Z","lastTransitionTime":"2026-01-27T18:43:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:43:30 crc kubenswrapper[4853]: I0127 18:43:30.030495 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:30 crc kubenswrapper[4853]: I0127 18:43:30.030795 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:30 crc kubenswrapper[4853]: I0127 18:43:30.030993 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:30 crc kubenswrapper[4853]: I0127 18:43:30.031206 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:30 crc kubenswrapper[4853]: I0127 18:43:30.031411 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:30Z","lastTransitionTime":"2026-01-27T18:43:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:30 crc kubenswrapper[4853]: I0127 18:43:30.100997 4853 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-25 21:34:28.113913136 +0000 UTC Jan 27 18:43:30 crc kubenswrapper[4853]: I0127 18:43:30.136263 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:30 crc kubenswrapper[4853]: I0127 18:43:30.136618 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:30 crc kubenswrapper[4853]: I0127 18:43:30.136813 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:30 crc kubenswrapper[4853]: I0127 18:43:30.137008 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:30 crc kubenswrapper[4853]: I0127 18:43:30.137259 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:30Z","lastTransitionTime":"2026-01-27T18:43:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:43:30 crc kubenswrapper[4853]: I0127 18:43:30.244836 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:30 crc kubenswrapper[4853]: I0127 18:43:30.245603 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:30 crc kubenswrapper[4853]: I0127 18:43:30.245761 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:30 crc kubenswrapper[4853]: I0127 18:43:30.245853 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:30 crc kubenswrapper[4853]: I0127 18:43:30.245926 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:30Z","lastTransitionTime":"2026-01-27T18:43:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:30 crc kubenswrapper[4853]: I0127 18:43:30.348662 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:30 crc kubenswrapper[4853]: I0127 18:43:30.348703 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:30 crc kubenswrapper[4853]: I0127 18:43:30.348711 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:30 crc kubenswrapper[4853]: I0127 18:43:30.348725 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:30 crc kubenswrapper[4853]: I0127 18:43:30.348735 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:30Z","lastTransitionTime":"2026-01-27T18:43:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:43:30 crc kubenswrapper[4853]: I0127 18:43:30.451149 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:30 crc kubenswrapper[4853]: I0127 18:43:30.451212 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:30 crc kubenswrapper[4853]: I0127 18:43:30.451229 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:30 crc kubenswrapper[4853]: I0127 18:43:30.451254 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:30 crc kubenswrapper[4853]: I0127 18:43:30.451271 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:30Z","lastTransitionTime":"2026-01-27T18:43:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
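
The kubelet also re-stamps the same Ready=False condition roughly every 100 ms. The condition={...} tail of each "Node became not ready" line is plain JSON, so it can be lifted straight off the line; raw_decode stops at the end of the first JSON value, which keeps this safe even when a collector glues several entries onto one physical line, as this dump does:

    import json

    DEC = json.JSONDecoder()

    def ready_condition(line):
        """Parse the condition dict from a 'Node became not ready' line."""
        i = line.find("condition=")
        if i == -1:
            return None
        # raw_decode returns (object, end_index); trailing text is ignored.
        obj, _ = DEC.raw_decode(line, i + len("condition="))
        return obj
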
Has your network provider started?"} Jan 27 18:43:30 crc kubenswrapper[4853]: I0127 18:43:30.465793 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-hdtbk_ebbc7598-422a-43ad-ae98-88e57ec80b9c/ovnkube-controller/2.log" Jan 27 18:43:30 crc kubenswrapper[4853]: I0127 18:43:30.471424 4853 scope.go:117] "RemoveContainer" containerID="9ef638d607e6dd3da7728f611dbadcc220fbecf3e6d8c85d5911cfd1ebe2cac0" Jan 27 18:43:30 crc kubenswrapper[4853]: E0127 18:43:30.471566 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-hdtbk_openshift-ovn-kubernetes(ebbc7598-422a-43ad-ae98-88e57ec80b9c)\"" pod="openshift-ovn-kubernetes/ovnkube-node-hdtbk" podUID="ebbc7598-422a-43ad-ae98-88e57ec80b9c" Jan 27 18:43:30 crc kubenswrapper[4853]: I0127 18:43:30.502307 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hdtbk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ebbc7598-422a-43ad-ae98-88e57ec80b9c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a5e0da6c76e9510cda57fa243b0a721d160745a63e88a9aa736807af73864d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:43:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cq4vs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8de5b4d8d6553f77b012954fddfcb337c9b25ba98d94ef27831b50f63672377\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:43:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cq4vs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7937ea08bd25bed35d9386a8c870c88ff3f58eeec1ba1a2c55bdfa260017f9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:43:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cq4vs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f8095ca05481aa2d17d10ae848c2d052452f3bfa83b6ac23a75d0f59d84a604\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:43:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cq4vs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://efa308f95f35833395528dbe46b9e3d8f25800c18126c75d3db793f9c7945d30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:43:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cq4vs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a23ced79c532f6fcb0f4efcf743b934f7640deb3a7b1b879032416ee2c9b8d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:43:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cq4vs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ef638d607e6dd3da7728f611dbadcc220fbecf3
e6d8c85d5911cfd1ebe2cac0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ef638d607e6dd3da7728f611dbadcc220fbecf3e6d8c85d5911cfd1ebe2cac0\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T18:43:28Z\\\",\\\"message\\\":\\\"work controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:28Z is after 2025-08-24T17:21:41Z]\\\\nI0127 18:43:28.981598 6566 model_client.go:382] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-ingress-operator/metrics]} name:Service_openshift-ingress-operator/metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.244:9393:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {d8772e82-b0a4-4596-87d3-3d517c13344b}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0127 18:43:28.981636 6566 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-ingress-operator/metrics]} name:Service_openshift-ingress-operator/metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T18:43:28Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-hdtbk_openshift-ovn-kubernetes(ebbc7598-422a-43ad-ae98-88e57ec80b9c)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cq4vs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4da02162adae947a3ab62fcbeba04da031f5189c42947da27ec21df5a480b4b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:43:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cq4vs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e81785fa8e88ca85b42840c9f11efd5774320c7b13588e01695e428426ecda4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e81785fa8e88ca85b42840c9f11efd5774320c7b13588e01695e428426ecda4d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:43:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cq4vs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:43:04Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hdtbk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:30Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:30 crc kubenswrapper[4853]: I0127 18:43:30.516946 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-wdzg4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"29407244-fbfe-4d37-a33e-7d59df1c22fd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgqf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgqf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:43:17Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-wdzg4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:30Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:30 crc kubenswrapper[4853]: I0127 18:43:30.531046 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f980f1d965582a58bd18d993850b5bd5f21849c1d1c0275c81fdbb25d549e6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:30Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:30 crc kubenswrapper[4853]: I0127 18:43:30.548833 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5a0b2fe5dbcc81055c001720c8ab69b7bd1989fe60d9ef4063c96c53a3f4536\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:30Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:30 crc kubenswrapper[4853]: I0127 18:43:30.554481 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:30 crc kubenswrapper[4853]: I0127 18:43:30.554524 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:30 crc kubenswrapper[4853]: I0127 18:43:30.554535 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:30 crc kubenswrapper[4853]: I0127 18:43:30.554553 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:30 crc kubenswrapper[4853]: I0127 18:43:30.554565 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:30Z","lastTransitionTime":"2026-01-27T18:43:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:30 crc kubenswrapper[4853]: I0127 18:43:30.563155 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-2hzcp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c088d143-dd9c-4c77-b9a3-3a0113306f41\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b1b85e99c8381dbf8e86fb211770c49ebd11497f15c73254d6482b45d507654\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:43:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2zqtv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:43:03Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-2hzcp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:30Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:30 crc kubenswrapper[4853]: I0127 18:43:30.575640 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-6gqj2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b8a89b1e-bef8-4cb7-930c-480d3125778c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d886a21d3eb5af71f0dc43a77a71a2d80c11b89013b8b77dd9e680bed9c09a54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:43:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xbhn6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://36ec3ff8ba2e6f89b4c9a8177dc986ce198013a4b5392fc600a191a9d854241a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:43:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xbhn6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:43:03Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-6gqj2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:30Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:30 crc kubenswrapper[4853]: I0127 18:43:30.589649 4853 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c77668e-2627-46e1-b7f5-a99455378d85\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6f38d96da9d500a022fd5b7fbe4019623d08fb9733ba3b1a5f2f107f1901a28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ae32d32392bded0188e0b83c05a33e57dc10be293ddee84b8b12416d0e64fc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd606f868e97e5c0f90517a0de662da2512679e27b0ececbcaa02c9f7e79c4b6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b210c08a6d15dd6d40daabfcc4c
b94985e2a3957ffa501b32d5331f3d20f8908\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:42:38Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:30Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:30 crc kubenswrapper[4853]: I0127 18:43:30.604276 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:30Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:30 crc kubenswrapper[4853]: I0127 18:43:30.616521 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7x9tl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"99e56db9-b380-4e21-9810-2dfa1517d5ac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://94b4164fce297bdb91f8d062c22463931e45e9194e17e4102f568e6f04c08680\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:43:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sbc8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec4d041ed140516bb311297a6618188794b4950b0199a05f3a028215b75b2dfd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:43:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sbc8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:43:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-7x9tl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:30Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:30 crc kubenswrapper[4853]: I0127 18:43:30.629695 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:30Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:30 crc kubenswrapper[4853]: I0127 18:43:30.641161 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:30Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:30 crc kubenswrapper[4853]: I0127 18:43:30.655062 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d7d5a0e2cb909a09c6325d43fed80f9ac231f17c4904fda6da22031d974a50b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7fba84b5abd317487efa9758191c6d4e0e878809711ccecdd2f98c2c7659c890\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:30Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:30 crc kubenswrapper[4853]: I0127 18:43:30.656925 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:30 crc kubenswrapper[4853]: I0127 18:43:30.657039 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:30 crc kubenswrapper[4853]: I0127 18:43:30.657053 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:30 crc kubenswrapper[4853]: I0127 18:43:30.657071 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:30 crc kubenswrapper[4853]: I0127 18:43:30.657103 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:30Z","lastTransitionTime":"2026-01-27T18:43:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:30 crc kubenswrapper[4853]: I0127 18:43:30.667378 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-w4d5n" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dd2c07de-2ac9-4074-9fb0-519cfaf37f69\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9662d5a5a46a3043eeddbb21a4c7da0545beb87a0da7da3e96dfe3b3cf803430\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:43:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8jkvb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:43:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-w4d5n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:30Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:30 crc kubenswrapper[4853]: I0127 18:43:30.681743 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ght98" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ce95829c-f3fb-493c-bf9a-a3515fe6ddac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://918b69aa50072e0227c96f268fe68b5dfc90acf2b8b93b7fdee73695fc6cbab0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:43:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttzfq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10a6c7c9dd5d82c1b1f5f2d323a2f530c6b2e18362eedbbe74cc8c570732331f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10a6c7c9dd5d82c1b1f5f2d323a2f530c6b2e18362eedbbe74cc8c570732331f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:43:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cn
ibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttzfq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b1e8c337bda8a6b6c0848db7c95140385d545fc7dd729f002d21c248e2bbf881\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b1e8c337bda8a6b6c0848db7c95140385d545fc7dd729f002d21c248e2bbf881\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:43:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:43:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttzfq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6f0216b6ea3819db267200bb464f15f2f7e4de1c65df0871d6f7e70ecb54839\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d6f0216b6ea3819db267200bb464f15f2f7e4de1c65df0871d6f7e70ecb54839\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:43:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:43:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttzfq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://205b9e4e915af8e2031c48180f4d56d6075e94f8a8ff8b40fe6b46229fb1aa03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":t
rue,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://205b9e4e915af8e2031c48180f4d56d6075e94f8a8ff8b40fe6b46229fb1aa03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:43:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:43:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttzfq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://12c64d2f527e9e9bd0975df15cbeca44c11d215f2591052b030b6afa87600576\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://12c64d2f527e9e9bd0975df15cbeca44c11d215f2591052b030b6afa87600576\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:43:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:43:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttzfq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c8f991246bd43ecaff0d08c9fb12a3347d52de48db0ea8f8a674bbdc9945c13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3c8f991246bd43ecaff0d08c9fb12a3347d52de48db0ea8f8a674bbdc9945c13\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:43:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:43:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttzfq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:43:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ght98\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:30Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:30 crc kubenswrapper[4853]: I0127 18:43:30.699499 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eccf4b23-863a-490c-ba35-1b03d360e200\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7cfb91267c8977bea938d9584cf8047ae6f6b4bff6a9d74c623f8903cd3416ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58d32f27b39964fe7195a0e25264b28a01c6c29753912edac25d07eef4f37cc7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a941071ee01b24389cae64dfc6cd851cde6b352121235ebf9e9bec77f8bcfb69\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae13abce33d48960f367ee4160e730c0f88cd877bd0d615cecac63d2a35b8cc5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://213668bb8a0478fe7fba2a64a45dd4aec0dfd6794a29977633b4e0fd1bcbe352\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T18:42:58Z\\\",\\\"message\\\":\\\"le observer\\\\nW0127 18:42:57.781245 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 18:42:57.782660 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 18:42:57.784271 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3951302993/tls.crt::/tmp/serving-cert-3951302993/tls.key\\\\\\\"\\\\nI0127 18:42:58.344426 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 18:42:58.346603 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 18:42:58.346626 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 18:42:58.346650 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 18:42:58.346655 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 18:42:58.351830 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 18:42:58.351854 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 18:42:58.351860 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 18:42:58.351866 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 18:42:58.351870 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 18:42:58.351874 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nI0127 18:42:58.351873 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0127 18:42:58.351879 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0127 18:42:58.354790 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:52Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:43:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0058055ea5c0242bdb042c45a00203d344383e845de280c81bc7695416563666\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:40Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://00d05692e328aacfc251e3b583e1da6d1f4f677e93d07cec2aaa7aea5c3038cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://00d05692e328aacfc251e3b583e1da6d1f4f677e93d07cec2aaa7aea5c3038cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:42:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:42:38Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:30Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:30 crc kubenswrapper[4853]: I0127 18:43:30.722720 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a5b9c54a-360e-431b-b35a-eea549616487\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e57571b0b75d3ac6eaa28ddbc474ddf47f632c158372a965dfe4c30f54d8a2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2dcf34028a2a1a0d41f57ebced90e711bfa6d0f199a2b12bc2bcd6d0642d10b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://14e9331353060208631cfca86d748339935f8581d50a234f2864eaffdd98ff39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7d4896cb23e4a91356e7fabb5dfcf05e46bf7e
8c34b6bd73d63a16422d015ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a02f84ba42baba137145a54bfa742b12b0f94aaf41801347eb8a803a6acd1b2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15ab34ce94a12a65d8b187ce438f27e37d4b45e0b9ed080c79d6e4633e777658\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://15ab34ce94a12a65d8b187ce438f27e37d4b45e0b9ed080c79d6e4633e777658\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:42:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d972c76164f33d83bd6d211da2b4ecec08c5d88dd4cd474c5d5a8122649ea1cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d972c76164f33d83bd6d211da2b4ecec08c5d88dd4cd474c5d5a8122649ea1cd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:42:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:40Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b9435046e30c8cda89d2ab6f312406bc2cc6db0c03ae6cdea1eb5acf8a804df1\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9435046e30c8cda89d2ab6f312406bc2cc6db0c03ae6cdea1eb5acf8a804df1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:42:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:42:38Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:30Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:30 crc kubenswrapper[4853]: I0127 18:43:30.736956 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3fa5b5cf-a6bd-4726-aa83-4ae7fa257dd2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d34d7b284bf0d4da5b618f3afc8546d8de1c57118035eca06d1d8d53afd59503\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://60ef6ab0f3537b63366418829fb851bf2b21df5c3509f1e6ea61a3ba0530f537\\\",\\\"image\\\":\\\"quay.io/openshift-
release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ab7681c5d4c9e9e1e003ecff21e3a39e40164693ef6b8fcdded71650dcff4ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9f4494451f75c64fbdca006455d3ce09b14f45939b855d782629e25af517ed0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9f4494451f75c64fbdca006455d3ce09b14f45939b855d782629e25af517ed0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:42:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:39Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:42:38Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:30Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:30 crc kubenswrapper[4853]: I0127 18:43:30.749112 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-l59xt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f9e82bc6-1fab-4815-a64e-2ebbf8b72315\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d6f7b775ee615931d713c3fcf51828673cd8414a08c46c04eb38abd37c58d5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:43:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnhhx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:43:03Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-l59xt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:30Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:30 crc kubenswrapper[4853]: I0127 18:43:30.760065 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:30 crc kubenswrapper[4853]: I0127 18:43:30.760108 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:30 crc kubenswrapper[4853]: I0127 18:43:30.760139 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:30 crc kubenswrapper[4853]: I0127 18:43:30.760160 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:30 crc kubenswrapper[4853]: I0127 18:43:30.760174 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:30Z","lastTransitionTime":"2026-01-27T18:43:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:43:30 crc kubenswrapper[4853]: I0127 18:43:30.862647 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:30 crc kubenswrapper[4853]: I0127 18:43:30.862687 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:30 crc kubenswrapper[4853]: I0127 18:43:30.862698 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:30 crc kubenswrapper[4853]: I0127 18:43:30.862714 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:30 crc kubenswrapper[4853]: I0127 18:43:30.862724 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:30Z","lastTransitionTime":"2026-01-27T18:43:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:43:30 crc kubenswrapper[4853]: I0127 18:43:30.964688 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:30 crc kubenswrapper[4853]: I0127 18:43:30.964729 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:30 crc kubenswrapper[4853]: I0127 18:43:30.964741 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:30 crc kubenswrapper[4853]: I0127 18:43:30.964757 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:30 crc kubenswrapper[4853]: I0127 18:43:30.964768 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:30Z","lastTransitionTime":"2026-01-27T18:43:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:31 crc kubenswrapper[4853]: I0127 18:43:31.067500 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:31 crc kubenswrapper[4853]: I0127 18:43:31.067572 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:31 crc kubenswrapper[4853]: I0127 18:43:31.067583 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:31 crc kubenswrapper[4853]: I0127 18:43:31.067598 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:31 crc kubenswrapper[4853]: I0127 18:43:31.067608 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:31Z","lastTransitionTime":"2026-01-27T18:43:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:43:31 crc kubenswrapper[4853]: I0127 18:43:31.101479 4853 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-30 10:23:08.037421483 +0000 UTC Jan 27 18:43:31 crc kubenswrapper[4853]: I0127 18:43:31.111842 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 18:43:31 crc kubenswrapper[4853]: I0127 18:43:31.111906 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 18:43:31 crc kubenswrapper[4853]: E0127 18:43:31.111980 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 18:43:31 crc kubenswrapper[4853]: I0127 18:43:31.112045 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wdzg4" Jan 27 18:43:31 crc kubenswrapper[4853]: I0127 18:43:31.112053 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 18:43:31 crc kubenswrapper[4853]: E0127 18:43:31.112177 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 18:43:31 crc kubenswrapper[4853]: E0127 18:43:31.112317 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-wdzg4" podUID="29407244-fbfe-4d37-a33e-7d59df1c22fd" Jan 27 18:43:31 crc kubenswrapper[4853]: E0127 18:43:31.112409 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 18:43:31 crc kubenswrapper[4853]: I0127 18:43:31.170383 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:31 crc kubenswrapper[4853]: I0127 18:43:31.170418 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:31 crc kubenswrapper[4853]: I0127 18:43:31.170427 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:31 crc kubenswrapper[4853]: I0127 18:43:31.170441 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:31 crc kubenswrapper[4853]: I0127 18:43:31.170451 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:31Z","lastTransitionTime":"2026-01-27T18:43:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:43:31 crc kubenswrapper[4853]: I0127 18:43:31.272808 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:31 crc kubenswrapper[4853]: I0127 18:43:31.272851 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:31 crc kubenswrapper[4853]: I0127 18:43:31.272866 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:31 crc kubenswrapper[4853]: I0127 18:43:31.272886 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:31 crc kubenswrapper[4853]: I0127 18:43:31.272900 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:31Z","lastTransitionTime":"2026-01-27T18:43:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:31 crc kubenswrapper[4853]: I0127 18:43:31.375547 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:31 crc kubenswrapper[4853]: I0127 18:43:31.375595 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:31 crc kubenswrapper[4853]: I0127 18:43:31.375611 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:31 crc kubenswrapper[4853]: I0127 18:43:31.375629 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:31 crc kubenswrapper[4853]: I0127 18:43:31.375641 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:31Z","lastTransitionTime":"2026-01-27T18:43:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:43:31 crc kubenswrapper[4853]: I0127 18:43:31.478288 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:31 crc kubenswrapper[4853]: I0127 18:43:31.478346 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:31 crc kubenswrapper[4853]: I0127 18:43:31.478375 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:31 crc kubenswrapper[4853]: I0127 18:43:31.478406 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:31 crc kubenswrapper[4853]: I0127 18:43:31.478425 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:31Z","lastTransitionTime":"2026-01-27T18:43:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:43:31 crc kubenswrapper[4853]: I0127 18:43:31.581975 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:31 crc kubenswrapper[4853]: I0127 18:43:31.582020 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:31 crc kubenswrapper[4853]: I0127 18:43:31.582041 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:31 crc kubenswrapper[4853]: I0127 18:43:31.582069 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:31 crc kubenswrapper[4853]: I0127 18:43:31.582086 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:31Z","lastTransitionTime":"2026-01-27T18:43:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:31 crc kubenswrapper[4853]: I0127 18:43:31.685034 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:31 crc kubenswrapper[4853]: I0127 18:43:31.685073 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:31 crc kubenswrapper[4853]: I0127 18:43:31.685085 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:31 crc kubenswrapper[4853]: I0127 18:43:31.685101 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:31 crc kubenswrapper[4853]: I0127 18:43:31.685112 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:31Z","lastTransitionTime":"2026-01-27T18:43:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:43:31 crc kubenswrapper[4853]: I0127 18:43:31.787108 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:31 crc kubenswrapper[4853]: I0127 18:43:31.787164 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:31 crc kubenswrapper[4853]: I0127 18:43:31.787176 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:31 crc kubenswrapper[4853]: I0127 18:43:31.787191 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:31 crc kubenswrapper[4853]: I0127 18:43:31.787202 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:31Z","lastTransitionTime":"2026-01-27T18:43:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:43:31 crc kubenswrapper[4853]: I0127 18:43:31.891276 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:31 crc kubenswrapper[4853]: I0127 18:43:31.891322 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:31 crc kubenswrapper[4853]: I0127 18:43:31.891333 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:31 crc kubenswrapper[4853]: I0127 18:43:31.891350 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:31 crc kubenswrapper[4853]: I0127 18:43:31.891361 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:31Z","lastTransitionTime":"2026-01-27T18:43:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:31 crc kubenswrapper[4853]: I0127 18:43:31.993014 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:31 crc kubenswrapper[4853]: I0127 18:43:31.993050 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:31 crc kubenswrapper[4853]: I0127 18:43:31.993059 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:31 crc kubenswrapper[4853]: I0127 18:43:31.993073 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:31 crc kubenswrapper[4853]: I0127 18:43:31.993082 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:31Z","lastTransitionTime":"2026-01-27T18:43:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:43:32 crc kubenswrapper[4853]: I0127 18:43:32.094620 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:32 crc kubenswrapper[4853]: I0127 18:43:32.094649 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:32 crc kubenswrapper[4853]: I0127 18:43:32.094658 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:32 crc kubenswrapper[4853]: I0127 18:43:32.094669 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:32 crc kubenswrapper[4853]: I0127 18:43:32.094677 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:32Z","lastTransitionTime":"2026-01-27T18:43:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:32 crc kubenswrapper[4853]: I0127 18:43:32.102146 4853 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-01 16:00:52.946527416 +0000 UTC Jan 27 18:43:32 crc kubenswrapper[4853]: I0127 18:43:32.196880 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:32 crc kubenswrapper[4853]: I0127 18:43:32.196905 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:32 crc kubenswrapper[4853]: I0127 18:43:32.196914 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:32 crc kubenswrapper[4853]: I0127 18:43:32.196927 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:32 crc kubenswrapper[4853]: I0127 18:43:32.196935 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:32Z","lastTransitionTime":"2026-01-27T18:43:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:43:32 crc kubenswrapper[4853]: I0127 18:43:32.299692 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:32 crc kubenswrapper[4853]: I0127 18:43:32.299726 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:32 crc kubenswrapper[4853]: I0127 18:43:32.299737 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:32 crc kubenswrapper[4853]: I0127 18:43:32.299752 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:32 crc kubenswrapper[4853]: I0127 18:43:32.299762 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:32Z","lastTransitionTime":"2026-01-27T18:43:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:32 crc kubenswrapper[4853]: I0127 18:43:32.402195 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:32 crc kubenswrapper[4853]: I0127 18:43:32.402238 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:32 crc kubenswrapper[4853]: I0127 18:43:32.402249 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:32 crc kubenswrapper[4853]: I0127 18:43:32.402265 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:32 crc kubenswrapper[4853]: I0127 18:43:32.402277 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:32Z","lastTransitionTime":"2026-01-27T18:43:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:43:32 crc kubenswrapper[4853]: I0127 18:43:32.505035 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:32 crc kubenswrapper[4853]: I0127 18:43:32.505182 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:32 crc kubenswrapper[4853]: I0127 18:43:32.505202 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:32 crc kubenswrapper[4853]: I0127 18:43:32.505231 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:32 crc kubenswrapper[4853]: I0127 18:43:32.505251 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:32Z","lastTransitionTime":"2026-01-27T18:43:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:43:32 crc kubenswrapper[4853]: I0127 18:43:32.607370 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:32 crc kubenswrapper[4853]: I0127 18:43:32.607455 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:32 crc kubenswrapper[4853]: I0127 18:43:32.607479 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:32 crc kubenswrapper[4853]: I0127 18:43:32.607514 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:32 crc kubenswrapper[4853]: I0127 18:43:32.607538 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:32Z","lastTransitionTime":"2026-01-27T18:43:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:32 crc kubenswrapper[4853]: I0127 18:43:32.710875 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:32 crc kubenswrapper[4853]: I0127 18:43:32.710950 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:32 crc kubenswrapper[4853]: I0127 18:43:32.710969 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:32 crc kubenswrapper[4853]: I0127 18:43:32.711000 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:32 crc kubenswrapper[4853]: I0127 18:43:32.711023 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:32Z","lastTransitionTime":"2026-01-27T18:43:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:43:32 crc kubenswrapper[4853]: I0127 18:43:32.813967 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:32 crc kubenswrapper[4853]: I0127 18:43:32.814051 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:32 crc kubenswrapper[4853]: I0127 18:43:32.814071 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:32 crc kubenswrapper[4853]: I0127 18:43:32.814101 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:32 crc kubenswrapper[4853]: I0127 18:43:32.814147 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:32Z","lastTransitionTime":"2026-01-27T18:43:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:43:32 crc kubenswrapper[4853]: I0127 18:43:32.920042 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:32 crc kubenswrapper[4853]: I0127 18:43:32.920091 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:32 crc kubenswrapper[4853]: I0127 18:43:32.920109 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:32 crc kubenswrapper[4853]: I0127 18:43:32.920163 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:32 crc kubenswrapper[4853]: I0127 18:43:32.920186 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:32Z","lastTransitionTime":"2026-01-27T18:43:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:33 crc kubenswrapper[4853]: I0127 18:43:33.023620 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:33 crc kubenswrapper[4853]: I0127 18:43:33.023656 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:33 crc kubenswrapper[4853]: I0127 18:43:33.023664 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:33 crc kubenswrapper[4853]: I0127 18:43:33.023675 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:33 crc kubenswrapper[4853]: I0127 18:43:33.023683 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:33Z","lastTransitionTime":"2026-01-27T18:43:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:43:33 crc kubenswrapper[4853]: I0127 18:43:33.103003 4853 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-08 07:53:11.718790728 +0000 UTC Jan 27 18:43:33 crc kubenswrapper[4853]: I0127 18:43:33.112294 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 18:43:33 crc kubenswrapper[4853]: I0127 18:43:33.112334 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 18:43:33 crc kubenswrapper[4853]: I0127 18:43:33.112373 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 18:43:33 crc kubenswrapper[4853]: E0127 18:43:33.112406 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 18:43:33 crc kubenswrapper[4853]: E0127 18:43:33.112542 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 18:43:33 crc kubenswrapper[4853]: E0127 18:43:33.112578 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 18:43:33 crc kubenswrapper[4853]: I0127 18:43:33.112556 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wdzg4" Jan 27 18:43:33 crc kubenswrapper[4853]: E0127 18:43:33.112686 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-wdzg4" podUID="29407244-fbfe-4d37-a33e-7d59df1c22fd" Jan 27 18:43:33 crc kubenswrapper[4853]: I0127 18:43:33.126116 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:33 crc kubenswrapper[4853]: I0127 18:43:33.126196 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:33 crc kubenswrapper[4853]: I0127 18:43:33.126213 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:33 crc kubenswrapper[4853]: I0127 18:43:33.126237 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:33 crc kubenswrapper[4853]: I0127 18:43:33.126260 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:33Z","lastTransitionTime":"2026-01-27T18:43:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:43:33 crc kubenswrapper[4853]: I0127 18:43:33.229652 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:33 crc kubenswrapper[4853]: I0127 18:43:33.229699 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:33 crc kubenswrapper[4853]: I0127 18:43:33.229710 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:33 crc kubenswrapper[4853]: I0127 18:43:33.229726 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:33 crc kubenswrapper[4853]: I0127 18:43:33.229738 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:33Z","lastTransitionTime":"2026-01-27T18:43:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Jan 27 18:43:33 crc kubenswrapper[4853]: I0127 18:43:33.332336 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:33 crc kubenswrapper[4853]: I0127 18:43:33.332406 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:33 crc kubenswrapper[4853]: I0127 18:43:33.332423 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:33 crc kubenswrapper[4853]: I0127 18:43:33.332447 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:33 crc kubenswrapper[4853]: I0127 18:43:33.332463 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:33Z","lastTransitionTime":"2026-01-27T18:43:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:43:33 crc kubenswrapper[4853]: I0127 18:43:33.350801 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/29407244-fbfe-4d37-a33e-7d59df1c22fd-metrics-certs\") pod \"network-metrics-daemon-wdzg4\" (UID: \"29407244-fbfe-4d37-a33e-7d59df1c22fd\") " pod="openshift-multus/network-metrics-daemon-wdzg4" Jan 27 18:43:33 crc kubenswrapper[4853]: E0127 18:43:33.350978 4853 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 27 18:43:33 crc kubenswrapper[4853]: E0127 18:43:33.351038 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/29407244-fbfe-4d37-a33e-7d59df1c22fd-metrics-certs podName:29407244-fbfe-4d37-a33e-7d59df1c22fd nodeName:}" failed. No retries permitted until 2026-01-27 18:43:49.351020411 +0000 UTC m=+71.813563304 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/29407244-fbfe-4d37-a33e-7d59df1c22fd-metrics-certs") pod "network-metrics-daemon-wdzg4" (UID: "29407244-fbfe-4d37-a33e-7d59df1c22fd") : object "openshift-multus"/"metrics-daemon-secret" not registered
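The MountVolume failure above does not necessarily mean the secret is missing from the API server: "object not registered" is the kubelet reporting that openshift-multus/metrics-daemon-secret is not (yet) registered in its local secret manager, and it backs off for 16s (durationBeforeRetry) before retrying. A sketch of how one might check both sides, assuming cluster credentials are available on the host:

  # Does the referenced secret exist on the API server?
  oc get secret metrics-daemon-secret -n openshift-multus
  # Follow the kubelet's retry once the 16s backoff expires
  journalctl -u kubelet -f | grep metrics-certs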
Jan 27 18:43:33 crc kubenswrapper[4853]: I0127 18:43:33.435689 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:33 crc kubenswrapper[4853]: I0127 18:43:33.435734 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:33 crc kubenswrapper[4853]: I0127 18:43:33.435746 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:33 crc kubenswrapper[4853]: I0127 18:43:33.435764 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:33 crc kubenswrapper[4853]: I0127 18:43:33.435776 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:33Z","lastTransitionTime":"2026-01-27T18:43:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:43:33 crc kubenswrapper[4853]: I0127 18:43:33.540325 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:33 crc kubenswrapper[4853]: I0127 18:43:33.540378 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:33 crc kubenswrapper[4853]: I0127 18:43:33.540394 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:33 crc kubenswrapper[4853]: I0127 18:43:33.540420 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:33 crc kubenswrapper[4853]: I0127 18:43:33.540439 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:33Z","lastTransitionTime":"2026-01-27T18:43:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:33 crc kubenswrapper[4853]: I0127 18:43:33.643230 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:33 crc kubenswrapper[4853]: I0127 18:43:33.643630 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:33 crc kubenswrapper[4853]: I0127 18:43:33.643950 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:33 crc kubenswrapper[4853]: I0127 18:43:33.644150 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:33 crc kubenswrapper[4853]: I0127 18:43:33.644302 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:33Z","lastTransitionTime":"2026-01-27T18:43:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:43:33 crc kubenswrapper[4853]: I0127 18:43:33.747435 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:33 crc kubenswrapper[4853]: I0127 18:43:33.747488 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:33 crc kubenswrapper[4853]: I0127 18:43:33.747503 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:33 crc kubenswrapper[4853]: I0127 18:43:33.747524 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:33 crc kubenswrapper[4853]: I0127 18:43:33.747541 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:33Z","lastTransitionTime":"2026-01-27T18:43:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:43:33 crc kubenswrapper[4853]: I0127 18:43:33.849511 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:33 crc kubenswrapper[4853]: I0127 18:43:33.849716 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:33 crc kubenswrapper[4853]: I0127 18:43:33.849775 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:33 crc kubenswrapper[4853]: I0127 18:43:33.849834 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:33 crc kubenswrapper[4853]: I0127 18:43:33.849890 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:33Z","lastTransitionTime":"2026-01-27T18:43:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:33 crc kubenswrapper[4853]: I0127 18:43:33.952585 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:33 crc kubenswrapper[4853]: I0127 18:43:33.952824 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:33 crc kubenswrapper[4853]: I0127 18:43:33.952905 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:33 crc kubenswrapper[4853]: I0127 18:43:33.952991 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:33 crc kubenswrapper[4853]: I0127 18:43:33.953072 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:33Z","lastTransitionTime":"2026-01-27T18:43:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:43:34 crc kubenswrapper[4853]: I0127 18:43:34.055025 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:34 crc kubenswrapper[4853]: I0127 18:43:34.055073 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:34 crc kubenswrapper[4853]: I0127 18:43:34.055083 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:34 crc kubenswrapper[4853]: I0127 18:43:34.055099 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:34 crc kubenswrapper[4853]: I0127 18:43:34.055111 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:34Z","lastTransitionTime":"2026-01-27T18:43:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:34 crc kubenswrapper[4853]: I0127 18:43:34.104148 4853 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-06 20:53:50.608448906 +0000 UTC Jan 27 18:43:34 crc kubenswrapper[4853]: I0127 18:43:34.158224 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:34 crc kubenswrapper[4853]: I0127 18:43:34.158499 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:34 crc kubenswrapper[4853]: I0127 18:43:34.158567 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:34 crc kubenswrapper[4853]: I0127 18:43:34.158646 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:34 crc kubenswrapper[4853]: I0127 18:43:34.158713 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:34Z","lastTransitionTime":"2026-01-27T18:43:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:43:34 crc kubenswrapper[4853]: I0127 18:43:34.261394 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:34 crc kubenswrapper[4853]: I0127 18:43:34.261443 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:34 crc kubenswrapper[4853]: I0127 18:43:34.261454 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:34 crc kubenswrapper[4853]: I0127 18:43:34.261474 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:34 crc kubenswrapper[4853]: I0127 18:43:34.261483 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:34Z","lastTransitionTime":"2026-01-27T18:43:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:34 crc kubenswrapper[4853]: I0127 18:43:34.327029 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:34 crc kubenswrapper[4853]: I0127 18:43:34.327247 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:34 crc kubenswrapper[4853]: I0127 18:43:34.327336 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:34 crc kubenswrapper[4853]: I0127 18:43:34.327412 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:34 crc kubenswrapper[4853]: I0127 18:43:34.327498 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:34Z","lastTransitionTime":"2026-01-27T18:43:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:43:34 crc kubenswrapper[4853]: E0127 18:43:34.339040 4853 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:43:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:43:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:34Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:43:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:43:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:34Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"10ff71d2-3e1e-470a-b646-0c487d7259d5\\\",\\\"systemUUID\\\":\\\"f3eb2985-2316-42c4-9a73-507610f5aaf9\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:34Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:34 crc kubenswrapper[4853]: I0127 18:43:34.342266 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:34 crc kubenswrapper[4853]: I0127 18:43:34.342295 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 27 18:43:34 crc kubenswrapper[4853]: I0127 18:43:34.342302 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:34 crc kubenswrapper[4853]: I0127 18:43:34.342313 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:34 crc kubenswrapper[4853]: I0127 18:43:34.342321 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:34Z","lastTransitionTime":"2026-01-27T18:43:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:43:34 crc kubenswrapper[4853]: E0127 18:43:34.352972 4853 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:43:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:43:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:34Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:43:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:43:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:34Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"10ff71d2-3e1e-470a-b646-0c487d7259d5\\\",\\\"systemUUID\\\":\\\"f3eb2985-2316-42c4-9a73-507610f5aaf9\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:34Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:34 crc kubenswrapper[4853]: I0127 18:43:34.356483 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:34 crc kubenswrapper[4853]: I0127 18:43:34.356511 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
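The repeated "Error updating node status, will retry" entries all fail at the same point: the status patch is intercepted by the node.network-node-identity.openshift.io webhook on 127.0.0.1:9743, whose serving certificate expired on 2025-08-24 while the node clock reads 2026-01-27. As a sketch, assuming openssl is available on the host, the certificate's validity window can be read directly off the webhook port:

  # Print notBefore/notAfter for the certificate served on the webhook endpoint
  openssl s_client -connect 127.0.0.1:9743 </dev/null 2>/dev/null | openssl x509 -noout -dates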
event="NodeHasNoDiskPressure" Jan 27 18:43:34 crc kubenswrapper[4853]: I0127 18:43:34.356519 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:34 crc kubenswrapper[4853]: I0127 18:43:34.356531 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:34 crc kubenswrapper[4853]: I0127 18:43:34.356539 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:34Z","lastTransitionTime":"2026-01-27T18:43:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:43:34 crc kubenswrapper[4853]: E0127 18:43:34.366640 4853 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:43:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:43:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:34Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:43:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:43:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:34Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"10ff71d2-3e1e-470a-b646-0c487d7259d5\\\",\\\"systemUUID\\\":\\\"f3eb2985-2316-42c4-9a73-507610f5aaf9\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:34Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:34 crc kubenswrapper[4853]: I0127 18:43:34.369563 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:34 crc kubenswrapper[4853]: I0127 18:43:34.369592 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 27 18:43:34 crc kubenswrapper[4853]: I0127 18:43:34.369602 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:34 crc kubenswrapper[4853]: I0127 18:43:34.369616 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:34 crc kubenswrapper[4853]: I0127 18:43:34.369628 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:34Z","lastTransitionTime":"2026-01-27T18:43:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:43:34 crc kubenswrapper[4853]: E0127 18:43:34.382066 4853 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:43:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:43:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:34Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:43:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:43:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:34Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"10ff71d2-3e1e-470a-b646-0c487d7259d5\\\",\\\"systemUUID\\\":\\\"f3eb2985-2316-42c4-9a73-507610f5aaf9\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:34Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:34 crc kubenswrapper[4853]: I0127 18:43:34.385156 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:34 crc kubenswrapper[4853]: I0127 18:43:34.385190 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 27 18:43:34 crc kubenswrapper[4853]: I0127 18:43:34.385199 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:34 crc kubenswrapper[4853]: I0127 18:43:34.385213 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:34 crc kubenswrapper[4853]: I0127 18:43:34.385225 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:34Z","lastTransitionTime":"2026-01-27T18:43:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:43:34 crc kubenswrapper[4853]: E0127 18:43:34.400497 4853 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:43:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:43:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:34Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:43:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:43:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:34Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"10ff71d2-3e1e-470a-b646-0c487d7259d5\\\",\\\"systemUUID\\\":\\\"f3eb2985-2316-42c4-9a73-507610f5aaf9\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:34Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:34 crc kubenswrapper[4853]: E0127 18:43:34.400800 4853 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 27 18:43:34 crc kubenswrapper[4853]: I0127 18:43:34.402423 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 27 18:43:34 crc kubenswrapper[4853]: I0127 18:43:34.402455 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:34 crc kubenswrapper[4853]: I0127 18:43:34.402466 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:34 crc kubenswrapper[4853]: I0127 18:43:34.402480 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:34 crc kubenswrapper[4853]: I0127 18:43:34.402489 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:34Z","lastTransitionTime":"2026-01-27T18:43:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:43:34 crc kubenswrapper[4853]: I0127 18:43:34.505160 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:34 crc kubenswrapper[4853]: I0127 18:43:34.505425 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:34 crc kubenswrapper[4853]: I0127 18:43:34.505434 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:34 crc kubenswrapper[4853]: I0127 18:43:34.505448 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:34 crc kubenswrapper[4853]: I0127 18:43:34.505456 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:34Z","lastTransitionTime":"2026-01-27T18:43:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:43:34 crc kubenswrapper[4853]: I0127 18:43:34.607857 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:34 crc kubenswrapper[4853]: I0127 18:43:34.607908 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:34 crc kubenswrapper[4853]: I0127 18:43:34.607923 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:34 crc kubenswrapper[4853]: I0127 18:43:34.607939 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:34 crc kubenswrapper[4853]: I0127 18:43:34.607952 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:34Z","lastTransitionTime":"2026-01-27T18:43:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:34 crc kubenswrapper[4853]: I0127 18:43:34.709562 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:34 crc kubenswrapper[4853]: I0127 18:43:34.709594 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:34 crc kubenswrapper[4853]: I0127 18:43:34.709603 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:34 crc kubenswrapper[4853]: I0127 18:43:34.709617 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:34 crc kubenswrapper[4853]: I0127 18:43:34.709626 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:34Z","lastTransitionTime":"2026-01-27T18:43:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:43:34 crc kubenswrapper[4853]: I0127 18:43:34.812198 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:34 crc kubenswrapper[4853]: I0127 18:43:34.812249 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:34 crc kubenswrapper[4853]: I0127 18:43:34.812264 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:34 crc kubenswrapper[4853]: I0127 18:43:34.812286 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:34 crc kubenswrapper[4853]: I0127 18:43:34.812303 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:34Z","lastTransitionTime":"2026-01-27T18:43:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:43:34 crc kubenswrapper[4853]: I0127 18:43:34.914020 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:34 crc kubenswrapper[4853]: I0127 18:43:34.914052 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:34 crc kubenswrapper[4853]: I0127 18:43:34.914062 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:34 crc kubenswrapper[4853]: I0127 18:43:34.914077 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:34 crc kubenswrapper[4853]: I0127 18:43:34.914087 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:34Z","lastTransitionTime":"2026-01-27T18:43:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:35 crc kubenswrapper[4853]: I0127 18:43:35.016697 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:35 crc kubenswrapper[4853]: I0127 18:43:35.016723 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:35 crc kubenswrapper[4853]: I0127 18:43:35.016740 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:35 crc kubenswrapper[4853]: I0127 18:43:35.016752 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:35 crc kubenswrapper[4853]: I0127 18:43:35.016765 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:35Z","lastTransitionTime":"2026-01-27T18:43:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:43:35 crc kubenswrapper[4853]: I0127 18:43:35.105180 4853 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-29 12:10:45.24250622 +0000 UTC Jan 27 18:43:35 crc kubenswrapper[4853]: I0127 18:43:35.111458 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wdzg4" Jan 27 18:43:35 crc kubenswrapper[4853]: I0127 18:43:35.111486 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 18:43:35 crc kubenswrapper[4853]: I0127 18:43:35.111523 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 18:43:35 crc kubenswrapper[4853]: I0127 18:43:35.111583 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 18:43:35 crc kubenswrapper[4853]: E0127 18:43:35.111582 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-wdzg4" podUID="29407244-fbfe-4d37-a33e-7d59df1c22fd" Jan 27 18:43:35 crc kubenswrapper[4853]: E0127 18:43:35.111716 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 18:43:35 crc kubenswrapper[4853]: E0127 18:43:35.111762 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 18:43:35 crc kubenswrapper[4853]: E0127 18:43:35.111803 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 18:43:35 crc kubenswrapper[4853]: I0127 18:43:35.118361 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:35 crc kubenswrapper[4853]: I0127 18:43:35.118417 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:35 crc kubenswrapper[4853]: I0127 18:43:35.118441 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:35 crc kubenswrapper[4853]: I0127 18:43:35.118469 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:35 crc kubenswrapper[4853]: I0127 18:43:35.118511 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:35Z","lastTransitionTime":"2026-01-27T18:43:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:43:35 crc kubenswrapper[4853]: I0127 18:43:35.220852 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:35 crc kubenswrapper[4853]: I0127 18:43:35.220895 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:35 crc kubenswrapper[4853]: I0127 18:43:35.220908 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:35 crc kubenswrapper[4853]: I0127 18:43:35.220923 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:35 crc kubenswrapper[4853]: I0127 18:43:35.220933 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:35Z","lastTransitionTime":"2026-01-27T18:43:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:35 crc kubenswrapper[4853]: I0127 18:43:35.323628 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:35 crc kubenswrapper[4853]: I0127 18:43:35.323678 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:35 crc kubenswrapper[4853]: I0127 18:43:35.323692 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:35 crc kubenswrapper[4853]: I0127 18:43:35.323711 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:35 crc kubenswrapper[4853]: I0127 18:43:35.323723 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:35Z","lastTransitionTime":"2026-01-27T18:43:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:43:35 crc kubenswrapper[4853]: I0127 18:43:35.426199 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:35 crc kubenswrapper[4853]: I0127 18:43:35.426240 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:35 crc kubenswrapper[4853]: I0127 18:43:35.426251 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:35 crc kubenswrapper[4853]: I0127 18:43:35.426266 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:35 crc kubenswrapper[4853]: I0127 18:43:35.426275 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:35Z","lastTransitionTime":"2026-01-27T18:43:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:43:35 crc kubenswrapper[4853]: I0127 18:43:35.528271 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:35 crc kubenswrapper[4853]: I0127 18:43:35.528315 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:35 crc kubenswrapper[4853]: I0127 18:43:35.528326 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:35 crc kubenswrapper[4853]: I0127 18:43:35.528343 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:35 crc kubenswrapper[4853]: I0127 18:43:35.528354 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:35Z","lastTransitionTime":"2026-01-27T18:43:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:35 crc kubenswrapper[4853]: I0127 18:43:35.632950 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:35 crc kubenswrapper[4853]: I0127 18:43:35.632999 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:35 crc kubenswrapper[4853]: I0127 18:43:35.633008 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:35 crc kubenswrapper[4853]: I0127 18:43:35.633026 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:35 crc kubenswrapper[4853]: I0127 18:43:35.633037 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:35Z","lastTransitionTime":"2026-01-27T18:43:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:43:35 crc kubenswrapper[4853]: I0127 18:43:35.736091 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:35 crc kubenswrapper[4853]: I0127 18:43:35.736139 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:35 crc kubenswrapper[4853]: I0127 18:43:35.736148 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:35 crc kubenswrapper[4853]: I0127 18:43:35.736161 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:35 crc kubenswrapper[4853]: I0127 18:43:35.736172 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:35Z","lastTransitionTime":"2026-01-27T18:43:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:43:35 crc kubenswrapper[4853]: I0127 18:43:35.839030 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:35 crc kubenswrapper[4853]: I0127 18:43:35.839076 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:35 crc kubenswrapper[4853]: I0127 18:43:35.839088 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:35 crc kubenswrapper[4853]: I0127 18:43:35.839105 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:35 crc kubenswrapper[4853]: I0127 18:43:35.839115 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:35Z","lastTransitionTime":"2026-01-27T18:43:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:35 crc kubenswrapper[4853]: I0127 18:43:35.941640 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:35 crc kubenswrapper[4853]: I0127 18:43:35.941686 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:35 crc kubenswrapper[4853]: I0127 18:43:35.941697 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:35 crc kubenswrapper[4853]: I0127 18:43:35.941714 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:35 crc kubenswrapper[4853]: I0127 18:43:35.941727 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:35Z","lastTransitionTime":"2026-01-27T18:43:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:43:36 crc kubenswrapper[4853]: I0127 18:43:36.044398 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:36 crc kubenswrapper[4853]: I0127 18:43:36.044480 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:36 crc kubenswrapper[4853]: I0127 18:43:36.044504 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:36 crc kubenswrapper[4853]: I0127 18:43:36.044536 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:36 crc kubenswrapper[4853]: I0127 18:43:36.044558 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:36Z","lastTransitionTime":"2026-01-27T18:43:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:36 crc kubenswrapper[4853]: I0127 18:43:36.106062 4853 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-28 16:41:12.090079468 +0000 UTC Jan 27 18:43:36 crc kubenswrapper[4853]: I0127 18:43:36.147487 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:36 crc kubenswrapper[4853]: I0127 18:43:36.147562 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:36 crc kubenswrapper[4853]: I0127 18:43:36.147585 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:36 crc kubenswrapper[4853]: I0127 18:43:36.147616 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:36 crc kubenswrapper[4853]: I0127 18:43:36.147642 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:36Z","lastTransitionTime":"2026-01-27T18:43:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:43:36 crc kubenswrapper[4853]: I0127 18:43:36.250871 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:36 crc kubenswrapper[4853]: I0127 18:43:36.250923 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:36 crc kubenswrapper[4853]: I0127 18:43:36.250935 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:36 crc kubenswrapper[4853]: I0127 18:43:36.250954 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:36 crc kubenswrapper[4853]: I0127 18:43:36.250966 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:36Z","lastTransitionTime":"2026-01-27T18:43:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:36 crc kubenswrapper[4853]: I0127 18:43:36.353706 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:36 crc kubenswrapper[4853]: I0127 18:43:36.353771 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:36 crc kubenswrapper[4853]: I0127 18:43:36.353796 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:36 crc kubenswrapper[4853]: I0127 18:43:36.353825 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:36 crc kubenswrapper[4853]: I0127 18:43:36.353847 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:36Z","lastTransitionTime":"2026-01-27T18:43:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:43:36 crc kubenswrapper[4853]: I0127 18:43:36.456033 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:36 crc kubenswrapper[4853]: I0127 18:43:36.456095 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:36 crc kubenswrapper[4853]: I0127 18:43:36.456149 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:36 crc kubenswrapper[4853]: I0127 18:43:36.456174 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:36 crc kubenswrapper[4853]: I0127 18:43:36.456200 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:36Z","lastTransitionTime":"2026-01-27T18:43:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:43:36 crc kubenswrapper[4853]: I0127 18:43:36.558273 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:36 crc kubenswrapper[4853]: I0127 18:43:36.558354 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:36 crc kubenswrapper[4853]: I0127 18:43:36.558377 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:36 crc kubenswrapper[4853]: I0127 18:43:36.558404 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:36 crc kubenswrapper[4853]: I0127 18:43:36.558425 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:36Z","lastTransitionTime":"2026-01-27T18:43:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:36 crc kubenswrapper[4853]: I0127 18:43:36.661469 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:36 crc kubenswrapper[4853]: I0127 18:43:36.661525 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:36 crc kubenswrapper[4853]: I0127 18:43:36.661540 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:36 crc kubenswrapper[4853]: I0127 18:43:36.661556 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:36 crc kubenswrapper[4853]: I0127 18:43:36.661567 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:36Z","lastTransitionTime":"2026-01-27T18:43:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:43:36 crc kubenswrapper[4853]: I0127 18:43:36.764705 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:36 crc kubenswrapper[4853]: I0127 18:43:36.764779 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:36 crc kubenswrapper[4853]: I0127 18:43:36.764801 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:36 crc kubenswrapper[4853]: I0127 18:43:36.764830 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:36 crc kubenswrapper[4853]: I0127 18:43:36.764851 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:36Z","lastTransitionTime":"2026-01-27T18:43:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:43:36 crc kubenswrapper[4853]: I0127 18:43:36.868079 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:36 crc kubenswrapper[4853]: I0127 18:43:36.868214 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:36 crc kubenswrapper[4853]: I0127 18:43:36.868250 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:36 crc kubenswrapper[4853]: I0127 18:43:36.868278 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:36 crc kubenswrapper[4853]: I0127 18:43:36.868299 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:36Z","lastTransitionTime":"2026-01-27T18:43:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:36 crc kubenswrapper[4853]: I0127 18:43:36.970914 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:36 crc kubenswrapper[4853]: I0127 18:43:36.970960 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:36 crc kubenswrapper[4853]: I0127 18:43:36.970968 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:36 crc kubenswrapper[4853]: I0127 18:43:36.970983 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:36 crc kubenswrapper[4853]: I0127 18:43:36.970993 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:36Z","lastTransitionTime":"2026-01-27T18:43:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:43:37 crc kubenswrapper[4853]: I0127 18:43:37.073509 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:37 crc kubenswrapper[4853]: I0127 18:43:37.073553 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:37 crc kubenswrapper[4853]: I0127 18:43:37.073567 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:37 crc kubenswrapper[4853]: I0127 18:43:37.073582 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:37 crc kubenswrapper[4853]: I0127 18:43:37.073595 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:37Z","lastTransitionTime":"2026-01-27T18:43:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:43:37 crc kubenswrapper[4853]: I0127 18:43:37.106628 4853 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-08 16:44:26.823958031 +0000 UTC Jan 27 18:43:37 crc kubenswrapper[4853]: I0127 18:43:37.111956 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 18:43:37 crc kubenswrapper[4853]: I0127 18:43:37.111975 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 18:43:37 crc kubenswrapper[4853]: I0127 18:43:37.112071 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 18:43:37 crc kubenswrapper[4853]: I0127 18:43:37.112047 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-wdzg4" Jan 27 18:43:37 crc kubenswrapper[4853]: E0127 18:43:37.112298 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 18:43:37 crc kubenswrapper[4853]: E0127 18:43:37.112538 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 18:43:37 crc kubenswrapper[4853]: E0127 18:43:37.112653 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 18:43:37 crc kubenswrapper[4853]: E0127 18:43:37.112733 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-wdzg4" podUID="29407244-fbfe-4d37-a33e-7d59df1c22fd" Jan 27 18:43:37 crc kubenswrapper[4853]: I0127 18:43:37.176897 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:37 crc kubenswrapper[4853]: I0127 18:43:37.176934 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:37 crc kubenswrapper[4853]: I0127 18:43:37.176942 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:37 crc kubenswrapper[4853]: I0127 18:43:37.176956 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:37 crc kubenswrapper[4853]: I0127 18:43:37.176965 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:37Z","lastTransitionTime":"2026-01-27T18:43:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:37 crc kubenswrapper[4853]: I0127 18:43:37.279021 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:37 crc kubenswrapper[4853]: I0127 18:43:37.279079 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:37 crc kubenswrapper[4853]: I0127 18:43:37.279097 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:37 crc kubenswrapper[4853]: I0127 18:43:37.279150 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:37 crc kubenswrapper[4853]: I0127 18:43:37.279169 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:37Z","lastTransitionTime":"2026-01-27T18:43:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:43:37 crc kubenswrapper[4853]: I0127 18:43:37.381021 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:37 crc kubenswrapper[4853]: I0127 18:43:37.381058 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:37 crc kubenswrapper[4853]: I0127 18:43:37.381068 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:37 crc kubenswrapper[4853]: I0127 18:43:37.381083 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:37 crc kubenswrapper[4853]: I0127 18:43:37.381106 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:37Z","lastTransitionTime":"2026-01-27T18:43:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:43:37 crc kubenswrapper[4853]: I0127 18:43:37.484635 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:37 crc kubenswrapper[4853]: I0127 18:43:37.484698 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:37 crc kubenswrapper[4853]: I0127 18:43:37.484717 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:37 crc kubenswrapper[4853]: I0127 18:43:37.484739 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:37 crc kubenswrapper[4853]: I0127 18:43:37.484755 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:37Z","lastTransitionTime":"2026-01-27T18:43:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:37 crc kubenswrapper[4853]: I0127 18:43:37.587766 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:37 crc kubenswrapper[4853]: I0127 18:43:37.587825 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:37 crc kubenswrapper[4853]: I0127 18:43:37.587841 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:37 crc kubenswrapper[4853]: I0127 18:43:37.587863 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:37 crc kubenswrapper[4853]: I0127 18:43:37.587879 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:37Z","lastTransitionTime":"2026-01-27T18:43:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:43:37 crc kubenswrapper[4853]: I0127 18:43:37.691260 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:37 crc kubenswrapper[4853]: I0127 18:43:37.691335 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:37 crc kubenswrapper[4853]: I0127 18:43:37.691369 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:37 crc kubenswrapper[4853]: I0127 18:43:37.691398 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:37 crc kubenswrapper[4853]: I0127 18:43:37.691419 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:37Z","lastTransitionTime":"2026-01-27T18:43:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:43:37 crc kubenswrapper[4853]: I0127 18:43:37.793852 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:37 crc kubenswrapper[4853]: I0127 18:43:37.793934 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:37 crc kubenswrapper[4853]: I0127 18:43:37.793958 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:37 crc kubenswrapper[4853]: I0127 18:43:37.793988 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:37 crc kubenswrapper[4853]: I0127 18:43:37.794011 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:37Z","lastTransitionTime":"2026-01-27T18:43:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:37 crc kubenswrapper[4853]: I0127 18:43:37.896701 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:37 crc kubenswrapper[4853]: I0127 18:43:37.896759 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:37 crc kubenswrapper[4853]: I0127 18:43:37.896784 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:37 crc kubenswrapper[4853]: I0127 18:43:37.896807 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:37 crc kubenswrapper[4853]: I0127 18:43:37.896822 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:37Z","lastTransitionTime":"2026-01-27T18:43:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:43:37 crc kubenswrapper[4853]: I0127 18:43:37.998863 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:37 crc kubenswrapper[4853]: I0127 18:43:37.998906 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:37 crc kubenswrapper[4853]: I0127 18:43:37.998919 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:37 crc kubenswrapper[4853]: I0127 18:43:37.998933 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:37 crc kubenswrapper[4853]: I0127 18:43:37.998942 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:37Z","lastTransitionTime":"2026-01-27T18:43:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:43:38 crc kubenswrapper[4853]: I0127 18:43:38.101964 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:38 crc kubenswrapper[4853]: I0127 18:43:38.102022 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:38 crc kubenswrapper[4853]: I0127 18:43:38.102039 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:38 crc kubenswrapper[4853]: I0127 18:43:38.102061 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:38 crc kubenswrapper[4853]: I0127 18:43:38.102076 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:38Z","lastTransitionTime":"2026-01-27T18:43:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:38 crc kubenswrapper[4853]: I0127 18:43:38.106850 4853 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-07 09:36:33.358692818 +0000 UTC Jan 27 18:43:38 crc kubenswrapper[4853]: I0127 18:43:38.128027 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eccf4b23-863a-490c-ba35-1b03d360e200\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7cfb91267c8977bea938d9584cf8047ae6f6b4bff6a9d74c623f8903cd3416ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58d32f27b39964fe7195a0e25264b28a01c6c29753912edac25d07eef4f37cc7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a941071ee01b24389cae64dfc6cd851cde6b352121235ebf9e9bec77f8bcfb69\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"star
ted\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae13abce33d48960f367ee4160e730c0f88cd877bd0d615cecac63d2a35b8cc5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://213668bb8a0478fe7fba2a64a45dd4aec0dfd6794a29977633b4e0fd1bcbe352\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T18:42:58Z\\\",\\\"message\\\":\\\"le observer\\\\nW0127 18:42:57.781245 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 18:42:57.782660 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 18:42:57.784271 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3951302993/tls.crt::/tmp/serving-cert-3951302993/tls.key\\\\\\\"\\\\nI0127 18:42:58.344426 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 18:42:58.346603 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 18:42:58.346626 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 18:42:58.346650 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 18:42:58.346655 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 18:42:58.351830 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 18:42:58.351854 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 18:42:58.351860 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 18:42:58.351866 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 18:42:58.351870 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 18:42:58.351874 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nI0127 18:42:58.351873 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0127 18:42:58.351879 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0127 18:42:58.354790 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:52Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:43:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0058055ea5c0242bdb042c45a00203d344383e845de280c81bc7695416563666\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:40Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://00d05692e328aacfc251e3b583e1da6d1f4f677e93d07cec2aaa7aea5c3038cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://00d05692e328aacfc251e3b583e1da6d1f4f677e93d07cec2aaa7aea5c3038cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:42:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:42:38Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:38Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:38 crc kubenswrapper[4853]: I0127 18:43:38.146055 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a5b9c54a-360e-431b-b35a-eea549616487\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e57571b0b75d3ac6eaa28ddbc474ddf47f632c158372a965dfe4c30f54d8a2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2dcf34028a2a1a0d41f57ebced90e711bfa6d0f199a2b12bc2bcd6d0642d10b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://14e9331353060208631cfca86d748339935f8581d50a234f2864eaffdd98ff39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7d4896cb23e4a91356e7fabb5dfcf05e46bf7e
8c34b6bd73d63a16422d015ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a02f84ba42baba137145a54bfa742b12b0f94aaf41801347eb8a803a6acd1b2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15ab34ce94a12a65d8b187ce438f27e37d4b45e0b9ed080c79d6e4633e777658\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://15ab34ce94a12a65d8b187ce438f27e37d4b45e0b9ed080c79d6e4633e777658\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:42:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d972c76164f33d83bd6d211da2b4ecec08c5d88dd4cd474c5d5a8122649ea1cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d972c76164f33d83bd6d211da2b4ecec08c5d88dd4cd474c5d5a8122649ea1cd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:42:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:40Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b9435046e30c8cda89d2ab6f312406bc2cc6db0c03ae6cdea1eb5acf8a804df1\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9435046e30c8cda89d2ab6f312406bc2cc6db0c03ae6cdea1eb5acf8a804df1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:42:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:42:38Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:38Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:38 crc kubenswrapper[4853]: I0127 18:43:38.157437 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3fa5b5cf-a6bd-4726-aa83-4ae7fa257dd2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d34d7b284bf0d4da5b618f3afc8546d8de1c57118035eca06d1d8d53afd59503\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://60ef6ab0f3537b63366418829fb851bf2b21df5c3509f1e6ea61a3ba0530f537\\\",\\\"image\\\":\\\"quay.io/openshift-
release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ab7681c5d4c9e9e1e003ecff21e3a39e40164693ef6b8fcdded71650dcff4ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9f4494451f75c64fbdca006455d3ce09b14f45939b855d782629e25af517ed0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9f4494451f75c64fbdca006455d3ce09b14f45939b855d782629e25af517ed0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:42:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:39Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:42:38Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:38Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:38 crc kubenswrapper[4853]: I0127 18:43:38.167576 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-l59xt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f9e82bc6-1fab-4815-a64e-2ebbf8b72315\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d6f7b775ee615931d713c3fcf51828673cd8414a08c46c04eb38abd37c58d5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:43:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnhhx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:43:03Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-l59xt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:38Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:38 crc kubenswrapper[4853]: I0127 18:43:38.181147 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ght98" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ce95829c-f3fb-493c-bf9a-a3515fe6ddac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://918b69aa50072e0227c96f268fe68b5dfc90acf2b8b93b7fdee73695fc6cbab0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:43:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttzfq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10a6c7c9dd5d82c1b1f5f2d323a2f530c6b2e18362eedbbe74cc8c570732331f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10a6c7c9dd5d82c1b1f5f2d323a2f530c6b2e18362eedbbe74cc8c570732331f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:43:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttzfq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b1e8c337bda8a6b6c0848db7c95140385d545fc7dd729f002d21c248e2bbf881\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b1e8c337bda8a6b6c0848db7c95140385d545fc7dd729f002d21c248e2bbf881\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:43:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:43:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttzfq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6f0216b6ea3819db267200bb464f15f2f7e4de1c65df0871d6f7e70ecb54839\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d6f0216b6ea3819db267200bb464f15f2f7e4de1c65df0871d6f7e70ecb54839\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:43:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:43:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttzfq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://205b9e4e915af8e2031c48180f4d56d6075e94f8a8ff8b40fe6b46229fb1aa03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://205b9e4e915af8e2031c48180f4d56d6075e94f8a8ff8b40fe6b46229fb1aa03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:43:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:43:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttzfq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://12c64d2f527e9e9bd0975df15cbeca44c11d215f2591052b030b6afa87600576\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://12c64d2f527e9e9bd0975df15cbeca44c11d215f2591052b030b6afa87600576\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:43:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:43:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttzfq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c8f991246bd43ecaff0d08c9fb12a3347d52de48db0ea8f8a674bbdc9945c13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3c8f991246bd43ecaff0d08c9fb12a3347d52de48db0ea8f8a674bbdc9945c13\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:43:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:43:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttzfq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:43:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ght98\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:38Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:38 crc kubenswrapper[4853]: I0127 18:43:38.192343 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-wdzg4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"29407244-fbfe-4d37-a33e-7d59df1c22fd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgqf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgqf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:43:17Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-wdzg4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:38Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:38 crc kubenswrapper[4853]: I0127 18:43:38.203627 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:38 crc kubenswrapper[4853]: I0127 18:43:38.203699 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:38 crc kubenswrapper[4853]: I0127 18:43:38.203722 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Jan 27 18:43:38 crc kubenswrapper[4853]: I0127 18:43:38.203753 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:38 crc kubenswrapper[4853]: I0127 18:43:38.203775 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:38Z","lastTransitionTime":"2026-01-27T18:43:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:43:38 crc kubenswrapper[4853]: I0127 18:43:38.213969 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f980f1d965582a58bd18d993850b5bd5f21849c1d1c0275c81fdbb25d549e6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:38Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:38 crc kubenswrapper[4853]: I0127 18:43:38.226031 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5a0b2fe5dbcc81055c001720c8ab69b7bd1989fe60d9ef4063c96c53a3f4536\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:38Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:38 crc kubenswrapper[4853]: I0127 18:43:38.234968 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-2hzcp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c088d143-dd9c-4c77-b9a3-3a0113306f41\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b1b85e99c8381dbf8e86fb211770c49ebd11497f15c73254d6482b45d507654\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:43:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2zqtv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:43:03Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-2hzcp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:38Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:38 crc kubenswrapper[4853]: I0127 18:43:38.244284 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-6gqj2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b8a89b1e-bef8-4cb7-930c-480d3125778c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d886a21d3eb5af71f0dc43a77a71a2d80c11b89013b8b77dd9e680bed9c09a54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:43:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xbhn6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://36ec3ff8ba2e6f89b4c9a8177dc986ce198013a4b5392fc600a191a9d854241a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:43:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xbhn6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:43:03Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-6gqj2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:38Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:38 crc kubenswrapper[4853]: I0127 18:43:38.266235 4853 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hdtbk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ebbc7598-422a-43ad-ae98-88e57ec80b9c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a5e0da6c76e9510cda57fa243b0a721d160745a63e88a9aa736807af73864d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:43:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cq4vs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8de5b4d8d6553f77b012954fddfcb337c9b25ba98d94ef27831b50f63672377\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:43:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cq4vs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7937ea08bd25bed35d9386a8c870c88ff3f58eeec1ba1a2c55bdfa260017f9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:43:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cq4vs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f8095ca05481aa2d17d10ae848c2d052452f3bfa83b6ac23a75d0f59d84a604\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:43:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cq4vs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://efa308f95f35833395528dbe46b9e3d8f25800c18126c75d3db793f9c7945d30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:43:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cq4vs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a23ced79c532f6fcb0f4efcf743b934f7640deb3a7b1b879032416ee2c9b8d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\
\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:43:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cq4vs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ef638d607e6dd3da7728f611dbadcc220fbecf3e6d8c85d5911cfd1ebe2cac0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ef638d607e6dd3da7728f611dbadcc220fbecf3e6d8c85d5911cfd1ebe2cac0\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T18:43:28Z\\\",\\\"message\\\":\\\"work controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:28Z is after 2025-08-24T17:21:41Z]\\\\nI0127 18:43:28.981598 6566 model_client.go:382] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-ingress-operator/metrics]} name:Service_openshift-ingress-operator/metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.244:9393:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {d8772e82-b0a4-4596-87d3-3d517c13344b}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0127 18:43:28.981636 6566 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-ingress-operator/metrics]} name:Service_openshift-ingress-operator/metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T18:43:28Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-hdtbk_openshift-ovn-kubernetes(ebbc7598-422a-43ad-ae98-88e57ec80b9c)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cq4vs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4da02162adae947a3ab62fcbeba04da031f5189c42947da27ec21df5a480b4b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:43:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cq4vs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e81785fa8e88ca85b42840c9f11efd5774320c7b13588e01695e428426ecda4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e81785fa8e88ca85b42840c9f11efd5774320c7b13588e01695e428426ecda4d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:43:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cq4vs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:43:04Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hdtbk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:38Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:38 crc kubenswrapper[4853]: I0127 18:43:38.278471 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c77668e-2627-46e1-b7f5-a99455378d85\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6f38d96da9d500a022fd5b7fbe4019623d08fb9733ba3b1a5f2f107f1901a28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ae32d32392bded0188e0b83c05a33e57dc10be293ddee84b8b12416d0e64fc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd606f868e97e5c0f90517a0de662da2512679e27b0ececbcaa02c9f7e79c4b6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b210c08a6d15dd6d40daabfcc4cb94985e2a3957ffa501b32d5331f3d20f8908\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:42:38Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:38Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:38 crc kubenswrapper[4853]: I0127 18:43:38.290703 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:38Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:38 crc kubenswrapper[4853]: I0127 18:43:38.303526 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:38Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:38 crc kubenswrapper[4853]: I0127 18:43:38.306775 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:38 crc kubenswrapper[4853]: I0127 18:43:38.306818 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:38 crc kubenswrapper[4853]: I0127 18:43:38.306868 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:38 crc kubenswrapper[4853]: I0127 18:43:38.306892 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:38 crc kubenswrapper[4853]: I0127 18:43:38.306908 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:38Z","lastTransitionTime":"2026-01-27T18:43:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:38 crc kubenswrapper[4853]: I0127 18:43:38.317235 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:38Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:38 crc kubenswrapper[4853]: I0127 18:43:38.330979 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d7d5a0e2cb909a09c6325d43fed80f9ac231f17c4904fda6da22031d974a50b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7fba84b5abd317487efa9758191c6d4e0e878809711ccecdd2f98c2c7659c890\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:38Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:38 crc kubenswrapper[4853]: I0127 18:43:38.344889 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-w4d5n" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dd2c07de-2ac9-4074-9fb0-519cfaf37f69\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9662d5a5a46a3043eeddbb21a4c7da0545beb87a0da7da3e96dfe3b3cf803430\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:43:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8jkvb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:43:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-w4d5n\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:38Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:38 crc kubenswrapper[4853]: I0127 18:43:38.355151 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7x9tl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"99e56db9-b380-4e21-9810-2dfa1517d5ac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://94b4164fce297bdb91f8d062c22463931e45e9194e17e4102f568e6f04c08680\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:43:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sbc8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec4d041ed140516bb311297a6618188794b4950b0199a05f3a028215b75b2dfd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:43:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sbc8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\
\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:43:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-7x9tl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:38Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:38 crc kubenswrapper[4853]: I0127 18:43:38.408552 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:38 crc kubenswrapper[4853]: I0127 18:43:38.408586 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:38 crc kubenswrapper[4853]: I0127 18:43:38.408596 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:38 crc kubenswrapper[4853]: I0127 18:43:38.408612 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:38 crc kubenswrapper[4853]: I0127 18:43:38.408622 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:38Z","lastTransitionTime":"2026-01-27T18:43:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:43:38 crc kubenswrapper[4853]: I0127 18:43:38.511531 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:38 crc kubenswrapper[4853]: I0127 18:43:38.511597 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:38 crc kubenswrapper[4853]: I0127 18:43:38.511614 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:38 crc kubenswrapper[4853]: I0127 18:43:38.511639 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:38 crc kubenswrapper[4853]: I0127 18:43:38.511657 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:38Z","lastTransitionTime":"2026-01-27T18:43:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:38 crc kubenswrapper[4853]: I0127 18:43:38.614273 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:38 crc kubenswrapper[4853]: I0127 18:43:38.614311 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:38 crc kubenswrapper[4853]: I0127 18:43:38.614323 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:38 crc kubenswrapper[4853]: I0127 18:43:38.614339 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:38 crc kubenswrapper[4853]: I0127 18:43:38.614352 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:38Z","lastTransitionTime":"2026-01-27T18:43:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:43:38 crc kubenswrapper[4853]: I0127 18:43:38.716687 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:38 crc kubenswrapper[4853]: I0127 18:43:38.716720 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:38 crc kubenswrapper[4853]: I0127 18:43:38.716728 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:38 crc kubenswrapper[4853]: I0127 18:43:38.716740 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:38 crc kubenswrapper[4853]: I0127 18:43:38.716750 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:38Z","lastTransitionTime":"2026-01-27T18:43:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:43:38 crc kubenswrapper[4853]: I0127 18:43:38.819098 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:38 crc kubenswrapper[4853]: I0127 18:43:38.819160 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:38 crc kubenswrapper[4853]: I0127 18:43:38.819173 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:38 crc kubenswrapper[4853]: I0127 18:43:38.819188 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:38 crc kubenswrapper[4853]: I0127 18:43:38.819199 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:38Z","lastTransitionTime":"2026-01-27T18:43:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:38 crc kubenswrapper[4853]: I0127 18:43:38.921531 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:38 crc kubenswrapper[4853]: I0127 18:43:38.921559 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:38 crc kubenswrapper[4853]: I0127 18:43:38.921566 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:38 crc kubenswrapper[4853]: I0127 18:43:38.921578 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:38 crc kubenswrapper[4853]: I0127 18:43:38.921588 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:38Z","lastTransitionTime":"2026-01-27T18:43:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:43:39 crc kubenswrapper[4853]: I0127 18:43:39.024446 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:39 crc kubenswrapper[4853]: I0127 18:43:39.024490 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:39 crc kubenswrapper[4853]: I0127 18:43:39.024498 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:39 crc kubenswrapper[4853]: I0127 18:43:39.024511 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:39 crc kubenswrapper[4853]: I0127 18:43:39.024521 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:39Z","lastTransitionTime":"2026-01-27T18:43:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:43:39 crc kubenswrapper[4853]: I0127 18:43:39.108611 4853 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-08 04:38:35.248663696 +0000 UTC Jan 27 18:43:39 crc kubenswrapper[4853]: I0127 18:43:39.111861 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 18:43:39 crc kubenswrapper[4853]: E0127 18:43:39.112297 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 18:43:39 crc kubenswrapper[4853]: I0127 18:43:39.111915 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-wdzg4" Jan 27 18:43:39 crc kubenswrapper[4853]: E0127 18:43:39.112385 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-wdzg4" podUID="29407244-fbfe-4d37-a33e-7d59df1c22fd" Jan 27 18:43:39 crc kubenswrapper[4853]: I0127 18:43:39.111876 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 18:43:39 crc kubenswrapper[4853]: I0127 18:43:39.111961 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 18:43:39 crc kubenswrapper[4853]: E0127 18:43:39.112466 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 18:43:39 crc kubenswrapper[4853]: E0127 18:43:39.112562 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 18:43:39 crc kubenswrapper[4853]: I0127 18:43:39.126149 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:39 crc kubenswrapper[4853]: I0127 18:43:39.126188 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:39 crc kubenswrapper[4853]: I0127 18:43:39.126196 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:39 crc kubenswrapper[4853]: I0127 18:43:39.126209 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:39 crc kubenswrapper[4853]: I0127 18:43:39.126219 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:39Z","lastTransitionTime":"2026-01-27T18:43:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:39 crc kubenswrapper[4853]: I0127 18:43:39.228542 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:39 crc kubenswrapper[4853]: I0127 18:43:39.228617 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:39 crc kubenswrapper[4853]: I0127 18:43:39.228628 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:39 crc kubenswrapper[4853]: I0127 18:43:39.228645 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:39 crc kubenswrapper[4853]: I0127 18:43:39.228656 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:39Z","lastTransitionTime":"2026-01-27T18:43:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:43:39 crc kubenswrapper[4853]: I0127 18:43:39.331845 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:39 crc kubenswrapper[4853]: I0127 18:43:39.331948 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:39 crc kubenswrapper[4853]: I0127 18:43:39.331982 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:39 crc kubenswrapper[4853]: I0127 18:43:39.332006 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:39 crc kubenswrapper[4853]: I0127 18:43:39.332022 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:39Z","lastTransitionTime":"2026-01-27T18:43:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:43:39 crc kubenswrapper[4853]: I0127 18:43:39.434890 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:39 crc kubenswrapper[4853]: I0127 18:43:39.434942 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:39 crc kubenswrapper[4853]: I0127 18:43:39.434959 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:39 crc kubenswrapper[4853]: I0127 18:43:39.434982 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:39 crc kubenswrapper[4853]: I0127 18:43:39.435001 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:39Z","lastTransitionTime":"2026-01-27T18:43:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:39 crc kubenswrapper[4853]: I0127 18:43:39.538561 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:39 crc kubenswrapper[4853]: I0127 18:43:39.538629 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:39 crc kubenswrapper[4853]: I0127 18:43:39.538638 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:39 crc kubenswrapper[4853]: I0127 18:43:39.538654 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:39 crc kubenswrapper[4853]: I0127 18:43:39.538664 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:39Z","lastTransitionTime":"2026-01-27T18:43:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:43:39 crc kubenswrapper[4853]: I0127 18:43:39.640803 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:39 crc kubenswrapper[4853]: I0127 18:43:39.640860 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:39 crc kubenswrapper[4853]: I0127 18:43:39.640873 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:39 crc kubenswrapper[4853]: I0127 18:43:39.640890 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:39 crc kubenswrapper[4853]: I0127 18:43:39.640900 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:39Z","lastTransitionTime":"2026-01-27T18:43:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:43:39 crc kubenswrapper[4853]: I0127 18:43:39.743897 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:39 crc kubenswrapper[4853]: I0127 18:43:39.743957 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:39 crc kubenswrapper[4853]: I0127 18:43:39.743969 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:39 crc kubenswrapper[4853]: I0127 18:43:39.743982 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:39 crc kubenswrapper[4853]: I0127 18:43:39.743992 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:39Z","lastTransitionTime":"2026-01-27T18:43:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:39 crc kubenswrapper[4853]: I0127 18:43:39.846397 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:39 crc kubenswrapper[4853]: I0127 18:43:39.846444 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:39 crc kubenswrapper[4853]: I0127 18:43:39.846454 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:39 crc kubenswrapper[4853]: I0127 18:43:39.846470 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:39 crc kubenswrapper[4853]: I0127 18:43:39.846483 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:39Z","lastTransitionTime":"2026-01-27T18:43:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:43:39 crc kubenswrapper[4853]: I0127 18:43:39.949408 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:39 crc kubenswrapper[4853]: I0127 18:43:39.949447 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:39 crc kubenswrapper[4853]: I0127 18:43:39.949457 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:39 crc kubenswrapper[4853]: I0127 18:43:39.949475 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:39 crc kubenswrapper[4853]: I0127 18:43:39.949487 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:39Z","lastTransitionTime":"2026-01-27T18:43:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:43:40 crc kubenswrapper[4853]: I0127 18:43:40.052233 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:40 crc kubenswrapper[4853]: I0127 18:43:40.052272 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:40 crc kubenswrapper[4853]: I0127 18:43:40.052281 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:40 crc kubenswrapper[4853]: I0127 18:43:40.052294 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:40 crc kubenswrapper[4853]: I0127 18:43:40.052302 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:40Z","lastTransitionTime":"2026-01-27T18:43:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:40 crc kubenswrapper[4853]: I0127 18:43:40.109226 4853 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-30 07:42:47.781797417 +0000 UTC Jan 27 18:43:40 crc kubenswrapper[4853]: I0127 18:43:40.154263 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:40 crc kubenswrapper[4853]: I0127 18:43:40.154308 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:40 crc kubenswrapper[4853]: I0127 18:43:40.154319 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:40 crc kubenswrapper[4853]: I0127 18:43:40.154334 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:40 crc kubenswrapper[4853]: I0127 18:43:40.154346 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:40Z","lastTransitionTime":"2026-01-27T18:43:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:43:40 crc kubenswrapper[4853]: I0127 18:43:40.256599 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:40 crc kubenswrapper[4853]: I0127 18:43:40.256685 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:40 crc kubenswrapper[4853]: I0127 18:43:40.256705 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:40 crc kubenswrapper[4853]: I0127 18:43:40.256725 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:40 crc kubenswrapper[4853]: I0127 18:43:40.256736 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:40Z","lastTransitionTime":"2026-01-27T18:43:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:40 crc kubenswrapper[4853]: I0127 18:43:40.359557 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:40 crc kubenswrapper[4853]: I0127 18:43:40.359863 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:40 crc kubenswrapper[4853]: I0127 18:43:40.359943 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:40 crc kubenswrapper[4853]: I0127 18:43:40.360009 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:40 crc kubenswrapper[4853]: I0127 18:43:40.360068 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:40Z","lastTransitionTime":"2026-01-27T18:43:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:43:40 crc kubenswrapper[4853]: I0127 18:43:40.462103 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:40 crc kubenswrapper[4853]: I0127 18:43:40.462154 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:40 crc kubenswrapper[4853]: I0127 18:43:40.462165 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:40 crc kubenswrapper[4853]: I0127 18:43:40.462181 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:40 crc kubenswrapper[4853]: I0127 18:43:40.462191 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:40Z","lastTransitionTime":"2026-01-27T18:43:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:43:40 crc kubenswrapper[4853]: I0127 18:43:40.564216 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:40 crc kubenswrapper[4853]: I0127 18:43:40.564487 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:40 crc kubenswrapper[4853]: I0127 18:43:40.564616 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:40 crc kubenswrapper[4853]: I0127 18:43:40.564826 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:40 crc kubenswrapper[4853]: I0127 18:43:40.564975 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:40Z","lastTransitionTime":"2026-01-27T18:43:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:40 crc kubenswrapper[4853]: I0127 18:43:40.667809 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:40 crc kubenswrapper[4853]: I0127 18:43:40.667855 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:40 crc kubenswrapper[4853]: I0127 18:43:40.667870 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:40 crc kubenswrapper[4853]: I0127 18:43:40.667891 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:40 crc kubenswrapper[4853]: I0127 18:43:40.667906 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:40Z","lastTransitionTime":"2026-01-27T18:43:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:43:40 crc kubenswrapper[4853]: I0127 18:43:40.770137 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:40 crc kubenswrapper[4853]: I0127 18:43:40.770162 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:40 crc kubenswrapper[4853]: I0127 18:43:40.770174 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:40 crc kubenswrapper[4853]: I0127 18:43:40.770203 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:40 crc kubenswrapper[4853]: I0127 18:43:40.770219 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:40Z","lastTransitionTime":"2026-01-27T18:43:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:43:40 crc kubenswrapper[4853]: I0127 18:43:40.872761 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:40 crc kubenswrapper[4853]: I0127 18:43:40.872816 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:40 crc kubenswrapper[4853]: I0127 18:43:40.872830 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:40 crc kubenswrapper[4853]: I0127 18:43:40.872857 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:40 crc kubenswrapper[4853]: I0127 18:43:40.872874 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:40Z","lastTransitionTime":"2026-01-27T18:43:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:40 crc kubenswrapper[4853]: I0127 18:43:40.975543 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:40 crc kubenswrapper[4853]: I0127 18:43:40.975583 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:40 crc kubenswrapper[4853]: I0127 18:43:40.975596 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:40 crc kubenswrapper[4853]: I0127 18:43:40.975612 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:40 crc kubenswrapper[4853]: I0127 18:43:40.975624 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:40Z","lastTransitionTime":"2026-01-27T18:43:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:43:41 crc kubenswrapper[4853]: I0127 18:43:41.078478 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:41 crc kubenswrapper[4853]: I0127 18:43:41.078522 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:41 crc kubenswrapper[4853]: I0127 18:43:41.078535 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:41 crc kubenswrapper[4853]: I0127 18:43:41.078550 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:41 crc kubenswrapper[4853]: I0127 18:43:41.078561 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:41Z","lastTransitionTime":"2026-01-27T18:43:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:43:41 crc kubenswrapper[4853]: I0127 18:43:41.109913 4853 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-15 05:19:19.203731232 +0000 UTC Jan 27 18:43:41 crc kubenswrapper[4853]: I0127 18:43:41.112200 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 18:43:41 crc kubenswrapper[4853]: I0127 18:43:41.112272 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 18:43:41 crc kubenswrapper[4853]: I0127 18:43:41.112300 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 18:43:41 crc kubenswrapper[4853]: E0127 18:43:41.112297 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 18:43:41 crc kubenswrapper[4853]: I0127 18:43:41.112202 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wdzg4" Jan 27 18:43:41 crc kubenswrapper[4853]: E0127 18:43:41.112406 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 18:43:41 crc kubenswrapper[4853]: E0127 18:43:41.112516 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-wdzg4" podUID="29407244-fbfe-4d37-a33e-7d59df1c22fd" Jan 27 18:43:41 crc kubenswrapper[4853]: E0127 18:43:41.112601 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 18:43:41 crc kubenswrapper[4853]: I0127 18:43:41.180645 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:41 crc kubenswrapper[4853]: I0127 18:43:41.180741 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:41 crc kubenswrapper[4853]: I0127 18:43:41.180757 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:41 crc kubenswrapper[4853]: I0127 18:43:41.180775 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:41 crc kubenswrapper[4853]: I0127 18:43:41.180785 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:41Z","lastTransitionTime":"2026-01-27T18:43:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:41 crc kubenswrapper[4853]: I0127 18:43:41.282903 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:41 crc kubenswrapper[4853]: I0127 18:43:41.282942 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:41 crc kubenswrapper[4853]: I0127 18:43:41.282955 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:41 crc kubenswrapper[4853]: I0127 18:43:41.282969 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:41 crc kubenswrapper[4853]: I0127 18:43:41.282978 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:41Z","lastTransitionTime":"2026-01-27T18:43:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:43:41 crc kubenswrapper[4853]: I0127 18:43:41.385137 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:41 crc kubenswrapper[4853]: I0127 18:43:41.385178 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:41 crc kubenswrapper[4853]: I0127 18:43:41.385187 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:41 crc kubenswrapper[4853]: I0127 18:43:41.385202 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:41 crc kubenswrapper[4853]: I0127 18:43:41.385212 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:41Z","lastTransitionTime":"2026-01-27T18:43:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:43:41 crc kubenswrapper[4853]: I0127 18:43:41.487330 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:41 crc kubenswrapper[4853]: I0127 18:43:41.487366 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:41 crc kubenswrapper[4853]: I0127 18:43:41.487376 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:41 crc kubenswrapper[4853]: I0127 18:43:41.487391 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:41 crc kubenswrapper[4853]: I0127 18:43:41.487402 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:41Z","lastTransitionTime":"2026-01-27T18:43:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:41 crc kubenswrapper[4853]: I0127 18:43:41.590086 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:41 crc kubenswrapper[4853]: I0127 18:43:41.590239 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:41 crc kubenswrapper[4853]: I0127 18:43:41.590278 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:41 crc kubenswrapper[4853]: I0127 18:43:41.590324 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:41 crc kubenswrapper[4853]: I0127 18:43:41.590353 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:41Z","lastTransitionTime":"2026-01-27T18:43:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:43:41 crc kubenswrapper[4853]: I0127 18:43:41.693315 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:41 crc kubenswrapper[4853]: I0127 18:43:41.693350 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:41 crc kubenswrapper[4853]: I0127 18:43:41.693361 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:41 crc kubenswrapper[4853]: I0127 18:43:41.693377 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:41 crc kubenswrapper[4853]: I0127 18:43:41.693388 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:41Z","lastTransitionTime":"2026-01-27T18:43:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:43:41 crc kubenswrapper[4853]: I0127 18:43:41.796525 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:41 crc kubenswrapper[4853]: I0127 18:43:41.796599 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:41 crc kubenswrapper[4853]: I0127 18:43:41.796612 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:41 crc kubenswrapper[4853]: I0127 18:43:41.796631 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:41 crc kubenswrapper[4853]: I0127 18:43:41.796641 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:41Z","lastTransitionTime":"2026-01-27T18:43:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:41 crc kubenswrapper[4853]: I0127 18:43:41.899668 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:41 crc kubenswrapper[4853]: I0127 18:43:41.899731 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:41 crc kubenswrapper[4853]: I0127 18:43:41.899745 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:41 crc kubenswrapper[4853]: I0127 18:43:41.899768 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:41 crc kubenswrapper[4853]: I0127 18:43:41.899784 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:41Z","lastTransitionTime":"2026-01-27T18:43:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:43:42 crc kubenswrapper[4853]: I0127 18:43:42.002258 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:42 crc kubenswrapper[4853]: I0127 18:43:42.002309 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:42 crc kubenswrapper[4853]: I0127 18:43:42.002322 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:42 crc kubenswrapper[4853]: I0127 18:43:42.002338 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:42 crc kubenswrapper[4853]: I0127 18:43:42.002349 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:42Z","lastTransitionTime":"2026-01-27T18:43:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:43:42 crc kubenswrapper[4853]: I0127 18:43:42.105068 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:42 crc kubenswrapper[4853]: I0127 18:43:42.105113 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:42 crc kubenswrapper[4853]: I0127 18:43:42.105137 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:42 crc kubenswrapper[4853]: I0127 18:43:42.105151 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:42 crc kubenswrapper[4853]: I0127 18:43:42.105163 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:42Z","lastTransitionTime":"2026-01-27T18:43:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:42 crc kubenswrapper[4853]: I0127 18:43:42.110236 4853 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-17 05:44:59.548485085 +0000 UTC Jan 27 18:43:42 crc kubenswrapper[4853]: I0127 18:43:42.207093 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:42 crc kubenswrapper[4853]: I0127 18:43:42.207154 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:42 crc kubenswrapper[4853]: I0127 18:43:42.207163 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:42 crc kubenswrapper[4853]: I0127 18:43:42.207176 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:42 crc kubenswrapper[4853]: I0127 18:43:42.207185 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:42Z","lastTransitionTime":"2026-01-27T18:43:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:43:42 crc kubenswrapper[4853]: I0127 18:43:42.310236 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:42 crc kubenswrapper[4853]: I0127 18:43:42.310276 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:42 crc kubenswrapper[4853]: I0127 18:43:42.310286 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:42 crc kubenswrapper[4853]: I0127 18:43:42.310300 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:42 crc kubenswrapper[4853]: I0127 18:43:42.310309 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:42Z","lastTransitionTime":"2026-01-27T18:43:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:42 crc kubenswrapper[4853]: I0127 18:43:42.412556 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:42 crc kubenswrapper[4853]: I0127 18:43:42.412593 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:42 crc kubenswrapper[4853]: I0127 18:43:42.412602 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:42 crc kubenswrapper[4853]: I0127 18:43:42.412613 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:42 crc kubenswrapper[4853]: I0127 18:43:42.412622 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:42Z","lastTransitionTime":"2026-01-27T18:43:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:43:42 crc kubenswrapper[4853]: I0127 18:43:42.515190 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:42 crc kubenswrapper[4853]: I0127 18:43:42.515244 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:42 crc kubenswrapper[4853]: I0127 18:43:42.515255 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:42 crc kubenswrapper[4853]: I0127 18:43:42.515272 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:42 crc kubenswrapper[4853]: I0127 18:43:42.515285 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:42Z","lastTransitionTime":"2026-01-27T18:43:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:43:42 crc kubenswrapper[4853]: I0127 18:43:42.618327 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:42 crc kubenswrapper[4853]: I0127 18:43:42.618376 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:42 crc kubenswrapper[4853]: I0127 18:43:42.618387 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:42 crc kubenswrapper[4853]: I0127 18:43:42.618405 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:42 crc kubenswrapper[4853]: I0127 18:43:42.618416 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:42Z","lastTransitionTime":"2026-01-27T18:43:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:42 crc kubenswrapper[4853]: I0127 18:43:42.721547 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:42 crc kubenswrapper[4853]: I0127 18:43:42.721626 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:42 crc kubenswrapper[4853]: I0127 18:43:42.721648 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:42 crc kubenswrapper[4853]: I0127 18:43:42.721676 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:42 crc kubenswrapper[4853]: I0127 18:43:42.721694 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:42Z","lastTransitionTime":"2026-01-27T18:43:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:43:42 crc kubenswrapper[4853]: I0127 18:43:42.823748 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:42 crc kubenswrapper[4853]: I0127 18:43:42.823788 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:42 crc kubenswrapper[4853]: I0127 18:43:42.823799 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:42 crc kubenswrapper[4853]: I0127 18:43:42.823818 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:42 crc kubenswrapper[4853]: I0127 18:43:42.823826 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:42Z","lastTransitionTime":"2026-01-27T18:43:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:43:42 crc kubenswrapper[4853]: I0127 18:43:42.926285 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:42 crc kubenswrapper[4853]: I0127 18:43:42.926375 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:42 crc kubenswrapper[4853]: I0127 18:43:42.926388 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:42 crc kubenswrapper[4853]: I0127 18:43:42.926411 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:42 crc kubenswrapper[4853]: I0127 18:43:42.926424 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:42Z","lastTransitionTime":"2026-01-27T18:43:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:43 crc kubenswrapper[4853]: I0127 18:43:43.028993 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:43 crc kubenswrapper[4853]: I0127 18:43:43.029049 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:43 crc kubenswrapper[4853]: I0127 18:43:43.029060 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:43 crc kubenswrapper[4853]: I0127 18:43:43.029077 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:43 crc kubenswrapper[4853]: I0127 18:43:43.029089 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:43Z","lastTransitionTime":"2026-01-27T18:43:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:43:43 crc kubenswrapper[4853]: I0127 18:43:43.111352 4853 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-16 09:05:00.221883449 +0000 UTC Jan 27 18:43:43 crc kubenswrapper[4853]: I0127 18:43:43.111492 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 18:43:43 crc kubenswrapper[4853]: I0127 18:43:43.111496 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 18:43:43 crc kubenswrapper[4853]: I0127 18:43:43.111524 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 18:43:43 crc kubenswrapper[4853]: I0127 18:43:43.111597 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wdzg4" Jan 27 18:43:43 crc kubenswrapper[4853]: E0127 18:43:43.111732 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 18:43:43 crc kubenswrapper[4853]: E0127 18:43:43.112195 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-wdzg4" podUID="29407244-fbfe-4d37-a33e-7d59df1c22fd" Jan 27 18:43:43 crc kubenswrapper[4853]: E0127 18:43:43.112167 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 18:43:43 crc kubenswrapper[4853]: E0127 18:43:43.112271 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 18:43:43 crc kubenswrapper[4853]: I0127 18:43:43.131082 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:43 crc kubenswrapper[4853]: I0127 18:43:43.131142 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:43 crc kubenswrapper[4853]: I0127 18:43:43.131155 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:43 crc kubenswrapper[4853]: I0127 18:43:43.131174 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:43 crc kubenswrapper[4853]: I0127 18:43:43.131186 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:43Z","lastTransitionTime":"2026-01-27T18:43:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:43:43 crc kubenswrapper[4853]: I0127 18:43:43.233081 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:43 crc kubenswrapper[4853]: I0127 18:43:43.233143 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:43 crc kubenswrapper[4853]: I0127 18:43:43.233153 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:43 crc kubenswrapper[4853]: I0127 18:43:43.233169 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:43 crc kubenswrapper[4853]: I0127 18:43:43.233180 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:43Z","lastTransitionTime":"2026-01-27T18:43:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
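The "No sandbox for pod can be found. Need to start a new one" bursts recur for the same four pods at 18:43:39, 18:43:41, and 18:43:43: the kubelet is retrying sandbox creation on a roughly two-second cadence while the network plugin stays unready. The spacing is easy to check mechanically; the timestamps below are copied from the util.go:30 entries for network-check-target-xd92c above:

from datetime import datetime

# Sandbox-retry timestamps for network-check-target-xd92c, taken from the
# util.go:30 entries above (date omitted; journald's short format has no year).
stamps = ["18:43:39.111861", "18:43:41.112200", "18:43:43.111496"]
times = [datetime.strptime(s, "%H:%M:%S.%f") for s in stamps]
for earlier, later in zip(times, times[1:]):
    print(f"retry after {(later - earlier).total_seconds():.3f}s")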
Has your network provider started?"} Jan 27 18:43:43 crc kubenswrapper[4853]: I0127 18:43:43.334964 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:43 crc kubenswrapper[4853]: I0127 18:43:43.334997 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:43 crc kubenswrapper[4853]: I0127 18:43:43.335006 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:43 crc kubenswrapper[4853]: I0127 18:43:43.335019 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:43 crc kubenswrapper[4853]: I0127 18:43:43.335029 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:43Z","lastTransitionTime":"2026-01-27T18:43:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:43:43 crc kubenswrapper[4853]: I0127 18:43:43.436736 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:43 crc kubenswrapper[4853]: I0127 18:43:43.436768 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:43 crc kubenswrapper[4853]: I0127 18:43:43.436778 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:43 crc kubenswrapper[4853]: I0127 18:43:43.436790 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:43 crc kubenswrapper[4853]: I0127 18:43:43.436798 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:43Z","lastTransitionTime":"2026-01-27T18:43:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:43:43 crc kubenswrapper[4853]: I0127 18:43:43.539074 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:43 crc kubenswrapper[4853]: I0127 18:43:43.539112 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:43 crc kubenswrapper[4853]: I0127 18:43:43.539144 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:43 crc kubenswrapper[4853]: I0127 18:43:43.539158 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:43 crc kubenswrapper[4853]: I0127 18:43:43.539167 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:43Z","lastTransitionTime":"2026-01-27T18:43:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:43 crc kubenswrapper[4853]: I0127 18:43:43.641455 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:43 crc kubenswrapper[4853]: I0127 18:43:43.641485 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:43 crc kubenswrapper[4853]: I0127 18:43:43.641493 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:43 crc kubenswrapper[4853]: I0127 18:43:43.641505 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:43 crc kubenswrapper[4853]: I0127 18:43:43.641516 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:43Z","lastTransitionTime":"2026-01-27T18:43:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:43:43 crc kubenswrapper[4853]: I0127 18:43:43.743536 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:43 crc kubenswrapper[4853]: I0127 18:43:43.743581 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:43 crc kubenswrapper[4853]: I0127 18:43:43.743592 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:43 crc kubenswrapper[4853]: I0127 18:43:43.743614 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:43 crc kubenswrapper[4853]: I0127 18:43:43.743625 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:43Z","lastTransitionTime":"2026-01-27T18:43:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:43:43 crc kubenswrapper[4853]: I0127 18:43:43.845506 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:43 crc kubenswrapper[4853]: I0127 18:43:43.845533 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:43 crc kubenswrapper[4853]: I0127 18:43:43.845545 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:43 crc kubenswrapper[4853]: I0127 18:43:43.845556 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:43 crc kubenswrapper[4853]: I0127 18:43:43.845565 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:43Z","lastTransitionTime":"2026-01-27T18:43:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:43 crc kubenswrapper[4853]: I0127 18:43:43.947853 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:43 crc kubenswrapper[4853]: I0127 18:43:43.947939 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:43 crc kubenswrapper[4853]: I0127 18:43:43.947956 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:43 crc kubenswrapper[4853]: I0127 18:43:43.947978 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:43 crc kubenswrapper[4853]: I0127 18:43:43.947987 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:43Z","lastTransitionTime":"2026-01-27T18:43:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:43:44 crc kubenswrapper[4853]: I0127 18:43:44.050104 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:44 crc kubenswrapper[4853]: I0127 18:43:44.050150 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:44 crc kubenswrapper[4853]: I0127 18:43:44.050159 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:44 crc kubenswrapper[4853]: I0127 18:43:44.050171 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:44 crc kubenswrapper[4853]: I0127 18:43:44.050180 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:44Z","lastTransitionTime":"2026-01-27T18:43:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:44 crc kubenswrapper[4853]: I0127 18:43:44.112460 4853 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-13 08:36:46.627491124 +0000 UTC Jan 27 18:43:44 crc kubenswrapper[4853]: I0127 18:43:44.113094 4853 scope.go:117] "RemoveContainer" containerID="9ef638d607e6dd3da7728f611dbadcc220fbecf3e6d8c85d5911cfd1ebe2cac0" Jan 27 18:43:44 crc kubenswrapper[4853]: E0127 18:43:44.113273 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-hdtbk_openshift-ovn-kubernetes(ebbc7598-422a-43ad-ae98-88e57ec80b9c)\"" pod="openshift-ovn-kubernetes/ovnkube-node-hdtbk" podUID="ebbc7598-422a-43ad-ae98-88e57ec80b9c" Jan 27 18:43:44 crc kubenswrapper[4853]: I0127 18:43:44.152006 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:44 crc kubenswrapper[4853]: I0127 18:43:44.152040 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:44 crc kubenswrapper[4853]: I0127 18:43:44.152050 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:44 crc kubenswrapper[4853]: I0127 18:43:44.152063 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:44 crc kubenswrapper[4853]: I0127 18:43:44.152072 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:44Z","lastTransitionTime":"2026-01-27T18:43:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:43:44 crc kubenswrapper[4853]: I0127 18:43:44.254761 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:44 crc kubenswrapper[4853]: I0127 18:43:44.254806 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:44 crc kubenswrapper[4853]: I0127 18:43:44.254817 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:44 crc kubenswrapper[4853]: I0127 18:43:44.254834 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:44 crc kubenswrapper[4853]: I0127 18:43:44.254846 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:44Z","lastTransitionTime":"2026-01-27T18:43:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:44 crc kubenswrapper[4853]: I0127 18:43:44.356863 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:44 crc kubenswrapper[4853]: I0127 18:43:44.356900 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:44 crc kubenswrapper[4853]: I0127 18:43:44.356911 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:44 crc kubenswrapper[4853]: I0127 18:43:44.356923 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:44 crc kubenswrapper[4853]: I0127 18:43:44.356932 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:44Z","lastTransitionTime":"2026-01-27T18:43:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:43:44 crc kubenswrapper[4853]: I0127 18:43:44.435666 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:44 crc kubenswrapper[4853]: I0127 18:43:44.435712 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:44 crc kubenswrapper[4853]: I0127 18:43:44.435723 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:44 crc kubenswrapper[4853]: I0127 18:43:44.435736 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:44 crc kubenswrapper[4853]: I0127 18:43:44.435747 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:44Z","lastTransitionTime":"2026-01-27T18:43:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:44 crc kubenswrapper[4853]: E0127 18:43:44.448544 4853 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:43:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:43:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:44Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:43:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:43:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:44Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"10ff71d2-3e1e-470a-b646-0c487d7259d5\\\",\\\"systemUUID\\\":\\\"f3eb2985-2316-42c4-9a73-507610f5aaf9\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:44Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:44 crc kubenswrapper[4853]: I0127 18:43:44.451656 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:44 crc kubenswrapper[4853]: I0127 18:43:44.451688 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 27 18:43:44 crc kubenswrapper[4853]: I0127 18:43:44.451702 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:44 crc kubenswrapper[4853]: I0127 18:43:44.451715 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:44 crc kubenswrapper[4853]: I0127 18:43:44.451726 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:44Z","lastTransitionTime":"2026-01-27T18:43:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:43:44 crc kubenswrapper[4853]: E0127 18:43:44.461989 4853 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:43:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:43:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:44Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:43:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:43:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:44Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"10ff71d2-3e1e-470a-b646-0c487d7259d5\\\",\\\"systemUUID\\\":\\\"f3eb2985-2316-42c4-9a73-507610f5aaf9\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:44Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:44 crc kubenswrapper[4853]: I0127 18:43:44.466423 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:44 crc kubenswrapper[4853]: I0127 18:43:44.466458 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
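Every one of these status patches is rejected for the same reason: the node.network-node-identity.openshift.io webhook on 127.0.0.1:9743 serves a certificate that expired 2025-08-24T17:21:41Z, while the node clock reads 2026-01-27. Below is a small Go diagnostic sketch, not OpenShift tooling, that reproduces the verification failure by dialing the endpoint taken from the log and comparing the certificate's NotAfter against the current time; the skip-verify handshake is an assumption made so the expired certificate can be inspected at all.

```go
package main

import (
	"crypto/tls"
	"fmt"
	"time"
)

func main() {
	// Endpoint copied from the failed webhook call in the kubelet log.
	const addr = "127.0.0.1:9743"

	// InsecureSkipVerify lets the handshake complete so we can inspect
	// the certificate that normal verification rejects as expired.
	conn, err := tls.Dial("tcp", addr, &tls.Config{InsecureSkipVerify: true})
	if err != nil {
		fmt.Println("dial failed:", err)
		return
	}
	defer conn.Close()

	cert := conn.ConnectionState().PeerCertificates[0]
	fmt.Printf("serving cert NotBefore=%s NotAfter=%s\n", cert.NotBefore, cert.NotAfter)

	if now := time.Now(); now.After(cert.NotAfter) {
		// Mirrors the kubelet error text: "current time ... is after ...".
		fmt.Printf("certificate expired: current time %s is after %s\n",
			now.Format(time.RFC3339), cert.NotAfter.Format(time.RFC3339))
	}
}
```

Against this node the expired branch would fire with NotAfter=2025-08-24T17:21:41Z, matching the x509 error in the retries; until that certificate is rotated the kubelet's node-status patches keep failing no matter how often they are retried.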
event="NodeHasNoDiskPressure" Jan 27 18:43:44 crc kubenswrapper[4853]: I0127 18:43:44.466478 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:44 crc kubenswrapper[4853]: I0127 18:43:44.466497 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:44 crc kubenswrapper[4853]: I0127 18:43:44.466509 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:44Z","lastTransitionTime":"2026-01-27T18:43:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:43:44 crc kubenswrapper[4853]: E0127 18:43:44.478161 4853 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:43:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:43:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:44Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:43:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:43:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:44Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"10ff71d2-3e1e-470a-b646-0c487d7259d5\\\",\\\"systemUUID\\\":\\\"f3eb2985-2316-42c4-9a73-507610f5aaf9\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:44Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:44 crc kubenswrapper[4853]: I0127 18:43:44.481922 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:44 crc kubenswrapper[4853]: I0127 18:43:44.481953 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 27 18:43:44 crc kubenswrapper[4853]: I0127 18:43:44.481961 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:44 crc kubenswrapper[4853]: I0127 18:43:44.481974 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:44 crc kubenswrapper[4853]: I0127 18:43:44.481983 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:44Z","lastTransitionTime":"2026-01-27T18:43:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:43:44 crc kubenswrapper[4853]: E0127 18:43:44.494484 4853 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:43:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:43:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:44Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:43:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:43:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:44Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"10ff71d2-3e1e-470a-b646-0c487d7259d5\\\",\\\"systemUUID\\\":\\\"f3eb2985-2316-42c4-9a73-507610f5aaf9\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:44Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:44 crc kubenswrapper[4853]: I0127 18:43:44.497196 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:44 crc kubenswrapper[4853]: I0127 18:43:44.497218 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 27 18:43:44 crc kubenswrapper[4853]: I0127 18:43:44.497232 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:44 crc kubenswrapper[4853]: I0127 18:43:44.497248 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:44 crc kubenswrapper[4853]: I0127 18:43:44.497259 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:44Z","lastTransitionTime":"2026-01-27T18:43:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:43:44 crc kubenswrapper[4853]: E0127 18:43:44.547694 4853 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:43:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:43:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:44Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:43:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:43:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:44Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"10ff71d2-3e1e-470a-b646-0c487d7259d5\\\",\\\"systemUUID\\\":\\\"f3eb2985-2316-42c4-9a73-507610f5aaf9\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:44Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:44 crc kubenswrapper[4853]: E0127 18:43:44.547817 4853 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 27 18:43:44 crc kubenswrapper[4853]: I0127 18:43:44.549062 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 27 18:43:44 crc kubenswrapper[4853]: I0127 18:43:44.549086 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:44 crc kubenswrapper[4853]: I0127 18:43:44.549094 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:44 crc kubenswrapper[4853]: I0127 18:43:44.549107 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:44 crc kubenswrapper[4853]: I0127 18:43:44.549116 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:44Z","lastTransitionTime":"2026-01-27T18:43:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:43:44 crc kubenswrapper[4853]: I0127 18:43:44.651425 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:44 crc kubenswrapper[4853]: I0127 18:43:44.651464 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:44 crc kubenswrapper[4853]: I0127 18:43:44.651476 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:44 crc kubenswrapper[4853]: I0127 18:43:44.651492 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:44 crc kubenswrapper[4853]: I0127 18:43:44.651503 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:44Z","lastTransitionTime":"2026-01-27T18:43:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:43:44 crc kubenswrapper[4853]: I0127 18:43:44.754084 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:44 crc kubenswrapper[4853]: I0127 18:43:44.754135 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:44 crc kubenswrapper[4853]: I0127 18:43:44.754147 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:44 crc kubenswrapper[4853]: I0127 18:43:44.754162 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:44 crc kubenswrapper[4853]: I0127 18:43:44.754172 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:44Z","lastTransitionTime":"2026-01-27T18:43:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:44 crc kubenswrapper[4853]: I0127 18:43:44.857056 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:44 crc kubenswrapper[4853]: I0127 18:43:44.857098 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:44 crc kubenswrapper[4853]: I0127 18:43:44.857112 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:44 crc kubenswrapper[4853]: I0127 18:43:44.857156 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:44 crc kubenswrapper[4853]: I0127 18:43:44.857168 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:44Z","lastTransitionTime":"2026-01-27T18:43:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:43:44 crc kubenswrapper[4853]: I0127 18:43:44.959953 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:44 crc kubenswrapper[4853]: I0127 18:43:44.959991 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:44 crc kubenswrapper[4853]: I0127 18:43:44.960000 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:44 crc kubenswrapper[4853]: I0127 18:43:44.960012 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:44 crc kubenswrapper[4853]: I0127 18:43:44.960020 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:44Z","lastTransitionTime":"2026-01-27T18:43:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:43:45 crc kubenswrapper[4853]: I0127 18:43:45.063325 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:45 crc kubenswrapper[4853]: I0127 18:43:45.063632 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:45 crc kubenswrapper[4853]: I0127 18:43:45.063770 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:45 crc kubenswrapper[4853]: I0127 18:43:45.063907 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:45 crc kubenswrapper[4853]: I0127 18:43:45.064076 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:45Z","lastTransitionTime":"2026-01-27T18:43:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:45 crc kubenswrapper[4853]: I0127 18:43:45.112220 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 18:43:45 crc kubenswrapper[4853]: I0127 18:43:45.112318 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 18:43:45 crc kubenswrapper[4853]: I0127 18:43:45.112331 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 18:43:45 crc kubenswrapper[4853]: E0127 18:43:45.112362 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 18:43:45 crc kubenswrapper[4853]: E0127 18:43:45.112464 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 18:43:45 crc kubenswrapper[4853]: I0127 18:43:45.112529 4853 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-25 20:52:21.994885998 +0000 UTC Jan 27 18:43:45 crc kubenswrapper[4853]: E0127 18:43:45.112580 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 18:43:45 crc kubenswrapper[4853]: I0127 18:43:45.114026 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wdzg4" Jan 27 18:43:45 crc kubenswrapper[4853]: E0127 18:43:45.114318 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-wdzg4" podUID="29407244-fbfe-4d37-a33e-7d59df1c22fd" Jan 27 18:43:45 crc kubenswrapper[4853]: I0127 18:43:45.166909 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:45 crc kubenswrapper[4853]: I0127 18:43:45.166933 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:45 crc kubenswrapper[4853]: I0127 18:43:45.166941 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:45 crc kubenswrapper[4853]: I0127 18:43:45.166953 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:45 crc kubenswrapper[4853]: I0127 18:43:45.166961 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:45Z","lastTransitionTime":"2026-01-27T18:43:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:43:45 crc kubenswrapper[4853]: I0127 18:43:45.268877 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:45 crc kubenswrapper[4853]: I0127 18:43:45.268907 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:45 crc kubenswrapper[4853]: I0127 18:43:45.268915 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:45 crc kubenswrapper[4853]: I0127 18:43:45.268927 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:45 crc kubenswrapper[4853]: I0127 18:43:45.268935 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:45Z","lastTransitionTime":"2026-01-27T18:43:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:45 crc kubenswrapper[4853]: I0127 18:43:45.370931 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:45 crc kubenswrapper[4853]: I0127 18:43:45.370972 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:45 crc kubenswrapper[4853]: I0127 18:43:45.370983 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:45 crc kubenswrapper[4853]: I0127 18:43:45.370997 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:45 crc kubenswrapper[4853]: I0127 18:43:45.371005 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:45Z","lastTransitionTime":"2026-01-27T18:43:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:43:45 crc kubenswrapper[4853]: I0127 18:43:45.473292 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:45 crc kubenswrapper[4853]: I0127 18:43:45.473330 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:45 crc kubenswrapper[4853]: I0127 18:43:45.473338 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:45 crc kubenswrapper[4853]: I0127 18:43:45.473352 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:45 crc kubenswrapper[4853]: I0127 18:43:45.473363 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:45Z","lastTransitionTime":"2026-01-27T18:43:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:43:45 crc kubenswrapper[4853]: I0127 18:43:45.575576 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:45 crc kubenswrapper[4853]: I0127 18:43:45.575648 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:45 crc kubenswrapper[4853]: I0127 18:43:45.575662 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:45 crc kubenswrapper[4853]: I0127 18:43:45.575676 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:45 crc kubenswrapper[4853]: I0127 18:43:45.575706 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:45Z","lastTransitionTime":"2026-01-27T18:43:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:45 crc kubenswrapper[4853]: I0127 18:43:45.678065 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:45 crc kubenswrapper[4853]: I0127 18:43:45.678094 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:45 crc kubenswrapper[4853]: I0127 18:43:45.678104 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:45 crc kubenswrapper[4853]: I0127 18:43:45.678131 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:45 crc kubenswrapper[4853]: I0127 18:43:45.678141 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:45Z","lastTransitionTime":"2026-01-27T18:43:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:43:45 crc kubenswrapper[4853]: I0127 18:43:45.780259 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:45 crc kubenswrapper[4853]: I0127 18:43:45.780522 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:45 crc kubenswrapper[4853]: I0127 18:43:45.780594 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:45 crc kubenswrapper[4853]: I0127 18:43:45.780658 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:45 crc kubenswrapper[4853]: I0127 18:43:45.780716 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:45Z","lastTransitionTime":"2026-01-27T18:43:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:43:45 crc kubenswrapper[4853]: I0127 18:43:45.883544 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:45 crc kubenswrapper[4853]: I0127 18:43:45.883603 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:45 crc kubenswrapper[4853]: I0127 18:43:45.883616 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:45 crc kubenswrapper[4853]: I0127 18:43:45.883636 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:45 crc kubenswrapper[4853]: I0127 18:43:45.883647 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:45Z","lastTransitionTime":"2026-01-27T18:43:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:45 crc kubenswrapper[4853]: I0127 18:43:45.985928 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:45 crc kubenswrapper[4853]: I0127 18:43:45.986198 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:45 crc kubenswrapper[4853]: I0127 18:43:45.986276 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:45 crc kubenswrapper[4853]: I0127 18:43:45.986342 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:45 crc kubenswrapper[4853]: I0127 18:43:45.986441 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:45Z","lastTransitionTime":"2026-01-27T18:43:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:43:46 crc kubenswrapper[4853]: I0127 18:43:46.088469 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:46 crc kubenswrapper[4853]: I0127 18:43:46.088502 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:46 crc kubenswrapper[4853]: I0127 18:43:46.088511 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:46 crc kubenswrapper[4853]: I0127 18:43:46.088527 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:46 crc kubenswrapper[4853]: I0127 18:43:46.088553 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:46Z","lastTransitionTime":"2026-01-27T18:43:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:46 crc kubenswrapper[4853]: I0127 18:43:46.113313 4853 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-13 09:06:19.842331745 +0000 UTC Jan 27 18:43:46 crc kubenswrapper[4853]: I0127 18:43:46.190927 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:46 crc kubenswrapper[4853]: I0127 18:43:46.190958 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:46 crc kubenswrapper[4853]: I0127 18:43:46.190967 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:46 crc kubenswrapper[4853]: I0127 18:43:46.190982 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:46 crc kubenswrapper[4853]: I0127 18:43:46.190991 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:46Z","lastTransitionTime":"2026-01-27T18:43:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:43:46 crc kubenswrapper[4853]: I0127 18:43:46.293342 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:46 crc kubenswrapper[4853]: I0127 18:43:46.293371 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:46 crc kubenswrapper[4853]: I0127 18:43:46.293381 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:46 crc kubenswrapper[4853]: I0127 18:43:46.293393 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:46 crc kubenswrapper[4853]: I0127 18:43:46.293402 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:46Z","lastTransitionTime":"2026-01-27T18:43:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:46 crc kubenswrapper[4853]: I0127 18:43:46.398342 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:46 crc kubenswrapper[4853]: I0127 18:43:46.398392 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:46 crc kubenswrapper[4853]: I0127 18:43:46.398402 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:46 crc kubenswrapper[4853]: I0127 18:43:46.398423 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:46 crc kubenswrapper[4853]: I0127 18:43:46.398433 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:46Z","lastTransitionTime":"2026-01-27T18:43:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:43:46 crc kubenswrapper[4853]: I0127 18:43:46.500786 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:46 crc kubenswrapper[4853]: I0127 18:43:46.500829 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:46 crc kubenswrapper[4853]: I0127 18:43:46.500841 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:46 crc kubenswrapper[4853]: I0127 18:43:46.500860 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:46 crc kubenswrapper[4853]: I0127 18:43:46.500871 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:46Z","lastTransitionTime":"2026-01-27T18:43:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:43:46 crc kubenswrapper[4853]: I0127 18:43:46.603932 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:46 crc kubenswrapper[4853]: I0127 18:43:46.604186 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:46 crc kubenswrapper[4853]: I0127 18:43:46.604248 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:46 crc kubenswrapper[4853]: I0127 18:43:46.604354 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:46 crc kubenswrapper[4853]: I0127 18:43:46.604421 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:46Z","lastTransitionTime":"2026-01-27T18:43:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:46 crc kubenswrapper[4853]: I0127 18:43:46.706179 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:46 crc kubenswrapper[4853]: I0127 18:43:46.706482 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:46 crc kubenswrapper[4853]: I0127 18:43:46.706584 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:46 crc kubenswrapper[4853]: I0127 18:43:46.706676 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:46 crc kubenswrapper[4853]: I0127 18:43:46.706761 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:46Z","lastTransitionTime":"2026-01-27T18:43:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:43:46 crc kubenswrapper[4853]: I0127 18:43:46.808784 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:46 crc kubenswrapper[4853]: I0127 18:43:46.808832 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:46 crc kubenswrapper[4853]: I0127 18:43:46.808841 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:46 crc kubenswrapper[4853]: I0127 18:43:46.808855 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:46 crc kubenswrapper[4853]: I0127 18:43:46.808865 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:46Z","lastTransitionTime":"2026-01-27T18:43:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:43:46 crc kubenswrapper[4853]: I0127 18:43:46.910889 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:46 crc kubenswrapper[4853]: I0127 18:43:46.911181 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:46 crc kubenswrapper[4853]: I0127 18:43:46.911305 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:46 crc kubenswrapper[4853]: I0127 18:43:46.911391 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:46 crc kubenswrapper[4853]: I0127 18:43:46.911479 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:46Z","lastTransitionTime":"2026-01-27T18:43:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:47 crc kubenswrapper[4853]: I0127 18:43:47.013899 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:47 crc kubenswrapper[4853]: I0127 18:43:47.013946 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:47 crc kubenswrapper[4853]: I0127 18:43:47.013961 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:47 crc kubenswrapper[4853]: I0127 18:43:47.013978 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:47 crc kubenswrapper[4853]: I0127 18:43:47.013992 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:47Z","lastTransitionTime":"2026-01-27T18:43:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:43:47 crc kubenswrapper[4853]: I0127 18:43:47.112139 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 18:43:47 crc kubenswrapper[4853]: I0127 18:43:47.112258 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wdzg4" Jan 27 18:43:47 crc kubenswrapper[4853]: E0127 18:43:47.112340 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 18:43:47 crc kubenswrapper[4853]: I0127 18:43:47.112350 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 18:43:47 crc kubenswrapper[4853]: E0127 18:43:47.112418 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-wdzg4" podUID="29407244-fbfe-4d37-a33e-7d59df1c22fd" Jan 27 18:43:47 crc kubenswrapper[4853]: I0127 18:43:47.112951 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 18:43:47 crc kubenswrapper[4853]: E0127 18:43:47.113180 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 18:43:47 crc kubenswrapper[4853]: E0127 18:43:47.113341 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 18:43:47 crc kubenswrapper[4853]: I0127 18:43:47.113375 4853 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-11 03:56:42.747117654 +0000 UTC Jan 27 18:43:47 crc kubenswrapper[4853]: I0127 18:43:47.115801 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:47 crc kubenswrapper[4853]: I0127 18:43:47.115906 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:47 crc kubenswrapper[4853]: I0127 18:43:47.115917 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:47 crc kubenswrapper[4853]: I0127 18:43:47.115956 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:47 crc kubenswrapper[4853]: I0127 18:43:47.115970 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:47Z","lastTransitionTime":"2026-01-27T18:43:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:43:47 crc kubenswrapper[4853]: I0127 18:43:47.125749 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Jan 27 18:43:47 crc kubenswrapper[4853]: I0127 18:43:47.218506 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:47 crc kubenswrapper[4853]: I0127 18:43:47.218801 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:47 crc kubenswrapper[4853]: I0127 18:43:47.218869 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:47 crc kubenswrapper[4853]: I0127 18:43:47.218938 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:47 crc kubenswrapper[4853]: I0127 18:43:47.219010 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:47Z","lastTransitionTime":"2026-01-27T18:43:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:47 crc kubenswrapper[4853]: I0127 18:43:47.321550 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:47 crc kubenswrapper[4853]: I0127 18:43:47.321594 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:47 crc kubenswrapper[4853]: I0127 18:43:47.321603 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:47 crc kubenswrapper[4853]: I0127 18:43:47.321618 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:47 crc kubenswrapper[4853]: I0127 18:43:47.321628 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:47Z","lastTransitionTime":"2026-01-27T18:43:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:43:47 crc kubenswrapper[4853]: I0127 18:43:47.424308 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:47 crc kubenswrapper[4853]: I0127 18:43:47.424355 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:47 crc kubenswrapper[4853]: I0127 18:43:47.424367 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:47 crc kubenswrapper[4853]: I0127 18:43:47.424384 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:47 crc kubenswrapper[4853]: I0127 18:43:47.424397 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:47Z","lastTransitionTime":"2026-01-27T18:43:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:43:47 crc kubenswrapper[4853]: I0127 18:43:47.526663 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:47 crc kubenswrapper[4853]: I0127 18:43:47.526894 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:47 crc kubenswrapper[4853]: I0127 18:43:47.526993 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:47 crc kubenswrapper[4853]: I0127 18:43:47.527061 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:47 crc kubenswrapper[4853]: I0127 18:43:47.527147 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:47Z","lastTransitionTime":"2026-01-27T18:43:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:47 crc kubenswrapper[4853]: I0127 18:43:47.629727 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:47 crc kubenswrapper[4853]: I0127 18:43:47.629762 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:47 crc kubenswrapper[4853]: I0127 18:43:47.629771 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:47 crc kubenswrapper[4853]: I0127 18:43:47.629785 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:47 crc kubenswrapper[4853]: I0127 18:43:47.629796 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:47Z","lastTransitionTime":"2026-01-27T18:43:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:43:47 crc kubenswrapper[4853]: I0127 18:43:47.732427 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:47 crc kubenswrapper[4853]: I0127 18:43:47.732469 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:47 crc kubenswrapper[4853]: I0127 18:43:47.732481 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:47 crc kubenswrapper[4853]: I0127 18:43:47.732498 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:47 crc kubenswrapper[4853]: I0127 18:43:47.732509 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:47Z","lastTransitionTime":"2026-01-27T18:43:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:43:47 crc kubenswrapper[4853]: I0127 18:43:47.834317 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:47 crc kubenswrapper[4853]: I0127 18:43:47.834357 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:47 crc kubenswrapper[4853]: I0127 18:43:47.834365 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:47 crc kubenswrapper[4853]: I0127 18:43:47.834381 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:47 crc kubenswrapper[4853]: I0127 18:43:47.834389 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:47Z","lastTransitionTime":"2026-01-27T18:43:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:47 crc kubenswrapper[4853]: I0127 18:43:47.937018 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:47 crc kubenswrapper[4853]: I0127 18:43:47.937046 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:47 crc kubenswrapper[4853]: I0127 18:43:47.937054 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:47 crc kubenswrapper[4853]: I0127 18:43:47.937067 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:47 crc kubenswrapper[4853]: I0127 18:43:47.937076 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:47Z","lastTransitionTime":"2026-01-27T18:43:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:43:48 crc kubenswrapper[4853]: I0127 18:43:48.039614 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:48 crc kubenswrapper[4853]: I0127 18:43:48.039943 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:48 crc kubenswrapper[4853]: I0127 18:43:48.040033 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:48 crc kubenswrapper[4853]: I0127 18:43:48.040168 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:48 crc kubenswrapper[4853]: I0127 18:43:48.040275 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:48Z","lastTransitionTime":"2026-01-27T18:43:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:48 crc kubenswrapper[4853]: I0127 18:43:48.114255 4853 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-17 20:15:33.074107903 +0000 UTC Jan 27 18:43:48 crc kubenswrapper[4853]: I0127 18:43:48.123806 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a645d521-3c59-41b6-92b0-e0d9cff0bab5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d1c368397595a923d917720ba80fdbcdd3700eaf983e6f50f1be14332fc13b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7e783bc12efaa8b16a12346ff490c56587678e9c57bc396046989f216d49373b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7e783bc12efaa8b16a12346ff490c56587678e9c57bc396046989f216d49373b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:42:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:42:38Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:48Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:48 crc kubenswrapper[4853]: I0127 18:43:48.135419 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c77668e-2627-46e1-b7f5-a99455378d85\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6f38d96da9d500a022fd5b7fbe4019623d08fb9733ba3b1a5f2f107f1901a28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ae32d32392bded0188e0b83c05a33e57dc10be293ddee84b8b12416d0e64fc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd606f868e97e5c0f90517a0de662da2512679e27b0ececbcaa02c9f7e79c4b6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026
-01-27T18:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b210c08a6d15dd6d40daabfcc4cb94985e2a3957ffa501b32d5331f3d20f8908\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:42:38Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:48Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:48 crc kubenswrapper[4853]: I0127 18:43:48.142345 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:48 crc kubenswrapper[4853]: I0127 18:43:48.142416 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:48 crc kubenswrapper[4853]: I0127 18:43:48.142430 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:48 crc kubenswrapper[4853]: I0127 18:43:48.142448 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:48 crc kubenswrapper[4853]: I0127 18:43:48.142459 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:48Z","lastTransitionTime":"2026-01-27T18:43:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:48 crc kubenswrapper[4853]: I0127 18:43:48.146166 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:48Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:48 crc kubenswrapper[4853]: I0127 18:43:48.155960 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7x9tl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"99e56db9-b380-4e21-9810-2dfa1517d5ac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://94b4164fce297bdb91f8d062c22463931e45e9194e17e4102f568e6f04c08680\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:43:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sbc8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec4d041ed140516bb311297a6618188794b4950b0199a05f3a028215b75b2dfd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:43:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sbc8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:43:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-7x9tl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:48Z is after 2025-08-24T17:21:41Z" Jan 27 
18:43:48 crc kubenswrapper[4853]: I0127 18:43:48.165838 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:48Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:48 crc kubenswrapper[4853]: I0127 18:43:48.176375 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:48Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:48 crc kubenswrapper[4853]: I0127 18:43:48.188686 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d7d5a0e2cb909a09c6325d43fed80f9ac231f17c4904fda6da22031d974a50b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7fba84b5abd317487efa9758191c6d4e0e878809711ccecdd2f98c2c7659c890\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imag
eID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:48Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:48 crc kubenswrapper[4853]: I0127 18:43:48.205452 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-w4d5n" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dd2c07de-2ac9-4074-9fb0-519cfaf37f69\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9662d5a5a46a3043eeddbb21a4c7da0545beb87a0da7da3e96dfe3b3cf803430\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:43:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\"
:\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8jkvb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:43:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-w4d5n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:48Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:48 crc kubenswrapper[4853]: I0127 18:43:48.219041 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ght98" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ce95829c-f3fb-493c-bf9a-a3515fe6ddac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://918b69aa50072e0227c96f268fe68b5dfc90acf2b8b93b7fdee73695fc6cbab0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:43:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttzfq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10a6c7c9dd5d82c1b1f5f2d323a2f530c6b2e18362eedbbe74cc8c570732331f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10a6c7c9dd5d82c1b1f5f2d323a2f530c6b2e18362eedbbe74cc8c570732331f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:43:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttzfq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b1e8c337bda8a6b6c0848db7c95140385d545fc7dd729f002d21c248e2bbf881\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b1e8c337bda8a6b6c0848db7c95140385d545fc7dd729f002d21c248e2bbf881\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:43:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:43:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttzfq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6f0216b6ea3819db267200bb464f15f2f7e4de1c65df0871d6f7e70ecb54839\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d6f0216b6ea3819db267200bb464f15f2f7e4de1c65df0871d6f7e70ecb54839\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:43:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:43:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttzfq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://205b9e4e915af8e2031c48180f4d56d6075e94f8a8ff8b40fe6b46229fb1aa03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://205b9e4e915af8e2031c48180f4d56d6075e94f8a8ff8b40fe6b46229fb1aa03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:43:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:43:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttzfq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://12c64d2f527e9e9bd0975df15cbeca44c11d215f2591052b030b6afa87600576\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://12c64d2f527e9e9bd0975df15cbeca44c11d215f2591052b030b6afa87600576\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:43:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:43:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttzfq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c8f991246bd43ecaff0d08c9fb12a3347d52de48db0ea8f8a674bbdc9945c13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3c8f991246bd43ecaff0d08c9fb12a3347d52de48db0ea8f8a674bbdc9945c13\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:43:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:43:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttzfq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:43:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ght98\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:48Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:48 crc kubenswrapper[4853]: I0127 18:43:48.231840 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"eccf4b23-863a-490c-ba35-1b03d360e200\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7cfb91267c8977bea938d9584cf8047ae6f6b4bff6a9d74c623f8903cd3416ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58d32f27b39964fe7195a0e25264b28a01c6c29753912edac25d07eef4f37cc7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a941071ee01b24389cae64dfc6cd851cde6b352121235ebf9e9bec77f8bcfb69\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae13abce33d48960f367ee4160e730c0f88cd877bd0d615cecac63d2a35b8cc5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://213668bb8a0478fe7fba2a64a45dd4aec0dfd6794a29977633b4e0fd1bcbe352\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T18:42:58Z\\\",\\\"message\\\":\\\"le observer\\\\nW0127 18:42:57.781245 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 18:42:57.782660 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 18:42:57.784271 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3951302993/tls.crt::/tmp/serving-cert-3951302993/tls.key\\\\\\\"\\\\nI0127 18:42:58.344426 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 18:42:58.346603 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 18:42:58.346626 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 18:42:58.346650 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 18:42:58.346655 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 18:42:58.351830 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 18:42:58.351854 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 18:42:58.351860 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 18:42:58.351866 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 18:42:58.351870 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 18:42:58.351874 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nI0127 18:42:58.351873 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0127 18:42:58.351879 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0127 18:42:58.354790 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:52Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:43:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0058055ea5c0242bdb042c45a00203d344383e845de280c81bc7695416563666\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:40Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://00d05692e328aacfc251e3b583e1da6d1f4f677e93d07cec2aaa7aea5c3038cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://00d05692e328aacfc251e3b583e1da6d1f4f677e93d07cec2aaa7aea5c3038cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:42:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:42:38Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:48Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:48 crc kubenswrapper[4853]: I0127 18:43:48.244426 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:48 crc kubenswrapper[4853]: I0127 18:43:48.244469 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:48 crc kubenswrapper[4853]: I0127 18:43:48.244483 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:48 crc kubenswrapper[4853]: I0127 18:43:48.244499 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:48 crc kubenswrapper[4853]: I0127 18:43:48.244510 4853 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:48Z","lastTransitionTime":"2026-01-27T18:43:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:43:48 crc kubenswrapper[4853]: I0127 18:43:48.252446 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a5b9c54a-360e-431b-b35a-eea549616487\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e57571b0b75d3ac6eaa28ddbc474ddf47f632c158372a965dfe4c30f54d8a2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2dcf34028a2a1a0d41f57ebced90e711bfa6d0f199a2b12bc2bcd6d0642d10b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://14e9331353060208631cfca86d748339935f8581d50a234f2864eaffdd98ff39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/open
shift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7d4896cb23e4a91356e7fabb5dfcf05e46bf7e8c34b6bd73d63a16422d015ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a02f84ba42baba137145a54bfa742b12b0f94aaf41801347eb8a803a6acd1b2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15ab34ce94a12a65d8b187ce438f27e37d4b45e0b9ed080c79d6e4633e777658\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://15ab34ce94a12a65d8b187ce438f27e37d4b45e0b9ed080c79d6e4633e777658\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:42:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d972c76164f33d83bd6d211da2b4ecec08c5d88dd4cd474c5d5a8122649ea1cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd
6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d972c76164f33d83bd6d211da2b4ecec08c5d88dd4cd474c5d5a8122649ea1cd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:42:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:40Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b9435046e30c8cda89d2ab6f312406bc2cc6db0c03ae6cdea1eb5acf8a804df1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9435046e30c8cda89d2ab6f312406bc2cc6db0c03ae6cdea1eb5acf8a804df1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:42:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:42:38Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:48Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:48 crc kubenswrapper[4853]: I0127 18:43:48.262285 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3fa5b5cf-a6bd-4726-aa83-4ae7fa257dd2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d34d7b284bf0d4da5b618f3afc8546d8de1c57118035eca06d1d8d53afd59503\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://60ef6ab0f3537b63366418829fb851bf2b21df5c3509f1e6ea61a3ba0530f537\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ab7681c5d4c9e9e1e003ecff21e3a39e40164693ef6b8fcdded71650dcff4ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9f4494451f75c64fbdca006455d3ce09b14f45939b855d782629e25af517ed0\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9f4494451f75c64fbdca006455d3ce09b14f45939b855d782629e25af517ed0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:42:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:39Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:42:38Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:48Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:48 crc kubenswrapper[4853]: I0127 18:43:48.271596 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-l59xt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f9e82bc6-1fab-4815-a64e-2ebbf8b72315\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d6f7b775ee615931d713c3fcf51828673cd8414a08c46c04eb38abd37c58d5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:43:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnhhx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:43:03Z\\\"}}\" for pod 
\"openshift-dns\"/\"node-resolver-l59xt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:48Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:48 crc kubenswrapper[4853]: I0127 18:43:48.286736 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hdtbk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ebbc7598-422a-43ad-ae98-88e57ec80b9c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a5e0da6c76e9510cda57fa243b0a721d160745a63e88a9aa736807af73864d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:43:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cq4vs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8de5b4d8d6553f77b012954fddfcb337c9b25ba98d94ef27831b50f63672377\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:43:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics
-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cq4vs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7937ea08bd25bed35d9386a8c870c88ff3f58eeec1ba1a2c55bdfa260017f9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:43:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cq4vs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f8095ca05481aa2d17d10ae848c2d052452f3bfa83b6ac23a75d0f59d84a604\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:43:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cq4vs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://efa308f95f35833395528dbe46b9e3d8f25800c18126c75d3db793f9c7945d30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:43:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cq4vs\\\",\\\"readOnly\\\
":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a23ced79c532f6fcb0f4efcf743b934f7640deb3a7b1b879032416ee2c9b8d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:43:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cq4vs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ef638d607e6dd3da7728f611dbadcc220fbecf3e6d8c85d5911cfd1ebe2cac0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ef638d607e6dd3da7728f611dbadcc220fbecf3e6d8c85d5911cfd1ebe2cac0\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T18:43:28Z\\\",\\\"message\\\":\\\"work controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:28Z is after 2025-08-24T17:21:41Z]\\\\nI0127 18:43:28.981598 6566 model_client.go:382] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-ingress-operator/metrics]} name:Service_openshift-ingress-operator/metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.244:9393:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {d8772e82-b0a4-4596-87d3-3d517c13344b}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0127 18:43:28.981636 6566 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-ingress-operator/metrics]} name:Service_openshift-ingress-operator/metrics_TCP_cluster 
options:{GoMap:map[event:false hairpin_snat_ip:169.254.\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T18:43:28Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-hdtbk_openshift-ovn-kubernetes(ebbc7598-422a-43ad-ae98-88e57ec80b9c)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cq4vs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4da02162adae947a3ab62fcbeba04da031f5189c42947da27ec21df5a480b4b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:43:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-
access-cq4vs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e81785fa8e88ca85b42840c9f11efd5774320c7b13588e01695e428426ecda4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e81785fa8e88ca85b42840c9f11efd5774320c7b13588e01695e428426ecda4d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:43:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cq4vs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:43:04Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hdtbk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:48Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:48 crc kubenswrapper[4853]: I0127 18:43:48.296267 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-wdzg4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"29407244-fbfe-4d37-a33e-7d59df1c22fd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgqf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgqf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:43:17Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-wdzg4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:48Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:48 crc kubenswrapper[4853]: I0127 18:43:48.309478 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f980f1d965582a58bd18d993850b5bd5f21849c1d1c0275c81fdbb25d549e6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:48Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:48 crc kubenswrapper[4853]: I0127 18:43:48.320007 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5a0b2fe5dbcc81055c001720c8ab69b7bd1989fe60d9ef4063c96c53a3f4536\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:48Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:48 crc kubenswrapper[4853]: I0127 18:43:48.340013 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-2hzcp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c088d143-dd9c-4c77-b9a3-3a0113306f41\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b1b85e99c8381dbf8e86fb211770c49ebd11497f15c73254d6482b45d507654\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:43:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2zqtv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:43:03Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-2hzcp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:48Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:48 crc kubenswrapper[4853]: I0127 18:43:48.346445 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:48 crc kubenswrapper[4853]: I0127 18:43:48.346606 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:48 crc kubenswrapper[4853]: I0127 18:43:48.346673 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:48 crc kubenswrapper[4853]: I0127 18:43:48.346766 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:48 crc kubenswrapper[4853]: I0127 18:43:48.346840 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:48Z","lastTransitionTime":"2026-01-27T18:43:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:43:48 crc kubenswrapper[4853]: I0127 18:43:48.349044 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-6gqj2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b8a89b1e-bef8-4cb7-930c-480d3125778c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d886a21d3eb5af71f0dc43a77a71a2d80c11b89013b8b77dd9e680bed9c09a54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:43:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xbhn6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://36ec3ff8ba2e6f89b4c9a8177dc986ce198013a4b5392fc600a191a9d854241a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:43:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xbhn6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:43:03Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-6gqj2\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:48Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:48 crc kubenswrapper[4853]: I0127 18:43:48.448889 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:48 crc kubenswrapper[4853]: I0127 18:43:48.448924 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:48 crc kubenswrapper[4853]: I0127 18:43:48.448934 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:48 crc kubenswrapper[4853]: I0127 18:43:48.448947 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:48 crc kubenswrapper[4853]: I0127 18:43:48.448957 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:48Z","lastTransitionTime":"2026-01-27T18:43:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:43:48 crc kubenswrapper[4853]: I0127 18:43:48.550812 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:48 crc kubenswrapper[4853]: I0127 18:43:48.550878 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:48 crc kubenswrapper[4853]: I0127 18:43:48.550890 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:48 crc kubenswrapper[4853]: I0127 18:43:48.550935 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:48 crc kubenswrapper[4853]: I0127 18:43:48.550952 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:48Z","lastTransitionTime":"2026-01-27T18:43:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:48 crc kubenswrapper[4853]: I0127 18:43:48.653520 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:48 crc kubenswrapper[4853]: I0127 18:43:48.653557 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:48 crc kubenswrapper[4853]: I0127 18:43:48.653568 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:48 crc kubenswrapper[4853]: I0127 18:43:48.653585 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:48 crc kubenswrapper[4853]: I0127 18:43:48.653599 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:48Z","lastTransitionTime":"2026-01-27T18:43:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:43:48 crc kubenswrapper[4853]: I0127 18:43:48.755693 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:48 crc kubenswrapper[4853]: I0127 18:43:48.755942 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:48 crc kubenswrapper[4853]: I0127 18:43:48.756031 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:48 crc kubenswrapper[4853]: I0127 18:43:48.756140 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:48 crc kubenswrapper[4853]: I0127 18:43:48.756234 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:48Z","lastTransitionTime":"2026-01-27T18:43:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:43:48 crc kubenswrapper[4853]: I0127 18:43:48.859035 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:48 crc kubenswrapper[4853]: I0127 18:43:48.859075 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:48 crc kubenswrapper[4853]: I0127 18:43:48.859084 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:48 crc kubenswrapper[4853]: I0127 18:43:48.859100 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:48 crc kubenswrapper[4853]: I0127 18:43:48.859110 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:48Z","lastTransitionTime":"2026-01-27T18:43:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:48 crc kubenswrapper[4853]: I0127 18:43:48.961682 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:48 crc kubenswrapper[4853]: I0127 18:43:48.962015 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:48 crc kubenswrapper[4853]: I0127 18:43:48.962168 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:48 crc kubenswrapper[4853]: I0127 18:43:48.962274 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:48 crc kubenswrapper[4853]: I0127 18:43:48.962356 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:48Z","lastTransitionTime":"2026-01-27T18:43:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:43:49 crc kubenswrapper[4853]: I0127 18:43:49.064243 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:49 crc kubenswrapper[4853]: I0127 18:43:49.064289 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:49 crc kubenswrapper[4853]: I0127 18:43:49.064301 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:49 crc kubenswrapper[4853]: I0127 18:43:49.064318 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:49 crc kubenswrapper[4853]: I0127 18:43:49.064330 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:49Z","lastTransitionTime":"2026-01-27T18:43:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:43:49 crc kubenswrapper[4853]: I0127 18:43:49.111939 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 18:43:49 crc kubenswrapper[4853]: I0127 18:43:49.111980 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wdzg4" Jan 27 18:43:49 crc kubenswrapper[4853]: I0127 18:43:49.111986 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 18:43:49 crc kubenswrapper[4853]: E0127 18:43:49.112062 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 18:43:49 crc kubenswrapper[4853]: E0127 18:43:49.112207 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-wdzg4" podUID="29407244-fbfe-4d37-a33e-7d59df1c22fd" Jan 27 18:43:49 crc kubenswrapper[4853]: E0127 18:43:49.112256 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 18:43:49 crc kubenswrapper[4853]: I0127 18:43:49.112560 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 18:43:49 crc kubenswrapper[4853]: E0127 18:43:49.112772 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 18:43:49 crc kubenswrapper[4853]: I0127 18:43:49.114369 4853 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-16 11:38:28.008499958 +0000 UTC Jan 27 18:43:49 crc kubenswrapper[4853]: I0127 18:43:49.167108 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:49 crc kubenswrapper[4853]: I0127 18:43:49.167199 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:49 crc kubenswrapper[4853]: I0127 18:43:49.167222 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:49 crc kubenswrapper[4853]: I0127 18:43:49.167250 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:49 crc kubenswrapper[4853]: I0127 18:43:49.167273 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:49Z","lastTransitionTime":"2026-01-27T18:43:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:49 crc kubenswrapper[4853]: I0127 18:43:49.269264 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:49 crc kubenswrapper[4853]: I0127 18:43:49.269316 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:49 crc kubenswrapper[4853]: I0127 18:43:49.269328 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:49 crc kubenswrapper[4853]: I0127 18:43:49.269354 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:49 crc kubenswrapper[4853]: I0127 18:43:49.269365 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:49Z","lastTransitionTime":"2026-01-27T18:43:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:43:49 crc kubenswrapper[4853]: I0127 18:43:49.376061 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:49 crc kubenswrapper[4853]: I0127 18:43:49.376140 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:49 crc kubenswrapper[4853]: I0127 18:43:49.376156 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:49 crc kubenswrapper[4853]: I0127 18:43:49.376174 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:49 crc kubenswrapper[4853]: I0127 18:43:49.376242 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:49Z","lastTransitionTime":"2026-01-27T18:43:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:43:49 crc kubenswrapper[4853]: I0127 18:43:49.418026 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/29407244-fbfe-4d37-a33e-7d59df1c22fd-metrics-certs\") pod \"network-metrics-daemon-wdzg4\" (UID: \"29407244-fbfe-4d37-a33e-7d59df1c22fd\") " pod="openshift-multus/network-metrics-daemon-wdzg4" Jan 27 18:43:49 crc kubenswrapper[4853]: E0127 18:43:49.418201 4853 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 27 18:43:49 crc kubenswrapper[4853]: E0127 18:43:49.418263 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/29407244-fbfe-4d37-a33e-7d59df1c22fd-metrics-certs podName:29407244-fbfe-4d37-a33e-7d59df1c22fd nodeName:}" failed. No retries permitted until 2026-01-27 18:44:21.418247039 +0000 UTC m=+103.880789922 (durationBeforeRetry 32s). 
Jan 27 18:43:49 crc kubenswrapper[4853]: I0127 18:43:49.478880 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 18:43:49 crc kubenswrapper[4853]: I0127 18:43:49.478913 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 18:43:49 crc kubenswrapper[4853]: I0127 18:43:49.478922 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 18:43:49 crc kubenswrapper[4853]: I0127 18:43:49.478934 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 18:43:49 crc kubenswrapper[4853]: I0127 18:43:49.478943 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:49Z","lastTransitionTime":"2026-01-27T18:43:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 18:43:49 crc kubenswrapper[4853]: I0127 18:43:49.580968 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 18:43:49 crc kubenswrapper[4853]: I0127 18:43:49.581019 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 18:43:49 crc kubenswrapper[4853]: I0127 18:43:49.581032 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 18:43:49 crc kubenswrapper[4853]: I0127 18:43:49.581049 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 18:43:49 crc kubenswrapper[4853]: I0127 18:43:49.581063 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:49Z","lastTransitionTime":"2026-01-27T18:43:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 18:43:49 crc kubenswrapper[4853]: I0127 18:43:49.684220 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 18:43:49 crc kubenswrapper[4853]: I0127 18:43:49.684263 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 18:43:49 crc kubenswrapper[4853]: I0127 18:43:49.684272 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 18:43:49 crc kubenswrapper[4853]: I0127 18:43:49.684286 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 18:43:49 crc kubenswrapper[4853]: I0127 18:43:49.684295 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:49Z","lastTransitionTime":"2026-01-27T18:43:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 18:43:49 crc kubenswrapper[4853]: I0127 18:43:49.786682 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 18:43:49 crc kubenswrapper[4853]: I0127 18:43:49.786722 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 18:43:49 crc kubenswrapper[4853]: I0127 18:43:49.786731 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 18:43:49 crc kubenswrapper[4853]: I0127 18:43:49.786744 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 18:43:49 crc kubenswrapper[4853]: I0127 18:43:49.786754 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:49Z","lastTransitionTime":"2026-01-27T18:43:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 18:43:49 crc kubenswrapper[4853]: I0127 18:43:49.889030 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 18:43:49 crc kubenswrapper[4853]: I0127 18:43:49.889076 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 18:43:49 crc kubenswrapper[4853]: I0127 18:43:49.889088 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 18:43:49 crc kubenswrapper[4853]: I0127 18:43:49.889104 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 18:43:49 crc kubenswrapper[4853]: I0127 18:43:49.889113 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:49Z","lastTransitionTime":"2026-01-27T18:43:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Has your network provider started?"} Jan 27 18:43:49 crc kubenswrapper[4853]: I0127 18:43:49.991175 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:49 crc kubenswrapper[4853]: I0127 18:43:49.991218 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:49 crc kubenswrapper[4853]: I0127 18:43:49.991231 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:49 crc kubenswrapper[4853]: I0127 18:43:49.991250 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:49 crc kubenswrapper[4853]: I0127 18:43:49.991262 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:49Z","lastTransitionTime":"2026-01-27T18:43:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:43:50 crc kubenswrapper[4853]: I0127 18:43:50.093304 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:50 crc kubenswrapper[4853]: I0127 18:43:50.093543 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:50 crc kubenswrapper[4853]: I0127 18:43:50.093630 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:50 crc kubenswrapper[4853]: I0127 18:43:50.093725 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:50 crc kubenswrapper[4853]: I0127 18:43:50.093814 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:50Z","lastTransitionTime":"2026-01-27T18:43:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:50 crc kubenswrapper[4853]: I0127 18:43:50.114661 4853 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-10 11:01:29.442587177 +0000 UTC Jan 27 18:43:50 crc kubenswrapper[4853]: I0127 18:43:50.196052 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:50 crc kubenswrapper[4853]: I0127 18:43:50.196701 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:50 crc kubenswrapper[4853]: I0127 18:43:50.196795 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:50 crc kubenswrapper[4853]: I0127 18:43:50.196913 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:50 crc kubenswrapper[4853]: I0127 18:43:50.197006 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:50Z","lastTransitionTime":"2026-01-27T18:43:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:43:50 crc kubenswrapper[4853]: I0127 18:43:50.298904 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:50 crc kubenswrapper[4853]: I0127 18:43:50.298937 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:50 crc kubenswrapper[4853]: I0127 18:43:50.298947 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:50 crc kubenswrapper[4853]: I0127 18:43:50.298960 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:50 crc kubenswrapper[4853]: I0127 18:43:50.298969 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:50Z","lastTransitionTime":"2026-01-27T18:43:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:50 crc kubenswrapper[4853]: I0127 18:43:50.401222 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:50 crc kubenswrapper[4853]: I0127 18:43:50.401262 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:50 crc kubenswrapper[4853]: I0127 18:43:50.401273 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:50 crc kubenswrapper[4853]: I0127 18:43:50.401290 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:50 crc kubenswrapper[4853]: I0127 18:43:50.401306 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:50Z","lastTransitionTime":"2026-01-27T18:43:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:43:50 crc kubenswrapper[4853]: I0127 18:43:50.503545 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:50 crc kubenswrapper[4853]: I0127 18:43:50.503573 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:50 crc kubenswrapper[4853]: I0127 18:43:50.503581 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:50 crc kubenswrapper[4853]: I0127 18:43:50.503593 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:50 crc kubenswrapper[4853]: I0127 18:43:50.503602 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:50Z","lastTransitionTime":"2026-01-27T18:43:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:50 crc kubenswrapper[4853]: I0127 18:43:50.559872 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-w4d5n_dd2c07de-2ac9-4074-9fb0-519cfaf37f69/kube-multus/0.log" Jan 27 18:43:50 crc kubenswrapper[4853]: I0127 18:43:50.559921 4853 generic.go:334] "Generic (PLEG): container finished" podID="dd2c07de-2ac9-4074-9fb0-519cfaf37f69" containerID="9662d5a5a46a3043eeddbb21a4c7da0545beb87a0da7da3e96dfe3b3cf803430" exitCode=1 Jan 27 18:43:50 crc kubenswrapper[4853]: I0127 18:43:50.559951 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-w4d5n" event={"ID":"dd2c07de-2ac9-4074-9fb0-519cfaf37f69","Type":"ContainerDied","Data":"9662d5a5a46a3043eeddbb21a4c7da0545beb87a0da7da3e96dfe3b3cf803430"} Jan 27 18:43:50 crc kubenswrapper[4853]: I0127 18:43:50.560389 4853 scope.go:117] "RemoveContainer" containerID="9662d5a5a46a3043eeddbb21a4c7da0545beb87a0da7da3e96dfe3b3cf803430" Jan 27 18:43:50 crc kubenswrapper[4853]: I0127 18:43:50.575612 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3fa5b5cf-a6bd-4726-aa83-4ae7fa257dd2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d34d7b284bf0d4da5b618f3afc8546d8de1c57118035eca06d1d8d53afd59503\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://60ef6ab0f3537b63366418829fb851bf2b21df5c3509f1e6ea61a3ba0530f537\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-
pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ab7681c5d4c9e9e1e003ecff21e3a39e40164693ef6b8fcdded71650dcff4ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9f4494451f75c64fbdca006455d3ce09b14f45939b855d782629e25af517ed0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9f4494451f75c64fbdca006455d3ce09b14f45939b855d782629e25af517ed0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:42:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:39Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:42:38Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:50Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:50 crc kubenswrapper[4853]: I0127 18:43:50.585898 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-l59xt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f9e82bc6-1fab-4815-a64e-2ebbf8b72315\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d6f7b775ee615931d713c3fcf51828673cd8414a08c46c04eb38abd37c58d5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:43:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnhhx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:43:03Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-l59xt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:50Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:50 crc kubenswrapper[4853]: I0127 18:43:50.603924 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ght98" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ce95829c-f3fb-493c-bf9a-a3515fe6ddac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://918b69aa50072e0227c96f268fe68b5dfc90acf2b8b93b7fdee73695fc6cbab0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:43:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttzfq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10a6c7c9dd5d82c1b1f5f2d323a2f530c6b2e18362eedbbe74cc8c570732331f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10a6c7c9dd5d82c1b1f5f2d323a2f530c6b2e18362eedbbe74cc8c570732331f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:43:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttzfq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b1e8c337bda8a6b6c0848db7c95140385d545fc7dd729f002d21c248e2bbf881\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b1e8c337bda8a6b6c0848db7c95140385d545fc7dd729f002d21c248e2bbf881\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:43:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:43:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttzfq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6f0216b6ea3819db267200bb464f15f2f7e4de1c65df0871d6f7e70ecb54839\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d6f0216b6ea3819db267200bb464f15f2f7e4de1c65df0871d6f7e70ecb54839\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:43:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:43:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttzfq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://205b9e4e915af8e2031c48180f4d56d6075e94f8a8ff8b40fe6b46229fb1aa03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://205b9e4e915af8e2031c48180f4d56d6075e94f8a8ff8b40fe6b46229fb1aa03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:43:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:43:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttzfq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://12c64d2f527e9e9bd0975df15cbeca44c11d215f2591052b030b6afa87600576\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://12c64d2f527e9e9bd0975df15cbeca44c11d215f2591052b030b6afa87600576\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:43:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:43:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttzfq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c8f991246bd43ecaff0d08c9fb12a3347d52de48db0ea8f8a674bbdc9945c13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3c8f991246bd43ecaff0d08c9fb12a3347d52de48db0ea8f8a674bbdc9945c13\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:43:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:43:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttzfq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:43:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ght98\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:50Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:50 crc kubenswrapper[4853]: I0127 18:43:50.604953 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:50 crc kubenswrapper[4853]: I0127 18:43:50.604995 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:50 crc 
kubenswrapper[4853]: I0127 18:43:50.605006 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:50 crc kubenswrapper[4853]: I0127 18:43:50.605027 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:50 crc kubenswrapper[4853]: I0127 18:43:50.605038 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:50Z","lastTransitionTime":"2026-01-27T18:43:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:43:50 crc kubenswrapper[4853]: I0127 18:43:50.617512 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eccf4b23-863a-490c-ba35-1b03d360e200\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7cfb91267c8977bea938d9584cf8047ae6f6b4bff6a9d74c623f8903cd3416ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58d32f27b39964fe7195a0e25264b28a01c6c29753912edac25d07eef4f37cc7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}
]},{\\\"containerID\\\":\\\"cri-o://a941071ee01b24389cae64dfc6cd851cde6b352121235ebf9e9bec77f8bcfb69\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae13abce33d48960f367ee4160e730c0f88cd877bd0d615cecac63d2a35b8cc5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://213668bb8a0478fe7fba2a64a45dd4aec0dfd6794a29977633b4e0fd1bcbe352\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T18:42:58Z\\\",\\\"message\\\":\\\"le observer\\\\nW0127 18:42:57.781245 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 18:42:57.782660 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 18:42:57.784271 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3951302993/tls.crt::/tmp/serving-cert-3951302993/tls.key\\\\\\\"\\\\nI0127 18:42:58.344426 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 18:42:58.346603 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 18:42:58.346626 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 18:42:58.346650 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 18:42:58.346655 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 18:42:58.351830 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 18:42:58.351854 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 18:42:58.351860 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 18:42:58.351866 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 18:42:58.351870 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 18:42:58.351874 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nI0127 18:42:58.351873 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0127 18:42:58.351879 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0127 18:42:58.354790 1 
cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:52Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:43:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0058055ea5c0242bdb042c45a00203d344383e845de280c81bc7695416563666\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:40Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://00d05692e328aacfc251e3b583e1da6d1f4f677e93d07cec2aaa7aea5c3038cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://00d05692e328aacfc251e3b583e1da6d1f4f677e93d07cec2aaa7aea5c3038cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:42:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:42:38Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:50Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:50 crc kubenswrapper[4853]: I0127 18:43:50.636930 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a5b9c54a-360e-431b-b35a-eea549616487\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e57571b0b75d3ac6eaa28ddbc474ddf47f632c158372a965dfe4c30f54d8a2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2dcf34028a2a1a0d41f57ebced90e711bfa6d0f199a2b12bc2bcd6d0642d10b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://14e9331353060208631cfca86d748339935f8581d50a234f2864eaffdd98ff39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7d4896cb23e4a91356e7fabb5dfcf05e46bf7e
8c34b6bd73d63a16422d015ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a02f84ba42baba137145a54bfa742b12b0f94aaf41801347eb8a803a6acd1b2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15ab34ce94a12a65d8b187ce438f27e37d4b45e0b9ed080c79d6e4633e777658\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://15ab34ce94a12a65d8b187ce438f27e37d4b45e0b9ed080c79d6e4633e777658\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:42:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d972c76164f33d83bd6d211da2b4ecec08c5d88dd4cd474c5d5a8122649ea1cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d972c76164f33d83bd6d211da2b4ecec08c5d88dd4cd474c5d5a8122649ea1cd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:42:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:40Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b9435046e30c8cda89d2ab6f312406bc2cc6db0c03ae6cdea1eb5acf8a804df1\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9435046e30c8cda89d2ab6f312406bc2cc6db0c03ae6cdea1eb5acf8a804df1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:42:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:42:38Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:50Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:50 crc kubenswrapper[4853]: I0127 18:43:50.649541 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-2hzcp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c088d143-dd9c-4c77-b9a3-3a0113306f41\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b1b85e99c8381dbf8e86fb211770c49ebd11497f15c73254d6482b45d507654\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:43:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2zqtv\\\",\\\"readO
nly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:43:03Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-2hzcp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:50Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:50 crc kubenswrapper[4853]: I0127 18:43:50.660865 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-6gqj2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b8a89b1e-bef8-4cb7-930c-480d3125778c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d886a21d3eb5af71f0dc43a77a71a2d80c11b89013b8b77dd9e680bed9c09a54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:43:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xbhn6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://36ec3ff8ba2e6f89b4c9a8177dc986ce198013a4b5392fc600a191a9d854241a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:43:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/
secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xbhn6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:43:03Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-6gqj2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:50Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:50 crc kubenswrapper[4853]: I0127 18:43:50.689780 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hdtbk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ebbc7598-422a-43ad-ae98-88e57ec80b9c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a5e0da6c76e9510cda57fa243b0a721d160745a63e88a9aa736807af73864d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:43:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cq4vs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8de5b4d8d6553f77b012954fddfcb337c9b25ba98d94ef27831b50f63672377\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:43:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cq4vs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7937ea08bd25bed35d9386a8c870c88ff3f58eeec1ba1a2c55bdfa260017f9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:43:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cq4vs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f8095ca05481aa2d17d10ae848c2d052452f3bfa83b6ac23a75d0f59d84a604\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:43:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cq4vs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://efa308f95f35833395528dbe46b9e3d8f25800c18126c75d3db793f9c7945d30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:43:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cq4vs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a23ced79c532f6fcb0f4efcf743b934f7640deb3a7b1b879032416ee2c9b8d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:43:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cq4vs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ef638d607e6dd3da7728f611dbadcc220fbecf3
e6d8c85d5911cfd1ebe2cac0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ef638d607e6dd3da7728f611dbadcc220fbecf3e6d8c85d5911cfd1ebe2cac0\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T18:43:28Z\\\",\\\"message\\\":\\\"work controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:28Z is after 2025-08-24T17:21:41Z]\\\\nI0127 18:43:28.981598 6566 model_client.go:382] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-ingress-operator/metrics]} name:Service_openshift-ingress-operator/metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.244:9393:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {d8772e82-b0a4-4596-87d3-3d517c13344b}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0127 18:43:28.981636 6566 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-ingress-operator/metrics]} name:Service_openshift-ingress-operator/metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T18:43:28Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-hdtbk_openshift-ovn-kubernetes(ebbc7598-422a-43ad-ae98-88e57ec80b9c)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cq4vs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4da02162adae947a3ab62fcbeba04da031f5189c42947da27ec21df5a480b4b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:43:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cq4vs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e81785fa8e88ca85b42840c9f11efd5774320c7b13588e01695e428426ecda4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e81785fa8e88ca85b42840c9f11efd5774320c7b13588e01695e428426ecda4d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:43:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cq4vs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:43:04Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hdtbk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:50Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:50 crc kubenswrapper[4853]: I0127 18:43:50.700983 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-wdzg4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"29407244-fbfe-4d37-a33e-7d59df1c22fd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgqf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgqf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:43:17Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-wdzg4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:50Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:50 crc kubenswrapper[4853]: I0127 18:43:50.707182 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:50 crc kubenswrapper[4853]: I0127 18:43:50.707223 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:50 crc kubenswrapper[4853]: I0127 18:43:50.707233 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:50 crc kubenswrapper[4853]: I0127 18:43:50.707245 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:50 crc kubenswrapper[4853]: I0127 18:43:50.707255 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:50Z","lastTransitionTime":"2026-01-27T18:43:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:50 crc kubenswrapper[4853]: I0127 18:43:50.714912 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f980f1d965582a58bd18d993850b5bd5f21849c1d1c0275c81fdbb25d549e6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:50Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:50 crc kubenswrapper[4853]: I0127 18:43:50.725934 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5a0b2fe5dbcc81055c001720c8ab69b7bd1989fe60d9ef4063c96c53a3f4536\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:50Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:50 crc kubenswrapper[4853]: I0127 18:43:50.737010 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:50Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:50 crc kubenswrapper[4853]: I0127 18:43:50.745589 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a645d521-3c59-41b6-92b0-e0d9cff0bab5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d1c368397595a923d917720ba80fdbcdd3700eaf983e6f50f1be14332fc13b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7e783bc12efaa8b16a12346ff490c56587678e9c57bc396046989f216d49373b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7e783bc12efaa8b16a12346ff490c56587678e9c57bc396046989
f216d49373b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:42:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:42:38Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:50Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:50 crc kubenswrapper[4853]: I0127 18:43:50.755831 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c77668e-2627-46e1-b7f5-a99455378d85\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6f38d96da9d500a022fd5b7fbe4019623d08fb9733ba3b1a5f2f107f1901a28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ae32d32392bded0188e0b83c05a33e57dc10be293ddee84b8b12416d0e64fc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\
\"containerID\\\":\\\"cri-o://cd606f868e97e5c0f90517a0de662da2512679e27b0ececbcaa02c9f7e79c4b6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b210c08a6d15dd6d40daabfcc4cb94985e2a3957ffa501b32d5331f3d20f8908\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:42:38Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:50Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:50 crc kubenswrapper[4853]: I0127 18:43:50.769535 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d7d5a0e2cb909a09c6325d43fed80f9ac231f17c4904fda6da22031d974a50b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7fba84b5abd317487efa9758191c6d4e0e878809711ccecdd2f98c2c7659c890\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:50Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:50 crc kubenswrapper[4853]: I0127 18:43:50.784167 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-w4d5n" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dd2c07de-2ac9-4074-9fb0-519cfaf37f69\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:50Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:50Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9662d5a5a46a3043eeddbb21a4c7da0545beb87a0da7da3e96dfe3b3cf803430\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9662d5a5a46a3043eeddbb21a4c7da0545beb87a0da7da3e96dfe3b3cf803430\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T18:43:50Z\\\",\\\"message\\\":\\\"2026-01-27T18:43:05+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_d2dd683b-493b-441e-b17e-5e422275f69e\\\\n2026-01-27T18:43:05+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_d2dd683b-493b-441e-b17e-5e422275f69e to /host/opt/cni/bin/\\\\n2026-01-27T18:43:05Z [verbose] multus-daemon started\\\\n2026-01-27T18:43:05Z [verbose] Readiness Indicator file check\\\\n2026-01-27T18:43:50Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T18:43:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8jkvb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:43:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-w4d5n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:50Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:50 crc kubenswrapper[4853]: I0127 18:43:50.793965 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7x9tl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"99e56db9-b380-4e21-9810-2dfa1517d5ac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://94b4164fce297bdb91f8d062c22463931e45e9194e17e4102f568e6f04c08680\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:43:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sbc8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec4d041ed140516bb311297a6618188794b4950b0199a05f3a028215b75b2dfd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:43:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sbc8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:43:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-7x9tl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:50Z is after 2025-08-24T17:21:41Z" Jan 27 
18:43:50 crc kubenswrapper[4853]: I0127 18:43:50.805835 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:50Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:50 crc kubenswrapper[4853]: I0127 18:43:50.809510 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:50 crc kubenswrapper[4853]: I0127 18:43:50.809536 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:50 crc kubenswrapper[4853]: I0127 18:43:50.809545 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:50 crc kubenswrapper[4853]: I0127 18:43:50.809561 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:50 crc kubenswrapper[4853]: I0127 18:43:50.809569 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:50Z","lastTransitionTime":"2026-01-27T18:43:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:43:50 crc kubenswrapper[4853]: I0127 18:43:50.815675 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:50Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:50 crc kubenswrapper[4853]: I0127 18:43:50.911326 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:50 crc kubenswrapper[4853]: I0127 18:43:50.911373 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:50 crc kubenswrapper[4853]: I0127 18:43:50.911384 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:50 crc kubenswrapper[4853]: I0127 18:43:50.911398 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:50 crc kubenswrapper[4853]: I0127 18:43:50.911407 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:50Z","lastTransitionTime":"2026-01-27T18:43:50Z","reason":"KubeletNotReady","message":"container runtime 
network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:43:51 crc kubenswrapper[4853]: I0127 18:43:51.013209 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:51 crc kubenswrapper[4853]: I0127 18:43:51.013255 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:51 crc kubenswrapper[4853]: I0127 18:43:51.013266 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:51 crc kubenswrapper[4853]: I0127 18:43:51.013281 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:51 crc kubenswrapper[4853]: I0127 18:43:51.013291 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:51Z","lastTransitionTime":"2026-01-27T18:43:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:43:51 crc kubenswrapper[4853]: I0127 18:43:51.112147 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 18:43:51 crc kubenswrapper[4853]: I0127 18:43:51.112210 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 18:43:51 crc kubenswrapper[4853]: I0127 18:43:51.112235 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wdzg4" Jan 27 18:43:51 crc kubenswrapper[4853]: E0127 18:43:51.112277 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 18:43:51 crc kubenswrapper[4853]: I0127 18:43:51.112220 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 18:43:51 crc kubenswrapper[4853]: E0127 18:43:51.112347 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 18:43:51 crc kubenswrapper[4853]: E0127 18:43:51.112413 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 18:43:51 crc kubenswrapper[4853]: E0127 18:43:51.112488 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-wdzg4" podUID="29407244-fbfe-4d37-a33e-7d59df1c22fd" Jan 27 18:43:51 crc kubenswrapper[4853]: I0127 18:43:51.115164 4853 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-19 09:40:39.48253018 +0000 UTC Jan 27 18:43:51 crc kubenswrapper[4853]: I0127 18:43:51.115539 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:51 crc kubenswrapper[4853]: I0127 18:43:51.115567 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:51 crc kubenswrapper[4853]: I0127 18:43:51.115578 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:51 crc kubenswrapper[4853]: I0127 18:43:51.115596 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:51 crc kubenswrapper[4853]: I0127 18:43:51.115605 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:51Z","lastTransitionTime":"2026-01-27T18:43:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:43:51 crc kubenswrapper[4853]: I0127 18:43:51.217308 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:51 crc kubenswrapper[4853]: I0127 18:43:51.217338 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:51 crc kubenswrapper[4853]: I0127 18:43:51.217347 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:51 crc kubenswrapper[4853]: I0127 18:43:51.217360 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:51 crc kubenswrapper[4853]: I0127 18:43:51.217371 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:51Z","lastTransitionTime":"2026-01-27T18:43:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:51 crc kubenswrapper[4853]: I0127 18:43:51.319281 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:51 crc kubenswrapper[4853]: I0127 18:43:51.319326 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:51 crc kubenswrapper[4853]: I0127 18:43:51.319336 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:51 crc kubenswrapper[4853]: I0127 18:43:51.319353 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:51 crc kubenswrapper[4853]: I0127 18:43:51.319364 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:51Z","lastTransitionTime":"2026-01-27T18:43:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:43:51 crc kubenswrapper[4853]: I0127 18:43:51.422013 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:51 crc kubenswrapper[4853]: I0127 18:43:51.422059 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:51 crc kubenswrapper[4853]: I0127 18:43:51.422073 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:51 crc kubenswrapper[4853]: I0127 18:43:51.422091 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:51 crc kubenswrapper[4853]: I0127 18:43:51.422104 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:51Z","lastTransitionTime":"2026-01-27T18:43:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:43:51 crc kubenswrapper[4853]: I0127 18:43:51.523943 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:51 crc kubenswrapper[4853]: I0127 18:43:51.523982 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:51 crc kubenswrapper[4853]: I0127 18:43:51.523994 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:51 crc kubenswrapper[4853]: I0127 18:43:51.524008 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:51 crc kubenswrapper[4853]: I0127 18:43:51.524021 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:51Z","lastTransitionTime":"2026-01-27T18:43:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:51 crc kubenswrapper[4853]: I0127 18:43:51.564359 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-w4d5n_dd2c07de-2ac9-4074-9fb0-519cfaf37f69/kube-multus/0.log" Jan 27 18:43:51 crc kubenswrapper[4853]: I0127 18:43:51.564408 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-w4d5n" event={"ID":"dd2c07de-2ac9-4074-9fb0-519cfaf37f69","Type":"ContainerStarted","Data":"d7df211c586c12b9dbadf6a48722a3059e65f42e0c70cf73a6e197091983980c"} Jan 27 18:43:51 crc kubenswrapper[4853]: I0127 18:43:51.576074 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a645d521-3c59-41b6-92b0-e0d9cff0bab5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d1c368397595a923d917720ba80fdbcdd3700eaf983e6f50f1be14332fc13b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7e783bc12efaa8b16a12346ff490c56587678e9c57bc396046989f216d49373b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7e783bc12efaa8b16a12346ff490c56587678e9c57bc396046989f216d49373b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:42:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],
\\\"startTime\\\":\\\"2026-01-27T18:42:38Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:51Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:51 crc kubenswrapper[4853]: I0127 18:43:51.586710 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c77668e-2627-46e1-b7f5-a99455378d85\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6f38d96da9d500a022fd5b7fbe4019623d08fb9733ba3b1a5f2f107f1901a28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ae32d32392bded0188e0b83c05a33e57dc10be293ddee84b8b12416d0e64fc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd606f868e97e5c0f90517a0de662da2512679e27b0ececbcaa02c9f7e79c4b6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac11
7eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b210c08a6d15dd6d40daabfcc4cb94985e2a3957ffa501b32d5331f3d20f8908\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:42:38Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:51Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:51 crc kubenswrapper[4853]: I0127 18:43:51.597134 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:51Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:51 crc kubenswrapper[4853]: I0127 18:43:51.608979 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:51Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:51 crc kubenswrapper[4853]: I0127 18:43:51.619193 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:51Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:51 crc kubenswrapper[4853]: I0127 18:43:51.626745 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:51 crc kubenswrapper[4853]: I0127 18:43:51.626773 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:51 crc kubenswrapper[4853]: I0127 18:43:51.626782 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:51 crc kubenswrapper[4853]: I0127 18:43:51.626796 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:51 crc kubenswrapper[4853]: I0127 18:43:51.626804 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:51Z","lastTransitionTime":"2026-01-27T18:43:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:51 crc kubenswrapper[4853]: I0127 18:43:51.629905 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d7d5a0e2cb909a09c6325d43fed80f9ac231f17c4904fda6da22031d974a50b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7fba84b5abd317487efa9758191c6d4e0e878809711ccecdd2f98c2c7659c890\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:51Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:51 crc kubenswrapper[4853]: I0127 18:43:51.645985 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-w4d5n" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dd2c07de-2ac9-4074-9fb0-519cfaf37f69\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7df211c586c12b9dbadf6a48722a3059e65f42e0c70cf73a6e197091983980c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9662d5a5a46a3043eeddbb21a4c7da0545beb87a0da7da3e96dfe3b3cf803430\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T18:43:50Z\\\",\\\"message\\\":\\\"2026-01-27T18:43:05+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_d2dd683b-493b-441e-b17e-5e422275f69e\\\\n2026-01-27T18:43:05+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_d2dd683b-493b-441e-b17e-5e422275f69e to /host/opt/cni/bin/\\\\n2026-01-27T18:43:05Z [verbose] multus-daemon started\\\\n2026-01-27T18:43:05Z [verbose] Readiness Indicator file check\\\\n2026-01-27T18:43:50Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T18:43:03Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:43:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8jkvb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:43:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-w4d5n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:51Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:51 crc kubenswrapper[4853]: I0127 18:43:51.658732 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7x9tl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"99e56db9-b380-4e21-9810-2dfa1517d5ac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://94b4164fce297bdb91f8d062c22463931e45e9194e17e4102f568e6f04c08680\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:43:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sbc8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec4d041ed140516bb311297a6618188794b4950b0199a05f3a028215b75b2dfd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:43:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sbc8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:43:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-7x9tl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:51Z is after 2025-08-24T17:21:41Z" Jan 27 
18:43:51 crc kubenswrapper[4853]: I0127 18:43:51.671451 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eccf4b23-863a-490c-ba35-1b03d360e200\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7cfb91267c8977bea938d9584cf8047ae6f6b4bff6a9d74c623f8903cd3416ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58d32f27b39964fe7195a0e25264b28a01c6c29753912edac25d07eef4f37cc7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a941071ee01b24389cae64dfc6cd851cde6b352121235ebf9e9bec77f8bcfb69\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\
\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae13abce33d48960f367ee4160e730c0f88cd877bd0d615cecac63d2a35b8cc5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://213668bb8a0478fe7fba2a64a45dd4aec0dfd6794a29977633b4e0fd1bcbe352\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T18:42:58Z\\\",\\\"message\\\":\\\"le observer\\\\nW0127 18:42:57.781245 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 18:42:57.782660 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 18:42:57.784271 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3951302993/tls.crt::/tmp/serving-cert-3951302993/tls.key\\\\\\\"\\\\nI0127 18:42:58.344426 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 18:42:58.346603 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 18:42:58.346626 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 18:42:58.346650 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 18:42:58.346655 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 18:42:58.351830 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 18:42:58.351854 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 18:42:58.351860 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 18:42:58.351866 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 18:42:58.351870 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 18:42:58.351874 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nI0127 18:42:58.351873 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0127 18:42:58.351879 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0127 18:42:58.354790 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:52Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:43:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0058055ea5c0242bdb042c45a00203d344383e845de280c81bc7695416563666\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:40Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://00d05692e328aacfc251e3b583e1da6d1f4f677e93d07cec2aaa7aea5c3038cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://00d05692e328aacfc251e3b583e1da6d1f4f677e93d07cec2aaa7aea5c3038cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:42:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:42:38Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:51Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:51 crc kubenswrapper[4853]: I0127 18:43:51.690701 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a5b9c54a-360e-431b-b35a-eea549616487\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e57571b0b75d3ac6eaa28ddbc474ddf47f632c158372a965dfe4c30f54d8a2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2dcf34028a2a1a0d41f57ebced90e711bfa6d0f199a2b12bc2bcd6d0642d10b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://14e9331353060208631cfca86d748339935f8581d50a234f2864eaffdd98ff39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7d4896cb23e4a91356e7fabb5dfcf05e46bf7e
8c34b6bd73d63a16422d015ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a02f84ba42baba137145a54bfa742b12b0f94aaf41801347eb8a803a6acd1b2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15ab34ce94a12a65d8b187ce438f27e37d4b45e0b9ed080c79d6e4633e777658\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://15ab34ce94a12a65d8b187ce438f27e37d4b45e0b9ed080c79d6e4633e777658\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:42:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d972c76164f33d83bd6d211da2b4ecec08c5d88dd4cd474c5d5a8122649ea1cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d972c76164f33d83bd6d211da2b4ecec08c5d88dd4cd474c5d5a8122649ea1cd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:42:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:40Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b9435046e30c8cda89d2ab6f312406bc2cc6db0c03ae6cdea1eb5acf8a804df1\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9435046e30c8cda89d2ab6f312406bc2cc6db0c03ae6cdea1eb5acf8a804df1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:42:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:42:38Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:51Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:51 crc kubenswrapper[4853]: I0127 18:43:51.702364 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3fa5b5cf-a6bd-4726-aa83-4ae7fa257dd2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d34d7b284bf0d4da5b618f3afc8546d8de1c57118035eca06d1d8d53afd59503\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://60ef6ab0f3537b63366418829fb851bf2b21df5c3509f1e6ea61a3ba0530f537\\\",\\\"image\\\":\\\"quay.io/openshift-
release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ab7681c5d4c9e9e1e003ecff21e3a39e40164693ef6b8fcdded71650dcff4ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9f4494451f75c64fbdca006455d3ce09b14f45939b855d782629e25af517ed0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9f4494451f75c64fbdca006455d3ce09b14f45939b855d782629e25af517ed0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:42:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:39Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:42:38Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:51Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:51 crc kubenswrapper[4853]: I0127 18:43:51.714283 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-l59xt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f9e82bc6-1fab-4815-a64e-2ebbf8b72315\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d6f7b775ee615931d713c3fcf51828673cd8414a08c46c04eb38abd37c58d5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:43:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnhhx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:43:03Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-l59xt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:51Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:51 crc kubenswrapper[4853]: I0127 18:43:51.729185 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:51 crc kubenswrapper[4853]: I0127 18:43:51.729230 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:51 crc kubenswrapper[4853]: I0127 18:43:51.729244 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:51 crc kubenswrapper[4853]: I0127 18:43:51.729259 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:51 crc kubenswrapper[4853]: I0127 18:43:51.729269 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:51Z","lastTransitionTime":"2026-01-27T18:43:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:43:51 crc kubenswrapper[4853]: I0127 18:43:51.730257 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ght98" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ce95829c-f3fb-493c-bf9a-a3515fe6ddac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://918b69aa50072e0227c96f268fe68b5dfc90acf2b8b93b7fdee73695fc6cbab0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:43:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttzfq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10a6c7c9dd5d82c1b1f5f2d323a2f530c6b2e18362eedbbe74cc8c570732331f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10a6c7c9dd5d82c1b1f5f2d323a2f530c6b2e18362eedbbe74cc8c570732331f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:43:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttzfq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b1e8c337
bda8a6b6c0848db7c95140385d545fc7dd729f002d21c248e2bbf881\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b1e8c337bda8a6b6c0848db7c95140385d545fc7dd729f002d21c248e2bbf881\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:43:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:43:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttzfq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6f0216b6ea3819db267200bb464f15f2f7e4de1c65df0871d6f7e70ecb54839\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d6f0216b6ea3819db267200bb464f15f2f7e4de1c65df0871d6f7e70ecb54839\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:43:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:43:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttzfq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://205b9e4e915af8e2031c48180f4d56d6075e94f8a8ff8b40fe6b46229fb1aa03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://205b9e4e915af8e2031c48180f4d56d6075e94f8a8ff8b40fe6b46229fb1aa03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:43:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:43:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/e
ntrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttzfq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://12c64d2f527e9e9bd0975df15cbeca44c11d215f2591052b030b6afa87600576\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://12c64d2f527e9e9bd0975df15cbeca44c11d215f2591052b030b6afa87600576\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:43:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:43:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttzfq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c8f991246bd43ecaff0d08c9fb12a3347d52de48db0ea8f8a674bbdc9945c13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3c8f991246bd43ecaff0d08c9fb12a3347d52de48db0ea8f8a674bbdc9945c13\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:43:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:43:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttzfq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:43:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ght98\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:51Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:51 crc kubenswrapper[4853]: I0127 18:43:51.739365 4853 
status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-wdzg4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"29407244-fbfe-4d37-a33e-7d59df1c22fd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgqf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgqf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:43:17Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-wdzg4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:51Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:51 crc kubenswrapper[4853]: I0127 18:43:51.749482 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f980f1d965582a58bd18d993850b5bd5f21849c1d1c0275c81fdbb25d549e6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:51Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:51 crc kubenswrapper[4853]: I0127 18:43:51.760481 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5a0b2fe5dbcc81055c001720c8ab69b7bd1989fe60d9ef4063c96c53a3f4536\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:51Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:51 crc kubenswrapper[4853]: I0127 18:43:51.768382 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-2hzcp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c088d143-dd9c-4c77-b9a3-3a0113306f41\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b1b85e99c8381dbf8e86fb211770c49ebd11497f15c73254d6482b45d507654\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:43:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2zqtv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:43:03Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-2hzcp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:51Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:51 crc kubenswrapper[4853]: I0127 18:43:51.781283 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-6gqj2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b8a89b1e-bef8-4cb7-930c-480d3125778c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d886a21d3eb5af71f0dc43a77a71a2d80c11b89013b8b77dd9e680bed9c09a54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:43:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xbhn6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://36ec3ff8ba2e6f89b4c9a8177dc986ce198013a4b5392fc600a191a9d854241a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:43:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xbhn6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:43:03Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-6gqj2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:51Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:51 crc kubenswrapper[4853]: I0127 18:43:51.797965 4853 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hdtbk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ebbc7598-422a-43ad-ae98-88e57ec80b9c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a5e0da6c76e9510cda57fa243b0a721d160745a63e88a9aa736807af73864d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:43:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cq4vs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8de5b4d8d6553f77b012954fddfcb337c9b25ba98d94ef27831b50f63672377\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:43:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cq4vs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7937ea08bd25bed35d9386a8c870c88ff3f58eeec1ba1a2c55bdfa260017f9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:43:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cq4vs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f8095ca05481aa2d17d10ae848c2d052452f3bfa83b6ac23a75d0f59d84a604\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:43:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cq4vs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://efa308f95f35833395528dbe46b9e3d8f25800c18126c75d3db793f9c7945d30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:43:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cq4vs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a23ced79c532f6fcb0f4efcf743b934f7640deb3a7b1b879032416ee2c9b8d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\
\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:43:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cq4vs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ef638d607e6dd3da7728f611dbadcc220fbecf3e6d8c85d5911cfd1ebe2cac0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ef638d607e6dd3da7728f611dbadcc220fbecf3e6d8c85d5911cfd1ebe2cac0\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T18:43:28Z\\\",\\\"message\\\":\\\"work controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:28Z is after 2025-08-24T17:21:41Z]\\\\nI0127 18:43:28.981598 6566 model_client.go:382] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-ingress-operator/metrics]} name:Service_openshift-ingress-operator/metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.244:9393:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {d8772e82-b0a4-4596-87d3-3d517c13344b}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0127 18:43:28.981636 6566 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-ingress-operator/metrics]} name:Service_openshift-ingress-operator/metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T18:43:28Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-hdtbk_openshift-ovn-kubernetes(ebbc7598-422a-43ad-ae98-88e57ec80b9c)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cq4vs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4da02162adae947a3ab62fcbeba04da031f5189c42947da27ec21df5a480b4b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:43:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cq4vs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e81785fa8e88ca85b42840c9f11efd5774320c7b13588e01695e428426ecda4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e81785fa8e88ca85b42840c9f11efd5774320c7b13588e01695e428426ecda4d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:43:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cq4vs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:43:04Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hdtbk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:51Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:51 crc kubenswrapper[4853]: I0127 18:43:51.831879 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:51 crc kubenswrapper[4853]: I0127 18:43:51.831908 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:51 crc kubenswrapper[4853]: I0127 18:43:51.831916 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:51 crc kubenswrapper[4853]: I0127 18:43:51.831929 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:51 crc kubenswrapper[4853]: I0127 18:43:51.831938 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:51Z","lastTransitionTime":"2026-01-27T18:43:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:51 crc kubenswrapper[4853]: I0127 18:43:51.933829 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:51 crc kubenswrapper[4853]: I0127 18:43:51.933880 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:51 crc kubenswrapper[4853]: I0127 18:43:51.933896 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:51 crc kubenswrapper[4853]: I0127 18:43:51.933919 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:51 crc kubenswrapper[4853]: I0127 18:43:51.933948 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:51Z","lastTransitionTime":"2026-01-27T18:43:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:43:52 crc kubenswrapper[4853]: I0127 18:43:52.036546 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:52 crc kubenswrapper[4853]: I0127 18:43:52.036618 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:52 crc kubenswrapper[4853]: I0127 18:43:52.036630 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:52 crc kubenswrapper[4853]: I0127 18:43:52.036647 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:52 crc kubenswrapper[4853]: I0127 18:43:52.036659 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:52Z","lastTransitionTime":"2026-01-27T18:43:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:52 crc kubenswrapper[4853]: I0127 18:43:52.115396 4853 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-28 03:21:41.139459388 +0000 UTC Jan 27 18:43:52 crc kubenswrapper[4853]: I0127 18:43:52.138570 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:52 crc kubenswrapper[4853]: I0127 18:43:52.138617 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:52 crc kubenswrapper[4853]: I0127 18:43:52.138628 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:52 crc kubenswrapper[4853]: I0127 18:43:52.138646 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:52 crc kubenswrapper[4853]: I0127 18:43:52.138659 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:52Z","lastTransitionTime":"2026-01-27T18:43:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:43:52 crc kubenswrapper[4853]: I0127 18:43:52.241306 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:52 crc kubenswrapper[4853]: I0127 18:43:52.241371 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:52 crc kubenswrapper[4853]: I0127 18:43:52.241384 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:52 crc kubenswrapper[4853]: I0127 18:43:52.241403 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:52 crc kubenswrapper[4853]: I0127 18:43:52.241415 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:52Z","lastTransitionTime":"2026-01-27T18:43:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:52 crc kubenswrapper[4853]: I0127 18:43:52.343737 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:52 crc kubenswrapper[4853]: I0127 18:43:52.343780 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:52 crc kubenswrapper[4853]: I0127 18:43:52.343791 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:52 crc kubenswrapper[4853]: I0127 18:43:52.343807 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:52 crc kubenswrapper[4853]: I0127 18:43:52.343820 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:52Z","lastTransitionTime":"2026-01-27T18:43:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:43:52 crc kubenswrapper[4853]: I0127 18:43:52.446382 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:52 crc kubenswrapper[4853]: I0127 18:43:52.446442 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:52 crc kubenswrapper[4853]: I0127 18:43:52.446461 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:52 crc kubenswrapper[4853]: I0127 18:43:52.446484 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:52 crc kubenswrapper[4853]: I0127 18:43:52.446502 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:52Z","lastTransitionTime":"2026-01-27T18:43:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:43:52 crc kubenswrapper[4853]: I0127 18:43:52.548466 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:52 crc kubenswrapper[4853]: I0127 18:43:52.548500 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:52 crc kubenswrapper[4853]: I0127 18:43:52.548508 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:52 crc kubenswrapper[4853]: I0127 18:43:52.548521 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:52 crc kubenswrapper[4853]: I0127 18:43:52.548528 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:52Z","lastTransitionTime":"2026-01-27T18:43:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:52 crc kubenswrapper[4853]: I0127 18:43:52.650440 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:52 crc kubenswrapper[4853]: I0127 18:43:52.650487 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:52 crc kubenswrapper[4853]: I0127 18:43:52.650500 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:52 crc kubenswrapper[4853]: I0127 18:43:52.650516 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:52 crc kubenswrapper[4853]: I0127 18:43:52.650526 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:52Z","lastTransitionTime":"2026-01-27T18:43:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:43:52 crc kubenswrapper[4853]: I0127 18:43:52.753307 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:52 crc kubenswrapper[4853]: I0127 18:43:52.753348 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:52 crc kubenswrapper[4853]: I0127 18:43:52.753358 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:52 crc kubenswrapper[4853]: I0127 18:43:52.753372 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:52 crc kubenswrapper[4853]: I0127 18:43:52.753382 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:52Z","lastTransitionTime":"2026-01-27T18:43:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:43:52 crc kubenswrapper[4853]: I0127 18:43:52.855643 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:52 crc kubenswrapper[4853]: I0127 18:43:52.855753 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:52 crc kubenswrapper[4853]: I0127 18:43:52.855779 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:52 crc kubenswrapper[4853]: I0127 18:43:52.855812 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:52 crc kubenswrapper[4853]: I0127 18:43:52.855836 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:52Z","lastTransitionTime":"2026-01-27T18:43:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:52 crc kubenswrapper[4853]: I0127 18:43:52.958634 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:52 crc kubenswrapper[4853]: I0127 18:43:52.958707 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:52 crc kubenswrapper[4853]: I0127 18:43:52.958726 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:52 crc kubenswrapper[4853]: I0127 18:43:52.958750 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:52 crc kubenswrapper[4853]: I0127 18:43:52.958768 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:52Z","lastTransitionTime":"2026-01-27T18:43:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:43:53 crc kubenswrapper[4853]: I0127 18:43:53.064249 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:53 crc kubenswrapper[4853]: I0127 18:43:53.064309 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:53 crc kubenswrapper[4853]: I0127 18:43:53.064326 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:53 crc kubenswrapper[4853]: I0127 18:43:53.064350 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:53 crc kubenswrapper[4853]: I0127 18:43:53.064368 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:53Z","lastTransitionTime":"2026-01-27T18:43:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:43:53 crc kubenswrapper[4853]: I0127 18:43:53.112510 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 18:43:53 crc kubenswrapper[4853]: I0127 18:43:53.112592 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 18:43:53 crc kubenswrapper[4853]: I0127 18:43:53.112639 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 18:43:53 crc kubenswrapper[4853]: I0127 18:43:53.112533 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wdzg4" Jan 27 18:43:53 crc kubenswrapper[4853]: E0127 18:43:53.112698 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 18:43:53 crc kubenswrapper[4853]: E0127 18:43:53.112837 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-wdzg4" podUID="29407244-fbfe-4d37-a33e-7d59df1c22fd" Jan 27 18:43:53 crc kubenswrapper[4853]: E0127 18:43:53.112942 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 18:43:53 crc kubenswrapper[4853]: E0127 18:43:53.113024 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 18:43:53 crc kubenswrapper[4853]: I0127 18:43:53.116504 4853 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-08 01:19:08.864099826 +0000 UTC Jan 27 18:43:53 crc kubenswrapper[4853]: I0127 18:43:53.168144 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:53 crc kubenswrapper[4853]: I0127 18:43:53.168178 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:53 crc kubenswrapper[4853]: I0127 18:43:53.168191 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:53 crc kubenswrapper[4853]: I0127 18:43:53.168206 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:53 crc kubenswrapper[4853]: I0127 18:43:53.168219 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:53Z","lastTransitionTime":"2026-01-27T18:43:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:53 crc kubenswrapper[4853]: I0127 18:43:53.271042 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:53 crc kubenswrapper[4853]: I0127 18:43:53.271092 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:53 crc kubenswrapper[4853]: I0127 18:43:53.271101 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:53 crc kubenswrapper[4853]: I0127 18:43:53.271132 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:53 crc kubenswrapper[4853]: I0127 18:43:53.271143 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:53Z","lastTransitionTime":"2026-01-27T18:43:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:43:53 crc kubenswrapper[4853]: I0127 18:43:53.374057 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:53 crc kubenswrapper[4853]: I0127 18:43:53.374096 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:53 crc kubenswrapper[4853]: I0127 18:43:53.374104 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:53 crc kubenswrapper[4853]: I0127 18:43:53.374132 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:53 crc kubenswrapper[4853]: I0127 18:43:53.374146 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:53Z","lastTransitionTime":"2026-01-27T18:43:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:43:53 crc kubenswrapper[4853]: I0127 18:43:53.477664 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:53 crc kubenswrapper[4853]: I0127 18:43:53.477707 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:53 crc kubenswrapper[4853]: I0127 18:43:53.477716 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:53 crc kubenswrapper[4853]: I0127 18:43:53.477731 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:53 crc kubenswrapper[4853]: I0127 18:43:53.477741 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:53Z","lastTransitionTime":"2026-01-27T18:43:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:53 crc kubenswrapper[4853]: I0127 18:43:53.580649 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:53 crc kubenswrapper[4853]: I0127 18:43:53.580693 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:53 crc kubenswrapper[4853]: I0127 18:43:53.580706 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:53 crc kubenswrapper[4853]: I0127 18:43:53.580727 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:53 crc kubenswrapper[4853]: I0127 18:43:53.580740 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:53Z","lastTransitionTime":"2026-01-27T18:43:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:43:53 crc kubenswrapper[4853]: I0127 18:43:53.684031 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:53 crc kubenswrapper[4853]: I0127 18:43:53.684082 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:53 crc kubenswrapper[4853]: I0127 18:43:53.684095 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:53 crc kubenswrapper[4853]: I0127 18:43:53.684138 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:53 crc kubenswrapper[4853]: I0127 18:43:53.684156 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:53Z","lastTransitionTime":"2026-01-27T18:43:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:43:53 crc kubenswrapper[4853]: I0127 18:43:53.787004 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:53 crc kubenswrapper[4853]: I0127 18:43:53.787072 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:53 crc kubenswrapper[4853]: I0127 18:43:53.787092 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:53 crc kubenswrapper[4853]: I0127 18:43:53.787146 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:53 crc kubenswrapper[4853]: I0127 18:43:53.787167 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:53Z","lastTransitionTime":"2026-01-27T18:43:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:53 crc kubenswrapper[4853]: I0127 18:43:53.890661 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:53 crc kubenswrapper[4853]: I0127 18:43:53.890718 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:53 crc kubenswrapper[4853]: I0127 18:43:53.890731 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:53 crc kubenswrapper[4853]: I0127 18:43:53.890749 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:53 crc kubenswrapper[4853]: I0127 18:43:53.890760 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:53Z","lastTransitionTime":"2026-01-27T18:43:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:43:53 crc kubenswrapper[4853]: I0127 18:43:53.994400 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:53 crc kubenswrapper[4853]: I0127 18:43:53.994452 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:53 crc kubenswrapper[4853]: I0127 18:43:53.994470 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:53 crc kubenswrapper[4853]: I0127 18:43:53.994536 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:53 crc kubenswrapper[4853]: I0127 18:43:53.994560 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:53Z","lastTransitionTime":"2026-01-27T18:43:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:43:54 crc kubenswrapper[4853]: I0127 18:43:54.097357 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:54 crc kubenswrapper[4853]: I0127 18:43:54.097409 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:54 crc kubenswrapper[4853]: I0127 18:43:54.097442 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:54 crc kubenswrapper[4853]: I0127 18:43:54.097458 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:54 crc kubenswrapper[4853]: I0127 18:43:54.097467 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:54Z","lastTransitionTime":"2026-01-27T18:43:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:54 crc kubenswrapper[4853]: I0127 18:43:54.116627 4853 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-27 21:49:18.219876099 +0000 UTC Jan 27 18:43:54 crc kubenswrapper[4853]: I0127 18:43:54.200096 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:54 crc kubenswrapper[4853]: I0127 18:43:54.200162 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:54 crc kubenswrapper[4853]: I0127 18:43:54.200179 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:54 crc kubenswrapper[4853]: I0127 18:43:54.200195 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:54 crc kubenswrapper[4853]: I0127 18:43:54.200206 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:54Z","lastTransitionTime":"2026-01-27T18:43:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:43:54 crc kubenswrapper[4853]: I0127 18:43:54.310354 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:54 crc kubenswrapper[4853]: I0127 18:43:54.310420 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:54 crc kubenswrapper[4853]: I0127 18:43:54.310433 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:54 crc kubenswrapper[4853]: I0127 18:43:54.310452 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:54 crc kubenswrapper[4853]: I0127 18:43:54.310465 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:54Z","lastTransitionTime":"2026-01-27T18:43:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:54 crc kubenswrapper[4853]: I0127 18:43:54.413747 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:54 crc kubenswrapper[4853]: I0127 18:43:54.413796 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:54 crc kubenswrapper[4853]: I0127 18:43:54.413808 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:54 crc kubenswrapper[4853]: I0127 18:43:54.413829 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:54 crc kubenswrapper[4853]: I0127 18:43:54.413844 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:54Z","lastTransitionTime":"2026-01-27T18:43:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:43:54 crc kubenswrapper[4853]: I0127 18:43:54.517712 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:54 crc kubenswrapper[4853]: I0127 18:43:54.518216 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:54 crc kubenswrapper[4853]: I0127 18:43:54.518226 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:54 crc kubenswrapper[4853]: I0127 18:43:54.518241 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:54 crc kubenswrapper[4853]: I0127 18:43:54.518252 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:54Z","lastTransitionTime":"2026-01-27T18:43:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:43:54 crc kubenswrapper[4853]: I0127 18:43:54.621572 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:54 crc kubenswrapper[4853]: I0127 18:43:54.621616 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:54 crc kubenswrapper[4853]: I0127 18:43:54.621627 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:54 crc kubenswrapper[4853]: I0127 18:43:54.621641 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:54 crc kubenswrapper[4853]: I0127 18:43:54.621651 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:54Z","lastTransitionTime":"2026-01-27T18:43:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:54 crc kubenswrapper[4853]: I0127 18:43:54.724360 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:54 crc kubenswrapper[4853]: I0127 18:43:54.724453 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:54 crc kubenswrapper[4853]: I0127 18:43:54.724480 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:54 crc kubenswrapper[4853]: I0127 18:43:54.724517 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:54 crc kubenswrapper[4853]: I0127 18:43:54.724579 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:54Z","lastTransitionTime":"2026-01-27T18:43:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:43:54 crc kubenswrapper[4853]: I0127 18:43:54.826848 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:54 crc kubenswrapper[4853]: I0127 18:43:54.826898 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:54 crc kubenswrapper[4853]: I0127 18:43:54.826907 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:54 crc kubenswrapper[4853]: I0127 18:43:54.826928 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:54 crc kubenswrapper[4853]: I0127 18:43:54.826938 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:54Z","lastTransitionTime":"2026-01-27T18:43:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:43:54 crc kubenswrapper[4853]: I0127 18:43:54.930005 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:54 crc kubenswrapper[4853]: I0127 18:43:54.930062 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:54 crc kubenswrapper[4853]: I0127 18:43:54.930074 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:54 crc kubenswrapper[4853]: I0127 18:43:54.930093 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:54 crc kubenswrapper[4853]: I0127 18:43:54.930106 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:54Z","lastTransitionTime":"2026-01-27T18:43:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:54 crc kubenswrapper[4853]: I0127 18:43:54.942372 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:54 crc kubenswrapper[4853]: I0127 18:43:54.942686 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:54 crc kubenswrapper[4853]: I0127 18:43:54.942738 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:54 crc kubenswrapper[4853]: I0127 18:43:54.942782 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:54 crc kubenswrapper[4853]: I0127 18:43:54.942828 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:54Z","lastTransitionTime":"2026-01-27T18:43:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:43:54 crc kubenswrapper[4853]: E0127 18:43:54.964456 4853 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:43:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:43:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:54Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:43:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:43:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:54Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"10ff71d2-3e1e-470a-b646-0c487d7259d5\\\",\\\"systemUUID\\\":\\\"f3eb2985-2316-42c4-9a73-507610f5aaf9\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:54Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:54 crc kubenswrapper[4853]: I0127 18:43:54.968691 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:54 crc kubenswrapper[4853]: I0127 18:43:54.968717 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
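The status patch itself is rejected before it reaches the API object: the node.network-node-identity.openshift.io webhook at https://127.0.0.1:9743 presents a certificate that expired 2025-08-24, while the node clock reads 2026-01-27. To confirm what that endpoint is actually serving, a quick diagnostic sketch is shown below; InsecureSkipVerify is used only so the handshake completes and the peer certificate can be read, which is exactly the check that fails in the log.

```go
package main

import (
	"crypto/tls"
	"fmt"
)

func main() {
	// Endpoint taken from the webhook URL in the error record above.
	conn, err := tls.Dial("tcp", "127.0.0.1:9743", &tls.Config{
		// Diagnostic only: skip verification so we can inspect the cert
		// instead of failing the handshake the way the kubelet's client does.
		InsecureSkipVerify: true,
	})
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	// The leaf certificate is first in the peer chain.
	cert := conn.ConnectionState().PeerCertificates[0]
	fmt.Println("subject: ", cert.Subject)
	fmt.Println("NotAfter:", cert.NotAfter) // the log reports 2025-08-24T17:21:41Z
}
```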
event="NodeHasNoDiskPressure" Jan 27 18:43:54 crc kubenswrapper[4853]: I0127 18:43:54.968728 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:54 crc kubenswrapper[4853]: I0127 18:43:54.968741 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:54 crc kubenswrapper[4853]: I0127 18:43:54.968750 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:54Z","lastTransitionTime":"2026-01-27T18:43:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:43:54 crc kubenswrapper[4853]: E0127 18:43:54.989627 4853 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:43:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:43:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:54Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:43:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:43:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:54Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"10ff71d2-3e1e-470a-b646-0c487d7259d5\\\",\\\"systemUUID\\\":\\\"f3eb2985-2316-42c4-9a73-507610f5aaf9\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:54Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:54 crc kubenswrapper[4853]: I0127 18:43:54.993693 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:54 crc kubenswrapper[4853]: I0127 18:43:54.993719 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
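The err= payload in these records is the strategic-merge patch the kubelet POSTs for its node status; the $setElementOrder/conditions directive pins the ordering of the merged conditions list. A stripped-down sketch of how such a patch body is shaped is shown below; it is illustrative only, since the kubelet actually derives the patch by diffing the cached and updated Node objects rather than assembling it by hand.

```go
package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	// Minimal strategic-merge patch over status.conditions, shaped like the
	// payload in the "failed to patch status" records above (illustrative only).
	patch := map[string]any{
		"status": map[string]any{
			"$setElementOrder/conditions": []map[string]string{
				{"type": "MemoryPressure"},
				{"type": "DiskPressure"},
				{"type": "PIDPressure"},
				{"type": "Ready"},
			},
			"conditions": []map[string]string{
				{
					"type":   "Ready",
					"status": "False",
					"reason": "KubeletNotReady",
				},
			},
		},
	}
	b, err := json.Marshal(patch)
	if err != nil {
		panic(err)
	}
	fmt.Println(string(b))
}
```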
event="NodeHasNoDiskPressure" Jan 27 18:43:54 crc kubenswrapper[4853]: I0127 18:43:54.993728 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:54 crc kubenswrapper[4853]: I0127 18:43:54.993741 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:54 crc kubenswrapper[4853]: I0127 18:43:54.993759 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:54Z","lastTransitionTime":"2026-01-27T18:43:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:43:55 crc kubenswrapper[4853]: E0127 18:43:55.015716 4853 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:43:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:43:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:54Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:43:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:43:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:54Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"10ff71d2-3e1e-470a-b646-0c487d7259d5\\\",\\\"systemUUID\\\":\\\"f3eb2985-2316-42c4-9a73-507610f5aaf9\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:55Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:55 crc kubenswrapper[4853]: I0127 18:43:55.020588 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:55 crc kubenswrapper[4853]: I0127 18:43:55.020626 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
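Every one of the repeated patch failures above traces back to a single cause: the serving certificate of the node.network-node-identity.openshift.io webhook expired on 2025-08-24, while the node clock reads 2026-01-27, so every TLS handshake to 127.0.0.1:9743 fails verification. A minimal Go sketch of the same validity check that Go's TLS stack performs; the certificate path here is hypothetical, since the log does not say where the webhook's cert is mounted:

```go
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	// Hypothetical path; the real serving certificate lives wherever
	// the network-node-identity webhook mounts it.
	data, err := os.ReadFile("/tmp/webhook-serving.crt")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		panic("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	now := time.Now()
	// This is the comparison that fails in the log: the current time
	// is after NotAfter, so verification reports an expired cert.
	if now.After(cert.NotAfter) {
		fmt.Printf("expired: current time %s is after %s\n",
			now.UTC().Format(time.RFC3339), cert.NotAfter.UTC().Format(time.RFC3339))
	} else if now.Before(cert.NotBefore) {
		fmt.Println("not yet valid")
	} else {
		fmt.Println("certificate is within its validity window")
	}
}
```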
event="NodeHasNoDiskPressure" Jan 27 18:43:55 crc kubenswrapper[4853]: I0127 18:43:55.020639 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:55 crc kubenswrapper[4853]: I0127 18:43:55.020656 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:55 crc kubenswrapper[4853]: I0127 18:43:55.020667 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:55Z","lastTransitionTime":"2026-01-27T18:43:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:43:55 crc kubenswrapper[4853]: E0127 18:43:55.037636 4853 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:43:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:43:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:55Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:43:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:43:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:55Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"10ff71d2-3e1e-470a-b646-0c487d7259d5\\\",\\\"systemUUID\\\":\\\"f3eb2985-2316-42c4-9a73-507610f5aaf9\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:55Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:55 crc kubenswrapper[4853]: I0127 18:43:55.042535 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:55 crc kubenswrapper[4853]: I0127 18:43:55.042597 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
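The rejected request is a strategic merge patch against the node's status: the "$setElementOrder/conditions" directive visible in the payload fixes the ordering of the conditions list while the individual condition entries are merged. A simplified Go sketch of building that payload shape; the struct here is a stand-in for the k8s.io/api/core/v1 types the real kubelet uses, and the message is abbreviated:

```go
package main

import (
	"encoding/json"
	"fmt"
	"time"
)

// Simplified mirror of the condition objects in the patch payload above.
type nodeCondition struct {
	Type               string    `json:"type"`
	Status             string    `json:"status"`
	LastHeartbeatTime  time.Time `json:"lastHeartbeatTime"`
	LastTransitionTime time.Time `json:"lastTransitionTime"`
	Reason             string    `json:"reason"`
	Message            string    `json:"message"`
}

func main() {
	now := time.Date(2026, 1, 27, 18, 43, 55, 0, time.UTC)
	patch := map[string]any{
		"status": map[string]any{
			// Strategic-merge-patch directive: preserves the order of
			// the conditions list while merging individual entries.
			"$setElementOrder/conditions": []map[string]string{
				{"type": "MemoryPressure"}, {"type": "DiskPressure"},
				{"type": "PIDPressure"}, {"type": "Ready"},
			},
			"conditions": []nodeCondition{{
				Type:               "Ready",
				Status:             "False",
				LastHeartbeatTime:  now,
				LastTransitionTime: now,
				Reason:             "KubeletNotReady",
				Message:            "container runtime network not ready",
			}},
		},
	}
	out, _ := json.MarshalIndent(patch, "", "  ")
	fmt.Println(string(out))
}
```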
event="NodeHasNoDiskPressure" Jan 27 18:43:55 crc kubenswrapper[4853]: I0127 18:43:55.042616 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:55 crc kubenswrapper[4853]: I0127 18:43:55.042641 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:55 crc kubenswrapper[4853]: I0127 18:43:55.042660 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:55Z","lastTransitionTime":"2026-01-27T18:43:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:43:55 crc kubenswrapper[4853]: E0127 18:43:55.062535 4853 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:43:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:43:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:55Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:43:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:43:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:55Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"10ff71d2-3e1e-470a-b646-0c487d7259d5\\\",\\\"systemUUID\\\":\\\"f3eb2985-2316-42c4-9a73-507610f5aaf9\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:55Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:55 crc kubenswrapper[4853]: E0127 18:43:55.062773 4853 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 27 18:43:55 crc kubenswrapper[4853]: I0127 18:43:55.065203 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
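The "exceeds retry count" line closes the loop: the kubelet makes a small, fixed number of patch attempts per status sync (five in the upstream kubelet, via the nodeStatusUpdateRetry constant) before giving up until the next sync interval. A hedged sketch of that control flow; the function names below are hypothetical stand-ins, not the kubelet's actual ones:

```go
package main

import (
	"errors"
	"fmt"
)

// Mirrors the upstream kubelet constant: five attempts per sync.
const nodeStatusUpdateRetry = 5

// Stand-in for the real API call; in this log every attempt fails
// because the admission webhook's serving certificate has expired.
func patchNodeStatus(attempt int) error {
	return errors.New("failed calling webhook: x509: certificate has expired or is not yet valid")
}

func updateNodeStatus() error {
	for i := 0; i < nodeStatusUpdateRetry; i++ {
		if err := patchNodeStatus(i); err != nil {
			fmt.Printf("Error updating node status, will retry: %v\n", err)
			continue
		}
		return nil
	}
	return errors.New("update node status exceeds retry count")
}

func main() {
	if err := updateNodeStatus(); err != nil {
		fmt.Println("Unable to update node status:", err)
	}
}
```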
event="NodeHasSufficientMemory" Jan 27 18:43:55 crc kubenswrapper[4853]: I0127 18:43:55.065294 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:55 crc kubenswrapper[4853]: I0127 18:43:55.065309 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:55 crc kubenswrapper[4853]: I0127 18:43:55.065326 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:55 crc kubenswrapper[4853]: I0127 18:43:55.065339 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:55Z","lastTransitionTime":"2026-01-27T18:43:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:43:55 crc kubenswrapper[4853]: I0127 18:43:55.112486 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 18:43:55 crc kubenswrapper[4853]: I0127 18:43:55.112568 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wdzg4" Jan 27 18:43:55 crc kubenswrapper[4853]: I0127 18:43:55.112657 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 18:43:55 crc kubenswrapper[4853]: E0127 18:43:55.112673 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 18:43:55 crc kubenswrapper[4853]: I0127 18:43:55.112670 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 18:43:55 crc kubenswrapper[4853]: E0127 18:43:55.112768 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-wdzg4" podUID="29407244-fbfe-4d37-a33e-7d59df1c22fd" Jan 27 18:43:55 crc kubenswrapper[4853]: E0127 18:43:55.112909 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 18:43:55 crc kubenswrapper[4853]: E0127 18:43:55.113028 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 18:43:55 crc kubenswrapper[4853]: I0127 18:43:55.116885 4853 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-03 08:55:57.655846175 +0000 UTC Jan 27 18:43:55 crc kubenswrapper[4853]: I0127 18:43:55.168413 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:55 crc kubenswrapper[4853]: I0127 18:43:55.168464 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:55 crc kubenswrapper[4853]: I0127 18:43:55.168484 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:55 crc kubenswrapper[4853]: I0127 18:43:55.168509 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:55 crc kubenswrapper[4853]: I0127 18:43:55.168525 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:55Z","lastTransitionTime":"2026-01-27T18:43:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:43:55 crc kubenswrapper[4853]: I0127 18:43:55.271668 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:55 crc kubenswrapper[4853]: I0127 18:43:55.271735 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:55 crc kubenswrapper[4853]: I0127 18:43:55.271758 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:55 crc kubenswrapper[4853]: I0127 18:43:55.271788 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:55 crc kubenswrapper[4853]: I0127 18:43:55.271811 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:55Z","lastTransitionTime":"2026-01-27T18:43:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:55 crc kubenswrapper[4853]: I0127 18:43:55.375088 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:55 crc kubenswrapper[4853]: I0127 18:43:55.375166 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:55 crc kubenswrapper[4853]: I0127 18:43:55.375175 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:55 crc kubenswrapper[4853]: I0127 18:43:55.375189 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:55 crc kubenswrapper[4853]: I0127 18:43:55.375199 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:55Z","lastTransitionTime":"2026-01-27T18:43:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:43:55 crc kubenswrapper[4853]: I0127 18:43:55.477989 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:55 crc kubenswrapper[4853]: I0127 18:43:55.478050 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:55 crc kubenswrapper[4853]: I0127 18:43:55.478067 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:55 crc kubenswrapper[4853]: I0127 18:43:55.478092 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:55 crc kubenswrapper[4853]: I0127 18:43:55.478113 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:55Z","lastTransitionTime":"2026-01-27T18:43:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:43:55 crc kubenswrapper[4853]: I0127 18:43:55.580113 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:55 crc kubenswrapper[4853]: I0127 18:43:55.580180 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:55 crc kubenswrapper[4853]: I0127 18:43:55.580191 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:55 crc kubenswrapper[4853]: I0127 18:43:55.580207 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:55 crc kubenswrapper[4853]: I0127 18:43:55.580218 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:55Z","lastTransitionTime":"2026-01-27T18:43:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:55 crc kubenswrapper[4853]: I0127 18:43:55.683717 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:55 crc kubenswrapper[4853]: I0127 18:43:55.683788 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:55 crc kubenswrapper[4853]: I0127 18:43:55.683811 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:55 crc kubenswrapper[4853]: I0127 18:43:55.683920 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:55 crc kubenswrapper[4853]: I0127 18:43:55.684008 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:55Z","lastTransitionTime":"2026-01-27T18:43:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:43:55 crc kubenswrapper[4853]: I0127 18:43:55.787188 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:55 crc kubenswrapper[4853]: I0127 18:43:55.787243 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:55 crc kubenswrapper[4853]: I0127 18:43:55.787256 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:55 crc kubenswrapper[4853]: I0127 18:43:55.787276 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:55 crc kubenswrapper[4853]: I0127 18:43:55.787291 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:55Z","lastTransitionTime":"2026-01-27T18:43:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:43:55 crc kubenswrapper[4853]: I0127 18:43:55.893367 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:55 crc kubenswrapper[4853]: I0127 18:43:55.893453 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:55 crc kubenswrapper[4853]: I0127 18:43:55.893476 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:55 crc kubenswrapper[4853]: I0127 18:43:55.893505 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:55 crc kubenswrapper[4853]: I0127 18:43:55.893523 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:55Z","lastTransitionTime":"2026-01-27T18:43:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:55 crc kubenswrapper[4853]: I0127 18:43:55.996751 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:55 crc kubenswrapper[4853]: I0127 18:43:55.996815 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:55 crc kubenswrapper[4853]: I0127 18:43:55.996854 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:55 crc kubenswrapper[4853]: I0127 18:43:55.996876 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:55 crc kubenswrapper[4853]: I0127 18:43:55.996888 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:55Z","lastTransitionTime":"2026-01-27T18:43:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:43:56 crc kubenswrapper[4853]: I0127 18:43:56.099437 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:56 crc kubenswrapper[4853]: I0127 18:43:56.099489 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:56 crc kubenswrapper[4853]: I0127 18:43:56.099503 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:56 crc kubenswrapper[4853]: I0127 18:43:56.099522 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:56 crc kubenswrapper[4853]: I0127 18:43:56.099535 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:56Z","lastTransitionTime":"2026-01-27T18:43:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:56 crc kubenswrapper[4853]: I0127 18:43:56.117063 4853 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-08 00:45:04.134195693 +0000 UTC Jan 27 18:43:56 crc kubenswrapper[4853]: I0127 18:43:56.202356 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:56 crc kubenswrapper[4853]: I0127 18:43:56.202403 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:56 crc kubenswrapper[4853]: I0127 18:43:56.202414 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:56 crc kubenswrapper[4853]: I0127 18:43:56.202432 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:56 crc kubenswrapper[4853]: I0127 18:43:56.202443 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:56Z","lastTransitionTime":"2026-01-27T18:43:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:43:56 crc kubenswrapper[4853]: I0127 18:43:56.305246 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:56 crc kubenswrapper[4853]: I0127 18:43:56.305300 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:56 crc kubenswrapper[4853]: I0127 18:43:56.305314 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:56 crc kubenswrapper[4853]: I0127 18:43:56.305877 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:56 crc kubenswrapper[4853]: I0127 18:43:56.305896 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:56Z","lastTransitionTime":"2026-01-27T18:43:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:56 crc kubenswrapper[4853]: I0127 18:43:56.408629 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:56 crc kubenswrapper[4853]: I0127 18:43:56.408683 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:56 crc kubenswrapper[4853]: I0127 18:43:56.408697 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:56 crc kubenswrapper[4853]: I0127 18:43:56.408726 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:56 crc kubenswrapper[4853]: I0127 18:43:56.408745 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:56Z","lastTransitionTime":"2026-01-27T18:43:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:43:56 crc kubenswrapper[4853]: I0127 18:43:56.511060 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:56 crc kubenswrapper[4853]: I0127 18:43:56.511185 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:56 crc kubenswrapper[4853]: I0127 18:43:56.511214 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:56 crc kubenswrapper[4853]: I0127 18:43:56.511245 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:56 crc kubenswrapper[4853]: I0127 18:43:56.511268 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:56Z","lastTransitionTime":"2026-01-27T18:43:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:43:56 crc kubenswrapper[4853]: I0127 18:43:56.613685 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:56 crc kubenswrapper[4853]: I0127 18:43:56.613729 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:56 crc kubenswrapper[4853]: I0127 18:43:56.613739 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:56 crc kubenswrapper[4853]: I0127 18:43:56.613754 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:56 crc kubenswrapper[4853]: I0127 18:43:56.613765 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:56Z","lastTransitionTime":"2026-01-27T18:43:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:56 crc kubenswrapper[4853]: I0127 18:43:56.716528 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:56 crc kubenswrapper[4853]: I0127 18:43:56.716605 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:56 crc kubenswrapper[4853]: I0127 18:43:56.716628 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:56 crc kubenswrapper[4853]: I0127 18:43:56.716659 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:56 crc kubenswrapper[4853]: I0127 18:43:56.716683 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:56Z","lastTransitionTime":"2026-01-27T18:43:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:43:56 crc kubenswrapper[4853]: I0127 18:43:56.819000 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:56 crc kubenswrapper[4853]: I0127 18:43:56.819058 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:56 crc kubenswrapper[4853]: I0127 18:43:56.819068 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:56 crc kubenswrapper[4853]: I0127 18:43:56.819096 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:56 crc kubenswrapper[4853]: I0127 18:43:56.819106 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:56Z","lastTransitionTime":"2026-01-27T18:43:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:43:56 crc kubenswrapper[4853]: I0127 18:43:56.921414 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:56 crc kubenswrapper[4853]: I0127 18:43:56.921452 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:56 crc kubenswrapper[4853]: I0127 18:43:56.921462 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:56 crc kubenswrapper[4853]: I0127 18:43:56.921474 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:56 crc kubenswrapper[4853]: I0127 18:43:56.921483 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:56Z","lastTransitionTime":"2026-01-27T18:43:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:57 crc kubenswrapper[4853]: I0127 18:43:57.024133 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:57 crc kubenswrapper[4853]: I0127 18:43:57.024177 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:57 crc kubenswrapper[4853]: I0127 18:43:57.024187 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:57 crc kubenswrapper[4853]: I0127 18:43:57.024204 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:57 crc kubenswrapper[4853]: I0127 18:43:57.024216 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:57Z","lastTransitionTime":"2026-01-27T18:43:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:43:57 crc kubenswrapper[4853]: I0127 18:43:57.111701 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 18:43:57 crc kubenswrapper[4853]: I0127 18:43:57.111801 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 18:43:57 crc kubenswrapper[4853]: E0127 18:43:57.111871 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 18:43:57 crc kubenswrapper[4853]: I0127 18:43:57.111889 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wdzg4" Jan 27 18:43:57 crc kubenswrapper[4853]: I0127 18:43:57.111911 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 18:43:57 crc kubenswrapper[4853]: E0127 18:43:57.112024 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 18:43:57 crc kubenswrapper[4853]: E0127 18:43:57.112213 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-wdzg4" podUID="29407244-fbfe-4d37-a33e-7d59df1c22fd" Jan 27 18:43:57 crc kubenswrapper[4853]: E0127 18:43:57.112304 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 18:43:57 crc kubenswrapper[4853]: I0127 18:43:57.118165 4853 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-16 21:41:34.531421367 +0000 UTC Jan 27 18:43:57 crc kubenswrapper[4853]: I0127 18:43:57.128684 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:57 crc kubenswrapper[4853]: I0127 18:43:57.128749 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:57 crc kubenswrapper[4853]: I0127 18:43:57.128770 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:57 crc kubenswrapper[4853]: I0127 18:43:57.128794 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:57 crc kubenswrapper[4853]: I0127 18:43:57.128806 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:57Z","lastTransitionTime":"2026-01-27T18:43:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:43:57 crc kubenswrapper[4853]: I0127 18:43:57.231965 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:57 crc kubenswrapper[4853]: I0127 18:43:57.232005 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:57 crc kubenswrapper[4853]: I0127 18:43:57.232016 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:57 crc kubenswrapper[4853]: I0127 18:43:57.232033 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:57 crc kubenswrapper[4853]: I0127 18:43:57.232044 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:57Z","lastTransitionTime":"2026-01-27T18:43:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:57 crc kubenswrapper[4853]: I0127 18:43:57.334462 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:57 crc kubenswrapper[4853]: I0127 18:43:57.334526 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:57 crc kubenswrapper[4853]: I0127 18:43:57.334541 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:57 crc kubenswrapper[4853]: I0127 18:43:57.334557 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:57 crc kubenswrapper[4853]: I0127 18:43:57.334569 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:57Z","lastTransitionTime":"2026-01-27T18:43:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:43:57 crc kubenswrapper[4853]: I0127 18:43:57.437420 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:57 crc kubenswrapper[4853]: I0127 18:43:57.437466 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:57 crc kubenswrapper[4853]: I0127 18:43:57.437477 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:57 crc kubenswrapper[4853]: I0127 18:43:57.437495 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:57 crc kubenswrapper[4853]: I0127 18:43:57.437507 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:57Z","lastTransitionTime":"2026-01-27T18:43:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:43:57 crc kubenswrapper[4853]: I0127 18:43:57.539596 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:57 crc kubenswrapper[4853]: I0127 18:43:57.539667 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:57 crc kubenswrapper[4853]: I0127 18:43:57.539680 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:57 crc kubenswrapper[4853]: I0127 18:43:57.539698 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:57 crc kubenswrapper[4853]: I0127 18:43:57.539713 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:57Z","lastTransitionTime":"2026-01-27T18:43:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:57 crc kubenswrapper[4853]: I0127 18:43:57.642477 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:57 crc kubenswrapper[4853]: I0127 18:43:57.642532 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:57 crc kubenswrapper[4853]: I0127 18:43:57.642543 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:57 crc kubenswrapper[4853]: I0127 18:43:57.642558 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:57 crc kubenswrapper[4853]: I0127 18:43:57.642570 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:57Z","lastTransitionTime":"2026-01-27T18:43:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:43:57 crc kubenswrapper[4853]: I0127 18:43:57.745022 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:57 crc kubenswrapper[4853]: I0127 18:43:57.745073 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:57 crc kubenswrapper[4853]: I0127 18:43:57.745086 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:57 crc kubenswrapper[4853]: I0127 18:43:57.745104 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:57 crc kubenswrapper[4853]: I0127 18:43:57.745136 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:57Z","lastTransitionTime":"2026-01-27T18:43:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:43:57 crc kubenswrapper[4853]: I0127 18:43:57.847180 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:57 crc kubenswrapper[4853]: I0127 18:43:57.847212 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:57 crc kubenswrapper[4853]: I0127 18:43:57.847223 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:57 crc kubenswrapper[4853]: I0127 18:43:57.847237 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:57 crc kubenswrapper[4853]: I0127 18:43:57.847247 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:57Z","lastTransitionTime":"2026-01-27T18:43:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:57 crc kubenswrapper[4853]: I0127 18:43:57.949690 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:57 crc kubenswrapper[4853]: I0127 18:43:57.949722 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:57 crc kubenswrapper[4853]: I0127 18:43:57.949730 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:57 crc kubenswrapper[4853]: I0127 18:43:57.949742 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:57 crc kubenswrapper[4853]: I0127 18:43:57.949752 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:57Z","lastTransitionTime":"2026-01-27T18:43:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:43:58 crc kubenswrapper[4853]: I0127 18:43:58.054607 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:58 crc kubenswrapper[4853]: I0127 18:43:58.054705 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:58 crc kubenswrapper[4853]: I0127 18:43:58.054730 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:58 crc kubenswrapper[4853]: I0127 18:43:58.054770 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:58 crc kubenswrapper[4853]: I0127 18:43:58.054800 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:58Z","lastTransitionTime":"2026-01-27T18:43:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:58 crc kubenswrapper[4853]: I0127 18:43:58.120317 4853 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-02 16:46:43.74099082 +0000 UTC Jan 27 18:43:58 crc kubenswrapper[4853]: I0127 18:43:58.128509 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f980f1d965582a58bd18d993850b5bd5f21849c1d1c0275c81fdbb25d549e6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:58Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:58 crc kubenswrapper[4853]: I0127 18:43:58.139748 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5a0b2fe5dbcc81055c001720c8ab69b7bd1989fe60d9ef4063c96c53a3f4536\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:58Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:58 crc kubenswrapper[4853]: I0127 18:43:58.149548 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-2hzcp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c088d143-dd9c-4c77-b9a3-3a0113306f41\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b1b85e99c8381dbf8e86fb211770c49ebd11497f15c73254d6482b45d507654\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:43:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2zqtv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:43:03Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-2hzcp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:58Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:58 crc kubenswrapper[4853]: I0127 18:43:58.159531 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-6gqj2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b8a89b1e-bef8-4cb7-930c-480d3125778c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d886a21d3eb5af71f0dc43a77a71a2d80c11b89013b8b77dd9e680bed9c09a54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:43:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xbhn6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://36ec3ff8ba2e6f89b4c9a8177dc986ce198013a4b5392fc600a191a9d854241a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:43:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xbhn6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:43:03Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-6gqj2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:58Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:58 crc kubenswrapper[4853]: I0127 18:43:58.160300 4853 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:58 crc kubenswrapper[4853]: I0127 18:43:58.160353 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:58 crc kubenswrapper[4853]: I0127 18:43:58.160376 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:58 crc kubenswrapper[4853]: I0127 18:43:58.160411 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:58 crc kubenswrapper[4853]: I0127 18:43:58.160432 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:58Z","lastTransitionTime":"2026-01-27T18:43:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:43:58 crc kubenswrapper[4853]: I0127 18:43:58.178358 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hdtbk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ebbc7598-422a-43ad-ae98-88e57ec80b9c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a5e0da6c76e9510cda57fa243b0a721d160745a63e88a9aa736807af73864d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:43:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cq4vs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8de5b4d8d6553f77b012954fddfcb337c9b25ba98d94ef27831b50f63672377\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:43:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cq4vs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7937ea08bd25bed35d9386a8c870c88ff3f58eeec1ba1a2c55bdfa260017f9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:43:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cq4vs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f8095ca05481aa2d17d10ae848c2d052452f3bfa83b6ac23a75d0f59d84a604\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:43:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cq4vs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://efa308f95f35833395528dbe46b9e3d8f25800c18126c75d3db793f9c7945d30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:43:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cq4vs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a23ced79c532f6fcb0f4efcf743b934f7640deb3a7b1b879032416ee2c9b8d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:43:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cq4vs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ef638d607e6dd3da7728f611dbadcc220fbecf3
e6d8c85d5911cfd1ebe2cac0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ef638d607e6dd3da7728f611dbadcc220fbecf3e6d8c85d5911cfd1ebe2cac0\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T18:43:28Z\\\",\\\"message\\\":\\\"work controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:28Z is after 2025-08-24T17:21:41Z]\\\\nI0127 18:43:28.981598 6566 model_client.go:382] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-ingress-operator/metrics]} name:Service_openshift-ingress-operator/metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.244:9393:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {d8772e82-b0a4-4596-87d3-3d517c13344b}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0127 18:43:28.981636 6566 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-ingress-operator/metrics]} name:Service_openshift-ingress-operator/metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T18:43:28Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-hdtbk_openshift-ovn-kubernetes(ebbc7598-422a-43ad-ae98-88e57ec80b9c)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cq4vs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4da02162adae947a3ab62fcbeba04da031f5189c42947da27ec21df5a480b4b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:43:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cq4vs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e81785fa8e88ca85b42840c9f11efd5774320c7b13588e01695e428426ecda4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e81785fa8e88ca85b42840c9f11efd5774320c7b13588e01695e428426ecda4d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:43:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cq4vs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:43:04Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hdtbk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:58Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:58 crc kubenswrapper[4853]: I0127 18:43:58.189257 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-wdzg4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"29407244-fbfe-4d37-a33e-7d59df1c22fd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgqf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgqf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:43:17Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-wdzg4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:58Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:58 crc kubenswrapper[4853]: I0127 18:43:58.201347 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a645d521-3c59-41b6-92b0-e0d9cff0bab5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d1c368397595a923d917720ba80fdbcdd3700eaf983e6f50f1be14332fc13b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7e783bc12efaa8b16a12346ff490c56587678e9c57bc396046989f216d49373b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7e783bc12efaa8b16a12346ff490c56587678e9c57bc396046989f216d49373b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:42:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:42:38Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:58Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:58 crc kubenswrapper[4853]: I0127 18:43:58.213392 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c77668e-2627-46e1-b7f5-a99455378d85\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6f38d96da9d500a022fd5b7fbe4019623d08fb9733ba3b1a5f2f107f1901a28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ae32d32392bded0188e0b83c05a33e57dc10be293ddee84b8b12416d0e64fc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd606f868e97e5c0f90517a0de662da2512679e27b0ececbcaa02c9f7e79c4b6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b210c08a6d15dd6d40daabfcc4cb94985e2a3957ffa501b32d5331f3d20f8908\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:42:38Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:58Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:58 crc kubenswrapper[4853]: I0127 18:43:58.230877 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:58Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:58 crc kubenswrapper[4853]: I0127 18:43:58.243755 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:58Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:58 crc kubenswrapper[4853]: I0127 18:43:58.261799 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:58Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:58 crc kubenswrapper[4853]: I0127 18:43:58.264748 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:58 crc kubenswrapper[4853]: I0127 18:43:58.264787 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:58 crc kubenswrapper[4853]: I0127 18:43:58.264822 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:58 crc kubenswrapper[4853]: I0127 18:43:58.264842 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:58 crc kubenswrapper[4853]: I0127 18:43:58.264857 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:58Z","lastTransitionTime":"2026-01-27T18:43:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:58 crc kubenswrapper[4853]: I0127 18:43:58.276350 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d7d5a0e2cb909a09c6325d43fed80f9ac231f17c4904fda6da22031d974a50b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7fba84b5abd317487efa9758191c6d4e0e878809711ccecdd2f98c2c7659c890\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:58Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:58 crc kubenswrapper[4853]: I0127 18:43:58.290394 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-w4d5n" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dd2c07de-2ac9-4074-9fb0-519cfaf37f69\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7df211c586c12b9dbadf6a48722a3059e65f42e0c70cf73a6e197091983980c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9662d5a5a46a3043eeddbb21a4c7da0545beb87a0da7da3e96dfe3b3cf803430\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T18:43:50Z\\\",\\\"message\\\":\\\"2026-01-27T18:43:05+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_d2dd683b-493b-441e-b17e-5e422275f69e\\\\n2026-01-27T18:43:05+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_d2dd683b-493b-441e-b17e-5e422275f69e to /host/opt/cni/bin/\\\\n2026-01-27T18:43:05Z [verbose] multus-daemon started\\\\n2026-01-27T18:43:05Z [verbose] Readiness Indicator file check\\\\n2026-01-27T18:43:50Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T18:43:03Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:43:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8jkvb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:43:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-w4d5n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:58Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:58 crc kubenswrapper[4853]: I0127 18:43:58.304090 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7x9tl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"99e56db9-b380-4e21-9810-2dfa1517d5ac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://94b4164fce297bdb91f8d062c22463931e45e9194e17e4102f568e6f04c08680\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:43:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sbc8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec4d041ed140516bb311297a6618188794b4950b0199a05f3a028215b75b2dfd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:43:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sbc8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:43:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-7x9tl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:58Z is after 2025-08-24T17:21:41Z" Jan 27 
18:43:58 crc kubenswrapper[4853]: I0127 18:43:58.316573 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eccf4b23-863a-490c-ba35-1b03d360e200\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7cfb91267c8977bea938d9584cf8047ae6f6b4bff6a9d74c623f8903cd3416ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58d32f27b39964fe7195a0e25264b28a01c6c29753912edac25d07eef4f37cc7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a941071ee01b24389cae64dfc6cd851cde6b352121235ebf9e9bec77f8bcfb69\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\
\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae13abce33d48960f367ee4160e730c0f88cd877bd0d615cecac63d2a35b8cc5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://213668bb8a0478fe7fba2a64a45dd4aec0dfd6794a29977633b4e0fd1bcbe352\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T18:42:58Z\\\",\\\"message\\\":\\\"le observer\\\\nW0127 18:42:57.781245 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 18:42:57.782660 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 18:42:57.784271 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3951302993/tls.crt::/tmp/serving-cert-3951302993/tls.key\\\\\\\"\\\\nI0127 18:42:58.344426 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 18:42:58.346603 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 18:42:58.346626 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 18:42:58.346650 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 18:42:58.346655 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 18:42:58.351830 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 18:42:58.351854 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 18:42:58.351860 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 18:42:58.351866 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 18:42:58.351870 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 18:42:58.351874 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nI0127 18:42:58.351873 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0127 18:42:58.351879 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0127 18:42:58.354790 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:52Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:43:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0058055ea5c0242bdb042c45a00203d344383e845de280c81bc7695416563666\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:40Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://00d05692e328aacfc251e3b583e1da6d1f4f677e93d07cec2aaa7aea5c3038cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://00d05692e328aacfc251e3b583e1da6d1f4f677e93d07cec2aaa7aea5c3038cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:42:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:42:38Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:58Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:58 crc kubenswrapper[4853]: I0127 18:43:58.334415 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a5b9c54a-360e-431b-b35a-eea549616487\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e57571b0b75d3ac6eaa28ddbc474ddf47f632c158372a965dfe4c30f54d8a2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2dcf34028a2a1a0d41f57ebced90e711bfa6d0f199a2b12bc2bcd6d0642d10b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://14e9331353060208631cfca86d748339935f8581d50a234f2864eaffdd98ff39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7d4896cb23e4a91356e7fabb5dfcf05e46bf7e
8c34b6bd73d63a16422d015ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a02f84ba42baba137145a54bfa742b12b0f94aaf41801347eb8a803a6acd1b2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15ab34ce94a12a65d8b187ce438f27e37d4b45e0b9ed080c79d6e4633e777658\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://15ab34ce94a12a65d8b187ce438f27e37d4b45e0b9ed080c79d6e4633e777658\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:42:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d972c76164f33d83bd6d211da2b4ecec08c5d88dd4cd474c5d5a8122649ea1cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d972c76164f33d83bd6d211da2b4ecec08c5d88dd4cd474c5d5a8122649ea1cd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:42:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:40Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b9435046e30c8cda89d2ab6f312406bc2cc6db0c03ae6cdea1eb5acf8a804df1\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9435046e30c8cda89d2ab6f312406bc2cc6db0c03ae6cdea1eb5acf8a804df1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:42:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:42:38Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:58Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:58 crc kubenswrapper[4853]: I0127 18:43:58.347715 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3fa5b5cf-a6bd-4726-aa83-4ae7fa257dd2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d34d7b284bf0d4da5b618f3afc8546d8de1c57118035eca06d1d8d53afd59503\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://60ef6ab0f3537b63366418829fb851bf2b21df5c3509f1e6ea61a3ba0530f537\\\",\\\"image\\\":\\\"quay.io/openshift-
release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ab7681c5d4c9e9e1e003ecff21e3a39e40164693ef6b8fcdded71650dcff4ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9f4494451f75c64fbdca006455d3ce09b14f45939b855d782629e25af517ed0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9f4494451f75c64fbdca006455d3ce09b14f45939b855d782629e25af517ed0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:42:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:39Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:42:38Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:58Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:58 crc kubenswrapper[4853]: I0127 18:43:58.356897 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-l59xt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f9e82bc6-1fab-4815-a64e-2ebbf8b72315\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d6f7b775ee615931d713c3fcf51828673cd8414a08c46c04eb38abd37c58d5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:43:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnhhx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:43:03Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-l59xt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:58Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:58 crc kubenswrapper[4853]: I0127 18:43:58.367565 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:58 crc kubenswrapper[4853]: I0127 18:43:58.367628 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:58 crc kubenswrapper[4853]: I0127 18:43:58.367642 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:58 crc kubenswrapper[4853]: I0127 18:43:58.367665 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:58 crc kubenswrapper[4853]: I0127 18:43:58.367682 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:58Z","lastTransitionTime":"2026-01-27T18:43:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:43:58 crc kubenswrapper[4853]: I0127 18:43:58.372039 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ght98" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ce95829c-f3fb-493c-bf9a-a3515fe6ddac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://918b69aa50072e0227c96f268fe68b5dfc90acf2b8b93b7fdee73695fc6cbab0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:43:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttzfq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10a6c7c9dd5d82c1b1f5f2d323a2f530c6b2e18362eedbbe74cc8c570732331f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10a6c7c9dd5d82c1b1f5f2d323a2f530c6b2e18362eedbbe74cc8c570732331f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:43:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttzfq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b1e8c337
bda8a6b6c0848db7c95140385d545fc7dd729f002d21c248e2bbf881\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b1e8c337bda8a6b6c0848db7c95140385d545fc7dd729f002d21c248e2bbf881\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:43:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:43:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttzfq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6f0216b6ea3819db267200bb464f15f2f7e4de1c65df0871d6f7e70ecb54839\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d6f0216b6ea3819db267200bb464f15f2f7e4de1c65df0871d6f7e70ecb54839\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:43:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:43:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttzfq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://205b9e4e915af8e2031c48180f4d56d6075e94f8a8ff8b40fe6b46229fb1aa03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://205b9e4e915af8e2031c48180f4d56d6075e94f8a8ff8b40fe6b46229fb1aa03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:43:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:43:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/e
ntrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttzfq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://12c64d2f527e9e9bd0975df15cbeca44c11d215f2591052b030b6afa87600576\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://12c64d2f527e9e9bd0975df15cbeca44c11d215f2591052b030b6afa87600576\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:43:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:43:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttzfq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c8f991246bd43ecaff0d08c9fb12a3347d52de48db0ea8f8a674bbdc9945c13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3c8f991246bd43ecaff0d08c9fb12a3347d52de48db0ea8f8a674bbdc9945c13\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:43:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:43:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttzfq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:43:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ght98\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:58Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:58 crc kubenswrapper[4853]: I0127 18:43:58.470975 4853 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:58 crc kubenswrapper[4853]: I0127 18:43:58.471054 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:58 crc kubenswrapper[4853]: I0127 18:43:58.471073 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:58 crc kubenswrapper[4853]: I0127 18:43:58.471103 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:58 crc kubenswrapper[4853]: I0127 18:43:58.471168 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:58Z","lastTransitionTime":"2026-01-27T18:43:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:43:58 crc kubenswrapper[4853]: I0127 18:43:58.574603 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:58 crc kubenswrapper[4853]: I0127 18:43:58.574676 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:58 crc kubenswrapper[4853]: I0127 18:43:58.574694 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:58 crc kubenswrapper[4853]: I0127 18:43:58.574726 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:58 crc kubenswrapper[4853]: I0127 18:43:58.574749 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:58Z","lastTransitionTime":"2026-01-27T18:43:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:43:58 crc kubenswrapper[4853]: I0127 18:43:58.678674 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:58 crc kubenswrapper[4853]: I0127 18:43:58.678761 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:58 crc kubenswrapper[4853]: I0127 18:43:58.678784 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:58 crc kubenswrapper[4853]: I0127 18:43:58.678814 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:58 crc kubenswrapper[4853]: I0127 18:43:58.678841 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:58Z","lastTransitionTime":"2026-01-27T18:43:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:58 crc kubenswrapper[4853]: I0127 18:43:58.782193 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:58 crc kubenswrapper[4853]: I0127 18:43:58.782228 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:58 crc kubenswrapper[4853]: I0127 18:43:58.782240 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:58 crc kubenswrapper[4853]: I0127 18:43:58.782257 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:58 crc kubenswrapper[4853]: I0127 18:43:58.782269 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:58Z","lastTransitionTime":"2026-01-27T18:43:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:43:58 crc kubenswrapper[4853]: I0127 18:43:58.885213 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:58 crc kubenswrapper[4853]: I0127 18:43:58.885250 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:58 crc kubenswrapper[4853]: I0127 18:43:58.885264 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:58 crc kubenswrapper[4853]: I0127 18:43:58.885308 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:58 crc kubenswrapper[4853]: I0127 18:43:58.885321 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:58Z","lastTransitionTime":"2026-01-27T18:43:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:43:58 crc kubenswrapper[4853]: I0127 18:43:58.988659 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:58 crc kubenswrapper[4853]: I0127 18:43:58.988734 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:58 crc kubenswrapper[4853]: I0127 18:43:58.988755 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:58 crc kubenswrapper[4853]: I0127 18:43:58.988776 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:58 crc kubenswrapper[4853]: I0127 18:43:58.988792 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:58Z","lastTransitionTime":"2026-01-27T18:43:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:59 crc kubenswrapper[4853]: I0127 18:43:59.090914 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:59 crc kubenswrapper[4853]: I0127 18:43:59.090953 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:59 crc kubenswrapper[4853]: I0127 18:43:59.090963 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:59 crc kubenswrapper[4853]: I0127 18:43:59.090979 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:59 crc kubenswrapper[4853]: I0127 18:43:59.090990 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:59Z","lastTransitionTime":"2026-01-27T18:43:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:43:59 crc kubenswrapper[4853]: I0127 18:43:59.112345 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 18:43:59 crc kubenswrapper[4853]: I0127 18:43:59.112377 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 18:43:59 crc kubenswrapper[4853]: I0127 18:43:59.112419 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wdzg4" Jan 27 18:43:59 crc kubenswrapper[4853]: I0127 18:43:59.112350 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 18:43:59 crc kubenswrapper[4853]: E0127 18:43:59.112438 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 18:43:59 crc kubenswrapper[4853]: E0127 18:43:59.112489 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-wdzg4" podUID="29407244-fbfe-4d37-a33e-7d59df1c22fd" Jan 27 18:43:59 crc kubenswrapper[4853]: E0127 18:43:59.112590 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 18:43:59 crc kubenswrapper[4853]: E0127 18:43:59.112861 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 18:43:59 crc kubenswrapper[4853]: I0127 18:43:59.113063 4853 scope.go:117] "RemoveContainer" containerID="9ef638d607e6dd3da7728f611dbadcc220fbecf3e6d8c85d5911cfd1ebe2cac0" Jan 27 18:43:59 crc kubenswrapper[4853]: I0127 18:43:59.120974 4853 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-23 04:49:53.364376422 +0000 UTC Jan 27 18:43:59 crc kubenswrapper[4853]: I0127 18:43:59.193407 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:59 crc kubenswrapper[4853]: I0127 18:43:59.193445 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:59 crc kubenswrapper[4853]: I0127 18:43:59.193456 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:59 crc kubenswrapper[4853]: I0127 18:43:59.193471 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:59 crc kubenswrapper[4853]: I0127 18:43:59.193482 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:59Z","lastTransitionTime":"2026-01-27T18:43:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:43:59 crc kubenswrapper[4853]: I0127 18:43:59.296012 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:59 crc kubenswrapper[4853]: I0127 18:43:59.296047 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:59 crc kubenswrapper[4853]: I0127 18:43:59.296055 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:59 crc kubenswrapper[4853]: I0127 18:43:59.296073 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:59 crc kubenswrapper[4853]: I0127 18:43:59.296084 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:59Z","lastTransitionTime":"2026-01-27T18:43:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:59 crc kubenswrapper[4853]: I0127 18:43:59.398166 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:59 crc kubenswrapper[4853]: I0127 18:43:59.398224 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:59 crc kubenswrapper[4853]: I0127 18:43:59.398237 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:59 crc kubenswrapper[4853]: I0127 18:43:59.398266 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:59 crc kubenswrapper[4853]: I0127 18:43:59.398279 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:59Z","lastTransitionTime":"2026-01-27T18:43:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:43:59 crc kubenswrapper[4853]: I0127 18:43:59.501658 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:59 crc kubenswrapper[4853]: I0127 18:43:59.501753 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:59 crc kubenswrapper[4853]: I0127 18:43:59.501773 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:59 crc kubenswrapper[4853]: I0127 18:43:59.501810 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:59 crc kubenswrapper[4853]: I0127 18:43:59.501835 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:59Z","lastTransitionTime":"2026-01-27T18:43:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:43:59 crc kubenswrapper[4853]: I0127 18:43:59.604651 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:59 crc kubenswrapper[4853]: I0127 18:43:59.604743 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:59 crc kubenswrapper[4853]: I0127 18:43:59.604766 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:59 crc kubenswrapper[4853]: I0127 18:43:59.604796 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:59 crc kubenswrapper[4853]: I0127 18:43:59.604816 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:59Z","lastTransitionTime":"2026-01-27T18:43:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:59 crc kubenswrapper[4853]: I0127 18:43:59.605110 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-hdtbk_ebbc7598-422a-43ad-ae98-88e57ec80b9c/ovnkube-controller/2.log" Jan 27 18:43:59 crc kubenswrapper[4853]: I0127 18:43:59.612012 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hdtbk" event={"ID":"ebbc7598-422a-43ad-ae98-88e57ec80b9c","Type":"ContainerStarted","Data":"e01e1cff07c3ff9a1112970e7831ca9dc51725bbe6dd330246fa3346bd8bb1ad"} Jan 27 18:43:59 crc kubenswrapper[4853]: I0127 18:43:59.612896 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-hdtbk" Jan 27 18:43:59 crc kubenswrapper[4853]: I0127 18:43:59.639727 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eccf4b23-863a-490c-ba35-1b03d360e200\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7cfb91267c8977bea938d9584cf8047ae6f6b4bff6a9d74c623f8903cd3416ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58d32f27b39964fe7195a0e25264b28a01c6c29753912edac25d07eef4f37cc7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a941071ee
01b24389cae64dfc6cd851cde6b352121235ebf9e9bec77f8bcfb69\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae13abce33d48960f367ee4160e730c0f88cd877bd0d615cecac63d2a35b8cc5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://213668bb8a0478fe7fba2a64a45dd4aec0dfd6794a29977633b4e0fd1bcbe352\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T18:42:58Z\\\",\\\"message\\\":\\\"le observer\\\\nW0127 18:42:57.781245 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 18:42:57.782660 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 18:42:57.784271 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3951302993/tls.crt::/tmp/serving-cert-3951302993/tls.key\\\\\\\"\\\\nI0127 18:42:58.344426 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 18:42:58.346603 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 18:42:58.346626 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 18:42:58.346650 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 18:42:58.346655 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 18:42:58.351830 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 18:42:58.351854 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 18:42:58.351860 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 18:42:58.351866 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 18:42:58.351870 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 18:42:58.351874 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nI0127 18:42:58.351873 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0127 18:42:58.351879 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0127 18:42:58.354790 1 cmd.go:182] pods 
\\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:52Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:43:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0058055ea5c0242bdb042c45a00203d344383e845de280c81bc7695416563666\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:40Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://00d05692e328aacfc251e3b583e1da6d1f4f677e93d07cec2aaa7aea5c3038cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://00d05692e328aacfc251e3b583e1da6d1f4f677e93d07cec2aaa7aea5c3038cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:42:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:42:38Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:59Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:59 crc kubenswrapper[4853]: I0127 18:43:59.675794 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a5b9c54a-360e-431b-b35a-eea549616487\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e57571b0b75d3ac6eaa28ddbc474ddf47f632c158372a965dfe4c30f54d8a2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2dcf34028a2a1a0d41f57ebced90e711bfa6d0f199a2b12bc2bcd6d0642d10b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://14e9331353060208631cfca86d748339935f8581d50a234f2864eaffdd98ff39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7d4896cb23e4a91356e7fabb5dfcf05e46bf7e
8c34b6bd73d63a16422d015ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a02f84ba42baba137145a54bfa742b12b0f94aaf41801347eb8a803a6acd1b2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15ab34ce94a12a65d8b187ce438f27e37d4b45e0b9ed080c79d6e4633e777658\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://15ab34ce94a12a65d8b187ce438f27e37d4b45e0b9ed080c79d6e4633e777658\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:42:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d972c76164f33d83bd6d211da2b4ecec08c5d88dd4cd474c5d5a8122649ea1cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d972c76164f33d83bd6d211da2b4ecec08c5d88dd4cd474c5d5a8122649ea1cd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:42:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:40Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b9435046e30c8cda89d2ab6f312406bc2cc6db0c03ae6cdea1eb5acf8a804df1\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9435046e30c8cda89d2ab6f312406bc2cc6db0c03ae6cdea1eb5acf8a804df1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:42:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:42:38Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:59Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:59 crc kubenswrapper[4853]: I0127 18:43:59.688871 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3fa5b5cf-a6bd-4726-aa83-4ae7fa257dd2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d34d7b284bf0d4da5b618f3afc8546d8de1c57118035eca06d1d8d53afd59503\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://60ef6ab0f3537b63366418829fb851bf2b21df5c3509f1e6ea61a3ba0530f537\\\",\\\"image\\\":\\\"quay.io/openshift-
release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ab7681c5d4c9e9e1e003ecff21e3a39e40164693ef6b8fcdded71650dcff4ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9f4494451f75c64fbdca006455d3ce09b14f45939b855d782629e25af517ed0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9f4494451f75c64fbdca006455d3ce09b14f45939b855d782629e25af517ed0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:42:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:39Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:42:38Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:59Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:59 crc kubenswrapper[4853]: I0127 18:43:59.708019 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:59 crc kubenswrapper[4853]: I0127 18:43:59.708074 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:59 crc kubenswrapper[4853]: I0127 18:43:59.708090 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:59 crc kubenswrapper[4853]: I0127 
18:43:59.708113 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:59 crc kubenswrapper[4853]: I0127 18:43:59.708154 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:59Z","lastTransitionTime":"2026-01-27T18:43:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:43:59 crc kubenswrapper[4853]: I0127 18:43:59.708651 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-l59xt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f9e82bc6-1fab-4815-a64e-2ebbf8b72315\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d6f7b775ee615931d713c3fcf51828673cd8414a08c46c04eb38abd37c58d5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:43:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnhhx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:43:03Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-l59xt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:59Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:59 crc kubenswrapper[4853]: I0127 18:43:59.735758 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ght98" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ce95829c-f3fb-493c-bf9a-a3515fe6ddac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://918b69aa50072e0227c96f268fe68b5dfc90acf2b8b93b7fdee73695fc6cbab0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:43:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttzfq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10a6c7c9dd5d82c1b1f5f2d323a2f530c6b2e18362eedbbe74cc8c570732331f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10a6c7c9dd5d82c1b1f5f2d323a2f530c6b2e18362eedbbe74cc8c570732331f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:43:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttzfq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b1e8c337bda8a6b6c0848db7c95140385d545fc7dd729f002d21c248e2bbf881\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b1e8c337bda8a6b6c0848db7c95140385d545fc7dd729f002d21c248e2bbf881\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:43:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:43:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttzfq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6f0216b6ea3819db267200bb464f15f2f7e4de1c65df0871d6f7e70ecb54839\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d6f0216b6ea3819db267200bb464f15f2f7e4de1c65df0871d6f7e70ecb54839\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:43:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:43:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttzfq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://205b9e4e915af8e2031c48180f4d56d6075e94f8a8ff8b40fe6b46229fb1aa03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://205b9e4e915af8e2031c48180f4d56d6075e94f8a8ff8b40fe6b46229fb1aa03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:43:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:43:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttzfq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://12c64d2f527e9e9bd0975df15cbeca44c11d215f2591052b030b6afa87600576\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://12c64d2f527e9e9bd0975df15cbeca44c11d215f2591052b030b6afa87600576\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:43:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:43:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttzfq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c8f991246bd43ecaff0d08c9fb12a3347d52de48db0ea8f8a674bbdc9945c13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3c8f991246bd43ecaff0d08c9fb12a3347d52de48db0ea8f8a674bbdc9945c13\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:43:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:43:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttzfq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:43:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ght98\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:59Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:59 crc kubenswrapper[4853]: I0127 18:43:59.763485 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f980f1d965582a58bd18d993850b5bd5f21849c1d1c0275c81fdbb25d549e6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:59Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:59 crc kubenswrapper[4853]: I0127 18:43:59.782153 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5a0b2fe5dbcc81055c001720c8ab69b7bd1989fe60d9ef4063c96c53a3f4536\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:59Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:59 crc kubenswrapper[4853]: I0127 18:43:59.796326 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-2hzcp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c088d143-dd9c-4c77-b9a3-3a0113306f41\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b1b85e99c8381dbf8e86fb211770c49ebd11497f15c73254d6482b45d507654\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:43:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2zqtv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:43:03Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-2hzcp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:59Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:59 crc kubenswrapper[4853]: I0127 18:43:59.807527 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-6gqj2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b8a89b1e-bef8-4cb7-930c-480d3125778c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d886a21d3eb5af71f0dc43a77a71a2d80c11b89013b8b77dd9e680bed9c09a54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:43:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xbhn6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://36ec3ff8ba2e6f89b4c9a8177dc986ce198013a4b5392fc600a191a9d854241a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:43:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xbhn6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:43:03Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-6gqj2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:59Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:59 crc kubenswrapper[4853]: I0127 18:43:59.809974 4853 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:59 crc kubenswrapper[4853]: I0127 18:43:59.810018 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:59 crc kubenswrapper[4853]: I0127 18:43:59.810028 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:59 crc kubenswrapper[4853]: I0127 18:43:59.810042 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:59 crc kubenswrapper[4853]: I0127 18:43:59.810051 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:59Z","lastTransitionTime":"2026-01-27T18:43:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:43:59 crc kubenswrapper[4853]: I0127 18:43:59.824466 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hdtbk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ebbc7598-422a-43ad-ae98-88e57ec80b9c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a5e0da6c76e9510cda57fa243b0a721d160745a63e88a9aa736807af73864d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:43:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cq4vs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8de5b4d8d6553f77b012954fddfcb337c9b25ba98d94ef27831b50f63672377\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:43:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cq4vs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7937ea08bd25bed35d9386a8c870c88ff3f58eeec1ba1a2c55bdfa260017f9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:43:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cq4vs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f8095ca05481aa2d17d10ae848c2d052452f3bfa83b6ac23a75d0f59d84a604\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:43:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cq4vs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://efa308f95f35833395528dbe46b9e3d8f25800c18126c75d3db793f9c7945d30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:43:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cq4vs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a23ced79c532f6fcb0f4efcf743b934f7640deb3a7b1b879032416ee2c9b8d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:43:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cq4vs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e01e1cff07c3ff9a1112970e7831ca9dc51725bb
e6dd330246fa3346bd8bb1ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ef638d607e6dd3da7728f611dbadcc220fbecf3e6d8c85d5911cfd1ebe2cac0\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T18:43:28Z\\\",\\\"message\\\":\\\"work controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:28Z is after 2025-08-24T17:21:41Z]\\\\nI0127 18:43:28.981598 6566 model_client.go:382] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-ingress-operator/metrics]} name:Service_openshift-ingress-operator/metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.244:9393:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {d8772e82-b0a4-4596-87d3-3d517c13344b}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0127 18:43:28.981636 6566 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-ingress-operator/metrics]} name:Service_openshift-ingress-operator/metrics_TCP_cluster options:{GoMap:map[event:false 
hairpin_snat_ip:169.254.\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T18:43:28Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:43:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cq4vs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4da02162adae947a3ab62fcbeba04da031f5189c42947da27ec21df5a480b4b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:43:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cq4vs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainer
Statuses\\\":[{\\\"containerID\\\":\\\"cri-o://e81785fa8e88ca85b42840c9f11efd5774320c7b13588e01695e428426ecda4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e81785fa8e88ca85b42840c9f11efd5774320c7b13588e01695e428426ecda4d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:43:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cq4vs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:43:04Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hdtbk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:59Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:59 crc kubenswrapper[4853]: I0127 18:43:59.833867 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-wdzg4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"29407244-fbfe-4d37-a33e-7d59df1c22fd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgqf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgqf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:43:17Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-wdzg4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:59Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:59 crc kubenswrapper[4853]: I0127 18:43:59.842431 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a645d521-3c59-41b6-92b0-e0d9cff0bab5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d1c368397595a923d917720ba80fdbcdd3700eaf983e6f50f1be14332fc13b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7e783bc12efaa8b16a12346ff490c56587678e9c57bc396046989f216d49373b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7e783bc12efaa8b16a12346ff490c56587678e9c57bc396046989f216d49373b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:42:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:42:38Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:59Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:59 crc kubenswrapper[4853]: I0127 18:43:59.863181 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c77668e-2627-46e1-b7f5-a99455378d85\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6f38d96da9d500a022fd5b7fbe4019623d08fb9733ba3b1a5f2f107f1901a28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ae32d32392bded0188e0b83c05a33e57dc10be293ddee84b8b12416d0e64fc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd606f868e97e5c0f90517a0de662da2512679e27b0ececbcaa02c9f7e79c4b6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b210c08a6d15dd6d40daabfcc4cb94985e2a3957ffa501b32d5331f3d20f8908\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:42:38Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:59Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:59 crc kubenswrapper[4853]: I0127 18:43:59.874096 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:59Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:59 crc kubenswrapper[4853]: I0127 18:43:59.885754 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:59Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:59 crc kubenswrapper[4853]: I0127 18:43:59.897613 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:59Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:59 crc kubenswrapper[4853]: I0127 18:43:59.909857 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d7d5a0e2cb909a09c6325d43fed80f9ac231f17c4904fda6da22031d974a50b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7fba84b5abd317487efa9758191c6d4e0e878809711ccecdd2f98c2c7659c890\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:59Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:59 crc kubenswrapper[4853]: I0127 18:43:59.912928 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:59 crc kubenswrapper[4853]: I0127 18:43:59.912967 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:59 crc kubenswrapper[4853]: I0127 18:43:59.912981 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:59 crc kubenswrapper[4853]: I0127 18:43:59.913000 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:59 crc kubenswrapper[4853]: I0127 18:43:59.913012 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:59Z","lastTransitionTime":"2026-01-27T18:43:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:59 crc kubenswrapper[4853]: I0127 18:43:59.924654 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-w4d5n" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dd2c07de-2ac9-4074-9fb0-519cfaf37f69\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7df211c586c12b9dbadf6a48722a3059e65f42e0c70cf73a6e197091983980c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9662d5a5a46a3043eeddbb21a4c7da0545beb87a0da7da3e96dfe3b3cf803430\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T18:43:50Z\\\",\\\"message\\\":\\\"2026-01-27T18:43:05+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_d2dd683b-493b-441e-b17e-5e422275f69e\\\\n2026-01-27T18:43:05+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_d2dd683b-493b-441e-b17e-5e422275f69e to /host/opt/cni/bin/\\\\n2026-01-27T18:43:05Z [verbose] multus-daemon started\\\\n2026-01-27T18:43:05Z [verbose] Readiness Indicator file check\\\\n2026-01-27T18:43:50Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T18:43:03Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:43:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8jkvb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:43:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-w4d5n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:59Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:59 crc kubenswrapper[4853]: I0127 18:43:59.936609 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7x9tl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"99e56db9-b380-4e21-9810-2dfa1517d5ac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://94b4164fce297bdb91f8d062c22463931e45e9194e17e4102f568e6f04c08680\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:43:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sbc8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec4d041ed140516bb311297a6618188794b4950b0199a05f3a028215b75b2dfd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:43:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sbc8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:43:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-7x9tl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:59Z is after 2025-08-24T17:21:41Z" Jan 27 
18:44:00 crc kubenswrapper[4853]: I0127 18:44:00.015414 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:44:00 crc kubenswrapper[4853]: I0127 18:44:00.015464 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:44:00 crc kubenswrapper[4853]: I0127 18:44:00.015476 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:44:00 crc kubenswrapper[4853]: I0127 18:44:00.015493 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:44:00 crc kubenswrapper[4853]: I0127 18:44:00.015504 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:44:00Z","lastTransitionTime":"2026-01-27T18:44:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:44:00 crc kubenswrapper[4853]: I0127 18:44:00.117017 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:44:00 crc kubenswrapper[4853]: I0127 18:44:00.117054 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:44:00 crc kubenswrapper[4853]: I0127 18:44:00.117063 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:44:00 crc kubenswrapper[4853]: I0127 18:44:00.117074 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:44:00 crc kubenswrapper[4853]: I0127 18:44:00.117084 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:44:00Z","lastTransitionTime":"2026-01-27T18:44:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:44:00 crc kubenswrapper[4853]: I0127 18:44:00.121101 4853 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-21 04:46:36.521614665 +0000 UTC Jan 27 18:44:00 crc kubenswrapper[4853]: I0127 18:44:00.219730 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:44:00 crc kubenswrapper[4853]: I0127 18:44:00.219830 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:44:00 crc kubenswrapper[4853]: I0127 18:44:00.219848 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:44:00 crc kubenswrapper[4853]: I0127 18:44:00.219876 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:44:00 crc kubenswrapper[4853]: I0127 18:44:00.219893 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:44:00Z","lastTransitionTime":"2026-01-27T18:44:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:44:00 crc kubenswrapper[4853]: I0127 18:44:00.322565 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:44:00 crc kubenswrapper[4853]: I0127 18:44:00.322610 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:44:00 crc kubenswrapper[4853]: I0127 18:44:00.322622 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:44:00 crc kubenswrapper[4853]: I0127 18:44:00.322640 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:44:00 crc kubenswrapper[4853]: I0127 18:44:00.322655 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:44:00Z","lastTransitionTime":"2026-01-27T18:44:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:44:00 crc kubenswrapper[4853]: I0127 18:44:00.425844 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:44:00 crc kubenswrapper[4853]: I0127 18:44:00.425895 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:44:00 crc kubenswrapper[4853]: I0127 18:44:00.425906 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:44:00 crc kubenswrapper[4853]: I0127 18:44:00.425924 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:44:00 crc kubenswrapper[4853]: I0127 18:44:00.425936 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:44:00Z","lastTransitionTime":"2026-01-27T18:44:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:44:00 crc kubenswrapper[4853]: I0127 18:44:00.528474 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:44:00 crc kubenswrapper[4853]: I0127 18:44:00.528542 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:44:00 crc kubenswrapper[4853]: I0127 18:44:00.528562 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:44:00 crc kubenswrapper[4853]: I0127 18:44:00.528590 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:44:00 crc kubenswrapper[4853]: I0127 18:44:00.528608 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:44:00Z","lastTransitionTime":"2026-01-27T18:44:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:44:00 crc kubenswrapper[4853]: I0127 18:44:00.619616 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-hdtbk_ebbc7598-422a-43ad-ae98-88e57ec80b9c/ovnkube-controller/3.log" Jan 27 18:44:00 crc kubenswrapper[4853]: I0127 18:44:00.620577 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-hdtbk_ebbc7598-422a-43ad-ae98-88e57ec80b9c/ovnkube-controller/2.log" Jan 27 18:44:00 crc kubenswrapper[4853]: I0127 18:44:00.623929 4853 generic.go:334] "Generic (PLEG): container finished" podID="ebbc7598-422a-43ad-ae98-88e57ec80b9c" containerID="e01e1cff07c3ff9a1112970e7831ca9dc51725bbe6dd330246fa3346bd8bb1ad" exitCode=1 Jan 27 18:44:00 crc kubenswrapper[4853]: I0127 18:44:00.623978 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hdtbk" event={"ID":"ebbc7598-422a-43ad-ae98-88e57ec80b9c","Type":"ContainerDied","Data":"e01e1cff07c3ff9a1112970e7831ca9dc51725bbe6dd330246fa3346bd8bb1ad"} Jan 27 18:44:00 crc kubenswrapper[4853]: I0127 18:44:00.624018 4853 scope.go:117] "RemoveContainer" containerID="9ef638d607e6dd3da7728f611dbadcc220fbecf3e6d8c85d5911cfd1ebe2cac0" Jan 27 18:44:00 crc kubenswrapper[4853]: I0127 18:44:00.625209 4853 scope.go:117] "RemoveContainer" containerID="e01e1cff07c3ff9a1112970e7831ca9dc51725bbe6dd330246fa3346bd8bb1ad" Jan 27 18:44:00 crc kubenswrapper[4853]: E0127 18:44:00.625516 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-hdtbk_openshift-ovn-kubernetes(ebbc7598-422a-43ad-ae98-88e57ec80b9c)\"" pod="openshift-ovn-kubernetes/ovnkube-node-hdtbk" podUID="ebbc7598-422a-43ad-ae98-88e57ec80b9c" Jan 27 18:44:00 crc kubenswrapper[4853]: I0127 18:44:00.632885 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:44:00 crc kubenswrapper[4853]: I0127 18:44:00.632943 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:44:00 crc kubenswrapper[4853]: I0127 18:44:00.632958 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:44:00 crc kubenswrapper[4853]: I0127 18:44:00.632985 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:44:00 crc kubenswrapper[4853]: I0127 18:44:00.633005 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:44:00Z","lastTransitionTime":"2026-01-27T18:44:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:44:00 crc kubenswrapper[4853]: I0127 18:44:00.641318 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a645d521-3c59-41b6-92b0-e0d9cff0bab5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d1c368397595a923d917720ba80fdbcdd3700eaf983e6f50f1be14332fc13b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7e783bc12efaa8b16a12346ff490c56587678e9c57bc396046989f216d49373b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7e783bc12efaa8b16a12346ff490c56587678e9c57bc396046989f216d49373b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:42:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:42:38Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:44:00Z is after 2025-08-24T17:21:41Z" Jan 27 18:44:00 crc kubenswrapper[4853]: I0127 18:44:00.655231 4853 
status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c77668e-2627-46e1-b7f5-a99455378d85\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6f38d96da9d500a022fd5b7fbe4019623d08fb9733ba3b1a5f2f107f1901a28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ae32d32392bded0188e0b83c05a33e57dc10be293ddee84b8b12416d0e64fc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd606f868e97e5c0f90517a0de662da2512679e27b0ececbcaa02c9f7e79c4b6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://b210c08a6d15dd6d40daabfcc4cb94985e2a3957ffa501b32d5331f3d20f8908\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:42:38Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:44:00Z is after 2025-08-24T17:21:41Z" Jan 27 18:44:00 crc kubenswrapper[4853]: I0127 18:44:00.669819 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:44:00Z is after 2025-08-24T17:21:41Z" Jan 27 18:44:00 crc kubenswrapper[4853]: I0127 18:44:00.682308 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:44:00Z is after 2025-08-24T17:21:41Z" Jan 27 18:44:00 crc kubenswrapper[4853]: I0127 18:44:00.695144 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:44:00Z is after 2025-08-24T17:21:41Z" Jan 27 18:44:00 crc kubenswrapper[4853]: I0127 18:44:00.714184 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d7d5a0e2cb909a09c6325d43fed80f9ac231f17c4904fda6da22031d974a50b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7fba84b5abd317487efa9758191c6d4e0e878809711ccecdd2f98c2c7659c890\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:44:00Z is after 2025-08-24T17:21:41Z" Jan 27 18:44:00 crc kubenswrapper[4853]: I0127 18:44:00.726476 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-w4d5n" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dd2c07de-2ac9-4074-9fb0-519cfaf37f69\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7df211c586c12b9dbadf6a48722a3059e65f42e0c70cf73a6e197091983980c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9662d5a5a46a3043eeddbb21a4c7da0545beb87a0da7da3e96dfe3b3cf803430\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T18:43:50Z\\\",\\\"message\\\":\\\"2026-01-27T18:43:05+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_d2dd683b-493b-441e-b17e-5e422275f69e\\\\n2026-01-27T18:43:05+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_d2dd683b-493b-441e-b17e-5e422275f69e to /host/opt/cni/bin/\\\\n2026-01-27T18:43:05Z [verbose] multus-daemon started\\\\n2026-01-27T18:43:05Z [verbose] Readiness Indicator file check\\\\n2026-01-27T18:43:50Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T18:43:03Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:43:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8jkvb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:43:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-w4d5n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:44:00Z is after 2025-08-24T17:21:41Z" Jan 27 18:44:00 crc kubenswrapper[4853]: I0127 18:44:00.736720 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:44:00 crc kubenswrapper[4853]: I0127 18:44:00.736744 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7x9tl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"99e56db9-b380-4e21-9810-2dfa1517d5ac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://94b4164fce297bdb91f8d062c22463931e45e9194e17e4102f568e6f04c08680\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:43:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sbc8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec4d041ed140516bb311297a6618188794b4950b0199a05f3a028215b75b2dfd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:43:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sbc8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:43:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-7x9tl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:44:00Z is after 2025-08-24T17:21:41Z" Jan 27 
18:44:00 crc kubenswrapper[4853]: I0127 18:44:00.736783 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:44:00 crc kubenswrapper[4853]: I0127 18:44:00.736941 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:44:00 crc kubenswrapper[4853]: I0127 18:44:00.736962 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:44:00 crc kubenswrapper[4853]: I0127 18:44:00.736973 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:44:00Z","lastTransitionTime":"2026-01-27T18:44:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:44:00 crc kubenswrapper[4853]: I0127 18:44:00.755555 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eccf4b23-863a-490c-ba35-1b03d360e200\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7cfb91267c8977bea938d9584cf8047ae6f6b4bff6a9d74c623f8903cd3416ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58d32f27b39964fe7195a0e25264b28a01c6c29753912edac25d07eef4f37cc7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\
\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a941071ee01b24389cae64dfc6cd851cde6b352121235ebf9e9bec77f8bcfb69\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae13abce33d48960f367ee4160e730c0f88cd877bd0d615cecac63d2a35b8cc5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://213668bb8a0478fe7fba2a64a45dd4aec0dfd6794a29977633b4e0fd1bcbe352\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T18:42:58Z\\\",\\\"message\\\":\\\"le observer\\\\nW0127 18:42:57.781245 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 18:42:57.782660 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 18:42:57.784271 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3951302993/tls.crt::/tmp/serving-cert-3951302993/tls.key\\\\\\\"\\\\nI0127 18:42:58.344426 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 18:42:58.346603 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 18:42:58.346626 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 18:42:58.346650 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 18:42:58.346655 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 18:42:58.351830 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 18:42:58.351854 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 18:42:58.351860 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 18:42:58.351866 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 18:42:58.351870 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 18:42:58.351874 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nI0127 18:42:58.351873 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints 
registered and discovery information is complete\\\\nW0127 18:42:58.351879 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0127 18:42:58.354790 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:52Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:43:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0058055ea5c0242bdb042c45a00203d344383e845de280c81bc7695416563666\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:40Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://00d05692e328aacfc251e3b583e1da6d1f4f677e93d07cec2aaa7aea5c3038cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://00d05692e328aacfc251e3b583e1da6d1f4f677e93d07cec2aaa7aea5c3038cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:42:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:42:38Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:44:00Z is after 2025-08-24T17:21:41Z" Jan 27 18:44:00 crc kubenswrapper[4853]: I0127 18:44:00.777982 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a5b9c54a-360e-431b-b35a-eea549616487\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e57571b0b75d3ac6eaa28ddbc474ddf47f632c158372a965dfe4c30f54d8a2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2dcf34028a2a1a0d41f57ebced90e711bfa6d0f199a2b12bc2bcd6d0642d10b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://14e9331353060208631cfca86d748339935f8581d50a234f2864eaffdd98ff39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7d4896cb23e4a91356e7fabb5dfcf05e46bf7e
8c34b6bd73d63a16422d015ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a02f84ba42baba137145a54bfa742b12b0f94aaf41801347eb8a803a6acd1b2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15ab34ce94a12a65d8b187ce438f27e37d4b45e0b9ed080c79d6e4633e777658\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://15ab34ce94a12a65d8b187ce438f27e37d4b45e0b9ed080c79d6e4633e777658\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:42:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d972c76164f33d83bd6d211da2b4ecec08c5d88dd4cd474c5d5a8122649ea1cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d972c76164f33d83bd6d211da2b4ecec08c5d88dd4cd474c5d5a8122649ea1cd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:42:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:40Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b9435046e30c8cda89d2ab6f312406bc2cc6db0c03ae6cdea1eb5acf8a804df1\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9435046e30c8cda89d2ab6f312406bc2cc6db0c03ae6cdea1eb5acf8a804df1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:42:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:42:38Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:44:00Z is after 2025-08-24T17:21:41Z" Jan 27 18:44:00 crc kubenswrapper[4853]: I0127 18:44:00.788150 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3fa5b5cf-a6bd-4726-aa83-4ae7fa257dd2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d34d7b284bf0d4da5b618f3afc8546d8de1c57118035eca06d1d8d53afd59503\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://60ef6ab0f3537b63366418829fb851bf2b21df5c3509f1e6ea61a3ba0530f537\\\",\\\"image\\\":\\\"quay.io/openshift-
release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ab7681c5d4c9e9e1e003ecff21e3a39e40164693ef6b8fcdded71650dcff4ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9f4494451f75c64fbdca006455d3ce09b14f45939b855d782629e25af517ed0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9f4494451f75c64fbdca006455d3ce09b14f45939b855d782629e25af517ed0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:42:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:39Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:42:38Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:44:00Z is after 2025-08-24T17:21:41Z" Jan 27 18:44:00 crc kubenswrapper[4853]: I0127 18:44:00.796820 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-l59xt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f9e82bc6-1fab-4815-a64e-2ebbf8b72315\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d6f7b775ee615931d713c3fcf51828673cd8414a08c46c04eb38abd37c58d5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:43:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnhhx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:43:03Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-l59xt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:44:00Z is after 2025-08-24T17:21:41Z" Jan 27 18:44:00 crc kubenswrapper[4853]: I0127 18:44:00.807812 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ght98" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ce95829c-f3fb-493c-bf9a-a3515fe6ddac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://918b69aa50072e0227c96f268fe68b5dfc90acf2b8b93b7fdee73695fc6cbab0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:43:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttzfq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10a6c7c9dd5d82c1b1f5f2d323a2f530c6b2e18362eedbbe74cc8c570732331f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10a6c7c9dd5d82c1b1f5f2d323a2f530c6b2e18362eedbbe74cc8c570732331f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:43:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttzfq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b1e8c337bda8a6b6c0848db7c95140385d545fc7dd729f002d21c248e2bbf881\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b1e8c337bda8a6b6c0848db7c95140385d545fc7dd729f002d21c248e2bbf881\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:43:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:43:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttzfq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6f0216b6ea3819db267200bb464f15f2f7e4de1c65df0871d6f7e70ecb54839\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d6f0216b6ea3819db267200bb464f15f2f7e4de1c65df0871d6f7e70ecb54839\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:43:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:43:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttzfq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://205b9e4e915af8e2031c48180f4d56d6075e94f8a8ff8b40fe6b46229fb1aa03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://205b9e4e915af8e2031c48180f4d56d6075e94f8a8ff8b40fe6b46229fb1aa03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:43:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:43:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttzfq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://12c64d2f527e9e9bd0975df15cbeca44c11d215f2591052b030b6afa87600576\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://12c64d2f527e9e9bd0975df15cbeca44c11d215f2591052b030b6afa87600576\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:43:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:43:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttzfq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c8f991246bd43ecaff0d08c9fb12a3347d52de48db0ea8f8a674bbdc9945c13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3c8f991246bd43ecaff0d08c9fb12a3347d52de48db0ea8f8a674bbdc9945c13\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:43:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:43:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttzfq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:43:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ght98\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:44:00Z is after 2025-08-24T17:21:41Z" Jan 27 18:44:00 crc kubenswrapper[4853]: I0127 18:44:00.819720 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f980f1d965582a58bd18d993850b5bd5f21849c1d1c0275c81fdbb25d549e6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:44:00Z is after 2025-08-24T17:21:41Z" Jan 27 18:44:00 crc kubenswrapper[4853]: I0127 18:44:00.829426 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5a0b2fe5dbcc81055c001720c8ab69b7bd1989fe60d9ef4063c96c53a3f4536\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:44:00Z is after 2025-08-24T17:21:41Z" Jan 27 18:44:00 crc kubenswrapper[4853]: I0127 18:44:00.839323 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:44:00 crc kubenswrapper[4853]: I0127 18:44:00.839375 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:44:00 crc kubenswrapper[4853]: I0127 18:44:00.839392 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:44:00 crc kubenswrapper[4853]: I0127 18:44:00.839416 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:44:00 crc kubenswrapper[4853]: I0127 18:44:00.839435 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:44:00Z","lastTransitionTime":"2026-01-27T18:44:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:44:00 crc kubenswrapper[4853]: I0127 18:44:00.841265 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-2hzcp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c088d143-dd9c-4c77-b9a3-3a0113306f41\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b1b85e99c8381dbf8e86fb211770c49ebd11497f15c73254d6482b45d507654\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:43:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2zqtv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:43:03Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-2hzcp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:44:00Z is after 2025-08-24T17:21:41Z" Jan 27 18:44:00 crc kubenswrapper[4853]: I0127 18:44:00.852437 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-6gqj2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b8a89b1e-bef8-4cb7-930c-480d3125778c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d886a21d3eb5af71f0dc43a77a71a2d80c11b89013b8b77dd9e680bed9c09a54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:43:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xbhn6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://36ec3ff8ba2e6f89b4c9a8177dc986ce198013a4b5392fc600a191a9d854241a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:43:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xbhn6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:43:03Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-6gqj2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:44:00Z is after 2025-08-24T17:21:41Z" Jan 27 18:44:00 crc kubenswrapper[4853]: I0127 18:44:00.881161 4853 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hdtbk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ebbc7598-422a-43ad-ae98-88e57ec80b9c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a5e0da6c76e9510cda57fa243b0a721d160745a63e88a9aa736807af73864d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:43:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cq4vs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8de5b4d8d6553f77b012954fddfcb337c9b25ba98d94ef27831b50f63672377\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:43:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cq4vs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7937ea08bd25bed35d9386a8c870c88ff3f58eeec1ba1a2c55bdfa260017f9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:43:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cq4vs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f8095ca05481aa2d17d10ae848c2d052452f3bfa83b6ac23a75d0f59d84a604\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:43:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cq4vs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://efa308f95f35833395528dbe46b9e3d8f25800c18126c75d3db793f9c7945d30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:43:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cq4vs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a23ced79c532f6fcb0f4efcf743b934f7640deb3a7b1b879032416ee2c9b8d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\
\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:43:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cq4vs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e01e1cff07c3ff9a1112970e7831ca9dc51725bbe6dd330246fa3346bd8bb1ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ef638d607e6dd3da7728f611dbadcc220fbecf3e6d8c85d5911cfd1ebe2cac0\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T18:43:28Z\\\",\\\"message\\\":\\\"work controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:28Z is after 2025-08-24T17:21:41Z]\\\\nI0127 18:43:28.981598 6566 model_client.go:382] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-ingress-operator/metrics]} name:Service_openshift-ingress-operator/metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.244:9393:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {d8772e82-b0a4-4596-87d3-3d517c13344b}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0127 18:43:28.981636 6566 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-ingress-operator/metrics]} name:Service_openshift-ingress-operator/metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T18:43:28Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e01e1cff07c3ff9a1112970e7831ca9dc51725bbe6dd330246fa3346bd8bb1ad\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T18:44:00Z\\\",\\\"message\\\":\\\" 
7015 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:44:00Z is after 2025-08-24T17:21:41Z]\\\\nI0127 18:44:00.122649 7015 services_controller.go:434] Service openshift-operator-lifecycle-manager/packageserver-service retrieved from lister for network=default: \\\\u0026Service{ObjectMeta:{packageserver-service openshift-operator-lifecycle-manager a60a1f74-c6ff-4c81-96ae-27ba9796ba61 5485 0 2025-02-23 05:23:24 +0000 UTC \\\\u003cnil\\\\u003e \\\\u003cnil\\\\u003e map[olm.managed:true] map[] [{operators.coreos.com/v1alpha1 ClusterServiceVersion packageserver bbc08db6-5ba4-4fc4-b49d-26331e1e728b 0xc0078b05ed 0xc0078b05ee}] [] []},Spec:ServiceSpec{Ports:[]ServicePo\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T18:43:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cq4vs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d
a02162adae947a3ab62fcbeba04da031f5189c42947da27ec21df5a480b4b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:43:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cq4vs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e81785fa8e88ca85b42840c9f11efd5774320c7b13588e01695e428426ecda4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e81785fa8e88ca85b42840c9f11efd5774320c7b13588e01695e428426ecda4d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:43:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cq4vs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:43:04Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hdtbk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:44:00Z is after 2025-08-24T17:21:41Z" Jan 27 18:44:00 crc kubenswrapper[4853]: I0127 18:44:00.896197 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-wdzg4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"29407244-fbfe-4d37-a33e-7d59df1c22fd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgqf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgqf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:43:17Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-wdzg4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:44:00Z is after 2025-08-24T17:21:41Z" Jan 27 18:44:00 crc kubenswrapper[4853]: I0127 18:44:00.942946 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:44:00 crc kubenswrapper[4853]: I0127 18:44:00.942994 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:44:00 crc kubenswrapper[4853]: I0127 18:44:00.943003 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Jan 27 18:44:00 crc kubenswrapper[4853]: I0127 18:44:00.943019 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:44:00 crc kubenswrapper[4853]: I0127 18:44:00.943032 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:44:00Z","lastTransitionTime":"2026-01-27T18:44:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:44:01 crc kubenswrapper[4853]: I0127 18:44:01.044795 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:44:01 crc kubenswrapper[4853]: I0127 18:44:01.044832 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:44:01 crc kubenswrapper[4853]: I0127 18:44:01.044844 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:44:01 crc kubenswrapper[4853]: I0127 18:44:01.044859 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:44:01 crc kubenswrapper[4853]: I0127 18:44:01.044870 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:44:01Z","lastTransitionTime":"2026-01-27T18:44:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:44:01 crc kubenswrapper[4853]: I0127 18:44:01.063690 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 18:44:01 crc kubenswrapper[4853]: I0127 18:44:01.063770 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 18:44:01 crc kubenswrapper[4853]: I0127 18:44:01.063813 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 18:44:01 crc kubenswrapper[4853]: E0127 18:44:01.063832 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-27 18:45:05.06381051 +0000 UTC m=+147.526353393 (durationBeforeRetry 1m4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 18:44:01 crc kubenswrapper[4853]: E0127 18:44:01.063913 4853 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered
Jan 27 18:44:01 crc kubenswrapper[4853]: E0127 18:44:01.063910 4853 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered
Jan 27 18:44:01 crc kubenswrapper[4853]: E0127 18:44:01.063959 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-27 18:45:05.063950434 +0000 UTC m=+147.526493317 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered
Jan 27 18:44:01 crc kubenswrapper[4853]: E0127 18:44:01.063983 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-27 18:45:05.063965055 +0000 UTC m=+147.526507978 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered
Jan 27 18:44:01 crc kubenswrapper[4853]: I0127 18:44:01.111674 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 27 18:44:01 crc kubenswrapper[4853]: I0127 18:44:01.111715 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wdzg4"
Jan 27 18:44:01 crc kubenswrapper[4853]: I0127 18:44:01.111827 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 27 18:44:01 crc kubenswrapper[4853]: E0127 18:44:01.111888 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 27 18:44:01 crc kubenswrapper[4853]: I0127 18:44:01.112088 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 27 18:44:01 crc kubenswrapper[4853]: E0127 18:44:01.112090 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-wdzg4" podUID="29407244-fbfe-4d37-a33e-7d59df1c22fd"
Jan 27 18:44:01 crc kubenswrapper[4853]: E0127 18:44:01.112163 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 27 18:44:01 crc kubenswrapper[4853]: E0127 18:44:01.112338 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 27 18:44:01 crc kubenswrapper[4853]: I0127 18:44:01.121528 4853 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-24 12:13:46.267738367 +0000 UTC
Jan 27 18:44:01 crc kubenswrapper[4853]: I0127 18:44:01.147939 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 18:44:01 crc kubenswrapper[4853]: I0127 18:44:01.147995 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 18:44:01 crc kubenswrapper[4853]: I0127 18:44:01.148010 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 18:44:01 crc kubenswrapper[4853]: I0127 18:44:01.148032 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 18:44:01 crc kubenswrapper[4853]: I0127 18:44:01.148041 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:44:01Z","lastTransitionTime":"2026-01-27T18:44:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 18:44:01 crc kubenswrapper[4853]: I0127 18:44:01.164943 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 27 18:44:01 crc kubenswrapper[4853]: I0127 18:44:01.165082 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 27 18:44:01 crc kubenswrapper[4853]: E0127 18:44:01.165344 4853 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Jan 27 18:44:01 crc kubenswrapper[4853]: E0127 18:44:01.165400 4853 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Jan 27 18:44:01 crc kubenswrapper[4853]: E0127 18:44:01.165422 4853 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 27 18:44:01 crc kubenswrapper[4853]: E0127 18:44:01.165430 4853 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Jan 27 18:44:01 crc kubenswrapper[4853]: E0127 18:44:01.165522 4853 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Jan 27 18:44:01 crc kubenswrapper[4853]: E0127 18:44:01.165556 4853 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 27 18:44:01 crc kubenswrapper[4853]: E0127 18:44:01.165531 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-27 18:45:05.165498716 +0000 UTC m=+147.628041629 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 27 18:44:01 crc kubenswrapper[4853]: E0127 18:44:01.165671 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-27 18:45:05.16564013 +0000 UTC m=+147.628183013 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 27 18:44:01 crc kubenswrapper[4853]: I0127 18:44:01.252798 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 18:44:01 crc kubenswrapper[4853]: I0127 18:44:01.252850 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 18:44:01 crc kubenswrapper[4853]: I0127 18:44:01.252861 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 18:44:01 crc kubenswrapper[4853]: I0127 18:44:01.252881 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 18:44:01 crc kubenswrapper[4853]: I0127 18:44:01.252892 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:44:01Z","lastTransitionTime":"2026-01-27T18:44:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 18:44:01 crc kubenswrapper[4853]: I0127 18:44:01.355224 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 18:44:01 crc kubenswrapper[4853]: I0127 18:44:01.355263 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 18:44:01 crc kubenswrapper[4853]: I0127 18:44:01.355271 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 18:44:01 crc kubenswrapper[4853]: I0127 18:44:01.355284 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 18:44:01 crc kubenswrapper[4853]: I0127 18:44:01.355292 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:44:01Z","lastTransitionTime":"2026-01-27T18:44:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 18:44:01 crc kubenswrapper[4853]: I0127 18:44:01.458744 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 18:44:01 crc kubenswrapper[4853]: I0127 18:44:01.458862 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 18:44:01 crc kubenswrapper[4853]: I0127 18:44:01.458889 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 18:44:01 crc kubenswrapper[4853]: I0127 18:44:01.458929 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 18:44:01 crc kubenswrapper[4853]: I0127 18:44:01.458953 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:44:01Z","lastTransitionTime":"2026-01-27T18:44:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 18:44:01 crc kubenswrapper[4853]: I0127 18:44:01.561719 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 18:44:01 crc kubenswrapper[4853]: I0127 18:44:01.561792 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 18:44:01 crc kubenswrapper[4853]: I0127 18:44:01.561811 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 18:44:01 crc kubenswrapper[4853]: I0127 18:44:01.561837 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 18:44:01 crc kubenswrapper[4853]: I0127 18:44:01.561856 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:44:01Z","lastTransitionTime":"2026-01-27T18:44:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 18:44:01 crc kubenswrapper[4853]: I0127 18:44:01.634417 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-hdtbk_ebbc7598-422a-43ad-ae98-88e57ec80b9c/ovnkube-controller/3.log"
Jan 27 18:44:01 crc kubenswrapper[4853]: I0127 18:44:01.637466 4853 scope.go:117] "RemoveContainer" containerID="e01e1cff07c3ff9a1112970e7831ca9dc51725bbe6dd330246fa3346bd8bb1ad"
Jan 27 18:44:01 crc kubenswrapper[4853]: E0127 18:44:01.637592 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-hdtbk_openshift-ovn-kubernetes(ebbc7598-422a-43ad-ae98-88e57ec80b9c)\"" pod="openshift-ovn-kubernetes/ovnkube-node-hdtbk" podUID="ebbc7598-422a-43ad-ae98-88e57ec80b9c"
Jan 27 18:44:01 crc kubenswrapper[4853]: I0127 18:44:01.654275 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3fa5b5cf-a6bd-4726-aa83-4ae7fa257dd2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d34d7b284bf0d4da5b618f3afc8546d8de1c57118035eca06d1d8d53afd59503\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://60ef6ab0f3537b63366418829fb851bf2b21df5c3509f1e6ea61a3ba0530f537\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ab7681c5d4c9e9e1e003ecff21e3a39e40164693ef6b8fcdded71650dcff4ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9f4494451f75c64fbdca006455d3ce09b14f45939b855d782629e25af517ed0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9f4494451f75c64fbdca006455d3ce09b14f45939b855d782629e25af517ed0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:42:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:39Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:42:38Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:44:01Z is after 2025-08-24T17:21:41Z"
Jan 27 18:44:01 crc kubenswrapper[4853]: I0127 18:44:01.663947 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 18:44:01 crc kubenswrapper[4853]: I0127 18:44:01.663990 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 18:44:01 crc kubenswrapper[4853]: I0127 18:44:01.664006 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 18:44:01 crc kubenswrapper[4853]: I0127 18:44:01.664031 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 18:44:01 crc kubenswrapper[4853]: I0127 18:44:01.664047 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:44:01Z","lastTransitionTime":"2026-01-27T18:44:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 18:44:01 crc kubenswrapper[4853]: I0127 18:44:01.666255 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-l59xt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f9e82bc6-1fab-4815-a64e-2ebbf8b72315\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d6f7b775ee615931d713c3fcf51828673cd8414a08c46c04eb38abd37c58d5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:43:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnhhx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:43:03Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-l59xt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:44:01Z is after 2025-08-24T17:21:41Z"
Jan 27 18:44:01 crc kubenswrapper[4853]: I0127 18:44:01.680821 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ght98" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ce95829c-f3fb-493c-bf9a-a3515fe6ddac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://918b69aa50072e0227c96f268fe68b5dfc90acf2b8b93b7fdee73695fc6cbab0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:43:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttzfq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10a6c7c9dd5d82c1b1f5f2d323a2f530c6b2e18362eedbbe74cc8c570732331f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10a6c7c9dd5d82c1b1f5f2d323a2f530c6b2e18362eedbbe74cc8c570732331f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:43:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttzfq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b1e8c337bda8a6b6c0848db7c95140385d545fc7dd729f002d21c248e2bbf881\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b1e8c337bda8a6b6c0848db7c95140385d545fc7dd729f002d21c248e2bbf881\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:43:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:43:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttzfq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6f0216b6ea3819db267200bb464f15f2f7e4de1c65df0871d6f7e70ecb54839\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d6f0216b6ea3819db267200bb464f15f2f7e4de1c65df0871d6f7e70ecb54839\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:43:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:43:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttzfq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://205b9e4e915af8e2031c48180f4d56d6075e94f8a8ff8b40fe6b46229fb1aa03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://205b9e4e915af8e2031c48180f4d56d6075e94f8a8ff8b40fe6b46229fb1aa03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:43:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:43:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttzfq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://12c64d2f527e9e9bd0975df15cbeca44c11d215f2591052b030b6afa87600576\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://12c64d2f527e9e9bd0975df15cbeca44c11d215f2591052b030b6afa87600576\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:43:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:43:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttzfq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c8f991246bd43ecaff0d08c9fb12a3347d52de48db0ea8f8a674bbdc9945c13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3c8f991246bd43ecaff0d08c9fb12a3347d52de48db0ea8f8a674bbdc9945c13\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:43:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:43:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttzfq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:43:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ght98\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:44:01Z is after 2025-08-24T17:21:41Z"
Jan 27 18:44:01 crc kubenswrapper[4853]: I0127 18:44:01.699371 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eccf4b23-863a-490c-ba35-1b03d360e200\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7cfb91267c8977bea938d9584cf8047ae6f6b4bff6a9d74c623f8903cd3416ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58d32f27b39964fe7195a0e25264b28a01c6c29753912edac25d07eef4f37cc7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a941071ee01b24389cae64dfc6cd851cde6b352121235ebf9e9bec77f8bcfb69\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae13abce33d48960f367ee4160e730c0f88cd877bd0d615cecac63d2a35b8cc5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://213668bb8a0478fe7fba2a64a45dd4aec0dfd6794a29977633b4e0fd1bcbe352\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T18:42:58Z\\\",\\\"message\\\":\\\"le observer\\\\nW0127 18:42:57.781245 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 18:42:57.782660 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 18:42:57.784271 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3951302993/tls.crt::/tmp/serving-cert-3951302993/tls.key\\\\\\\"\\\\nI0127 18:42:58.344426 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 18:42:58.346603 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 18:42:58.346626 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 18:42:58.346650 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 18:42:58.346655 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 18:42:58.351830 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 18:42:58.351854 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 18:42:58.351860 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 18:42:58.351866 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 18:42:58.351870 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 18:42:58.351874 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nI0127 18:42:58.351873 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0127 18:42:58.351879 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0127 18:42:58.354790 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:52Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:43:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0058055ea5c0242bdb042c45a00203d344383e845de280c81bc7695416563666\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:40Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://00d05692e328aacfc251e3b583e1da6d1f4f677e93d07cec2aaa7aea5c3038cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://00d05692e328aacfc251e3b583e1da6d1f4f677e93d07cec2aaa7aea5c3038cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:42:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:42:38Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:44:01Z is after 2025-08-24T17:21:41Z"
Jan 27 18:44:01 crc kubenswrapper[4853]: I0127 18:44:01.719520 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a5b9c54a-360e-431b-b35a-eea549616487\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e57571b0b75d3ac6eaa28ddbc474ddf47f632c158372a965dfe4c30f54d8a2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2dcf34028a2a1a0d41f57ebced90e711bfa6d0f199a2b12bc2bcd6d0642d10b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://14e9331353060208631cfca86d748339935f8581d50a234f2864eaffdd98ff39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7d4896cb23e4a91356e7fabb5dfcf05e46bf7e8c34b6bd73d63a16422d015ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a02f84ba42baba137145a54bfa742b12b0f94aaf41801347eb8a803a6acd1b2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15ab34ce94a12a65d8b187ce438f27e37d4b45e0b9ed080c79d6e4633e777658\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://15ab34ce94a12a65d8b187ce438f27e37d4b45e0b9ed080c79d6e4633e777658\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:42:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d972c76164f33d83bd6d211da2b4ecec08c5d88dd4cd474c5d5a8122649ea1cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d972c76164f33d83bd6d211da2b4ecec08c5d88dd4cd474c5d5a8122649ea1cd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:42:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:40Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b9435046e30c8cda89d2ab6f312406bc2cc6db0c03ae6cdea1eb5acf8a804df1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9435046e30c8cda89d2ab6f312406bc2cc6db0c03ae6cdea1eb5acf8a804df1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:42:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:42:38Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:44:01Z is after 2025-08-24T17:21:41Z"
Jan 27 18:44:01 crc kubenswrapper[4853]: I0127 18:44:01.730108 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-2hzcp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c088d143-dd9c-4c77-b9a3-3a0113306f41\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b1b85e99c8381dbf8e86fb211770c49ebd11497f15c73254d6482b45d507654\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:43:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2zqtv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:43:03Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-2hzcp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:44:01Z is after 2025-08-24T17:21:41Z"
Jan 27 18:44:01 crc kubenswrapper[4853]: I0127 18:44:01.743179 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-6gqj2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b8a89b1e-bef8-4cb7-930c-480d3125778c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d886a21d3eb5af71f0dc43a77a71a2d80c11b89013b8b77dd9e680bed9c09a54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:43:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xbhn6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://36ec3ff8ba2e6f89b4c9a8177dc986ce198013a4b5392fc600a191a9d854241a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:43:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xbhn6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:43:03Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-6gqj2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:44:01Z is after 2025-08-24T17:21:41Z"
Jan 27 18:44:01 crc kubenswrapper[4853]: I0127 18:44:01.779428 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 18:44:01 crc kubenswrapper[4853]: I0127 18:44:01.779474 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 18:44:01 crc kubenswrapper[4853]: I0127 18:44:01.779489 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 18:44:01 crc kubenswrapper[4853]: I0127 18:44:01.779510 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 18:44:01 crc kubenswrapper[4853]: I0127 18:44:01.779525 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:44:01Z","lastTransitionTime":"2026-01-27T18:44:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 18:44:01 crc kubenswrapper[4853]: I0127 18:44:01.786636 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hdtbk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ebbc7598-422a-43ad-ae98-88e57ec80b9c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a5e0da6c76e9510cda57fa243b0a721d160745a63e88a9aa736807af73864d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:43:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cq4vs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8de5b4d8d6553f77b012954fddfcb337c9b25ba98d94ef27831b50f63672377\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:43:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cq4vs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7937ea08bd25bed35d9386a8c870c88ff3f58eeec1ba1a2c55bdfa260017f9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:43:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cq4vs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f8095ca05481aa2d17d10ae848c2d052452f3bfa83b6ac23a75d0f59d84a604\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:43:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cq4vs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://efa308f95f35833395528dbe46b9e3d8f25800c18126c75d3db793f9c7945d30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:43:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cq4vs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a23ced79c532f6fcb0f4efcf743b934f7640deb3a7b1b879032416ee2c9b8d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:43:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cq4vs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e01e1cff07c3ff9a1112970e7831ca9dc51725bbe6dd330246fa3346bd8bb1ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e01e1cff07c3ff9a1112970e7831ca9dc51725bbe6dd330246fa3346bd8bb1ad\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T18:44:00Z\\\",\\\"message\\\":\\\" 7015 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:44:00Z is after 2025-08-24T17:21:41Z]\\\\nI0127 18:44:00.122649 7015 services_controller.go:434] Service openshift-operator-lifecycle-manager/packageserver-service retrieved from lister for network=default: \\\\u0026Service{ObjectMeta:{packageserver-service openshift-operator-lifecycle-manager a60a1f74-c6ff-4c81-96ae-27ba9796ba61 5485 0 2025-02-23 05:23:24 +0000 UTC \\\\u003cnil\\\\u003e \\\\u003cnil\\\\u003e map[olm.managed:true] map[] [{operators.coreos.com/v1alpha1 ClusterServiceVersion packageserver bbc08db6-5ba4-4fc4-b49d-26331e1e728b 0xc0078b05ed 0xc0078b05ee}] [] []},Spec:ServiceSpec{Ports:[]ServicePo\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T18:43:59Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-hdtbk_openshift-ovn-kubernetes(ebbc7598-422a-43ad-ae98-88e57ec80b9c)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cq4vs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4da02162adae947a3ab62fcbeba04da031f5189c42947da27ec21df5a480b4b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:43:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cq4vs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e81785fa8e88ca85b42840c9f11efd5774320c7b13588e01695e428426ecda4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e81785fa8e88ca85b42840c9f11efd5774320c7b13588e01695e428426ecda4d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:43:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cq4vs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:43:04Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hdtbk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:44:01Z is after 2025-08-24T17:21:41Z" Jan 27 18:44:01 crc kubenswrapper[4853]: I0127 18:44:01.815035 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-wdzg4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"29407244-fbfe-4d37-a33e-7d59df1c22fd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgqf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgqf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:43:17Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-wdzg4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:44:01Z is after 2025-08-24T17:21:41Z" Jan 27 18:44:01 crc kubenswrapper[4853]: I0127 18:44:01.834776 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f980f1d965582a58bd18d993850b5bd5f21849c1d1c0275c81fdbb25d549e6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:44:01Z is after 2025-08-24T17:21:41Z" Jan 27 18:44:01 crc kubenswrapper[4853]: I0127 18:44:01.846044 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5a0b2fe5dbcc81055c001720c8ab69b7bd1989fe60d9ef4063c96c53a3f4536\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:44:01Z is after 2025-08-24T17:21:41Z" Jan 27 18:44:01 crc kubenswrapper[4853]: I0127 18:44:01.856601 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:44:01Z is after 2025-08-24T17:21:41Z" Jan 27 18:44:01 crc kubenswrapper[4853]: I0127 18:44:01.864545 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a645d521-3c59-41b6-92b0-e0d9cff0bab5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d1c368397595a923d917720ba80fdbcdd3700eaf983e6f50f1be14332fc13b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7e783bc12efaa8b16a12346ff490c56587678e9c57bc396046989f216d49373b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7e783bc12efaa8b16a12346ff490c56587678e9c57bc396046989
f216d49373b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:42:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:42:38Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:44:01Z is after 2025-08-24T17:21:41Z" Jan 27 18:44:01 crc kubenswrapper[4853]: I0127 18:44:01.873921 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c77668e-2627-46e1-b7f5-a99455378d85\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6f38d96da9d500a022fd5b7fbe4019623d08fb9733ba3b1a5f2f107f1901a28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ae32d32392bded0188e0b83c05a33e57dc10be293ddee84b8b12416d0e64fc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\
\"containerID\\\":\\\"cri-o://cd606f868e97e5c0f90517a0de662da2512679e27b0ececbcaa02c9f7e79c4b6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b210c08a6d15dd6d40daabfcc4cb94985e2a3957ffa501b32d5331f3d20f8908\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:42:38Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:44:01Z is after 2025-08-24T17:21:41Z" Jan 27 18:44:01 crc kubenswrapper[4853]: I0127 18:44:01.881252 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:44:01 crc kubenswrapper[4853]: I0127 18:44:01.881285 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:44:01 crc kubenswrapper[4853]: I0127 18:44:01.881298 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:44:01 crc kubenswrapper[4853]: I0127 18:44:01.881312 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:44:01 crc kubenswrapper[4853]: I0127 18:44:01.881323 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:44:01Z","lastTransitionTime":"2026-01-27T18:44:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:44:01 crc kubenswrapper[4853]: I0127 18:44:01.889556 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d7d5a0e2cb909a09c6325d43fed80f9ac231f17c4904fda6da22031d974a50b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7fba84b5abd317487efa9758191c6d4e0e878809711ccecdd2f98c2c7659c890\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:44:01Z is after 2025-08-24T17:21:41Z" Jan 27 18:44:01 crc kubenswrapper[4853]: I0127 18:44:01.901423 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-w4d5n" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dd2c07de-2ac9-4074-9fb0-519cfaf37f69\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7df211c586c12b9dbadf6a48722a3059e65f42e0c70cf73a6e197091983980c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9662d5a5a46a3043eeddbb21a4c7da0545beb87a0da7da3e96dfe3b3cf803430\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T18:43:50Z\\\",\\\"message\\\":\\\"2026-01-27T18:43:05+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_d2dd683b-493b-441e-b17e-5e422275f69e\\\\n2026-01-27T18:43:05+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_d2dd683b-493b-441e-b17e-5e422275f69e to /host/opt/cni/bin/\\\\n2026-01-27T18:43:05Z [verbose] multus-daemon started\\\\n2026-01-27T18:43:05Z [verbose] Readiness Indicator file check\\\\n2026-01-27T18:43:50Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T18:43:03Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:43:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8jkvb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:43:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-w4d5n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:44:01Z is after 2025-08-24T17:21:41Z" Jan 27 18:44:01 crc kubenswrapper[4853]: I0127 18:44:01.910967 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7x9tl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"99e56db9-b380-4e21-9810-2dfa1517d5ac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://94b4164fce297bdb91f8d062c22463931e45e9194e17e4102f568e6f04c08680\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:43:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sbc8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec4d041ed140516bb311297a6618188794b4950b0199a05f3a028215b75b2dfd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:43:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sbc8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:43:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-7x9tl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:44:01Z is after 2025-08-24T17:21:41Z" Jan 27 
18:44:01 crc kubenswrapper[4853]: I0127 18:44:01.922912 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:44:01Z is after 2025-08-24T17:21:41Z" Jan 27 18:44:01 crc kubenswrapper[4853]: I0127 18:44:01.934167 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:44:01Z is after 2025-08-24T17:21:41Z" Jan 27 18:44:01 crc kubenswrapper[4853]: I0127 18:44:01.983357 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:44:01 crc kubenswrapper[4853]: I0127 18:44:01.983402 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:44:01 crc kubenswrapper[4853]: I0127 18:44:01.983414 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:44:01 crc kubenswrapper[4853]: I0127 18:44:01.983430 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:44:01 crc kubenswrapper[4853]: I0127 18:44:01.983440 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:44:01Z","lastTransitionTime":"2026-01-27T18:44:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:44:02 crc kubenswrapper[4853]: I0127 18:44:02.085963 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:44:02 crc kubenswrapper[4853]: I0127 18:44:02.086005 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:44:02 crc kubenswrapper[4853]: I0127 18:44:02.086015 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:44:02 crc kubenswrapper[4853]: I0127 18:44:02.086032 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:44:02 crc kubenswrapper[4853]: I0127 18:44:02.086044 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:44:02Z","lastTransitionTime":"2026-01-27T18:44:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:44:02 crc kubenswrapper[4853]: I0127 18:44:02.122611 4853 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-03 13:49:39.571194037 +0000 UTC Jan 27 18:44:02 crc kubenswrapper[4853]: I0127 18:44:02.188998 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:44:02 crc kubenswrapper[4853]: I0127 18:44:02.189040 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:44:02 crc kubenswrapper[4853]: I0127 18:44:02.189051 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:44:02 crc kubenswrapper[4853]: I0127 18:44:02.189069 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:44:02 crc kubenswrapper[4853]: I0127 18:44:02.189078 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:44:02Z","lastTransitionTime":"2026-01-27T18:44:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:44:02 crc kubenswrapper[4853]: I0127 18:44:02.292060 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:44:02 crc kubenswrapper[4853]: I0127 18:44:02.292100 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:44:02 crc kubenswrapper[4853]: I0127 18:44:02.292108 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:44:02 crc kubenswrapper[4853]: I0127 18:44:02.292151 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:44:02 crc kubenswrapper[4853]: I0127 18:44:02.292161 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:44:02Z","lastTransitionTime":"2026-01-27T18:44:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:44:02 crc kubenswrapper[4853]: I0127 18:44:02.396015 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:44:02 crc kubenswrapper[4853]: I0127 18:44:02.396088 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:44:02 crc kubenswrapper[4853]: I0127 18:44:02.396112 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:44:02 crc kubenswrapper[4853]: I0127 18:44:02.396203 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:44:02 crc kubenswrapper[4853]: I0127 18:44:02.396230 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:44:02Z","lastTransitionTime":"2026-01-27T18:44:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:44:02 crc kubenswrapper[4853]: I0127 18:44:02.499855 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:44:02 crc kubenswrapper[4853]: I0127 18:44:02.499930 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:44:02 crc kubenswrapper[4853]: I0127 18:44:02.499953 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:44:02 crc kubenswrapper[4853]: I0127 18:44:02.499993 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:44:02 crc kubenswrapper[4853]: I0127 18:44:02.500017 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:44:02Z","lastTransitionTime":"2026-01-27T18:44:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:44:02 crc kubenswrapper[4853]: I0127 18:44:02.603834 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:44:02 crc kubenswrapper[4853]: I0127 18:44:02.603912 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:44:02 crc kubenswrapper[4853]: I0127 18:44:02.603932 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:44:02 crc kubenswrapper[4853]: I0127 18:44:02.603963 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:44:02 crc kubenswrapper[4853]: I0127 18:44:02.603983 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:44:02Z","lastTransitionTime":"2026-01-27T18:44:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:44:02 crc kubenswrapper[4853]: I0127 18:44:02.707233 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:44:02 crc kubenswrapper[4853]: I0127 18:44:02.707300 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:44:02 crc kubenswrapper[4853]: I0127 18:44:02.707321 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:44:02 crc kubenswrapper[4853]: I0127 18:44:02.707350 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:44:02 crc kubenswrapper[4853]: I0127 18:44:02.707377 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:44:02Z","lastTransitionTime":"2026-01-27T18:44:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:44:02 crc kubenswrapper[4853]: I0127 18:44:02.811506 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:44:02 crc kubenswrapper[4853]: I0127 18:44:02.811613 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:44:02 crc kubenswrapper[4853]: I0127 18:44:02.811655 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:44:02 crc kubenswrapper[4853]: I0127 18:44:02.811692 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:44:02 crc kubenswrapper[4853]: I0127 18:44:02.811716 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:44:02Z","lastTransitionTime":"2026-01-27T18:44:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:44:02 crc kubenswrapper[4853]: I0127 18:44:02.915310 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:44:02 crc kubenswrapper[4853]: I0127 18:44:02.915398 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:44:02 crc kubenswrapper[4853]: I0127 18:44:02.915422 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:44:02 crc kubenswrapper[4853]: I0127 18:44:02.915454 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:44:02 crc kubenswrapper[4853]: I0127 18:44:02.915479 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:44:02Z","lastTransitionTime":"2026-01-27T18:44:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:44:03 crc kubenswrapper[4853]: I0127 18:44:03.019168 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:44:03 crc kubenswrapper[4853]: I0127 18:44:03.019245 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:44:03 crc kubenswrapper[4853]: I0127 18:44:03.019262 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:44:03 crc kubenswrapper[4853]: I0127 18:44:03.019290 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:44:03 crc kubenswrapper[4853]: I0127 18:44:03.019308 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:44:03Z","lastTransitionTime":"2026-01-27T18:44:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:44:03 crc kubenswrapper[4853]: I0127 18:44:03.112483 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wdzg4" Jan 27 18:44:03 crc kubenswrapper[4853]: I0127 18:44:03.112568 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 18:44:03 crc kubenswrapper[4853]: I0127 18:44:03.112646 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 18:44:03 crc kubenswrapper[4853]: E0127 18:44:03.112726 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-wdzg4" podUID="29407244-fbfe-4d37-a33e-7d59df1c22fd" Jan 27 18:44:03 crc kubenswrapper[4853]: I0127 18:44:03.112820 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 18:44:03 crc kubenswrapper[4853]: E0127 18:44:03.112947 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 18:44:03 crc kubenswrapper[4853]: E0127 18:44:03.113254 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 18:44:03 crc kubenswrapper[4853]: E0127 18:44:03.113334 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 18:44:03 crc kubenswrapper[4853]: I0127 18:44:03.122796 4853 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-01 21:20:54.080110852 +0000 UTC Jan 27 18:44:03 crc kubenswrapper[4853]: I0127 18:44:03.122854 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:44:03 crc kubenswrapper[4853]: I0127 18:44:03.122898 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:44:03 crc kubenswrapper[4853]: I0127 18:44:03.122907 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:44:03 crc kubenswrapper[4853]: I0127 18:44:03.122926 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:44:03 crc kubenswrapper[4853]: I0127 18:44:03.122939 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:44:03Z","lastTransitionTime":"2026-01-27T18:44:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:44:03 crc kubenswrapper[4853]: I0127 18:44:03.227320 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:44:03 crc kubenswrapper[4853]: I0127 18:44:03.227377 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:44:03 crc kubenswrapper[4853]: I0127 18:44:03.227388 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:44:03 crc kubenswrapper[4853]: I0127 18:44:03.227410 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:44:03 crc kubenswrapper[4853]: I0127 18:44:03.227425 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:44:03Z","lastTransitionTime":"2026-01-27T18:44:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:44:03 crc kubenswrapper[4853]: I0127 18:44:03.330055 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:44:03 crc kubenswrapper[4853]: I0127 18:44:03.330093 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:44:03 crc kubenswrapper[4853]: I0127 18:44:03.330106 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:44:03 crc kubenswrapper[4853]: I0127 18:44:03.330164 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:44:03 crc kubenswrapper[4853]: I0127 18:44:03.330178 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:44:03Z","lastTransitionTime":"2026-01-27T18:44:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:44:03 crc kubenswrapper[4853]: I0127 18:44:03.434444 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:44:03 crc kubenswrapper[4853]: I0127 18:44:03.434520 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:44:03 crc kubenswrapper[4853]: I0127 18:44:03.434541 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:44:03 crc kubenswrapper[4853]: I0127 18:44:03.434569 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:44:03 crc kubenswrapper[4853]: I0127 18:44:03.434588 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:44:03Z","lastTransitionTime":"2026-01-27T18:44:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:44:03 crc kubenswrapper[4853]: I0127 18:44:03.537580 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:44:03 crc kubenswrapper[4853]: I0127 18:44:03.537668 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:44:03 crc kubenswrapper[4853]: I0127 18:44:03.537694 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:44:03 crc kubenswrapper[4853]: I0127 18:44:03.537730 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:44:03 crc kubenswrapper[4853]: I0127 18:44:03.537755 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:44:03Z","lastTransitionTime":"2026-01-27T18:44:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:44:03 crc kubenswrapper[4853]: I0127 18:44:03.641704 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:44:03 crc kubenswrapper[4853]: I0127 18:44:03.641779 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:44:03 crc kubenswrapper[4853]: I0127 18:44:03.641791 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:44:03 crc kubenswrapper[4853]: I0127 18:44:03.641814 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:44:03 crc kubenswrapper[4853]: I0127 18:44:03.641832 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:44:03Z","lastTransitionTime":"2026-01-27T18:44:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:44:03 crc kubenswrapper[4853]: I0127 18:44:03.746325 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:44:03 crc kubenswrapper[4853]: I0127 18:44:03.746412 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:44:03 crc kubenswrapper[4853]: I0127 18:44:03.746438 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:44:03 crc kubenswrapper[4853]: I0127 18:44:03.746471 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:44:03 crc kubenswrapper[4853]: I0127 18:44:03.746494 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:44:03Z","lastTransitionTime":"2026-01-27T18:44:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:44:03 crc kubenswrapper[4853]: I0127 18:44:03.850363 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:44:03 crc kubenswrapper[4853]: I0127 18:44:03.850429 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:44:03 crc kubenswrapper[4853]: I0127 18:44:03.850456 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:44:03 crc kubenswrapper[4853]: I0127 18:44:03.850495 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:44:03 crc kubenswrapper[4853]: I0127 18:44:03.850523 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:44:03Z","lastTransitionTime":"2026-01-27T18:44:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:44:03 crc kubenswrapper[4853]: I0127 18:44:03.954308 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:44:03 crc kubenswrapper[4853]: I0127 18:44:03.954365 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:44:03 crc kubenswrapper[4853]: I0127 18:44:03.954376 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:44:03 crc kubenswrapper[4853]: I0127 18:44:03.954392 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:44:03 crc kubenswrapper[4853]: I0127 18:44:03.954406 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:44:03Z","lastTransitionTime":"2026-01-27T18:44:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:44:04 crc kubenswrapper[4853]: I0127 18:44:04.057475 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:44:04 crc kubenswrapper[4853]: I0127 18:44:04.057601 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:44:04 crc kubenswrapper[4853]: I0127 18:44:04.057622 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:44:04 crc kubenswrapper[4853]: I0127 18:44:04.057664 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:44:04 crc kubenswrapper[4853]: I0127 18:44:04.057687 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:44:04Z","lastTransitionTime":"2026-01-27T18:44:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:44:04 crc kubenswrapper[4853]: I0127 18:44:04.123551 4853 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-02 23:43:47.937764163 +0000 UTC Jan 27 18:44:04 crc kubenswrapper[4853]: I0127 18:44:04.161306 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:44:04 crc kubenswrapper[4853]: I0127 18:44:04.161382 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:44:04 crc kubenswrapper[4853]: I0127 18:44:04.161395 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:44:04 crc kubenswrapper[4853]: I0127 18:44:04.161434 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:44:04 crc kubenswrapper[4853]: I0127 18:44:04.161451 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:44:04Z","lastTransitionTime":"2026-01-27T18:44:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:44:04 crc kubenswrapper[4853]: I0127 18:44:04.264594 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:44:04 crc kubenswrapper[4853]: I0127 18:44:04.264628 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:44:04 crc kubenswrapper[4853]: I0127 18:44:04.264637 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:44:04 crc kubenswrapper[4853]: I0127 18:44:04.264651 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:44:04 crc kubenswrapper[4853]: I0127 18:44:04.264661 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:44:04Z","lastTransitionTime":"2026-01-27T18:44:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:44:04 crc kubenswrapper[4853]: I0127 18:44:04.367288 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:44:04 crc kubenswrapper[4853]: I0127 18:44:04.367328 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:44:04 crc kubenswrapper[4853]: I0127 18:44:04.367338 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:44:04 crc kubenswrapper[4853]: I0127 18:44:04.367352 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:44:04 crc kubenswrapper[4853]: I0127 18:44:04.367362 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:44:04Z","lastTransitionTime":"2026-01-27T18:44:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:44:04 crc kubenswrapper[4853]: I0127 18:44:04.470276 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:44:04 crc kubenswrapper[4853]: I0127 18:44:04.470375 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:44:04 crc kubenswrapper[4853]: I0127 18:44:04.470385 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:44:04 crc kubenswrapper[4853]: I0127 18:44:04.470411 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:44:04 crc kubenswrapper[4853]: I0127 18:44:04.470423 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:44:04Z","lastTransitionTime":"2026-01-27T18:44:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:44:04 crc kubenswrapper[4853]: I0127 18:44:04.573848 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:44:04 crc kubenswrapper[4853]: I0127 18:44:04.573902 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:44:04 crc kubenswrapper[4853]: I0127 18:44:04.573915 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:44:04 crc kubenswrapper[4853]: I0127 18:44:04.573938 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:44:04 crc kubenswrapper[4853]: I0127 18:44:04.573948 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:44:04Z","lastTransitionTime":"2026-01-27T18:44:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:44:04 crc kubenswrapper[4853]: I0127 18:44:04.676900 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:44:04 crc kubenswrapper[4853]: I0127 18:44:04.677044 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:44:04 crc kubenswrapper[4853]: I0127 18:44:04.677065 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:44:04 crc kubenswrapper[4853]: I0127 18:44:04.677093 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:44:04 crc kubenswrapper[4853]: I0127 18:44:04.677154 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:44:04Z","lastTransitionTime":"2026-01-27T18:44:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:44:04 crc kubenswrapper[4853]: I0127 18:44:04.782412 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:44:04 crc kubenswrapper[4853]: I0127 18:44:04.782494 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:44:04 crc kubenswrapper[4853]: I0127 18:44:04.782518 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:44:04 crc kubenswrapper[4853]: I0127 18:44:04.782546 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:44:04 crc kubenswrapper[4853]: I0127 18:44:04.782568 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:44:04Z","lastTransitionTime":"2026-01-27T18:44:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:44:04 crc kubenswrapper[4853]: I0127 18:44:04.886003 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:44:04 crc kubenswrapper[4853]: I0127 18:44:04.886057 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:44:04 crc kubenswrapper[4853]: I0127 18:44:04.886067 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:44:04 crc kubenswrapper[4853]: I0127 18:44:04.886083 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:44:04 crc kubenswrapper[4853]: I0127 18:44:04.886095 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:44:04Z","lastTransitionTime":"2026-01-27T18:44:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:44:04 crc kubenswrapper[4853]: I0127 18:44:04.989372 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:44:04 crc kubenswrapper[4853]: I0127 18:44:04.989424 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:44:04 crc kubenswrapper[4853]: I0127 18:44:04.989436 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:44:04 crc kubenswrapper[4853]: I0127 18:44:04.989452 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:44:04 crc kubenswrapper[4853]: I0127 18:44:04.989464 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:44:04Z","lastTransitionTime":"2026-01-27T18:44:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:44:05 crc kubenswrapper[4853]: I0127 18:44:05.092340 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:44:05 crc kubenswrapper[4853]: I0127 18:44:05.092380 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:44:05 crc kubenswrapper[4853]: I0127 18:44:05.092388 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:44:05 crc kubenswrapper[4853]: I0127 18:44:05.092401 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:44:05 crc kubenswrapper[4853]: I0127 18:44:05.092412 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:44:05Z","lastTransitionTime":"2026-01-27T18:44:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:44:05 crc kubenswrapper[4853]: I0127 18:44:05.112234 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 18:44:05 crc kubenswrapper[4853]: I0127 18:44:05.112248 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 18:44:05 crc kubenswrapper[4853]: I0127 18:44:05.112376 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 18:44:05 crc kubenswrapper[4853]: I0127 18:44:05.112460 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wdzg4" Jan 27 18:44:05 crc kubenswrapper[4853]: E0127 18:44:05.112582 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 18:44:05 crc kubenswrapper[4853]: E0127 18:44:05.112763 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 18:44:05 crc kubenswrapper[4853]: E0127 18:44:05.112812 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 18:44:05 crc kubenswrapper[4853]: E0127 18:44:05.112900 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-wdzg4" podUID="29407244-fbfe-4d37-a33e-7d59df1c22fd" Jan 27 18:44:05 crc kubenswrapper[4853]: I0127 18:44:05.124674 4853 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-07 09:20:15.318390934 +0000 UTC Jan 27 18:44:05 crc kubenswrapper[4853]: I0127 18:44:05.195192 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:44:05 crc kubenswrapper[4853]: I0127 18:44:05.195230 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:44:05 crc kubenswrapper[4853]: I0127 18:44:05.195240 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:44:05 crc kubenswrapper[4853]: I0127 18:44:05.195253 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:44:05 crc kubenswrapper[4853]: I0127 18:44:05.195263 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:44:05Z","lastTransitionTime":"2026-01-27T18:44:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:44:05 crc kubenswrapper[4853]: I0127 18:44:05.298182 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:44:05 crc kubenswrapper[4853]: I0127 18:44:05.298223 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:44:05 crc kubenswrapper[4853]: I0127 18:44:05.298235 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:44:05 crc kubenswrapper[4853]: I0127 18:44:05.298251 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:44:05 crc kubenswrapper[4853]: I0127 18:44:05.298264 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:44:05Z","lastTransitionTime":"2026-01-27T18:44:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:44:05 crc kubenswrapper[4853]: I0127 18:44:05.352944 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:44:05 crc kubenswrapper[4853]: I0127 18:44:05.353063 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:44:05 crc kubenswrapper[4853]: I0127 18:44:05.353088 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:44:05 crc kubenswrapper[4853]: I0127 18:44:05.353154 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:44:05 crc kubenswrapper[4853]: I0127 18:44:05.353181 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:44:05Z","lastTransitionTime":"2026-01-27T18:44:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:44:05 crc kubenswrapper[4853]: E0127 18:44:05.369710 4853 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:44:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:44:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:44:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:44:05Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:44:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:44:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:44:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:44:05Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"10ff71d2-3e1e-470a-b646-0c487d7259d5\\\",\\\"systemUUID\\\":\\\"f3eb2985-2316-42c4-9a73-507610f5aaf9\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:44:05Z is after 2025-08-24T17:21:41Z" Jan 27 18:44:05 crc kubenswrapper[4853]: I0127 18:44:05.374851 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:44:05 crc kubenswrapper[4853]: I0127 18:44:05.374913 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
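
[Annotation] The status PATCH above is rejected not by the API server itself but by the node.network-node-identity.openshift.io admission webhook at 127.0.0.1:9743, whose serving certificate expired 2025-08-24T17:21:41Z, five months before the log's 2026-01-27 clock. The kubelet retries: a second identical attempt at 18:44:05.396978 and a third beginning at 18:44:05.423759 follow below. A hypothetical probe (not from the log) that would surface the same NotAfter, assuming the endpoint quoted in the error:

    package main

    import (
        "crypto/tls"
        "fmt"
        "time"
    )

    func main() {
        // Endpoint taken from the webhook error above.
        conn, err := tls.Dial("tcp", "127.0.0.1:9743", &tls.Config{
            InsecureSkipVerify: true, // inspect the cert even though it is expired
        })
        if err != nil {
            fmt.Println("dial failed:", err)
            return
        }
        defer conn.Close()
        cert := conn.ConnectionState().PeerCertificates[0]
        fmt.Printf("NotAfter=%s expired=%v\n",
            cert.NotAfter.Format(time.RFC3339), time.Now().After(cert.NotAfter))
    }
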
event="NodeHasNoDiskPressure" Jan 27 18:44:05 crc kubenswrapper[4853]: I0127 18:44:05.374927 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:44:05 crc kubenswrapper[4853]: I0127 18:44:05.374948 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:44:05 crc kubenswrapper[4853]: I0127 18:44:05.374961 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:44:05Z","lastTransitionTime":"2026-01-27T18:44:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:44:05 crc kubenswrapper[4853]: E0127 18:44:05.396978 4853 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:44:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:44:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:44:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:44:05Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:44:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:44:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:44:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:44:05Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"10ff71d2-3e1e-470a-b646-0c487d7259d5\\\",\\\"systemUUID\\\":\\\"f3eb2985-2316-42c4-9a73-507610f5aaf9\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:44:05Z is after 2025-08-24T17:21:41Z" Jan 27 18:44:05 crc kubenswrapper[4853]: I0127 18:44:05.403807 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:44:05 crc kubenswrapper[4853]: I0127 18:44:05.403853 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 27 18:44:05 crc kubenswrapper[4853]: I0127 18:44:05.403863 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:44:05 crc kubenswrapper[4853]: I0127 18:44:05.403885 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:44:05 crc kubenswrapper[4853]: I0127 18:44:05.403898 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:44:05Z","lastTransitionTime":"2026-01-27T18:44:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:44:05 crc kubenswrapper[4853]: I0127 18:44:05.429420 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
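Every patch failure above ends in the same TLS check: the serving certificate of the node.network-node-identity.openshift.io webhook expired on 2025-08-24T17:21:41Z, while the node clock reads 2026-01-27. A minimal Go sketch of that validity test follows; the certificate path is an assumption (the log does not say where the webhook's serving certificate lives), and this mirrors, rather than reproduces, the handshake verification.

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	// Hypothetical path; substitute the webhook's actual serving certificate.
	pemBytes, err := os.ReadFile("/path/to/webhook-serving-cert.pem")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	block, _ := pem.Decode(pemBytes)
	if block == nil {
		fmt.Fprintln(os.Stderr, "no PEM block found")
		os.Exit(1)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	now := time.Now()
	fmt.Println("NotBefore:", cert.NotBefore.Format(time.RFC3339))
	fmt.Println("NotAfter: ", cert.NotAfter.Format(time.RFC3339))
	// The same condition behind the log's "x509: certificate has expired or
	// is not yet valid: current time ... is after ...".
	if now.After(cert.NotAfter) || now.Before(cert.NotBefore) {
		fmt.Println("certificate has expired or is not yet valid")
	}
}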
Jan 27 18:44:05 crc kubenswrapper[4853]: I0127 18:44:05.429475 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:44:05 crc kubenswrapper[4853]: I0127 18:44:05.429492 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:44:05 crc kubenswrapper[4853]: I0127 18:44:05.429512 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:44:05 crc kubenswrapper[4853]: I0127 18:44:05.429525 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:44:05Z","lastTransitionTime":"2026-01-27T18:44:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:44:05 crc kubenswrapper[4853]: I0127 18:44:05.452840 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
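Each "Error updating node status, will retry" entry is one pass of the kubelet's bounded node-status retry loop; once the fixed budget is spent it logs "update node status exceeds retry count", which appears just below. A simplified sketch of that pattern, assuming a retry budget of 5 as in upstream kubelet; the function and constant names here are stand-ins, not kubelet source.

package main

import (
	"errors"
	"fmt"
)

// Stand-in for the kubelet's small fixed retry budget.
const nodeStatusUpdateRetry = 5

// Stand-in for the status PATCH that the admission webhook keeps rejecting.
func tryUpdateNodeStatus(attempt int) error {
	return errors.New("Internal error occurred: failed calling webhook")
}

func updateNodeStatus() error {
	for i := 0; i < nodeStatusUpdateRetry; i++ {
		if err := tryUpdateNodeStatus(i); err != nil {
			// Matches the repeated E-level entries in the log.
			fmt.Printf("Error updating node status, will retry: %v\n", err)
			continue
		}
		return nil
	}
	return errors.New("update node status exceeds retry count")
}

func main() {
	if err := updateNodeStatus(); err != nil {
		// Matches the final E-level entry once the budget is exhausted.
		fmt.Println("Unable to update node status:", err)
	}
}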
Jan 27 18:44:05 crc kubenswrapper[4853]: I0127 18:44:05.452876 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:44:05 crc kubenswrapper[4853]: I0127 18:44:05.452888 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:44:05 crc kubenswrapper[4853]: I0127 18:44:05.452905 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:44:05 crc kubenswrapper[4853]: I0127 18:44:05.452916 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:44:05Z","lastTransitionTime":"2026-01-27T18:44:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:44:05 crc kubenswrapper[4853]: E0127 18:44:05.472243 4853 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count"
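The NotReady entries that follow all repeat one condition: the kubelet's network plugin finds no CNI configuration in /etc/kubernetes/cni/net.d/ and holds NetworkReady=false until the network provider writes one. A minimal Go sketch of that directory probe, under the assumption that a readiness check only needs to see a *.conf, *.conflist, or *.json file there; the kubelet's actual probe also parses and validates the configuration it finds.

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	// Directory named verbatim in the log messages.
	dir := "/etc/kubernetes/cni/net.d"
	var found []string
	for _, pattern := range []string{"*.conf", "*.conflist", "*.json"} {
		matches, err := filepath.Glob(filepath.Join(dir, pattern))
		if err != nil {
			continue // only possible on a malformed pattern
		}
		found = append(found, matches...)
	}
	if len(found) == 0 {
		// Matches the recurring reason in the Ready=False condition.
		fmt.Println("NetworkReady=false: no CNI configuration file in", dir)
		os.Exit(1)
	}
	fmt.Println("CNI configuration present:", found)
}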
Jan 27 18:44:05 crc kubenswrapper[4853]: I0127 18:44:05.474353 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:44:05 crc kubenswrapper[4853]: I0127 18:44:05.474391 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:44:05 crc kubenswrapper[4853]: I0127 18:44:05.474404 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:44:05 crc kubenswrapper[4853]: I0127 18:44:05.474426 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:44:05 crc kubenswrapper[4853]: I0127 18:44:05.474439 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:44:05Z","lastTransitionTime":"2026-01-27T18:44:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:44:05 crc kubenswrapper[4853]: I0127 18:44:05.577032 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:44:05 crc kubenswrapper[4853]: I0127 18:44:05.577072 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:44:05 crc kubenswrapper[4853]: I0127 18:44:05.577081 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:44:05 crc kubenswrapper[4853]: I0127 18:44:05.577094 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:44:05 crc kubenswrapper[4853]: I0127 18:44:05.577104 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:44:05Z","lastTransitionTime":"2026-01-27T18:44:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:44:05 crc kubenswrapper[4853]: I0127 18:44:05.679877 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:44:05 crc kubenswrapper[4853]: I0127 18:44:05.679927 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:44:05 crc kubenswrapper[4853]: I0127 18:44:05.679940 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:44:05 crc kubenswrapper[4853]: I0127 18:44:05.679962 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:44:05 crc kubenswrapper[4853]: I0127 18:44:05.679974 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:44:05Z","lastTransitionTime":"2026-01-27T18:44:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:44:05 crc kubenswrapper[4853]: I0127 18:44:05.783343 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:44:05 crc kubenswrapper[4853]: I0127 18:44:05.783393 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:44:05 crc kubenswrapper[4853]: I0127 18:44:05.783408 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:44:05 crc kubenswrapper[4853]: I0127 18:44:05.783425 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:44:05 crc kubenswrapper[4853]: I0127 18:44:05.783439 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:44:05Z","lastTransitionTime":"2026-01-27T18:44:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:44:05 crc kubenswrapper[4853]: I0127 18:44:05.885749 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:44:05 crc kubenswrapper[4853]: I0127 18:44:05.885789 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:44:05 crc kubenswrapper[4853]: I0127 18:44:05.885799 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:44:05 crc kubenswrapper[4853]: I0127 18:44:05.885816 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:44:05 crc kubenswrapper[4853]: I0127 18:44:05.885827 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:44:05Z","lastTransitionTime":"2026-01-27T18:44:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:44:05 crc kubenswrapper[4853]: I0127 18:44:05.987840 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:44:05 crc kubenswrapper[4853]: I0127 18:44:05.987882 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:44:05 crc kubenswrapper[4853]: I0127 18:44:05.987891 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:44:05 crc kubenswrapper[4853]: I0127 18:44:05.987905 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:44:05 crc kubenswrapper[4853]: I0127 18:44:05.987915 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:44:05Z","lastTransitionTime":"2026-01-27T18:44:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:44:06 crc kubenswrapper[4853]: I0127 18:44:06.090688 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:44:06 crc kubenswrapper[4853]: I0127 18:44:06.090726 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:44:06 crc kubenswrapper[4853]: I0127 18:44:06.090736 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:44:06 crc kubenswrapper[4853]: I0127 18:44:06.090750 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:44:06 crc kubenswrapper[4853]: I0127 18:44:06.090759 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:44:06Z","lastTransitionTime":"2026-01-27T18:44:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:44:06 crc kubenswrapper[4853]: I0127 18:44:06.125352 4853 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-09 06:28:14.537897325 +0000 UTC Jan 27 18:44:06 crc kubenswrapper[4853]: I0127 18:44:06.193232 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:44:06 crc kubenswrapper[4853]: I0127 18:44:06.193274 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:44:06 crc kubenswrapper[4853]: I0127 18:44:06.193287 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:44:06 crc kubenswrapper[4853]: I0127 18:44:06.193303 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:44:06 crc kubenswrapper[4853]: I0127 18:44:06.193313 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:44:06Z","lastTransitionTime":"2026-01-27T18:44:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:44:06 crc kubenswrapper[4853]: I0127 18:44:06.295360 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:44:06 crc kubenswrapper[4853]: I0127 18:44:06.295433 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:44:06 crc kubenswrapper[4853]: I0127 18:44:06.295448 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:44:06 crc kubenswrapper[4853]: I0127 18:44:06.295473 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:44:06 crc kubenswrapper[4853]: I0127 18:44:06.295485 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:44:06Z","lastTransitionTime":"2026-01-27T18:44:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:44:06 crc kubenswrapper[4853]: I0127 18:44:06.397615 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:44:06 crc kubenswrapper[4853]: I0127 18:44:06.397664 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:44:06 crc kubenswrapper[4853]: I0127 18:44:06.397680 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:44:06 crc kubenswrapper[4853]: I0127 18:44:06.397698 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:44:06 crc kubenswrapper[4853]: I0127 18:44:06.397711 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:44:06Z","lastTransitionTime":"2026-01-27T18:44:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:44:06 crc kubenswrapper[4853]: I0127 18:44:06.500417 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:44:06 crc kubenswrapper[4853]: I0127 18:44:06.500499 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:44:06 crc kubenswrapper[4853]: I0127 18:44:06.500527 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:44:06 crc kubenswrapper[4853]: I0127 18:44:06.500552 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:44:06 crc kubenswrapper[4853]: I0127 18:44:06.500569 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:44:06Z","lastTransitionTime":"2026-01-27T18:44:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:44:06 crc kubenswrapper[4853]: I0127 18:44:06.603441 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:44:06 crc kubenswrapper[4853]: I0127 18:44:06.603492 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:44:06 crc kubenswrapper[4853]: I0127 18:44:06.603507 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:44:06 crc kubenswrapper[4853]: I0127 18:44:06.603528 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:44:06 crc kubenswrapper[4853]: I0127 18:44:06.603545 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:44:06Z","lastTransitionTime":"2026-01-27T18:44:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:44:06 crc kubenswrapper[4853]: I0127 18:44:06.706212 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:44:06 crc kubenswrapper[4853]: I0127 18:44:06.706272 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:44:06 crc kubenswrapper[4853]: I0127 18:44:06.706288 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:44:06 crc kubenswrapper[4853]: I0127 18:44:06.706312 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:44:06 crc kubenswrapper[4853]: I0127 18:44:06.706334 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:44:06Z","lastTransitionTime":"2026-01-27T18:44:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:44:06 crc kubenswrapper[4853]: I0127 18:44:06.809505 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:44:06 crc kubenswrapper[4853]: I0127 18:44:06.809557 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:44:06 crc kubenswrapper[4853]: I0127 18:44:06.809572 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:44:06 crc kubenswrapper[4853]: I0127 18:44:06.809594 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:44:06 crc kubenswrapper[4853]: I0127 18:44:06.809609 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:44:06Z","lastTransitionTime":"2026-01-27T18:44:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:44:06 crc kubenswrapper[4853]: I0127 18:44:06.911999 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:44:06 crc kubenswrapper[4853]: I0127 18:44:06.912041 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:44:06 crc kubenswrapper[4853]: I0127 18:44:06.912049 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:44:06 crc kubenswrapper[4853]: I0127 18:44:06.912067 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:44:06 crc kubenswrapper[4853]: I0127 18:44:06.912133 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:44:06Z","lastTransitionTime":"2026-01-27T18:44:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:44:07 crc kubenswrapper[4853]: I0127 18:44:07.015077 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:44:07 crc kubenswrapper[4853]: I0127 18:44:07.015106 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:44:07 crc kubenswrapper[4853]: I0127 18:44:07.015134 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:44:07 crc kubenswrapper[4853]: I0127 18:44:07.015158 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:44:07 crc kubenswrapper[4853]: I0127 18:44:07.015168 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:44:07Z","lastTransitionTime":"2026-01-27T18:44:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:44:07 crc kubenswrapper[4853]: I0127 18:44:07.112257 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 18:44:07 crc kubenswrapper[4853]: I0127 18:44:07.112342 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 18:44:07 crc kubenswrapper[4853]: E0127 18:44:07.112381 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 18:44:07 crc kubenswrapper[4853]: I0127 18:44:07.112262 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 18:44:07 crc kubenswrapper[4853]: E0127 18:44:07.112478 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 18:44:07 crc kubenswrapper[4853]: E0127 18:44:07.112580 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 18:44:07 crc kubenswrapper[4853]: I0127 18:44:07.112761 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wdzg4" Jan 27 18:44:07 crc kubenswrapper[4853]: E0127 18:44:07.112857 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-wdzg4" podUID="29407244-fbfe-4d37-a33e-7d59df1c22fd" Jan 27 18:44:07 crc kubenswrapper[4853]: I0127 18:44:07.117899 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:44:07 crc kubenswrapper[4853]: I0127 18:44:07.117955 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:44:07 crc kubenswrapper[4853]: I0127 18:44:07.117976 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:44:07 crc kubenswrapper[4853]: I0127 18:44:07.117999 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:44:07 crc kubenswrapper[4853]: I0127 18:44:07.118015 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:44:07Z","lastTransitionTime":"2026-01-27T18:44:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:44:07 crc kubenswrapper[4853]: I0127 18:44:07.126043 4853 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-18 07:13:27.248620728 +0000 UTC Jan 27 18:44:07 crc kubenswrapper[4853]: I0127 18:44:07.220805 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:44:07 crc kubenswrapper[4853]: I0127 18:44:07.220861 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:44:07 crc kubenswrapper[4853]: I0127 18:44:07.220871 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:44:07 crc kubenswrapper[4853]: I0127 18:44:07.220884 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:44:07 crc kubenswrapper[4853]: I0127 18:44:07.220892 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:44:07Z","lastTransitionTime":"2026-01-27T18:44:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:44:07 crc kubenswrapper[4853]: I0127 18:44:07.323398 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:44:07 crc kubenswrapper[4853]: I0127 18:44:07.323435 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:44:07 crc kubenswrapper[4853]: I0127 18:44:07.323445 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:44:07 crc kubenswrapper[4853]: I0127 18:44:07.323460 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:44:07 crc kubenswrapper[4853]: I0127 18:44:07.323471 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:44:07Z","lastTransitionTime":"2026-01-27T18:44:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:44:07 crc kubenswrapper[4853]: I0127 18:44:07.426518 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:44:07 crc kubenswrapper[4853]: I0127 18:44:07.426599 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:44:07 crc kubenswrapper[4853]: I0127 18:44:07.426624 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:44:07 crc kubenswrapper[4853]: I0127 18:44:07.426653 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:44:07 crc kubenswrapper[4853]: I0127 18:44:07.426672 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:44:07Z","lastTransitionTime":"2026-01-27T18:44:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:44:07 crc kubenswrapper[4853]: I0127 18:44:07.529715 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:44:07 crc kubenswrapper[4853]: I0127 18:44:07.529757 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:44:07 crc kubenswrapper[4853]: I0127 18:44:07.529767 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:44:07 crc kubenswrapper[4853]: I0127 18:44:07.529783 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:44:07 crc kubenswrapper[4853]: I0127 18:44:07.529795 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:44:07Z","lastTransitionTime":"2026-01-27T18:44:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:44:07 crc kubenswrapper[4853]: I0127 18:44:07.632578 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:44:07 crc kubenswrapper[4853]: I0127 18:44:07.632639 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:44:07 crc kubenswrapper[4853]: I0127 18:44:07.632658 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:44:07 crc kubenswrapper[4853]: I0127 18:44:07.632683 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:44:07 crc kubenswrapper[4853]: I0127 18:44:07.632703 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:44:07Z","lastTransitionTime":"2026-01-27T18:44:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:44:07 crc kubenswrapper[4853]: I0127 18:44:07.736088 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:44:07 crc kubenswrapper[4853]: I0127 18:44:07.736199 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:44:07 crc kubenswrapper[4853]: I0127 18:44:07.736218 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:44:07 crc kubenswrapper[4853]: I0127 18:44:07.736242 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:44:07 crc kubenswrapper[4853]: I0127 18:44:07.736259 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:44:07Z","lastTransitionTime":"2026-01-27T18:44:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:44:07 crc kubenswrapper[4853]: I0127 18:44:07.840071 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:44:07 crc kubenswrapper[4853]: I0127 18:44:07.840167 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:44:07 crc kubenswrapper[4853]: I0127 18:44:07.840193 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:44:07 crc kubenswrapper[4853]: I0127 18:44:07.840224 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:44:07 crc kubenswrapper[4853]: I0127 18:44:07.840249 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:44:07Z","lastTransitionTime":"2026-01-27T18:44:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:44:07 crc kubenswrapper[4853]: I0127 18:44:07.942703 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:44:07 crc kubenswrapper[4853]: I0127 18:44:07.942991 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:44:07 crc kubenswrapper[4853]: I0127 18:44:07.943176 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:44:07 crc kubenswrapper[4853]: I0127 18:44:07.943459 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:44:07 crc kubenswrapper[4853]: I0127 18:44:07.943716 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:44:07Z","lastTransitionTime":"2026-01-27T18:44:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:44:08 crc kubenswrapper[4853]: I0127 18:44:08.046355 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:44:08 crc kubenswrapper[4853]: I0127 18:44:08.046539 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:44:08 crc kubenswrapper[4853]: I0127 18:44:08.046559 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:44:08 crc kubenswrapper[4853]: I0127 18:44:08.046586 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:44:08 crc kubenswrapper[4853]: I0127 18:44:08.046629 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:44:08Z","lastTransitionTime":"2026-01-27T18:44:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:44:08 crc kubenswrapper[4853]: I0127 18:44:08.124265 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a645d521-3c59-41b6-92b0-e0d9cff0bab5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d1c368397595a923d917720ba80fdbcdd3700eaf983e6f50f1be14332fc13b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7e783bc12efaa8b16a12346ff490c56587678e9c57bc396046989f216d49373b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c4274
5f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7e783bc12efaa8b16a12346ff490c56587678e9c57bc396046989f216d49373b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:42:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:42:38Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:44:08Z is after 2025-08-24T17:21:41Z" Jan 27 18:44:08 crc kubenswrapper[4853]: I0127 18:44:08.126858 4853 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-02 21:02:30.367811649 +0000 UTC Jan 27 18:44:08 crc kubenswrapper[4853]: I0127 18:44:08.143832 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c77668e-2627-46e1-b7f5-a99455378d85\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6f38d96da9d500a022fd5b7fbe4019623d08fb9733ba3b1a5f2f107f1901a28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ae32d32392bded0188e0b83c05a33e57dc10be293ddee84b8b12416d0e64fc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-re
lease-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd606f868e97e5c0f90517a0de662da2512679e27b0ececbcaa02c9f7e79c4b6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b210c08a6d15dd6d40daabfcc4cb94985e2a3957ffa501b32d5331f3d20f8908\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:42:38Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:44:08Z is after 2025-08-24T17:21:41Z" Jan 27 18:44:08 crc kubenswrapper[4853]: I0127 18:44:08.150537 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:44:08 crc kubenswrapper[4853]: I0127 18:44:08.150764 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:44:08 crc kubenswrapper[4853]: I0127 18:44:08.150904 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:44:08 crc kubenswrapper[4853]: I0127 18:44:08.151022 4853 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeNotReady" Jan 27 18:44:08 crc kubenswrapper[4853]: I0127 18:44:08.151164 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:44:08Z","lastTransitionTime":"2026-01-27T18:44:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:44:08 crc kubenswrapper[4853]: I0127 18:44:08.192976 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7x9tl" podStartSLOduration=65.192959349 podStartE2EDuration="1m5.192959349s" podCreationTimestamp="2026-01-27 18:43:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:44:08.192022982 +0000 UTC m=+90.654565875" watchObservedRunningTime="2026-01-27 18:44:08.192959349 +0000 UTC m=+90.655502242" Jan 27 18:44:08 crc kubenswrapper[4853]: I0127 18:44:08.253689 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:44:08 crc kubenswrapper[4853]: I0127 18:44:08.254018 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:44:08 crc kubenswrapper[4853]: I0127 18:44:08.254209 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:44:08 crc kubenswrapper[4853]: I0127 18:44:08.254354 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:44:08 crc kubenswrapper[4853]: I0127 18:44:08.254472 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:44:08Z","lastTransitionTime":"2026-01-27T18:44:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:44:08 crc kubenswrapper[4853]: I0127 18:44:08.257656 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-w4d5n" podStartSLOduration=65.257638696 podStartE2EDuration="1m5.257638696s" podCreationTimestamp="2026-01-27 18:43:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:44:08.257279786 +0000 UTC m=+90.719822679" watchObservedRunningTime="2026-01-27 18:44:08.257638696 +0000 UTC m=+90.720181589" Jan 27 18:44:08 crc kubenswrapper[4853]: I0127 18:44:08.275716 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-ght98" podStartSLOduration=65.275700618 podStartE2EDuration="1m5.275700618s" podCreationTimestamp="2026-01-27 18:43:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:44:08.274678438 +0000 UTC m=+90.737221321" watchObservedRunningTime="2026-01-27 18:44:08.275700618 +0000 UTC m=+90.738243501" Jan 27 18:44:08 crc kubenswrapper[4853]: I0127 18:44:08.293439 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=69.293413999 podStartE2EDuration="1m9.293413999s" podCreationTimestamp="2026-01-27 18:42:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:44:08.292821732 +0000 UTC m=+90.755364625" watchObservedRunningTime="2026-01-27 18:44:08.293413999 +0000 UTC m=+90.755956892" Jan 27 18:44:08 crc kubenswrapper[4853]: I0127 18:44:08.330152 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=39.330107829 podStartE2EDuration="39.330107829s" podCreationTimestamp="2026-01-27 18:43:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:44:08.32981159 +0000 UTC m=+90.792354483" watchObservedRunningTime="2026-01-27 18:44:08.330107829 +0000 UTC m=+90.792650722" Jan 27 18:44:08 crc kubenswrapper[4853]: I0127 18:44:08.330413 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=71.330403707 podStartE2EDuration="1m11.330403707s" podCreationTimestamp="2026-01-27 18:42:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:44:08.318621507 +0000 UTC m=+90.781164390" watchObservedRunningTime="2026-01-27 18:44:08.330403707 +0000 UTC m=+90.792946640" Jan 27 18:44:08 crc kubenswrapper[4853]: I0127 18:44:08.340146 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-l59xt" podStartSLOduration=65.340111547 podStartE2EDuration="1m5.340111547s" podCreationTimestamp="2026-01-27 18:43:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:44:08.339792318 +0000 UTC m=+90.802335241" watchObservedRunningTime="2026-01-27 18:44:08.340111547 +0000 UTC m=+90.802654430" Jan 27 18:44:08 crc kubenswrapper[4853]: I0127 18:44:08.357306 4853 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:44:08 crc kubenswrapper[4853]: I0127 18:44:08.357347 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:44:08 crc kubenswrapper[4853]: I0127 18:44:08.357359 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:44:08 crc kubenswrapper[4853]: I0127 18:44:08.357374 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:44:08 crc kubenswrapper[4853]: I0127 18:44:08.357385 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:44:08Z","lastTransitionTime":"2026-01-27T18:44:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:44:08 crc kubenswrapper[4853]: I0127 18:44:08.418068 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-2hzcp" podStartSLOduration=65.418049978 podStartE2EDuration="1m5.418049978s" podCreationTimestamp="2026-01-27 18:43:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:44:08.416717809 +0000 UTC m=+90.879260692" watchObservedRunningTime="2026-01-27 18:44:08.418049978 +0000 UTC m=+90.880592871" Jan 27 18:44:08 crc kubenswrapper[4853]: I0127 18:44:08.428691 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-6gqj2" podStartSLOduration=65.428679314 podStartE2EDuration="1m5.428679314s" podCreationTimestamp="2026-01-27 18:43:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:44:08.4285292 +0000 UTC m=+90.891072083" watchObservedRunningTime="2026-01-27 18:44:08.428679314 +0000 UTC m=+90.891222217" Jan 27 18:44:08 crc kubenswrapper[4853]: I0127 18:44:08.459631 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:44:08 crc kubenswrapper[4853]: I0127 18:44:08.459673 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:44:08 crc kubenswrapper[4853]: I0127 18:44:08.459681 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:44:08 crc kubenswrapper[4853]: I0127 18:44:08.459697 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:44:08 crc kubenswrapper[4853]: I0127 18:44:08.459705 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:44:08Z","lastTransitionTime":"2026-01-27T18:44:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:44:08 crc kubenswrapper[4853]: I0127 18:44:08.562503 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:44:08 crc kubenswrapper[4853]: I0127 18:44:08.562558 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:44:08 crc kubenswrapper[4853]: I0127 18:44:08.562572 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:44:08 crc kubenswrapper[4853]: I0127 18:44:08.562590 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:44:08 crc kubenswrapper[4853]: I0127 18:44:08.562604 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:44:08Z","lastTransitionTime":"2026-01-27T18:44:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:44:08 crc kubenswrapper[4853]: I0127 18:44:08.663884 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:44:08 crc kubenswrapper[4853]: I0127 18:44:08.663917 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:44:08 crc kubenswrapper[4853]: I0127 18:44:08.663935 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:44:08 crc kubenswrapper[4853]: I0127 18:44:08.663950 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:44:08 crc kubenswrapper[4853]: I0127 18:44:08.663961 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:44:08Z","lastTransitionTime":"2026-01-27T18:44:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:44:08 crc kubenswrapper[4853]: I0127 18:44:08.766939 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:44:08 crc kubenswrapper[4853]: I0127 18:44:08.767003 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:44:08 crc kubenswrapper[4853]: I0127 18:44:08.767019 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:44:08 crc kubenswrapper[4853]: I0127 18:44:08.767043 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:44:08 crc kubenswrapper[4853]: I0127 18:44:08.767062 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:44:08Z","lastTransitionTime":"2026-01-27T18:44:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:44:08 crc kubenswrapper[4853]: I0127 18:44:08.869729 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:44:08 crc kubenswrapper[4853]: I0127 18:44:08.869793 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:44:08 crc kubenswrapper[4853]: I0127 18:44:08.869817 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:44:08 crc kubenswrapper[4853]: I0127 18:44:08.869850 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:44:08 crc kubenswrapper[4853]: I0127 18:44:08.869875 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:44:08Z","lastTransitionTime":"2026-01-27T18:44:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:44:08 crc kubenswrapper[4853]: I0127 18:44:08.973036 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:44:08 crc kubenswrapper[4853]: I0127 18:44:08.973159 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:44:08 crc kubenswrapper[4853]: I0127 18:44:08.973184 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:44:08 crc kubenswrapper[4853]: I0127 18:44:08.973210 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:44:08 crc kubenswrapper[4853]: I0127 18:44:08.973228 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:44:08Z","lastTransitionTime":"2026-01-27T18:44:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:44:09 crc kubenswrapper[4853]: I0127 18:44:09.076488 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:44:09 crc kubenswrapper[4853]: I0127 18:44:09.076594 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:44:09 crc kubenswrapper[4853]: I0127 18:44:09.076613 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:44:09 crc kubenswrapper[4853]: I0127 18:44:09.076640 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:44:09 crc kubenswrapper[4853]: I0127 18:44:09.076660 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:44:09Z","lastTransitionTime":"2026-01-27T18:44:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:44:09 crc kubenswrapper[4853]: I0127 18:44:09.112633 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 18:44:09 crc kubenswrapper[4853]: I0127 18:44:09.112745 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wdzg4" Jan 27 18:44:09 crc kubenswrapper[4853]: I0127 18:44:09.112670 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 18:44:09 crc kubenswrapper[4853]: I0127 18:44:09.112633 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 18:44:09 crc kubenswrapper[4853]: E0127 18:44:09.112923 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 18:44:09 crc kubenswrapper[4853]: E0127 18:44:09.113057 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-wdzg4" podUID="29407244-fbfe-4d37-a33e-7d59df1c22fd" Jan 27 18:44:09 crc kubenswrapper[4853]: E0127 18:44:09.113204 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 18:44:09 crc kubenswrapper[4853]: E0127 18:44:09.113375 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 18:44:09 crc kubenswrapper[4853]: I0127 18:44:09.127826 4853 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-18 06:46:33.980572212 +0000 UTC Jan 27 18:44:09 crc kubenswrapper[4853]: I0127 18:44:09.179216 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:44:09 crc kubenswrapper[4853]: I0127 18:44:09.179269 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:44:09 crc kubenswrapper[4853]: I0127 18:44:09.179279 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:44:09 crc kubenswrapper[4853]: I0127 18:44:09.179297 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:44:09 crc kubenswrapper[4853]: I0127 18:44:09.179309 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:44:09Z","lastTransitionTime":"2026-01-27T18:44:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:44:09 crc kubenswrapper[4853]: I0127 18:44:09.282554 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:44:09 crc kubenswrapper[4853]: I0127 18:44:09.282610 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:44:09 crc kubenswrapper[4853]: I0127 18:44:09.282621 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:44:09 crc kubenswrapper[4853]: I0127 18:44:09.282644 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:44:09 crc kubenswrapper[4853]: I0127 18:44:09.282655 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:44:09Z","lastTransitionTime":"2026-01-27T18:44:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:44:09 crc kubenswrapper[4853]: I0127 18:44:09.385784 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:44:09 crc kubenswrapper[4853]: I0127 18:44:09.385914 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:44:09 crc kubenswrapper[4853]: I0127 18:44:09.385934 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:44:09 crc kubenswrapper[4853]: I0127 18:44:09.385971 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:44:09 crc kubenswrapper[4853]: I0127 18:44:09.385998 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:44:09Z","lastTransitionTime":"2026-01-27T18:44:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:44:09 crc kubenswrapper[4853]: I0127 18:44:09.490420 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:44:09 crc kubenswrapper[4853]: I0127 18:44:09.490467 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:44:09 crc kubenswrapper[4853]: I0127 18:44:09.490480 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:44:09 crc kubenswrapper[4853]: I0127 18:44:09.490496 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:44:09 crc kubenswrapper[4853]: I0127 18:44:09.490508 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:44:09Z","lastTransitionTime":"2026-01-27T18:44:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:44:09 crc kubenswrapper[4853]: I0127 18:44:09.593309 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:44:09 crc kubenswrapper[4853]: I0127 18:44:09.593355 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:44:09 crc kubenswrapper[4853]: I0127 18:44:09.593366 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:44:09 crc kubenswrapper[4853]: I0127 18:44:09.593387 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:44:09 crc kubenswrapper[4853]: I0127 18:44:09.593404 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:44:09Z","lastTransitionTime":"2026-01-27T18:44:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:44:09 crc kubenswrapper[4853]: I0127 18:44:09.695710 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:44:09 crc kubenswrapper[4853]: I0127 18:44:09.695777 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:44:09 crc kubenswrapper[4853]: I0127 18:44:09.695789 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:44:09 crc kubenswrapper[4853]: I0127 18:44:09.695803 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:44:09 crc kubenswrapper[4853]: I0127 18:44:09.695813 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:44:09Z","lastTransitionTime":"2026-01-27T18:44:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:44:09 crc kubenswrapper[4853]: I0127 18:44:09.798347 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:44:09 crc kubenswrapper[4853]: I0127 18:44:09.798387 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:44:09 crc kubenswrapper[4853]: I0127 18:44:09.798398 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:44:09 crc kubenswrapper[4853]: I0127 18:44:09.798412 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:44:09 crc kubenswrapper[4853]: I0127 18:44:09.798422 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:44:09Z","lastTransitionTime":"2026-01-27T18:44:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:44:09 crc kubenswrapper[4853]: I0127 18:44:09.900521 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:44:09 crc kubenswrapper[4853]: I0127 18:44:09.900780 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:44:09 crc kubenswrapper[4853]: I0127 18:44:09.900850 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:44:09 crc kubenswrapper[4853]: I0127 18:44:09.900925 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:44:09 crc kubenswrapper[4853]: I0127 18:44:09.900989 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:44:09Z","lastTransitionTime":"2026-01-27T18:44:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:44:10 crc kubenswrapper[4853]: I0127 18:44:10.003191 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:44:10 crc kubenswrapper[4853]: I0127 18:44:10.003231 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:44:10 crc kubenswrapper[4853]: I0127 18:44:10.003240 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:44:10 crc kubenswrapper[4853]: I0127 18:44:10.003254 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:44:10 crc kubenswrapper[4853]: I0127 18:44:10.003263 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:44:10Z","lastTransitionTime":"2026-01-27T18:44:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:44:10 crc kubenswrapper[4853]: I0127 18:44:10.105406 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:44:10 crc kubenswrapper[4853]: I0127 18:44:10.105441 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:44:10 crc kubenswrapper[4853]: I0127 18:44:10.105451 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:44:10 crc kubenswrapper[4853]: I0127 18:44:10.105465 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:44:10 crc kubenswrapper[4853]: I0127 18:44:10.105474 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:44:10Z","lastTransitionTime":"2026-01-27T18:44:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:44:10 crc kubenswrapper[4853]: I0127 18:44:10.128746 4853 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-15 10:30:30.018530812 +0000 UTC Jan 27 18:44:10 crc kubenswrapper[4853]: I0127 18:44:10.207873 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:44:10 crc kubenswrapper[4853]: I0127 18:44:10.208162 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:44:10 crc kubenswrapper[4853]: I0127 18:44:10.208271 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:44:10 crc kubenswrapper[4853]: I0127 18:44:10.208369 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:44:10 crc kubenswrapper[4853]: I0127 18:44:10.208457 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:44:10Z","lastTransitionTime":"2026-01-27T18:44:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:44:10 crc kubenswrapper[4853]: I0127 18:44:10.311167 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:44:10 crc kubenswrapper[4853]: I0127 18:44:10.311225 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:44:10 crc kubenswrapper[4853]: I0127 18:44:10.311242 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:44:10 crc kubenswrapper[4853]: I0127 18:44:10.311261 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:44:10 crc kubenswrapper[4853]: I0127 18:44:10.311276 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:44:10Z","lastTransitionTime":"2026-01-27T18:44:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:44:10 crc kubenswrapper[4853]: I0127 18:44:10.413463 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:44:10 crc kubenswrapper[4853]: I0127 18:44:10.413497 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:44:10 crc kubenswrapper[4853]: I0127 18:44:10.413507 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:44:10 crc kubenswrapper[4853]: I0127 18:44:10.413522 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:44:10 crc kubenswrapper[4853]: I0127 18:44:10.413532 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:44:10Z","lastTransitionTime":"2026-01-27T18:44:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:44:10 crc kubenswrapper[4853]: I0127 18:44:10.516091 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:44:10 crc kubenswrapper[4853]: I0127 18:44:10.516468 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:44:10 crc kubenswrapper[4853]: I0127 18:44:10.516614 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:44:10 crc kubenswrapper[4853]: I0127 18:44:10.516761 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:44:10 crc kubenswrapper[4853]: I0127 18:44:10.516891 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:44:10Z","lastTransitionTime":"2026-01-27T18:44:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:44:10 crc kubenswrapper[4853]: I0127 18:44:10.619470 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:44:10 crc kubenswrapper[4853]: I0127 18:44:10.619542 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:44:10 crc kubenswrapper[4853]: I0127 18:44:10.619566 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:44:10 crc kubenswrapper[4853]: I0127 18:44:10.619594 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:44:10 crc kubenswrapper[4853]: I0127 18:44:10.619620 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:44:10Z","lastTransitionTime":"2026-01-27T18:44:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:44:10 crc kubenswrapper[4853]: I0127 18:44:10.722284 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:44:10 crc kubenswrapper[4853]: I0127 18:44:10.722321 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:44:10 crc kubenswrapper[4853]: I0127 18:44:10.722332 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:44:10 crc kubenswrapper[4853]: I0127 18:44:10.722346 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:44:10 crc kubenswrapper[4853]: I0127 18:44:10.722357 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:44:10Z","lastTransitionTime":"2026-01-27T18:44:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:44:10 crc kubenswrapper[4853]: I0127 18:44:10.824780 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:44:10 crc kubenswrapper[4853]: I0127 18:44:10.824825 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:44:10 crc kubenswrapper[4853]: I0127 18:44:10.824842 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:44:10 crc kubenswrapper[4853]: I0127 18:44:10.824864 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:44:10 crc kubenswrapper[4853]: I0127 18:44:10.824876 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:44:10Z","lastTransitionTime":"2026-01-27T18:44:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:44:10 crc kubenswrapper[4853]: I0127 18:44:10.939860 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:44:10 crc kubenswrapper[4853]: I0127 18:44:10.939897 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:44:10 crc kubenswrapper[4853]: I0127 18:44:10.939906 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:44:10 crc kubenswrapper[4853]: I0127 18:44:10.939919 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:44:10 crc kubenswrapper[4853]: I0127 18:44:10.939929 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:44:10Z","lastTransitionTime":"2026-01-27T18:44:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:44:11 crc kubenswrapper[4853]: I0127 18:44:11.042981 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:44:11 crc kubenswrapper[4853]: I0127 18:44:11.043045 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:44:11 crc kubenswrapper[4853]: I0127 18:44:11.043061 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:44:11 crc kubenswrapper[4853]: I0127 18:44:11.043085 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:44:11 crc kubenswrapper[4853]: I0127 18:44:11.043102 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:44:11Z","lastTransitionTime":"2026-01-27T18:44:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:44:11 crc kubenswrapper[4853]: I0127 18:44:11.111999 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 18:44:11 crc kubenswrapper[4853]: I0127 18:44:11.112191 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wdzg4" Jan 27 18:44:11 crc kubenswrapper[4853]: E0127 18:44:11.112340 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 18:44:11 crc kubenswrapper[4853]: I0127 18:44:11.112392 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 18:44:11 crc kubenswrapper[4853]: I0127 18:44:11.112442 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 18:44:11 crc kubenswrapper[4853]: E0127 18:44:11.112596 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 18:44:11 crc kubenswrapper[4853]: E0127 18:44:11.112643 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 18:44:11 crc kubenswrapper[4853]: E0127 18:44:11.112818 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-wdzg4" podUID="29407244-fbfe-4d37-a33e-7d59df1c22fd" Jan 27 18:44:11 crc kubenswrapper[4853]: I0127 18:44:11.129239 4853 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-06 18:21:51.805497651 +0000 UTC Jan 27 18:44:11 crc kubenswrapper[4853]: I0127 18:44:11.146850 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:44:11 crc kubenswrapper[4853]: I0127 18:44:11.146908 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:44:11 crc kubenswrapper[4853]: I0127 18:44:11.146926 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:44:11 crc kubenswrapper[4853]: I0127 18:44:11.146949 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:44:11 crc kubenswrapper[4853]: I0127 18:44:11.146966 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:44:11Z","lastTransitionTime":"2026-01-27T18:44:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:44:11 crc kubenswrapper[4853]: I0127 18:44:11.250381 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:44:11 crc kubenswrapper[4853]: I0127 18:44:11.250438 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:44:11 crc kubenswrapper[4853]: I0127 18:44:11.250455 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:44:11 crc kubenswrapper[4853]: I0127 18:44:11.250480 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:44:11 crc kubenswrapper[4853]: I0127 18:44:11.250498 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:44:11Z","lastTransitionTime":"2026-01-27T18:44:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:44:11 crc kubenswrapper[4853]: I0127 18:44:11.354191 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:44:11 crc kubenswrapper[4853]: I0127 18:44:11.354254 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:44:11 crc kubenswrapper[4853]: I0127 18:44:11.354271 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:44:11 crc kubenswrapper[4853]: I0127 18:44:11.354297 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:44:11 crc kubenswrapper[4853]: I0127 18:44:11.354325 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:44:11Z","lastTransitionTime":"2026-01-27T18:44:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:44:11 crc kubenswrapper[4853]: I0127 18:44:11.457097 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:44:11 crc kubenswrapper[4853]: I0127 18:44:11.457157 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:44:11 crc kubenswrapper[4853]: I0127 18:44:11.457170 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:44:11 crc kubenswrapper[4853]: I0127 18:44:11.457187 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:44:11 crc kubenswrapper[4853]: I0127 18:44:11.457197 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:44:11Z","lastTransitionTime":"2026-01-27T18:44:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:44:11 crc kubenswrapper[4853]: I0127 18:44:11.559402 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:44:11 crc kubenswrapper[4853]: I0127 18:44:11.559437 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:44:11 crc kubenswrapper[4853]: I0127 18:44:11.559446 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:44:11 crc kubenswrapper[4853]: I0127 18:44:11.559458 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:44:11 crc kubenswrapper[4853]: I0127 18:44:11.559466 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:44:11Z","lastTransitionTime":"2026-01-27T18:44:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:44:11 crc kubenswrapper[4853]: I0127 18:44:11.661931 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:44:11 crc kubenswrapper[4853]: I0127 18:44:11.661976 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:44:11 crc kubenswrapper[4853]: I0127 18:44:11.661985 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:44:11 crc kubenswrapper[4853]: I0127 18:44:11.662000 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:44:11 crc kubenswrapper[4853]: I0127 18:44:11.662010 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:44:11Z","lastTransitionTime":"2026-01-27T18:44:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:44:11 crc kubenswrapper[4853]: I0127 18:44:11.764611 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:44:11 crc kubenswrapper[4853]: I0127 18:44:11.764663 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:44:11 crc kubenswrapper[4853]: I0127 18:44:11.764678 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:44:11 crc kubenswrapper[4853]: I0127 18:44:11.764695 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:44:11 crc kubenswrapper[4853]: I0127 18:44:11.764707 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:44:11Z","lastTransitionTime":"2026-01-27T18:44:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:44:11 crc kubenswrapper[4853]: I0127 18:44:11.867077 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:44:11 crc kubenswrapper[4853]: I0127 18:44:11.867154 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:44:11 crc kubenswrapper[4853]: I0127 18:44:11.867181 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:44:11 crc kubenswrapper[4853]: I0127 18:44:11.867203 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:44:11 crc kubenswrapper[4853]: I0127 18:44:11.867218 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:44:11Z","lastTransitionTime":"2026-01-27T18:44:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:44:11 crc kubenswrapper[4853]: I0127 18:44:11.969791 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:44:11 crc kubenswrapper[4853]: I0127 18:44:11.969826 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:44:11 crc kubenswrapper[4853]: I0127 18:44:11.969834 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:44:11 crc kubenswrapper[4853]: I0127 18:44:11.969849 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:44:11 crc kubenswrapper[4853]: I0127 18:44:11.969859 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:44:11Z","lastTransitionTime":"2026-01-27T18:44:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:44:12 crc kubenswrapper[4853]: I0127 18:44:12.072209 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:44:12 crc kubenswrapper[4853]: I0127 18:44:12.072262 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:44:12 crc kubenswrapper[4853]: I0127 18:44:12.072278 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:44:12 crc kubenswrapper[4853]: I0127 18:44:12.072299 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:44:12 crc kubenswrapper[4853]: I0127 18:44:12.072313 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:44:12Z","lastTransitionTime":"2026-01-27T18:44:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:44:12 crc kubenswrapper[4853]: I0127 18:44:12.130045 4853 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-23 18:41:00.629129407 +0000 UTC Jan 27 18:44:12 crc kubenswrapper[4853]: I0127 18:44:12.175157 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:44:12 crc kubenswrapper[4853]: I0127 18:44:12.175195 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:44:12 crc kubenswrapper[4853]: I0127 18:44:12.175205 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:44:12 crc kubenswrapper[4853]: I0127 18:44:12.175219 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:44:12 crc kubenswrapper[4853]: I0127 18:44:12.175229 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:44:12Z","lastTransitionTime":"2026-01-27T18:44:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:44:12 crc kubenswrapper[4853]: I0127 18:44:12.277159 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:44:12 crc kubenswrapper[4853]: I0127 18:44:12.277200 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:44:12 crc kubenswrapper[4853]: I0127 18:44:12.277216 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:44:12 crc kubenswrapper[4853]: I0127 18:44:12.277233 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:44:12 crc kubenswrapper[4853]: I0127 18:44:12.277245 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:44:12Z","lastTransitionTime":"2026-01-27T18:44:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:44:12 crc kubenswrapper[4853]: I0127 18:44:12.380139 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:44:12 crc kubenswrapper[4853]: I0127 18:44:12.380191 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:44:12 crc kubenswrapper[4853]: I0127 18:44:12.380200 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:44:12 crc kubenswrapper[4853]: I0127 18:44:12.380214 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:44:12 crc kubenswrapper[4853]: I0127 18:44:12.380223 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:44:12Z","lastTransitionTime":"2026-01-27T18:44:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:44:12 crc kubenswrapper[4853]: I0127 18:44:12.483682 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:44:12 crc kubenswrapper[4853]: I0127 18:44:12.483753 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:44:12 crc kubenswrapper[4853]: I0127 18:44:12.483767 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:44:12 crc kubenswrapper[4853]: I0127 18:44:12.483785 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:44:12 crc kubenswrapper[4853]: I0127 18:44:12.483796 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:44:12Z","lastTransitionTime":"2026-01-27T18:44:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:44:12 crc kubenswrapper[4853]: I0127 18:44:12.587014 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:44:12 crc kubenswrapper[4853]: I0127 18:44:12.587064 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:44:12 crc kubenswrapper[4853]: I0127 18:44:12.587075 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:44:12 crc kubenswrapper[4853]: I0127 18:44:12.587093 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:44:12 crc kubenswrapper[4853]: I0127 18:44:12.587106 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:44:12Z","lastTransitionTime":"2026-01-27T18:44:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:44:12 crc kubenswrapper[4853]: I0127 18:44:12.689251 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:44:12 crc kubenswrapper[4853]: I0127 18:44:12.689331 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:44:12 crc kubenswrapper[4853]: I0127 18:44:12.689397 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:44:12 crc kubenswrapper[4853]: I0127 18:44:12.689461 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:44:12 crc kubenswrapper[4853]: I0127 18:44:12.689485 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:44:12Z","lastTransitionTime":"2026-01-27T18:44:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:44:12 crc kubenswrapper[4853]: I0127 18:44:12.792411 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:44:12 crc kubenswrapper[4853]: I0127 18:44:12.792511 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:44:12 crc kubenswrapper[4853]: I0127 18:44:12.792561 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:44:12 crc kubenswrapper[4853]: I0127 18:44:12.792593 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:44:12 crc kubenswrapper[4853]: I0127 18:44:12.792611 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:44:12Z","lastTransitionTime":"2026-01-27T18:44:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:44:12 crc kubenswrapper[4853]: I0127 18:44:12.895422 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:44:12 crc kubenswrapper[4853]: I0127 18:44:12.895499 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:44:12 crc kubenswrapper[4853]: I0127 18:44:12.895524 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:44:12 crc kubenswrapper[4853]: I0127 18:44:12.895553 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:44:12 crc kubenswrapper[4853]: I0127 18:44:12.895575 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:44:12Z","lastTransitionTime":"2026-01-27T18:44:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:44:12 crc kubenswrapper[4853]: I0127 18:44:12.998599 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:44:12 crc kubenswrapper[4853]: I0127 18:44:12.998646 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:44:12 crc kubenswrapper[4853]: I0127 18:44:12.998660 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:44:12 crc kubenswrapper[4853]: I0127 18:44:12.998677 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:44:12 crc kubenswrapper[4853]: I0127 18:44:12.998689 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:44:12Z","lastTransitionTime":"2026-01-27T18:44:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:44:13 crc kubenswrapper[4853]: I0127 18:44:13.101572 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:44:13 crc kubenswrapper[4853]: I0127 18:44:13.101617 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:44:13 crc kubenswrapper[4853]: I0127 18:44:13.101628 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:44:13 crc kubenswrapper[4853]: I0127 18:44:13.101644 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:44:13 crc kubenswrapper[4853]: I0127 18:44:13.101655 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:44:13Z","lastTransitionTime":"2026-01-27T18:44:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:44:13 crc kubenswrapper[4853]: I0127 18:44:13.112353 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wdzg4" Jan 27 18:44:13 crc kubenswrapper[4853]: I0127 18:44:13.112405 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 18:44:13 crc kubenswrapper[4853]: I0127 18:44:13.112491 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 18:44:13 crc kubenswrapper[4853]: I0127 18:44:13.112519 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 18:44:13 crc kubenswrapper[4853]: E0127 18:44:13.112718 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-wdzg4" podUID="29407244-fbfe-4d37-a33e-7d59df1c22fd" Jan 27 18:44:13 crc kubenswrapper[4853]: E0127 18:44:13.112859 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 18:44:13 crc kubenswrapper[4853]: E0127 18:44:13.112966 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 18:44:13 crc kubenswrapper[4853]: E0127 18:44:13.113197 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 18:44:13 crc kubenswrapper[4853]: I0127 18:44:13.113963 4853 scope.go:117] "RemoveContainer" containerID="e01e1cff07c3ff9a1112970e7831ca9dc51725bbe6dd330246fa3346bd8bb1ad" Jan 27 18:44:13 crc kubenswrapper[4853]: E0127 18:44:13.114164 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-hdtbk_openshift-ovn-kubernetes(ebbc7598-422a-43ad-ae98-88e57ec80b9c)\"" pod="openshift-ovn-kubernetes/ovnkube-node-hdtbk" podUID="ebbc7598-422a-43ad-ae98-88e57ec80b9c" Jan 27 18:44:13 crc kubenswrapper[4853]: I0127 18:44:13.131164 4853 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-27 06:36:24.924128271 +0000 UTC Jan 27 18:44:13 crc kubenswrapper[4853]: I0127 18:44:13.204661 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:44:13 crc kubenswrapper[4853]: I0127 18:44:13.204758 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:44:13 crc kubenswrapper[4853]: I0127 18:44:13.204776 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:44:13 crc kubenswrapper[4853]: I0127 18:44:13.204801 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:44:13 crc kubenswrapper[4853]: I0127 18:44:13.204820 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:44:13Z","lastTransitionTime":"2026-01-27T18:44:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration 
file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:44:13 crc kubenswrapper[4853]: I0127 18:44:13.308023 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:44:13 crc kubenswrapper[4853]: I0127 18:44:13.308087 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:44:13 crc kubenswrapper[4853]: I0127 18:44:13.308106 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:44:13 crc kubenswrapper[4853]: I0127 18:44:13.308162 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:44:13 crc kubenswrapper[4853]: I0127 18:44:13.308181 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:44:13Z","lastTransitionTime":"2026-01-27T18:44:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:44:13 crc kubenswrapper[4853]: I0127 18:44:13.410939 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:44:13 crc kubenswrapper[4853]: I0127 18:44:13.411043 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:44:13 crc kubenswrapper[4853]: I0127 18:44:13.411064 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:44:13 crc kubenswrapper[4853]: I0127 18:44:13.411093 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:44:13 crc kubenswrapper[4853]: I0127 18:44:13.411109 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:44:13Z","lastTransitionTime":"2026-01-27T18:44:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:44:13 crc kubenswrapper[4853]: I0127 18:44:13.513939 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:44:13 crc kubenswrapper[4853]: I0127 18:44:13.514307 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:44:13 crc kubenswrapper[4853]: I0127 18:44:13.514317 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:44:13 crc kubenswrapper[4853]: I0127 18:44:13.514333 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:44:13 crc kubenswrapper[4853]: I0127 18:44:13.514346 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:44:13Z","lastTransitionTime":"2026-01-27T18:44:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:44:13 crc kubenswrapper[4853]: I0127 18:44:13.617448 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:44:13 crc kubenswrapper[4853]: I0127 18:44:13.617494 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:44:13 crc kubenswrapper[4853]: I0127 18:44:13.617504 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:44:13 crc kubenswrapper[4853]: I0127 18:44:13.617520 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:44:13 crc kubenswrapper[4853]: I0127 18:44:13.617532 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:44:13Z","lastTransitionTime":"2026-01-27T18:44:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:44:13 crc kubenswrapper[4853]: I0127 18:44:13.721332 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:44:13 crc kubenswrapper[4853]: I0127 18:44:13.721403 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:44:13 crc kubenswrapper[4853]: I0127 18:44:13.721423 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:44:13 crc kubenswrapper[4853]: I0127 18:44:13.721450 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:44:13 crc kubenswrapper[4853]: I0127 18:44:13.721468 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:44:13Z","lastTransitionTime":"2026-01-27T18:44:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:44:13 crc kubenswrapper[4853]: I0127 18:44:13.823459 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:44:13 crc kubenswrapper[4853]: I0127 18:44:13.823576 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:44:13 crc kubenswrapper[4853]: I0127 18:44:13.823603 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:44:13 crc kubenswrapper[4853]: I0127 18:44:13.823633 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:44:13 crc kubenswrapper[4853]: I0127 18:44:13.823654 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:44:13Z","lastTransitionTime":"2026-01-27T18:44:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:44:13 crc kubenswrapper[4853]: I0127 18:44:13.926076 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:44:13 crc kubenswrapper[4853]: I0127 18:44:13.926186 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:44:13 crc kubenswrapper[4853]: I0127 18:44:13.926202 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:44:13 crc kubenswrapper[4853]: I0127 18:44:13.926220 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:44:13 crc kubenswrapper[4853]: I0127 18:44:13.926232 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:44:13Z","lastTransitionTime":"2026-01-27T18:44:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:44:14 crc kubenswrapper[4853]: I0127 18:44:14.028235 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:44:14 crc kubenswrapper[4853]: I0127 18:44:14.028281 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:44:14 crc kubenswrapper[4853]: I0127 18:44:14.028294 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:44:14 crc kubenswrapper[4853]: I0127 18:44:14.028313 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:44:14 crc kubenswrapper[4853]: I0127 18:44:14.028325 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:44:14Z","lastTransitionTime":"2026-01-27T18:44:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:44:14 crc kubenswrapper[4853]: I0127 18:44:14.130371 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:44:14 crc kubenswrapper[4853]: I0127 18:44:14.130415 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:44:14 crc kubenswrapper[4853]: I0127 18:44:14.130426 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:44:14 crc kubenswrapper[4853]: I0127 18:44:14.130440 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:44:14 crc kubenswrapper[4853]: I0127 18:44:14.130451 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:44:14Z","lastTransitionTime":"2026-01-27T18:44:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:44:14 crc kubenswrapper[4853]: I0127 18:44:14.131468 4853 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-22 15:15:23.816826759 +0000 UTC Jan 27 18:44:14 crc kubenswrapper[4853]: I0127 18:44:14.232779 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:44:14 crc kubenswrapper[4853]: I0127 18:44:14.232828 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:44:14 crc kubenswrapper[4853]: I0127 18:44:14.232843 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:44:14 crc kubenswrapper[4853]: I0127 18:44:14.232861 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:44:14 crc kubenswrapper[4853]: I0127 18:44:14.232873 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:44:14Z","lastTransitionTime":"2026-01-27T18:44:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:44:14 crc kubenswrapper[4853]: I0127 18:44:14.335564 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:44:14 crc kubenswrapper[4853]: I0127 18:44:14.335616 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:44:14 crc kubenswrapper[4853]: I0127 18:44:14.335627 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:44:14 crc kubenswrapper[4853]: I0127 18:44:14.335644 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:44:14 crc kubenswrapper[4853]: I0127 18:44:14.335654 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:44:14Z","lastTransitionTime":"2026-01-27T18:44:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:44:14 crc kubenswrapper[4853]: I0127 18:44:14.437617 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:44:14 crc kubenswrapper[4853]: I0127 18:44:14.437665 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:44:14 crc kubenswrapper[4853]: I0127 18:44:14.437678 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:44:14 crc kubenswrapper[4853]: I0127 18:44:14.437693 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:44:14 crc kubenswrapper[4853]: I0127 18:44:14.437703 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:44:14Z","lastTransitionTime":"2026-01-27T18:44:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:44:14 crc kubenswrapper[4853]: I0127 18:44:14.540138 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:44:14 crc kubenswrapper[4853]: I0127 18:44:14.540180 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:44:14 crc kubenswrapper[4853]: I0127 18:44:14.540190 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:44:14 crc kubenswrapper[4853]: I0127 18:44:14.540220 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:44:14 crc kubenswrapper[4853]: I0127 18:44:14.540230 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:44:14Z","lastTransitionTime":"2026-01-27T18:44:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:44:14 crc kubenswrapper[4853]: I0127 18:44:14.642756 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:44:14 crc kubenswrapper[4853]: I0127 18:44:14.642783 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:44:14 crc kubenswrapper[4853]: I0127 18:44:14.642791 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:44:14 crc kubenswrapper[4853]: I0127 18:44:14.642802 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:44:14 crc kubenswrapper[4853]: I0127 18:44:14.642811 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:44:14Z","lastTransitionTime":"2026-01-27T18:44:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:44:14 crc kubenswrapper[4853]: I0127 18:44:14.745062 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:44:14 crc kubenswrapper[4853]: I0127 18:44:14.745096 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:44:14 crc kubenswrapper[4853]: I0127 18:44:14.745104 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:44:14 crc kubenswrapper[4853]: I0127 18:44:14.745134 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:44:14 crc kubenswrapper[4853]: I0127 18:44:14.745146 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:44:14Z","lastTransitionTime":"2026-01-27T18:44:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:44:14 crc kubenswrapper[4853]: I0127 18:44:14.847003 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:44:14 crc kubenswrapper[4853]: I0127 18:44:14.847048 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:44:14 crc kubenswrapper[4853]: I0127 18:44:14.847058 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:44:14 crc kubenswrapper[4853]: I0127 18:44:14.847073 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:44:14 crc kubenswrapper[4853]: I0127 18:44:14.847084 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:44:14Z","lastTransitionTime":"2026-01-27T18:44:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:44:14 crc kubenswrapper[4853]: I0127 18:44:14.949774 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:44:14 crc kubenswrapper[4853]: I0127 18:44:14.949822 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:44:14 crc kubenswrapper[4853]: I0127 18:44:14.949834 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:44:14 crc kubenswrapper[4853]: I0127 18:44:14.949850 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:44:14 crc kubenswrapper[4853]: I0127 18:44:14.949862 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:44:14Z","lastTransitionTime":"2026-01-27T18:44:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:44:15 crc kubenswrapper[4853]: I0127 18:44:15.053088 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:44:15 crc kubenswrapper[4853]: I0127 18:44:15.053151 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:44:15 crc kubenswrapper[4853]: I0127 18:44:15.053162 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:44:15 crc kubenswrapper[4853]: I0127 18:44:15.053178 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:44:15 crc kubenswrapper[4853]: I0127 18:44:15.053188 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:44:15Z","lastTransitionTime":"2026-01-27T18:44:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:44:15 crc kubenswrapper[4853]: I0127 18:44:15.111640 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 18:44:15 crc kubenswrapper[4853]: I0127 18:44:15.111773 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 18:44:15 crc kubenswrapper[4853]: I0127 18:44:15.111832 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 18:44:15 crc kubenswrapper[4853]: E0127 18:44:15.112107 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 18:44:15 crc kubenswrapper[4853]: I0127 18:44:15.112209 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wdzg4" Jan 27 18:44:15 crc kubenswrapper[4853]: E0127 18:44:15.112410 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 18:44:15 crc kubenswrapper[4853]: E0127 18:44:15.112529 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 18:44:15 crc kubenswrapper[4853]: E0127 18:44:15.112729 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-wdzg4" podUID="29407244-fbfe-4d37-a33e-7d59df1c22fd" Jan 27 18:44:15 crc kubenswrapper[4853]: I0127 18:44:15.132069 4853 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-06 17:19:40.869574919 +0000 UTC Jan 27 18:44:15 crc kubenswrapper[4853]: I0127 18:44:15.155983 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:44:15 crc kubenswrapper[4853]: I0127 18:44:15.156268 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:44:15 crc kubenswrapper[4853]: I0127 18:44:15.156434 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:44:15 crc kubenswrapper[4853]: I0127 18:44:15.156549 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:44:15 crc kubenswrapper[4853]: I0127 18:44:15.156626 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:44:15Z","lastTransitionTime":"2026-01-27T18:44:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:44:15 crc kubenswrapper[4853]: I0127 18:44:15.259566 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:44:15 crc kubenswrapper[4853]: I0127 18:44:15.259613 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:44:15 crc kubenswrapper[4853]: I0127 18:44:15.259624 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:44:15 crc kubenswrapper[4853]: I0127 18:44:15.259647 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:44:15 crc kubenswrapper[4853]: I0127 18:44:15.259660 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:44:15Z","lastTransitionTime":"2026-01-27T18:44:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:44:15 crc kubenswrapper[4853]: I0127 18:44:15.362555 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:44:15 crc kubenswrapper[4853]: I0127 18:44:15.362959 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:44:15 crc kubenswrapper[4853]: I0127 18:44:15.363169 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:44:15 crc kubenswrapper[4853]: I0127 18:44:15.363340 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:44:15 crc kubenswrapper[4853]: I0127 18:44:15.363500 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:44:15Z","lastTransitionTime":"2026-01-27T18:44:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:44:15 crc kubenswrapper[4853]: I0127 18:44:15.467689 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:44:15 crc kubenswrapper[4853]: I0127 18:44:15.467761 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:44:15 crc kubenswrapper[4853]: I0127 18:44:15.467782 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:44:15 crc kubenswrapper[4853]: I0127 18:44:15.467809 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:44:15 crc kubenswrapper[4853]: I0127 18:44:15.467842 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:44:15Z","lastTransitionTime":"2026-01-27T18:44:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:44:15 crc kubenswrapper[4853]: I0127 18:44:15.574220 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:44:15 crc kubenswrapper[4853]: I0127 18:44:15.574301 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:44:15 crc kubenswrapper[4853]: I0127 18:44:15.574322 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:44:15 crc kubenswrapper[4853]: I0127 18:44:15.574346 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:44:15 crc kubenswrapper[4853]: I0127 18:44:15.574367 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:44:15Z","lastTransitionTime":"2026-01-27T18:44:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:44:15 crc kubenswrapper[4853]: I0127 18:44:15.588260 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:44:15 crc kubenswrapper[4853]: I0127 18:44:15.588309 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:44:15 crc kubenswrapper[4853]: I0127 18:44:15.588321 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:44:15 crc kubenswrapper[4853]: I0127 18:44:15.588339 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:44:15 crc kubenswrapper[4853]: I0127 18:44:15.588352 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:44:15Z","lastTransitionTime":"2026-01-27T18:44:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:44:15 crc kubenswrapper[4853]: I0127 18:44:15.632407 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-5pphh"] Jan 27 18:44:15 crc kubenswrapper[4853]: I0127 18:44:15.632772 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-5pphh" Jan 27 18:44:15 crc kubenswrapper[4853]: I0127 18:44:15.634914 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Jan 27 18:44:15 crc kubenswrapper[4853]: I0127 18:44:15.635276 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Jan 27 18:44:15 crc kubenswrapper[4853]: I0127 18:44:15.635379 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Jan 27 18:44:15 crc kubenswrapper[4853]: I0127 18:44:15.635541 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Jan 27 18:44:15 crc kubenswrapper[4853]: I0127 18:44:15.650177 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=28.650157187 podStartE2EDuration="28.650157187s" podCreationTimestamp="2026-01-27 18:43:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:44:15.649486408 +0000 UTC m=+98.112029291" watchObservedRunningTime="2026-01-27 18:44:15.650157187 +0000 UTC m=+98.112700090" Jan 27 18:44:15 crc kubenswrapper[4853]: I0127 18:44:15.672053 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=78.672035059 podStartE2EDuration="1m18.672035059s" podCreationTimestamp="2026-01-27 18:42:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:44:15.663158892 +0000 UTC m=+98.125701785" watchObservedRunningTime="2026-01-27 18:44:15.672035059 +0000 UTC m=+98.134577942" Jan 27 18:44:15 crc kubenswrapper[4853]: I0127 18:44:15.736688 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/84083597-0adb-4d2f-9ea6-97d5fded1944-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-5pphh\" (UID: \"84083597-0adb-4d2f-9ea6-97d5fded1944\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-5pphh" Jan 27 18:44:15 crc kubenswrapper[4853]: I0127 18:44:15.736725 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/84083597-0adb-4d2f-9ea6-97d5fded1944-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-5pphh\" (UID: \"84083597-0adb-4d2f-9ea6-97d5fded1944\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-5pphh" Jan 27 18:44:15 crc kubenswrapper[4853]: I0127 18:44:15.736747 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/84083597-0adb-4d2f-9ea6-97d5fded1944-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-5pphh\" (UID: \"84083597-0adb-4d2f-9ea6-97d5fded1944\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-5pphh" Jan 27 18:44:15 crc kubenswrapper[4853]: I0127 18:44:15.736765 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/84083597-0adb-4d2f-9ea6-97d5fded1944-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-5pphh\" (UID: \"84083597-0adb-4d2f-9ea6-97d5fded1944\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-5pphh" Jan 27 18:44:15 crc kubenswrapper[4853]: I0127 18:44:15.736840 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/84083597-0adb-4d2f-9ea6-97d5fded1944-service-ca\") pod \"cluster-version-operator-5c965bbfc6-5pphh\" (UID: \"84083597-0adb-4d2f-9ea6-97d5fded1944\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-5pphh" Jan 27 18:44:15 crc kubenswrapper[4853]: I0127 18:44:15.838771 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/84083597-0adb-4d2f-9ea6-97d5fded1944-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-5pphh\" (UID: \"84083597-0adb-4d2f-9ea6-97d5fded1944\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-5pphh" Jan 27 18:44:15 crc kubenswrapper[4853]: I0127 18:44:15.839261 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/84083597-0adb-4d2f-9ea6-97d5fded1944-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-5pphh\" (UID: \"84083597-0adb-4d2f-9ea6-97d5fded1944\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-5pphh" Jan 27 18:44:15 crc kubenswrapper[4853]: I0127 18:44:15.839427 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/84083597-0adb-4d2f-9ea6-97d5fded1944-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-5pphh\" (UID: \"84083597-0adb-4d2f-9ea6-97d5fded1944\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-5pphh" Jan 27 18:44:15 crc kubenswrapper[4853]: I0127 18:44:15.839547 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/84083597-0adb-4d2f-9ea6-97d5fded1944-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-5pphh\" (UID: \"84083597-0adb-4d2f-9ea6-97d5fded1944\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-5pphh" Jan 27 18:44:15 crc kubenswrapper[4853]: I0127 18:44:15.839016 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/84083597-0adb-4d2f-9ea6-97d5fded1944-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-5pphh\" (UID: \"84083597-0adb-4d2f-9ea6-97d5fded1944\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-5pphh" Jan 27 18:44:15 crc kubenswrapper[4853]: I0127 18:44:15.839656 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/84083597-0adb-4d2f-9ea6-97d5fded1944-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-5pphh\" (UID: \"84083597-0adb-4d2f-9ea6-97d5fded1944\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-5pphh" Jan 27 18:44:15 crc kubenswrapper[4853]: I0127 18:44:15.839902 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: 
\"kubernetes.io/configmap/84083597-0adb-4d2f-9ea6-97d5fded1944-service-ca\") pod \"cluster-version-operator-5c965bbfc6-5pphh\" (UID: \"84083597-0adb-4d2f-9ea6-97d5fded1944\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-5pphh" Jan 27 18:44:15 crc kubenswrapper[4853]: I0127 18:44:15.840856 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/84083597-0adb-4d2f-9ea6-97d5fded1944-service-ca\") pod \"cluster-version-operator-5c965bbfc6-5pphh\" (UID: \"84083597-0adb-4d2f-9ea6-97d5fded1944\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-5pphh" Jan 27 18:44:15 crc kubenswrapper[4853]: I0127 18:44:15.847938 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/84083597-0adb-4d2f-9ea6-97d5fded1944-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-5pphh\" (UID: \"84083597-0adb-4d2f-9ea6-97d5fded1944\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-5pphh" Jan 27 18:44:15 crc kubenswrapper[4853]: I0127 18:44:15.862927 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/84083597-0adb-4d2f-9ea6-97d5fded1944-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-5pphh\" (UID: \"84083597-0adb-4d2f-9ea6-97d5fded1944\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-5pphh" Jan 27 18:44:15 crc kubenswrapper[4853]: I0127 18:44:15.947426 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-5pphh" Jan 27 18:44:16 crc kubenswrapper[4853]: I0127 18:44:16.132925 4853 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-18 14:23:59.9882151 +0000 UTC Jan 27 18:44:16 crc kubenswrapper[4853]: I0127 18:44:16.133032 4853 certificate_manager.go:356] kubernetes.io/kubelet-serving: Rotating certificates Jan 27 18:44:16 crc kubenswrapper[4853]: I0127 18:44:16.145295 4853 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Jan 27 18:44:16 crc kubenswrapper[4853]: I0127 18:44:16.696061 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-5pphh" event={"ID":"84083597-0adb-4d2f-9ea6-97d5fded1944","Type":"ContainerStarted","Data":"e537d530c7480e6ab5ef46204d5b3c89f8b0cdfb6dacbcb386c2b23be0394632"} Jan 27 18:44:16 crc kubenswrapper[4853]: I0127 18:44:16.696112 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-5pphh" event={"ID":"84083597-0adb-4d2f-9ea6-97d5fded1944","Type":"ContainerStarted","Data":"73a189c8bd49d81dde7bbe523bc5fd1ccf1a4c642bac349c5ad30f354d530fa7"} Jan 27 18:44:16 crc kubenswrapper[4853]: I0127 18:44:16.712328 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-5pphh" podStartSLOduration=73.712299813 podStartE2EDuration="1m13.712299813s" podCreationTimestamp="2026-01-27 18:43:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:44:16.709503652 +0000 UTC m=+99.172046535" watchObservedRunningTime="2026-01-27 
18:44:16.712299813 +0000 UTC m=+99.174842736" Jan 27 18:44:17 crc kubenswrapper[4853]: I0127 18:44:17.112303 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 18:44:17 crc kubenswrapper[4853]: E0127 18:44:17.112719 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 18:44:17 crc kubenswrapper[4853]: I0127 18:44:17.112302 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 18:44:17 crc kubenswrapper[4853]: E0127 18:44:17.112816 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 18:44:17 crc kubenswrapper[4853]: I0127 18:44:17.112303 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 18:44:17 crc kubenswrapper[4853]: I0127 18:44:17.112311 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wdzg4" Jan 27 18:44:17 crc kubenswrapper[4853]: E0127 18:44:17.112873 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 18:44:17 crc kubenswrapper[4853]: E0127 18:44:17.113001 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-wdzg4" podUID="29407244-fbfe-4d37-a33e-7d59df1c22fd" Jan 27 18:44:19 crc kubenswrapper[4853]: I0127 18:44:19.111539 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 18:44:19 crc kubenswrapper[4853]: I0127 18:44:19.111675 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wdzg4" Jan 27 18:44:19 crc kubenswrapper[4853]: I0127 18:44:19.112748 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 18:44:19 crc kubenswrapper[4853]: E0127 18:44:19.112870 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 18:44:19 crc kubenswrapper[4853]: E0127 18:44:19.113029 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-wdzg4" podUID="29407244-fbfe-4d37-a33e-7d59df1c22fd" Jan 27 18:44:19 crc kubenswrapper[4853]: I0127 18:44:19.112957 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 18:44:19 crc kubenswrapper[4853]: E0127 18:44:19.113171 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 18:44:19 crc kubenswrapper[4853]: E0127 18:44:19.113341 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 18:44:21 crc kubenswrapper[4853]: I0127 18:44:21.112204 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 18:44:21 crc kubenswrapper[4853]: I0127 18:44:21.112302 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 18:44:21 crc kubenswrapper[4853]: E0127 18:44:21.112355 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 18:44:21 crc kubenswrapper[4853]: I0127 18:44:21.112388 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wdzg4" Jan 27 18:44:21 crc kubenswrapper[4853]: I0127 18:44:21.112432 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 18:44:21 crc kubenswrapper[4853]: E0127 18:44:21.112495 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 18:44:21 crc kubenswrapper[4853]: E0127 18:44:21.112616 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-wdzg4" podUID="29407244-fbfe-4d37-a33e-7d59df1c22fd" Jan 27 18:44:21 crc kubenswrapper[4853]: E0127 18:44:21.112701 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 18:44:21 crc kubenswrapper[4853]: I0127 18:44:21.509745 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/29407244-fbfe-4d37-a33e-7d59df1c22fd-metrics-certs\") pod \"network-metrics-daemon-wdzg4\" (UID: \"29407244-fbfe-4d37-a33e-7d59df1c22fd\") " pod="openshift-multus/network-metrics-daemon-wdzg4" Jan 27 18:44:21 crc kubenswrapper[4853]: E0127 18:44:21.510029 4853 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 27 18:44:21 crc kubenswrapper[4853]: E0127 18:44:21.510183 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/29407244-fbfe-4d37-a33e-7d59df1c22fd-metrics-certs podName:29407244-fbfe-4d37-a33e-7d59df1c22fd nodeName:}" failed. No retries permitted until 2026-01-27 18:45:25.510154683 +0000 UTC m=+167.972697596 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/29407244-fbfe-4d37-a33e-7d59df1c22fd-metrics-certs") pod "network-metrics-daemon-wdzg4" (UID: "29407244-fbfe-4d37-a33e-7d59df1c22fd") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 27 18:44:23 crc kubenswrapper[4853]: I0127 18:44:23.111967 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 18:44:23 crc kubenswrapper[4853]: I0127 18:44:23.112019 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 18:44:23 crc kubenswrapper[4853]: I0127 18:44:23.112054 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 18:44:23 crc kubenswrapper[4853]: E0127 18:44:23.112093 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 18:44:23 crc kubenswrapper[4853]: E0127 18:44:23.112192 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 18:44:23 crc kubenswrapper[4853]: E0127 18:44:23.112271 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 18:44:23 crc kubenswrapper[4853]: I0127 18:44:23.112109 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wdzg4" Jan 27 18:44:23 crc kubenswrapper[4853]: E0127 18:44:23.112365 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-wdzg4" podUID="29407244-fbfe-4d37-a33e-7d59df1c22fd" Jan 27 18:44:25 crc kubenswrapper[4853]: I0127 18:44:25.111906 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 18:44:25 crc kubenswrapper[4853]: I0127 18:44:25.111906 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 18:44:25 crc kubenswrapper[4853]: E0127 18:44:25.112788 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 18:44:25 crc kubenswrapper[4853]: I0127 18:44:25.111966 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 18:44:25 crc kubenswrapper[4853]: E0127 18:44:25.112923 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 18:44:25 crc kubenswrapper[4853]: I0127 18:44:25.111929 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wdzg4" Jan 27 18:44:25 crc kubenswrapper[4853]: E0127 18:44:25.113100 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 18:44:25 crc kubenswrapper[4853]: E0127 18:44:25.113222 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-wdzg4" podUID="29407244-fbfe-4d37-a33e-7d59df1c22fd" Jan 27 18:44:27 crc kubenswrapper[4853]: I0127 18:44:27.112354 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 18:44:27 crc kubenswrapper[4853]: I0127 18:44:27.112423 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 18:44:27 crc kubenswrapper[4853]: I0127 18:44:27.112493 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 18:44:27 crc kubenswrapper[4853]: E0127 18:44:27.112601 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 18:44:27 crc kubenswrapper[4853]: I0127 18:44:27.112703 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wdzg4" Jan 27 18:44:27 crc kubenswrapper[4853]: E0127 18:44:27.112751 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 18:44:27 crc kubenswrapper[4853]: E0127 18:44:27.112863 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-wdzg4" podUID="29407244-fbfe-4d37-a33e-7d59df1c22fd" Jan 27 18:44:27 crc kubenswrapper[4853]: E0127 18:44:27.113424 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 18:44:27 crc kubenswrapper[4853]: I0127 18:44:27.113987 4853 scope.go:117] "RemoveContainer" containerID="e01e1cff07c3ff9a1112970e7831ca9dc51725bbe6dd330246fa3346bd8bb1ad" Jan 27 18:44:27 crc kubenswrapper[4853]: E0127 18:44:27.114262 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-hdtbk_openshift-ovn-kubernetes(ebbc7598-422a-43ad-ae98-88e57ec80b9c)\"" pod="openshift-ovn-kubernetes/ovnkube-node-hdtbk" podUID="ebbc7598-422a-43ad-ae98-88e57ec80b9c" Jan 27 18:44:29 crc kubenswrapper[4853]: I0127 18:44:29.112029 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 18:44:29 crc kubenswrapper[4853]: I0127 18:44:29.112064 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wdzg4" Jan 27 18:44:29 crc kubenswrapper[4853]: I0127 18:44:29.112035 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 18:44:29 crc kubenswrapper[4853]: I0127 18:44:29.112095 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 18:44:29 crc kubenswrapper[4853]: E0127 18:44:29.112206 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 18:44:29 crc kubenswrapper[4853]: E0127 18:44:29.112288 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-wdzg4" podUID="29407244-fbfe-4d37-a33e-7d59df1c22fd" Jan 27 18:44:29 crc kubenswrapper[4853]: E0127 18:44:29.112330 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 18:44:29 crc kubenswrapper[4853]: E0127 18:44:29.112380 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 18:44:31 crc kubenswrapper[4853]: I0127 18:44:31.111453 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 18:44:31 crc kubenswrapper[4853]: I0127 18:44:31.111527 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wdzg4" Jan 27 18:44:31 crc kubenswrapper[4853]: E0127 18:44:31.111568 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 18:44:31 crc kubenswrapper[4853]: I0127 18:44:31.111470 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 18:44:31 crc kubenswrapper[4853]: I0127 18:44:31.111616 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 18:44:31 crc kubenswrapper[4853]: E0127 18:44:31.111617 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-wdzg4" podUID="29407244-fbfe-4d37-a33e-7d59df1c22fd" Jan 27 18:44:31 crc kubenswrapper[4853]: E0127 18:44:31.111675 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 18:44:31 crc kubenswrapper[4853]: E0127 18:44:31.111720 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 18:44:33 crc kubenswrapper[4853]: I0127 18:44:33.111733 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 18:44:33 crc kubenswrapper[4853]: E0127 18:44:33.111849 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 18:44:33 crc kubenswrapper[4853]: I0127 18:44:33.111744 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 18:44:33 crc kubenswrapper[4853]: I0127 18:44:33.111745 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wdzg4" Jan 27 18:44:33 crc kubenswrapper[4853]: I0127 18:44:33.111962 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 18:44:33 crc kubenswrapper[4853]: E0127 18:44:33.111925 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 18:44:33 crc kubenswrapper[4853]: E0127 18:44:33.112040 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-wdzg4" podUID="29407244-fbfe-4d37-a33e-7d59df1c22fd" Jan 27 18:44:33 crc kubenswrapper[4853]: E0127 18:44:33.112114 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 18:44:35 crc kubenswrapper[4853]: I0127 18:44:35.112008 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 18:44:35 crc kubenswrapper[4853]: I0127 18:44:35.112092 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 18:44:35 crc kubenswrapper[4853]: I0127 18:44:35.112197 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wdzg4" Jan 27 18:44:35 crc kubenswrapper[4853]: I0127 18:44:35.112092 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 18:44:35 crc kubenswrapper[4853]: E0127 18:44:35.112329 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 18:44:35 crc kubenswrapper[4853]: E0127 18:44:35.112509 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-wdzg4" podUID="29407244-fbfe-4d37-a33e-7d59df1c22fd" Jan 27 18:44:35 crc kubenswrapper[4853]: E0127 18:44:35.112625 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 18:44:35 crc kubenswrapper[4853]: E0127 18:44:35.112774 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 18:44:36 crc kubenswrapper[4853]: I0127 18:44:36.758181 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-w4d5n_dd2c07de-2ac9-4074-9fb0-519cfaf37f69/kube-multus/1.log" Jan 27 18:44:36 crc kubenswrapper[4853]: I0127 18:44:36.758856 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-w4d5n_dd2c07de-2ac9-4074-9fb0-519cfaf37f69/kube-multus/0.log" Jan 27 18:44:36 crc kubenswrapper[4853]: I0127 18:44:36.758892 4853 generic.go:334] "Generic (PLEG): container finished" podID="dd2c07de-2ac9-4074-9fb0-519cfaf37f69" containerID="d7df211c586c12b9dbadf6a48722a3059e65f42e0c70cf73a6e197091983980c" exitCode=1 Jan 27 18:44:36 crc kubenswrapper[4853]: I0127 18:44:36.758919 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-w4d5n" event={"ID":"dd2c07de-2ac9-4074-9fb0-519cfaf37f69","Type":"ContainerDied","Data":"d7df211c586c12b9dbadf6a48722a3059e65f42e0c70cf73a6e197091983980c"} Jan 27 18:44:36 crc kubenswrapper[4853]: I0127 18:44:36.758950 4853 scope.go:117] "RemoveContainer" containerID="9662d5a5a46a3043eeddbb21a4c7da0545beb87a0da7da3e96dfe3b3cf803430" Jan 27 18:44:36 crc kubenswrapper[4853]: I0127 18:44:36.759339 4853 scope.go:117] "RemoveContainer" containerID="d7df211c586c12b9dbadf6a48722a3059e65f42e0c70cf73a6e197091983980c" Jan 27 18:44:36 crc kubenswrapper[4853]: E0127 18:44:36.759477 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-multus pod=multus-w4d5n_openshift-multus(dd2c07de-2ac9-4074-9fb0-519cfaf37f69)\"" pod="openshift-multus/multus-w4d5n" podUID="dd2c07de-2ac9-4074-9fb0-519cfaf37f69" Jan 27 18:44:37 crc kubenswrapper[4853]: I0127 18:44:37.111609 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 18:44:37 crc kubenswrapper[4853]: I0127 18:44:37.111650 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 18:44:37 crc kubenswrapper[4853]: E0127 18:44:37.111781 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 18:44:37 crc kubenswrapper[4853]: I0127 18:44:37.111883 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wdzg4" Jan 27 18:44:37 crc kubenswrapper[4853]: I0127 18:44:37.111882 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 18:44:37 crc kubenswrapper[4853]: E0127 18:44:37.112056 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 18:44:37 crc kubenswrapper[4853]: E0127 18:44:37.112225 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-wdzg4" podUID="29407244-fbfe-4d37-a33e-7d59df1c22fd" Jan 27 18:44:37 crc kubenswrapper[4853]: E0127 18:44:37.112515 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 18:44:37 crc kubenswrapper[4853]: I0127 18:44:37.763513 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-w4d5n_dd2c07de-2ac9-4074-9fb0-519cfaf37f69/kube-multus/1.log" Jan 27 18:44:38 crc kubenswrapper[4853]: E0127 18:44:38.085058 4853 kubelet_node_status.go:497] "Node not becoming ready in time after startup" Jan 27 18:44:38 crc kubenswrapper[4853]: E0127 18:44:38.203050 4853 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 27 18:44:39 crc kubenswrapper[4853]: I0127 18:44:39.111881 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 18:44:39 crc kubenswrapper[4853]: I0127 18:44:39.111881 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wdzg4" Jan 27 18:44:39 crc kubenswrapper[4853]: I0127 18:44:39.111905 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 18:44:39 crc kubenswrapper[4853]: I0127 18:44:39.112022 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 18:44:39 crc kubenswrapper[4853]: E0127 18:44:39.112102 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 18:44:39 crc kubenswrapper[4853]: E0127 18:44:39.112230 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-wdzg4" podUID="29407244-fbfe-4d37-a33e-7d59df1c22fd" Jan 27 18:44:39 crc kubenswrapper[4853]: E0127 18:44:39.112331 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 18:44:39 crc kubenswrapper[4853]: E0127 18:44:39.112639 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 18:44:41 crc kubenswrapper[4853]: I0127 18:44:41.111792 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 18:44:41 crc kubenswrapper[4853]: I0127 18:44:41.111834 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wdzg4" Jan 27 18:44:41 crc kubenswrapper[4853]: I0127 18:44:41.111938 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 18:44:41 crc kubenswrapper[4853]: E0127 18:44:41.111935 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 18:44:41 crc kubenswrapper[4853]: E0127 18:44:41.112171 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 18:44:41 crc kubenswrapper[4853]: I0127 18:44:41.112261 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 18:44:41 crc kubenswrapper[4853]: E0127 18:44:41.112751 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 18:44:41 crc kubenswrapper[4853]: E0127 18:44:41.112805 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-wdzg4" podUID="29407244-fbfe-4d37-a33e-7d59df1c22fd" Jan 27 18:44:41 crc kubenswrapper[4853]: I0127 18:44:41.113086 4853 scope.go:117] "RemoveContainer" containerID="e01e1cff07c3ff9a1112970e7831ca9dc51725bbe6dd330246fa3346bd8bb1ad" Jan 27 18:44:41 crc kubenswrapper[4853]: I0127 18:44:41.776379 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-hdtbk_ebbc7598-422a-43ad-ae98-88e57ec80b9c/ovnkube-controller/3.log" Jan 27 18:44:41 crc kubenswrapper[4853]: I0127 18:44:41.779384 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hdtbk" event={"ID":"ebbc7598-422a-43ad-ae98-88e57ec80b9c","Type":"ContainerStarted","Data":"9953aae37a35dae2e23f03ff9b1849f9b1bdcf2f8d846e3acbdc93eff3d80a34"} Jan 27 18:44:41 crc kubenswrapper[4853]: I0127 18:44:41.779975 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-hdtbk" Jan 27 18:44:41 crc kubenswrapper[4853]: I0127 18:44:41.816490 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-hdtbk" podStartSLOduration=98.816474355 podStartE2EDuration="1m38.816474355s" podCreationTimestamp="2026-01-27 18:43:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:44:41.816330201 +0000 UTC m=+124.278873084" watchObservedRunningTime="2026-01-27 18:44:41.816474355 +0000 UTC m=+124.279017238" Jan 27 18:44:42 crc kubenswrapper[4853]: I0127 18:44:42.106083 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-wdzg4"] Jan 27 18:44:42 crc kubenswrapper[4853]: I0127 18:44:42.106284 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wdzg4" Jan 27 18:44:42 crc kubenswrapper[4853]: E0127 18:44:42.106428 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-wdzg4" podUID="29407244-fbfe-4d37-a33e-7d59df1c22fd" Jan 27 18:44:43 crc kubenswrapper[4853]: I0127 18:44:43.111399 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 18:44:43 crc kubenswrapper[4853]: E0127 18:44:43.111569 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 18:44:43 crc kubenswrapper[4853]: I0127 18:44:43.111725 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 18:44:43 crc kubenswrapper[4853]: E0127 18:44:43.111896 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 18:44:43 crc kubenswrapper[4853]: I0127 18:44:43.112289 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 18:44:43 crc kubenswrapper[4853]: E0127 18:44:43.112372 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 18:44:43 crc kubenswrapper[4853]: E0127 18:44:43.204182 4853 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 27 18:44:44 crc kubenswrapper[4853]: I0127 18:44:44.111890 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wdzg4" Jan 27 18:44:44 crc kubenswrapper[4853]: E0127 18:44:44.112020 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-wdzg4" podUID="29407244-fbfe-4d37-a33e-7d59df1c22fd" Jan 27 18:44:45 crc kubenswrapper[4853]: I0127 18:44:45.111616 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 18:44:45 crc kubenswrapper[4853]: I0127 18:44:45.111879 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 18:44:45 crc kubenswrapper[4853]: I0127 18:44:45.112020 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 18:44:45 crc kubenswrapper[4853]: E0127 18:44:45.112275 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 18:44:45 crc kubenswrapper[4853]: E0127 18:44:45.112414 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 18:44:45 crc kubenswrapper[4853]: E0127 18:44:45.112884 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 18:44:46 crc kubenswrapper[4853]: I0127 18:44:46.111916 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wdzg4" Jan 27 18:44:46 crc kubenswrapper[4853]: E0127 18:44:46.112055 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-wdzg4" podUID="29407244-fbfe-4d37-a33e-7d59df1c22fd" Jan 27 18:44:47 crc kubenswrapper[4853]: I0127 18:44:47.112219 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 18:44:47 crc kubenswrapper[4853]: I0127 18:44:47.112248 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 18:44:47 crc kubenswrapper[4853]: I0127 18:44:47.112219 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 18:44:47 crc kubenswrapper[4853]: E0127 18:44:47.112348 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 18:44:47 crc kubenswrapper[4853]: E0127 18:44:47.112418 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 18:44:47 crc kubenswrapper[4853]: E0127 18:44:47.112487 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 18:44:48 crc kubenswrapper[4853]: I0127 18:44:48.114404 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wdzg4" Jan 27 18:44:48 crc kubenswrapper[4853]: E0127 18:44:48.115711 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-wdzg4" podUID="29407244-fbfe-4d37-a33e-7d59df1c22fd" Jan 27 18:44:48 crc kubenswrapper[4853]: E0127 18:44:48.204858 4853 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 27 18:44:49 crc kubenswrapper[4853]: I0127 18:44:49.112034 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 18:44:49 crc kubenswrapper[4853]: I0127 18:44:49.112031 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 18:44:49 crc kubenswrapper[4853]: E0127 18:44:49.112166 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 18:44:49 crc kubenswrapper[4853]: I0127 18:44:49.112036 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 18:44:49 crc kubenswrapper[4853]: E0127 18:44:49.112376 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 18:44:49 crc kubenswrapper[4853]: E0127 18:44:49.112597 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 18:44:49 crc kubenswrapper[4853]: I0127 18:44:49.112815 4853 scope.go:117] "RemoveContainer" containerID="d7df211c586c12b9dbadf6a48722a3059e65f42e0c70cf73a6e197091983980c" Jan 27 18:44:49 crc kubenswrapper[4853]: I0127 18:44:49.805388 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-w4d5n_dd2c07de-2ac9-4074-9fb0-519cfaf37f69/kube-multus/1.log" Jan 27 18:44:49 crc kubenswrapper[4853]: I0127 18:44:49.805443 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-w4d5n" event={"ID":"dd2c07de-2ac9-4074-9fb0-519cfaf37f69","Type":"ContainerStarted","Data":"40245ed681744116d224fbfe72f4989b1d9a86abb7c0b6ccbeb606b2d243672c"} Jan 27 18:44:50 crc kubenswrapper[4853]: I0127 18:44:50.111552 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wdzg4" Jan 27 18:44:50 crc kubenswrapper[4853]: E0127 18:44:50.111682 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-wdzg4" podUID="29407244-fbfe-4d37-a33e-7d59df1c22fd" Jan 27 18:44:51 crc kubenswrapper[4853]: I0127 18:44:51.111960 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 18:44:51 crc kubenswrapper[4853]: I0127 18:44:51.112008 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 18:44:51 crc kubenswrapper[4853]: I0127 18:44:51.111960 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 18:44:51 crc kubenswrapper[4853]: E0127 18:44:51.112096 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 18:44:51 crc kubenswrapper[4853]: E0127 18:44:51.112259 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 18:44:51 crc kubenswrapper[4853]: E0127 18:44:51.112306 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 18:44:52 crc kubenswrapper[4853]: I0127 18:44:52.112224 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wdzg4" Jan 27 18:44:52 crc kubenswrapper[4853]: E0127 18:44:52.112419 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-wdzg4" podUID="29407244-fbfe-4d37-a33e-7d59df1c22fd" Jan 27 18:44:53 crc kubenswrapper[4853]: I0127 18:44:53.111982 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 18:44:53 crc kubenswrapper[4853]: I0127 18:44:53.112025 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 18:44:53 crc kubenswrapper[4853]: I0127 18:44:53.112175 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 18:44:53 crc kubenswrapper[4853]: E0127 18:44:53.112685 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 18:44:53 crc kubenswrapper[4853]: E0127 18:44:53.112474 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 18:44:53 crc kubenswrapper[4853]: E0127 18:44:53.112771 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 18:44:54 crc kubenswrapper[4853]: I0127 18:44:54.111818 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wdzg4" Jan 27 18:44:54 crc kubenswrapper[4853]: I0127 18:44:54.114471 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Jan 27 18:44:54 crc kubenswrapper[4853]: I0127 18:44:54.115572 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Jan 27 18:44:55 crc kubenswrapper[4853]: I0127 18:44:55.111899 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 18:44:55 crc kubenswrapper[4853]: I0127 18:44:55.111997 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 18:44:55 crc kubenswrapper[4853]: I0127 18:44:55.111997 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 18:44:55 crc kubenswrapper[4853]: I0127 18:44:55.115395 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Jan 27 18:44:55 crc kubenswrapper[4853]: I0127 18:44:55.115604 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Jan 27 18:44:55 crc kubenswrapper[4853]: I0127 18:44:55.115631 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Jan 27 18:44:55 crc kubenswrapper[4853]: I0127 18:44:55.116335 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.484031 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady" Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.516761 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-zjcpp"] Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.517335 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-zjcpp" Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.520175 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-mp44v"] Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.520555 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-mp44v" Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.521880 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.522053 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.522240 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.523090 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.523285 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.524100 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.524455 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.524553 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.524858 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.524975 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.524993 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.531598 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-4mwhw"] Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.532145 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-4mwhw" Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.533409 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-r2vkh"] Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.533802 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-r2vkh" Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.535367 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.538231 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.540908 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.541221 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.541367 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.541513 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.541643 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.541786 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.541935 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.544600 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-ltskb"] Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.545266 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-ltskb" Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.546638 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-qpjt6"] Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.547282 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-qpjt6" Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.549113 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-74rmt"] Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.549893 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-74rmt" Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.552683 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.553111 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.553301 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.553468 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.553730 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.553826 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.554001 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.554147 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.554266 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.564332 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.564634 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.565042 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-kmkjx"] Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.565280 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.568304 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.569167 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.569410 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.570060 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.570944 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-9gqxt"] Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.580939 4853 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-authentication"/"v4-0-config-system-router-certs" Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.581140 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.581351 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-kmkjx" Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.581735 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.581988 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.583175 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.584276 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-8q4sj"] Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.584529 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.584644 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-8q4sj" Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.584939 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.585008 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/downloads-7954f5f757-9gqxt" Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.589783 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.591905 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.592144 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.592429 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.592519 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.592590 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-9vd4d"] Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.592766 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.592996 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.593065 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-9vd4d" Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.593138 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.593001 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-lr7dh"] Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.593559 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.593568 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.593632 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.593674 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.593711 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.593724 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-lr7dh" Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.593771 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.593978 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.594391 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c16214d7-8024-43ad-8394-ee95539c3093-config\") pod \"console-operator-58897d9998-zjcpp\" (UID: \"c16214d7-8024-43ad-8394-ee95539c3093\") " pod="openshift-console-operator/console-operator-58897d9998-zjcpp" Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.594423 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wxgwl\" (UniqueName: \"kubernetes.io/projected/b378b1b0-657f-420a-8666-86edfeb38a96-kube-api-access-wxgwl\") pod \"cluster-samples-operator-665b6dd947-r2vkh\" (UID: \"b378b1b0-657f-420a-8666-86edfeb38a96\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-r2vkh" Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.594465 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/93cfa4e6-9e7c-4c17-a30f-e8d15f452be7-available-featuregates\") pod \"openshift-config-operator-7777fb866f-4mwhw\" (UID: \"93cfa4e6-9e7c-4c17-a30f-e8d15f452be7\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-4mwhw" Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.594491 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xgtsj\" (UniqueName: \"kubernetes.io/projected/c16214d7-8024-43ad-8394-ee95539c3093-kube-api-access-xgtsj\") pod \"console-operator-58897d9998-zjcpp\" (UID: \"c16214d7-8024-43ad-8394-ee95539c3093\") " pod="openshift-console-operator/console-operator-58897d9998-zjcpp" Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.594521 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/b378b1b0-657f-420a-8666-86edfeb38a96-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-r2vkh\" (UID: \"b378b1b0-657f-420a-8666-86edfeb38a96\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-r2vkh" Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.594553 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/93cfa4e6-9e7c-4c17-a30f-e8d15f452be7-serving-cert\") pod \"openshift-config-operator-7777fb866f-4mwhw\" (UID: \"93cfa4e6-9e7c-4c17-a30f-e8d15f452be7\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-4mwhw" Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.594574 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c16214d7-8024-43ad-8394-ee95539c3093-trusted-ca\") pod \"console-operator-58897d9998-zjcpp\" (UID: \"c16214d7-8024-43ad-8394-ee95539c3093\") " 
pod="openshift-console-operator/console-operator-58897d9998-zjcpp" Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.594606 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/58200e7b-a0e9-47ba-8581-42878da87f40-serving-cert\") pod \"route-controller-manager-6576b87f9c-mp44v\" (UID: \"58200e7b-a0e9-47ba-8581-42878da87f40\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-mp44v" Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.594627 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t4tjx\" (UniqueName: \"kubernetes.io/projected/58200e7b-a0e9-47ba-8581-42878da87f40-kube-api-access-t4tjx\") pod \"route-controller-manager-6576b87f9c-mp44v\" (UID: \"58200e7b-a0e9-47ba-8581-42878da87f40\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-mp44v" Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.594649 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/58200e7b-a0e9-47ba-8581-42878da87f40-client-ca\") pod \"route-controller-manager-6576b87f9c-mp44v\" (UID: \"58200e7b-a0e9-47ba-8581-42878da87f40\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-mp44v" Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.594670 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c16214d7-8024-43ad-8394-ee95539c3093-serving-cert\") pod \"console-operator-58897d9998-zjcpp\" (UID: \"c16214d7-8024-43ad-8394-ee95539c3093\") " pod="openshift-console-operator/console-operator-58897d9998-zjcpp" Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.594700 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/58200e7b-a0e9-47ba-8581-42878da87f40-config\") pod \"route-controller-manager-6576b87f9c-mp44v\" (UID: \"58200e7b-a0e9-47ba-8581-42878da87f40\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-mp44v" Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.594724 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7j8hv\" (UniqueName: \"kubernetes.io/projected/93cfa4e6-9e7c-4c17-a30f-e8d15f452be7-kube-api-access-7j8hv\") pod \"openshift-config-operator-7777fb866f-4mwhw\" (UID: \"93cfa4e6-9e7c-4c17-a30f-e8d15f452be7\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-4mwhw" Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.596965 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.597172 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.597198 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-tqzdv"] Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.597408 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.597583 
4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.597717 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.602400 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-mlfr4"] Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.603069 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-tqzdv" Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.603776 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.604142 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-nnmnh"] Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.603778 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.604377 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.604428 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-npp4j"] Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.604557 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.604379 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.604692 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-zpn7p"] Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.604710 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.604821 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-nnmnh" Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.604994 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-zpn7p" Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.605086 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-npp4j" Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.605308 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.605440 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.605698 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.605848 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.606539 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-xnnk5"] Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.606874 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-mlfr4" Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.608668 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.608841 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.608986 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.611996 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.612820 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-hscjm"] Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.625029 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-znwgm"] Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.633137 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.634039 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-pp2rw"] Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.634584 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-n869z"] Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.635553 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-n869z" Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.636039 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-znwgm" Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.636315 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-pp2rw" Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.637307 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-hscjm" Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.642452 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.642508 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-xnnk5" Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.659599 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.659923 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.660354 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.660546 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.660674 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.660769 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.660856 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.661013 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.661329 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.661373 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.661607 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.662592 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.662761 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.663214 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.665002 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.665058 4853 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle"
Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.665185 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-st9cr"]
Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.665785 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-st9cr"
Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.666630 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config"
Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.666822 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert"
Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.667314 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert"
Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.668033 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle"
Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.668154 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd"
Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.668417 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca"
Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.668591 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw"
Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.671918 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca"
Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.673147 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-d6bzk"]
Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.673690 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-d6bzk"
Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.674774 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-wnqmk"]
Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.675292 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-wnqmk"
Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.676870 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-rrwhm"]
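
The interleaved "SyncLoop ADD" (kubelet.go:2421) and "No sandbox for pod can be found" (util.go:30) entries show the kubelet's sync loop receiving newly scheduled pods from the apiserver and concluding, for each one, that no CRI sandbox exists yet and one must be created. A simplified, hypothetical model of that dispatch; the real loop in the kubelet handles more channels and richer pod types:

    // Simplified sketch of a kubelet-style sync loop; types are hypothetical.
    package main

    import "fmt"

    type PodUpdate struct {
        Op   string   // "ADD" or "UPDATE", as in the log entries
        Pods []string // "namespace/name"
    }

    func syncLoop(updates <-chan PodUpdate, sandboxExists func(string) bool) {
        for u := range updates {
            fmt.Printf("SyncLoop %s source=%q pods=%v\n", u.Op, "api", u.Pods)
            for _, pod := range u.Pods {
                if u.Op == "ADD" && !sandboxExists(pod) {
                    // Mirrors util.go:30: a fresh pod has no sandbox yet.
                    fmt.Printf("No sandbox for pod can be found. Need to start a new one pod=%q\n", pod)
                }
            }
        }
    }

    func main() {
        ch := make(chan PodUpdate, 1)
        ch <- PodUpdate{Op: "ADD", Pods: []string{"openshift-dns-operator/dns-operator-744455d44c-xnnk5"}}
        close(ch)
        syncLoop(ch, func(string) bool { return false })
    }
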
Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-rrwhm" Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.677332 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-t4wdl"] Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.677622 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-t4wdl" Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.680488 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-ww5mj"] Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.680530 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.681295 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-gp2qn"] Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.681644 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-ww5mj" Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.681831 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-gp2qn" Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.688396 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-hfgqg"] Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.688959 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-hfgqg" Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.691470 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-qbgzv"] Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.692653 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-kb26l"] Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.692789 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-qbgzv" Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.693292 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-4kgbm"] Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.693649 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-4kgbm" Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.693752 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-kb26l" Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.694111 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-fhmft"] Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.694966 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress/router-default-5444994796-fhmft" Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.695191 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-mp44v"] Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.697199 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/58200e7b-a0e9-47ba-8581-42878da87f40-serving-cert\") pod \"route-controller-manager-6576b87f9c-mp44v\" (UID: \"58200e7b-a0e9-47ba-8581-42878da87f40\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-mp44v" Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.697228 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t4tjx\" (UniqueName: \"kubernetes.io/projected/58200e7b-a0e9-47ba-8581-42878da87f40-kube-api-access-t4tjx\") pod \"route-controller-manager-6576b87f9c-mp44v\" (UID: \"58200e7b-a0e9-47ba-8581-42878da87f40\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-mp44v" Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.697255 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/84cfc910-d75a-4c74-9f5f-6ec4dbb5c708-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-ltskb\" (UID: \"84cfc910-d75a-4c74-9f5f-6ec4dbb5c708\") " pod="openshift-authentication/oauth-openshift-558db77b4-ltskb" Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.697273 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/84cfc910-d75a-4c74-9f5f-6ec4dbb5c708-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-ltskb\" (UID: \"84cfc910-d75a-4c74-9f5f-6ec4dbb5c708\") " pod="openshift-authentication/oauth-openshift-558db77b4-ltskb" Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.697290 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/2bd6e097-af15-41a1-9ab2-a4e79adef815-console-config\") pod \"console-f9d7485db-9vd4d\" (UID: \"2bd6e097-af15-41a1-9ab2-a4e79adef815\") " pod="openshift-console/console-f9d7485db-9vd4d" Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.697308 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7zdtw\" (UniqueName: \"kubernetes.io/projected/6bd6880d-6581-4cca-8eb8-9acb80689e9e-kube-api-access-7zdtw\") pod \"apiserver-76f77b778f-tqzdv\" (UID: \"6bd6880d-6581-4cca-8eb8-9acb80689e9e\") " pod="openshift-apiserver/apiserver-76f77b778f-tqzdv" Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.697323 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6595832a-fc60-447b-826f-ba4eb83689fb-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-lr7dh\" (UID: \"6595832a-fc60-447b-826f-ba4eb83689fb\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-lr7dh" Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.697337 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"metrics-tls\" (UniqueName: \"kubernetes.io/secret/61cb7436-6c15-48a6-a8b8-006f5a52f338-metrics-tls\") pod \"ingress-operator-5b745b69d9-n869z\" (UID: \"61cb7436-6c15-48a6-a8b8-006f5a52f338\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-n869z" Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.697355 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/58200e7b-a0e9-47ba-8581-42878da87f40-client-ca\") pod \"route-controller-manager-6576b87f9c-mp44v\" (UID: \"58200e7b-a0e9-47ba-8581-42878da87f40\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-mp44v" Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.697442 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c16214d7-8024-43ad-8394-ee95539c3093-serving-cert\") pod \"console-operator-58897d9998-zjcpp\" (UID: \"c16214d7-8024-43ad-8394-ee95539c3093\") " pod="openshift-console-operator/console-operator-58897d9998-zjcpp" Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.697478 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/84cfc910-d75a-4c74-9f5f-6ec4dbb5c708-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-ltskb\" (UID: \"84cfc910-d75a-4c74-9f5f-6ec4dbb5c708\") " pod="openshift-authentication/oauth-openshift-558db77b4-ltskb" Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.697531 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/58200e7b-a0e9-47ba-8581-42878da87f40-config\") pod \"route-controller-manager-6576b87f9c-mp44v\" (UID: \"58200e7b-a0e9-47ba-8581-42878da87f40\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-mp44v" Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.697559 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/84cfc910-d75a-4c74-9f5f-6ec4dbb5c708-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-ltskb\" (UID: \"84cfc910-d75a-4c74-9f5f-6ec4dbb5c708\") " pod="openshift-authentication/oauth-openshift-558db77b4-ltskb" Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.697610 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-njpm9\" (UniqueName: \"kubernetes.io/projected/53cc9731-1ede-4ad3-b2e7-730e605a1a21-kube-api-access-njpm9\") pod \"kube-storage-version-migrator-operator-b67b599dd-znwgm\" (UID: \"53cc9731-1ede-4ad3-b2e7-730e605a1a21\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-znwgm" Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.697636 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/6bd6880d-6581-4cca-8eb8-9acb80689e9e-audit\") pod \"apiserver-76f77b778f-tqzdv\" (UID: \"6bd6880d-6581-4cca-8eb8-9acb80689e9e\") " pod="openshift-apiserver/apiserver-76f77b778f-tqzdv" Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.697681 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" 
(UniqueName: \"kubernetes.io/secret/f06b685a-8035-4bac-88d3-d092b6df21e4-etcd-client\") pod \"apiserver-7bbb656c7d-74rmt\" (UID: \"f06b685a-8035-4bac-88d3-d092b6df21e4\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-74rmt" Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.697707 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6g64p\" (UniqueName: \"kubernetes.io/projected/be5a36ff-f665-4468-b7ae-8a443f0164e8-kube-api-access-6g64p\") pod \"downloads-7954f5f757-9gqxt\" (UID: \"be5a36ff-f665-4468-b7ae-8a443f0164e8\") " pod="openshift-console/downloads-7954f5f757-9gqxt" Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.697730 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e4e06c17-76a1-49b2-994b-bf53488b14a9-config\") pod \"authentication-operator-69f744f599-8q4sj\" (UID: \"e4e06c17-76a1-49b2-994b-bf53488b14a9\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-8q4sj" Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.697780 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/f06b685a-8035-4bac-88d3-d092b6df21e4-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-74rmt\" (UID: \"f06b685a-8035-4bac-88d3-d092b6df21e4\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-74rmt" Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.697805 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6595832a-fc60-447b-826f-ba4eb83689fb-config\") pod \"openshift-apiserver-operator-796bbdcf4f-lr7dh\" (UID: \"6595832a-fc60-447b-826f-ba4eb83689fb\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-lr7dh" Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.697853 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/84cfc910-d75a-4c74-9f5f-6ec4dbb5c708-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-ltskb\" (UID: \"84cfc910-d75a-4c74-9f5f-6ec4dbb5c708\") " pod="openshift-authentication/oauth-openshift-558db77b4-ltskb" Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.697883 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/84cfc910-d75a-4c74-9f5f-6ec4dbb5c708-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-ltskb\" (UID: \"84cfc910-d75a-4c74-9f5f-6ec4dbb5c708\") " pod="openshift-authentication/oauth-openshift-558db77b4-ltskb" Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.697931 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/61cb7436-6c15-48a6-a8b8-006f5a52f338-bound-sa-token\") pod \"ingress-operator-5b745b69d9-n869z\" (UID: \"61cb7436-6c15-48a6-a8b8-006f5a52f338\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-n869z" Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.697961 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4d5rk\" (UniqueName: 
\"kubernetes.io/projected/84cfc910-d75a-4c74-9f5f-6ec4dbb5c708-kube-api-access-4d5rk\") pod \"oauth-openshift-558db77b4-ltskb\" (UID: \"84cfc910-d75a-4c74-9f5f-6ec4dbb5c708\") " pod="openshift-authentication/oauth-openshift-558db77b4-ltskb" Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.698005 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/6bd6880d-6581-4cca-8eb8-9acb80689e9e-node-pullsecrets\") pod \"apiserver-76f77b778f-tqzdv\" (UID: \"6bd6880d-6581-4cca-8eb8-9acb80689e9e\") " pod="openshift-apiserver/apiserver-76f77b778f-tqzdv" Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.698032 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7j8hv\" (UniqueName: \"kubernetes.io/projected/93cfa4e6-9e7c-4c17-a30f-e8d15f452be7-kube-api-access-7j8hv\") pod \"openshift-config-operator-7777fb866f-4mwhw\" (UID: \"93cfa4e6-9e7c-4c17-a30f-e8d15f452be7\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-4mwhw" Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.698054 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0a656e98-1ed0-4b1a-8352-4038844a558a-config\") pod \"etcd-operator-b45778765-hscjm\" (UID: \"0a656e98-1ed0-4b1a-8352-4038844a558a\") " pod="openshift-etcd-operator/etcd-operator-b45778765-hscjm" Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.698068 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/58200e7b-a0e9-47ba-8581-42878da87f40-client-ca\") pod \"route-controller-manager-6576b87f9c-mp44v\" (UID: \"58200e7b-a0e9-47ba-8581-42878da87f40\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-mp44v" Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.698098 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2f81e143-f570-4ea2-837d-f9a1dc205d9c-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-zpn7p\" (UID: \"2f81e143-f570-4ea2-837d-f9a1dc205d9c\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-zpn7p" Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.698148 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-bxk4b"] Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.698730 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-zjcpp"] Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.698841 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-bxk4b" Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.698149 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/6bd6880d-6581-4cca-8eb8-9acb80689e9e-etcd-serving-ca\") pod \"apiserver-76f77b778f-tqzdv\" (UID: \"6bd6880d-6581-4cca-8eb8-9acb80689e9e\") " pod="openshift-apiserver/apiserver-76f77b778f-tqzdv" Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.699060 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f06b685a-8035-4bac-88d3-d092b6df21e4-audit-dir\") pod \"apiserver-7bbb656c7d-74rmt\" (UID: \"f06b685a-8035-4bac-88d3-d092b6df21e4\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-74rmt" Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.699091 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/84cfc910-d75a-4c74-9f5f-6ec4dbb5c708-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-ltskb\" (UID: \"84cfc910-d75a-4c74-9f5f-6ec4dbb5c708\") " pod="openshift-authentication/oauth-openshift-558db77b4-ltskb" Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.699815 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-mbcwm"] Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.700586 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-mbcwm" Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.700841 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/58200e7b-a0e9-47ba-8581-42878da87f40-config\") pod \"route-controller-manager-6576b87f9c-mp44v\" (UID: \"58200e7b-a0e9-47ba-8581-42878da87f40\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-mp44v" Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.701946 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/0a656e98-1ed0-4b1a-8352-4038844a558a-etcd-service-ca\") pod \"etcd-operator-b45778765-hscjm\" (UID: \"0a656e98-1ed0-4b1a-8352-4038844a558a\") " pod="openshift-etcd-operator/etcd-operator-b45778765-hscjm" Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.701989 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2bd6e097-af15-41a1-9ab2-a4e79adef815-trusted-ca-bundle\") pod \"console-f9d7485db-9vd4d\" (UID: \"2bd6e097-af15-41a1-9ab2-a4e79adef815\") " pod="openshift-console/console-f9d7485db-9vd4d" Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.702013 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/6b7fddc4-f171-4c31-9bfb-9ffcce6ea5f0-metrics-tls\") pod \"dns-operator-744455d44c-xnnk5\" (UID: \"6b7fddc4-f171-4c31-9bfb-9ffcce6ea5f0\") " pod="openshift-dns-operator/dns-operator-744455d44c-xnnk5" Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.702048 4853 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c16214d7-8024-43ad-8394-ee95539c3093-config\") pod \"console-operator-58897d9998-zjcpp\" (UID: \"c16214d7-8024-43ad-8394-ee95539c3093\") " pod="openshift-console-operator/console-operator-58897d9998-zjcpp" Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.702083 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wxgwl\" (UniqueName: \"kubernetes.io/projected/b378b1b0-657f-420a-8666-86edfeb38a96-kube-api-access-wxgwl\") pod \"cluster-samples-operator-665b6dd947-r2vkh\" (UID: \"b378b1b0-657f-420a-8666-86edfeb38a96\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-r2vkh" Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.702101 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/2bd6e097-af15-41a1-9ab2-a4e79adef815-oauth-serving-cert\") pod \"console-f9d7485db-9vd4d\" (UID: \"2bd6e097-af15-41a1-9ab2-a4e79adef815\") " pod="openshift-console/console-f9d7485db-9vd4d" Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.702141 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/93cfa4e6-9e7c-4c17-a30f-e8d15f452be7-available-featuregates\") pod \"openshift-config-operator-7777fb866f-4mwhw\" (UID: \"93cfa4e6-9e7c-4c17-a30f-e8d15f452be7\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-4mwhw" Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.702160 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e4e06c17-76a1-49b2-994b-bf53488b14a9-serving-cert\") pod \"authentication-operator-69f744f599-8q4sj\" (UID: \"e4e06c17-76a1-49b2-994b-bf53488b14a9\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-8q4sj" Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.702176 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b57mf\" (UniqueName: \"kubernetes.io/projected/e4e06c17-76a1-49b2-994b-bf53488b14a9-kube-api-access-b57mf\") pod \"authentication-operator-69f744f599-8q4sj\" (UID: \"e4e06c17-76a1-49b2-994b-bf53488b14a9\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-8q4sj" Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.702195 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2f81e143-f570-4ea2-837d-f9a1dc205d9c-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-zpn7p\" (UID: \"2f81e143-f570-4ea2-837d-f9a1dc205d9c\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-zpn7p" Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.702218 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xgtsj\" (UniqueName: \"kubernetes.io/projected/c16214d7-8024-43ad-8394-ee95539c3093-kube-api-access-xgtsj\") pod \"console-operator-58897d9998-zjcpp\" (UID: \"c16214d7-8024-43ad-8394-ee95539c3093\") " pod="openshift-console-operator/console-operator-58897d9998-zjcpp" Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.702238 4853 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/0a656e98-1ed0-4b1a-8352-4038844a558a-etcd-ca\") pod \"etcd-operator-b45778765-hscjm\" (UID: \"0a656e98-1ed0-4b1a-8352-4038844a558a\") " pod="openshift-etcd-operator/etcd-operator-b45778765-hscjm" Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.702254 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/2bd6e097-af15-41a1-9ab2-a4e79adef815-console-serving-cert\") pod \"console-f9d7485db-9vd4d\" (UID: \"2bd6e097-af15-41a1-9ab2-a4e79adef815\") " pod="openshift-console/console-f9d7485db-9vd4d" Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.702269 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/f06b685a-8035-4bac-88d3-d092b6df21e4-encryption-config\") pod \"apiserver-7bbb656c7d-74rmt\" (UID: \"f06b685a-8035-4bac-88d3-d092b6df21e4\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-74rmt" Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.702295 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2f81e143-f570-4ea2-837d-f9a1dc205d9c-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-zpn7p\" (UID: \"2f81e143-f570-4ea2-837d-f9a1dc205d9c\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-zpn7p" Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.702313 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/6bd6880d-6581-4cca-8eb8-9acb80689e9e-encryption-config\") pod \"apiserver-76f77b778f-tqzdv\" (UID: \"6bd6880d-6581-4cca-8eb8-9acb80689e9e\") " pod="openshift-apiserver/apiserver-76f77b778f-tqzdv" Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.702334 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/b378b1b0-657f-420a-8666-86edfeb38a96-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-r2vkh\" (UID: \"b378b1b0-657f-420a-8666-86edfeb38a96\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-r2vkh" Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.702351 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/84cfc910-d75a-4c74-9f5f-6ec4dbb5c708-audit-dir\") pod \"oauth-openshift-558db77b4-ltskb\" (UID: \"84cfc910-d75a-4c74-9f5f-6ec4dbb5c708\") " pod="openshift-authentication/oauth-openshift-558db77b4-ltskb" Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.702988 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0a656e98-1ed0-4b1a-8352-4038844a558a-serving-cert\") pod \"etcd-operator-b45778765-hscjm\" (UID: \"0a656e98-1ed0-4b1a-8352-4038844a558a\") " pod="openshift-etcd-operator/etcd-operator-b45778765-hscjm" Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.703017 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/6bd6880d-6581-4cca-8eb8-9acb80689e9e-config\") pod \"apiserver-76f77b778f-tqzdv\" (UID: \"6bd6880d-6581-4cca-8eb8-9acb80689e9e\") " pod="openshift-apiserver/apiserver-76f77b778f-tqzdv" Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.703029 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/93cfa4e6-9e7c-4c17-a30f-e8d15f452be7-available-featuregates\") pod \"openshift-config-operator-7777fb866f-4mwhw\" (UID: \"93cfa4e6-9e7c-4c17-a30f-e8d15f452be7\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-4mwhw" Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.703042 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f06b685a-8035-4bac-88d3-d092b6df21e4-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-74rmt\" (UID: \"f06b685a-8035-4bac-88d3-d092b6df21e4\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-74rmt" Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.703716 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.703765 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/2bd6e097-af15-41a1-9ab2-a4e79adef815-service-ca\") pod \"console-f9d7485db-9vd4d\" (UID: \"2bd6e097-af15-41a1-9ab2-a4e79adef815\") " pod="openshift-console/console-f9d7485db-9vd4d" Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.703791 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/84cfc910-d75a-4c74-9f5f-6ec4dbb5c708-audit-policies\") pod \"oauth-openshift-558db77b4-ltskb\" (UID: \"84cfc910-d75a-4c74-9f5f-6ec4dbb5c708\") " pod="openshift-authentication/oauth-openshift-558db77b4-ltskb" Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.703808 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/2bd6e097-af15-41a1-9ab2-a4e79adef815-console-oauth-config\") pod \"console-f9d7485db-9vd4d\" (UID: \"2bd6e097-af15-41a1-9ab2-a4e79adef815\") " pod="openshift-console/console-f9d7485db-9vd4d" Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.702939 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c16214d7-8024-43ad-8394-ee95539c3093-config\") pod \"console-operator-58897d9998-zjcpp\" (UID: \"c16214d7-8024-43ad-8394-ee95539c3093\") " pod="openshift-console-operator/console-operator-58897d9998-zjcpp" Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.703825 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/84cfc910-d75a-4c74-9f5f-6ec4dbb5c708-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-ltskb\" (UID: \"84cfc910-d75a-4c74-9f5f-6ec4dbb5c708\") " pod="openshift-authentication/oauth-openshift-558db77b4-ltskb" Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.703883 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: 
\"kubernetes.io/secret/0a656e98-1ed0-4b1a-8352-4038844a558a-etcd-client\") pod \"etcd-operator-b45778765-hscjm\" (UID: \"0a656e98-1ed0-4b1a-8352-4038844a558a\") " pod="openshift-etcd-operator/etcd-operator-b45778765-hscjm" Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.703899 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6bd6880d-6581-4cca-8eb8-9acb80689e9e-trusted-ca-bundle\") pod \"apiserver-76f77b778f-tqzdv\" (UID: \"6bd6880d-6581-4cca-8eb8-9acb80689e9e\") " pod="openshift-apiserver/apiserver-76f77b778f-tqzdv" Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.703922 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/93cfa4e6-9e7c-4c17-a30f-e8d15f452be7-serving-cert\") pod \"openshift-config-operator-7777fb866f-4mwhw\" (UID: \"93cfa4e6-9e7c-4c17-a30f-e8d15f452be7\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-4mwhw" Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.703939 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/84cfc910-d75a-4c74-9f5f-6ec4dbb5c708-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-ltskb\" (UID: \"84cfc910-d75a-4c74-9f5f-6ec4dbb5c708\") " pod="openshift-authentication/oauth-openshift-558db77b4-ltskb" Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.703960 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c16214d7-8024-43ad-8394-ee95539c3093-trusted-ca\") pod \"console-operator-58897d9998-zjcpp\" (UID: \"c16214d7-8024-43ad-8394-ee95539c3093\") " pod="openshift-console-operator/console-operator-58897d9998-zjcpp" Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.703977 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/53cc9731-1ede-4ad3-b2e7-730e605a1a21-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-znwgm\" (UID: \"53cc9731-1ede-4ad3-b2e7-730e605a1a21\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-znwgm" Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.703993 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/6bd6880d-6581-4cca-8eb8-9acb80689e9e-audit-dir\") pod \"apiserver-76f77b778f-tqzdv\" (UID: \"6bd6880d-6581-4cca-8eb8-9acb80689e9e\") " pod="openshift-apiserver/apiserver-76f77b778f-tqzdv" Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.704012 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/84cfc910-d75a-4c74-9f5f-6ec4dbb5c708-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-ltskb\" (UID: \"84cfc910-d75a-4c74-9f5f-6ec4dbb5c708\") " pod="openshift-authentication/oauth-openshift-558db77b4-ltskb" Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.704032 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: 
\"kubernetes.io/secret/84cfc910-d75a-4c74-9f5f-6ec4dbb5c708-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-ltskb\" (UID: \"84cfc910-d75a-4c74-9f5f-6ec4dbb5c708\") " pod="openshift-authentication/oauth-openshift-558db77b4-ltskb" Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.704048 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/53cc9731-1ede-4ad3-b2e7-730e605a1a21-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-znwgm\" (UID: \"53cc9731-1ede-4ad3-b2e7-730e605a1a21\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-znwgm" Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.704063 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bzpnv\" (UniqueName: \"kubernetes.io/projected/2bd6e097-af15-41a1-9ab2-a4e79adef815-kube-api-access-bzpnv\") pod \"console-f9d7485db-9vd4d\" (UID: \"2bd6e097-af15-41a1-9ab2-a4e79adef815\") " pod="openshift-console/console-f9d7485db-9vd4d" Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.704079 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e4e06c17-76a1-49b2-994b-bf53488b14a9-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-8q4sj\" (UID: \"e4e06c17-76a1-49b2-994b-bf53488b14a9\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-8q4sj" Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.704093 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e4e06c17-76a1-49b2-994b-bf53488b14a9-service-ca-bundle\") pod \"authentication-operator-69f744f599-8q4sj\" (UID: \"e4e06c17-76a1-49b2-994b-bf53488b14a9\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-8q4sj" Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.704107 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/61cb7436-6c15-48a6-a8b8-006f5a52f338-trusted-ca\") pod \"ingress-operator-5b745b69d9-n869z\" (UID: \"61cb7436-6c15-48a6-a8b8-006f5a52f338\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-n869z" Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.704140 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hgg9c\" (UniqueName: \"kubernetes.io/projected/61cb7436-6c15-48a6-a8b8-006f5a52f338-kube-api-access-hgg9c\") pod \"ingress-operator-5b745b69d9-n869z\" (UID: \"61cb7436-6c15-48a6-a8b8-006f5a52f338\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-n869z" Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.704169 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vpnzb\" (UniqueName: \"kubernetes.io/projected/0a656e98-1ed0-4b1a-8352-4038844a558a-kube-api-access-vpnzb\") pod \"etcd-operator-b45778765-hscjm\" (UID: \"0a656e98-1ed0-4b1a-8352-4038844a558a\") " pod="openshift-etcd-operator/etcd-operator-b45778765-hscjm" Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.704190 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/6bd6880d-6581-4cca-8eb8-9acb80689e9e-image-import-ca\") pod \"apiserver-76f77b778f-tqzdv\" (UID: \"6bd6880d-6581-4cca-8eb8-9acb80689e9e\") " pod="openshift-apiserver/apiserver-76f77b778f-tqzdv" Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.704210 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-spr6t\" (UniqueName: \"kubernetes.io/projected/6b7fddc4-f171-4c31-9bfb-9ffcce6ea5f0-kube-api-access-spr6t\") pod \"dns-operator-744455d44c-xnnk5\" (UID: \"6b7fddc4-f171-4c31-9bfb-9ffcce6ea5f0\") " pod="openshift-dns-operator/dns-operator-744455d44c-xnnk5" Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.704234 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kds22\" (UniqueName: \"kubernetes.io/projected/f06b685a-8035-4bac-88d3-d092b6df21e4-kube-api-access-kds22\") pod \"apiserver-7bbb656c7d-74rmt\" (UID: \"f06b685a-8035-4bac-88d3-d092b6df21e4\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-74rmt" Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.704256 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6bd6880d-6581-4cca-8eb8-9acb80689e9e-serving-cert\") pod \"apiserver-76f77b778f-tqzdv\" (UID: \"6bd6880d-6581-4cca-8eb8-9acb80689e9e\") " pod="openshift-apiserver/apiserver-76f77b778f-tqzdv" Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.704277 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/f06b685a-8035-4bac-88d3-d092b6df21e4-audit-policies\") pod \"apiserver-7bbb656c7d-74rmt\" (UID: \"f06b685a-8035-4bac-88d3-d092b6df21e4\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-74rmt" Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.704297 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fdwjc\" (UniqueName: \"kubernetes.io/projected/6595832a-fc60-447b-826f-ba4eb83689fb-kube-api-access-fdwjc\") pod \"openshift-apiserver-operator-796bbdcf4f-lr7dh\" (UID: \"6595832a-fc60-447b-826f-ba4eb83689fb\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-lr7dh" Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.704327 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/6bd6880d-6581-4cca-8eb8-9acb80689e9e-etcd-client\") pod \"apiserver-76f77b778f-tqzdv\" (UID: \"6bd6880d-6581-4cca-8eb8-9acb80689e9e\") " pod="openshift-apiserver/apiserver-76f77b778f-tqzdv" Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.704347 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f06b685a-8035-4bac-88d3-d092b6df21e4-serving-cert\") pod \"apiserver-7bbb656c7d-74rmt\" (UID: \"f06b685a-8035-4bac-88d3-d092b6df21e4\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-74rmt" Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.705681 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c16214d7-8024-43ad-8394-ee95539c3093-trusted-ca\") pod \"console-operator-58897d9998-zjcpp\" (UID: 
\"c16214d7-8024-43ad-8394-ee95539c3093\") " pod="openshift-console-operator/console-operator-58897d9998-zjcpp" Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.707389 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c16214d7-8024-43ad-8394-ee95539c3093-serving-cert\") pod \"console-operator-58897d9998-zjcpp\" (UID: \"c16214d7-8024-43ad-8394-ee95539c3093\") " pod="openshift-console-operator/console-operator-58897d9998-zjcpp" Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.707862 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-rm987"] Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.708218 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/b378b1b0-657f-420a-8666-86edfeb38a96-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-r2vkh\" (UID: \"b378b1b0-657f-420a-8666-86edfeb38a96\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-r2vkh" Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.708553 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-rm987" Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.709107 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/58200e7b-a0e9-47ba-8581-42878da87f40-serving-cert\") pod \"route-controller-manager-6576b87f9c-mp44v\" (UID: \"58200e7b-a0e9-47ba-8581-42878da87f40\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-mp44v" Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.711774 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492310-22ft5"] Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.713204 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/93cfa4e6-9e7c-4c17-a30f-e8d15f452be7-serving-cert\") pod \"openshift-config-operator-7777fb866f-4mwhw\" (UID: \"93cfa4e6-9e7c-4c17-a30f-e8d15f452be7\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-4mwhw" Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.714589 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-kmkjx"] Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.714612 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-r2vkh"] Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.714690 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492310-22ft5" Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.714958 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-ltskb"] Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.736319 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-9vd4d"] Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.736609 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-74rmt"] Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.737537 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-6l6nh"] Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.738004 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.743158 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.752083 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-4mwhw"] Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.752237 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-lddqg"] Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.753058 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-lddqg" Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.753155 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-8q4sj"] Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.753297 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-6l6nh" Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.755338 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-9gqxt"] Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.756067 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-lr7dh"] Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.757153 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-zpn7p"] Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.760627 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-nnmnh"] Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.761474 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.761939 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-tqzdv"] Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.763150 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-npp4j"] Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.764222 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-mlfr4"] Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.767185 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-d6bzk"] Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.768474 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-hfgqg"] Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.769413 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-hscjm"] Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.770543 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-cnjvr"] Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.771185 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-cnjvr" Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.773177 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-52vxj"] Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.773891 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-pp2rw"] Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.773967 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-52vxj" Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.774424 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-wnqmk"] Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.775913 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-st9cr"] Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.777349 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-rrwhm"] Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.779015 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-bxk4b"] Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.779434 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.780360 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-n869z"] Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.782007 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-t4wdl"] Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.783223 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-xnnk5"] Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.784710 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-qbgzv"] Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.785625 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-mbcwm"] Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.786698 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-4kgbm"] Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.787876 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-ww5mj"] Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.789134 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-52vxj"] Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.790420 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-rm987"] Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.791912 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-6l6nh"] Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.793450 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492310-22ft5"] Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.794819 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-lddqg"] Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.795919 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-kb26l"] Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.797017 4853 kubelet.go:2428] "SyncLoop UPDATE" 
source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-gp2qn"] Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.798226 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-znwgm"] Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.805638 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f06b685a-8035-4bac-88d3-d092b6df21e4-serving-cert\") pod \"apiserver-7bbb656c7d-74rmt\" (UID: \"f06b685a-8035-4bac-88d3-d092b6df21e4\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-74rmt" Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.805671 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/6bd6880d-6581-4cca-8eb8-9acb80689e9e-etcd-client\") pod \"apiserver-76f77b778f-tqzdv\" (UID: \"6bd6880d-6581-4cca-8eb8-9acb80689e9e\") " pod="openshift-apiserver/apiserver-76f77b778f-tqzdv" Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.805690 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/2bd6e097-af15-41a1-9ab2-a4e79adef815-console-config\") pod \"console-f9d7485db-9vd4d\" (UID: \"2bd6e097-af15-41a1-9ab2-a4e79adef815\") " pod="openshift-console/console-f9d7485db-9vd4d" Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.805706 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7zdtw\" (UniqueName: \"kubernetes.io/projected/6bd6880d-6581-4cca-8eb8-9acb80689e9e-kube-api-access-7zdtw\") pod \"apiserver-76f77b778f-tqzdv\" (UID: \"6bd6880d-6581-4cca-8eb8-9acb80689e9e\") " pod="openshift-apiserver/apiserver-76f77b778f-tqzdv" Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.805731 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/84cfc910-d75a-4c74-9f5f-6ec4dbb5c708-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-ltskb\" (UID: \"84cfc910-d75a-4c74-9f5f-6ec4dbb5c708\") " pod="openshift-authentication/oauth-openshift-558db77b4-ltskb" Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.805748 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/84cfc910-d75a-4c74-9f5f-6ec4dbb5c708-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-ltskb\" (UID: \"84cfc910-d75a-4c74-9f5f-6ec4dbb5c708\") " pod="openshift-authentication/oauth-openshift-558db77b4-ltskb" Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.805763 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6595832a-fc60-447b-826f-ba4eb83689fb-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-lr7dh\" (UID: \"6595832a-fc60-447b-826f-ba4eb83689fb\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-lr7dh" Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.805778 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/61cb7436-6c15-48a6-a8b8-006f5a52f338-metrics-tls\") pod \"ingress-operator-5b745b69d9-n869z\" (UID: \"61cb7436-6c15-48a6-a8b8-006f5a52f338\") " 
pod="openshift-ingress-operator/ingress-operator-5b745b69d9-n869z" Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.805794 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/84cfc910-d75a-4c74-9f5f-6ec4dbb5c708-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-ltskb\" (UID: \"84cfc910-d75a-4c74-9f5f-6ec4dbb5c708\") " pod="openshift-authentication/oauth-openshift-558db77b4-ltskb" Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.805818 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-njpm9\" (UniqueName: \"kubernetes.io/projected/53cc9731-1ede-4ad3-b2e7-730e605a1a21-kube-api-access-njpm9\") pod \"kube-storage-version-migrator-operator-b67b599dd-znwgm\" (UID: \"53cc9731-1ede-4ad3-b2e7-730e605a1a21\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-znwgm" Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.805835 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/6bd6880d-6581-4cca-8eb8-9acb80689e9e-audit\") pod \"apiserver-76f77b778f-tqzdv\" (UID: \"6bd6880d-6581-4cca-8eb8-9acb80689e9e\") " pod="openshift-apiserver/apiserver-76f77b778f-tqzdv" Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.805850 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/84cfc910-d75a-4c74-9f5f-6ec4dbb5c708-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-ltskb\" (UID: \"84cfc910-d75a-4c74-9f5f-6ec4dbb5c708\") " pod="openshift-authentication/oauth-openshift-558db77b4-ltskb" Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.805864 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/f06b685a-8035-4bac-88d3-d092b6df21e4-etcd-client\") pod \"apiserver-7bbb656c7d-74rmt\" (UID: \"f06b685a-8035-4bac-88d3-d092b6df21e4\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-74rmt" Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.805881 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6g64p\" (UniqueName: \"kubernetes.io/projected/be5a36ff-f665-4468-b7ae-8a443f0164e8-kube-api-access-6g64p\") pod \"downloads-7954f5f757-9gqxt\" (UID: \"be5a36ff-f665-4468-b7ae-8a443f0164e8\") " pod="openshift-console/downloads-7954f5f757-9gqxt" Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.805896 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e4e06c17-76a1-49b2-994b-bf53488b14a9-config\") pod \"authentication-operator-69f744f599-8q4sj\" (UID: \"e4e06c17-76a1-49b2-994b-bf53488b14a9\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-8q4sj" Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.805927 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/f06b685a-8035-4bac-88d3-d092b6df21e4-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-74rmt\" (UID: \"f06b685a-8035-4bac-88d3-d092b6df21e4\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-74rmt" Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.805953 4853 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6595832a-fc60-447b-826f-ba4eb83689fb-config\") pod \"openshift-apiserver-operator-796bbdcf4f-lr7dh\" (UID: \"6595832a-fc60-447b-826f-ba4eb83689fb\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-lr7dh" Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.805970 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/84cfc910-d75a-4c74-9f5f-6ec4dbb5c708-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-ltskb\" (UID: \"84cfc910-d75a-4c74-9f5f-6ec4dbb5c708\") " pod="openshift-authentication/oauth-openshift-558db77b4-ltskb" Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.805985 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/84cfc910-d75a-4c74-9f5f-6ec4dbb5c708-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-ltskb\" (UID: \"84cfc910-d75a-4c74-9f5f-6ec4dbb5c708\") " pod="openshift-authentication/oauth-openshift-558db77b4-ltskb" Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.806001 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/61cb7436-6c15-48a6-a8b8-006f5a52f338-bound-sa-token\") pod \"ingress-operator-5b745b69d9-n869z\" (UID: \"61cb7436-6c15-48a6-a8b8-006f5a52f338\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-n869z" Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.806020 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4d5rk\" (UniqueName: \"kubernetes.io/projected/84cfc910-d75a-4c74-9f5f-6ec4dbb5c708-kube-api-access-4d5rk\") pod \"oauth-openshift-558db77b4-ltskb\" (UID: \"84cfc910-d75a-4c74-9f5f-6ec4dbb5c708\") " pod="openshift-authentication/oauth-openshift-558db77b4-ltskb" Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.806051 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/6bd6880d-6581-4cca-8eb8-9acb80689e9e-node-pullsecrets\") pod \"apiserver-76f77b778f-tqzdv\" (UID: \"6bd6880d-6581-4cca-8eb8-9acb80689e9e\") " pod="openshift-apiserver/apiserver-76f77b778f-tqzdv" Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.806072 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0a656e98-1ed0-4b1a-8352-4038844a558a-config\") pod \"etcd-operator-b45778765-hscjm\" (UID: \"0a656e98-1ed0-4b1a-8352-4038844a558a\") " pod="openshift-etcd-operator/etcd-operator-b45778765-hscjm" Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.806088 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2f81e143-f570-4ea2-837d-f9a1dc205d9c-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-zpn7p\" (UID: \"2f81e143-f570-4ea2-837d-f9a1dc205d9c\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-zpn7p" Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.806102 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: 
\"kubernetes.io/configmap/6bd6880d-6581-4cca-8eb8-9acb80689e9e-etcd-serving-ca\") pod \"apiserver-76f77b778f-tqzdv\" (UID: \"6bd6880d-6581-4cca-8eb8-9acb80689e9e\") " pod="openshift-apiserver/apiserver-76f77b778f-tqzdv" Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.806131 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f06b685a-8035-4bac-88d3-d092b6df21e4-audit-dir\") pod \"apiserver-7bbb656c7d-74rmt\" (UID: \"f06b685a-8035-4bac-88d3-d092b6df21e4\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-74rmt" Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.806147 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/84cfc910-d75a-4c74-9f5f-6ec4dbb5c708-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-ltskb\" (UID: \"84cfc910-d75a-4c74-9f5f-6ec4dbb5c708\") " pod="openshift-authentication/oauth-openshift-558db77b4-ltskb" Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.806161 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/0a656e98-1ed0-4b1a-8352-4038844a558a-etcd-service-ca\") pod \"etcd-operator-b45778765-hscjm\" (UID: \"0a656e98-1ed0-4b1a-8352-4038844a558a\") " pod="openshift-etcd-operator/etcd-operator-b45778765-hscjm" Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.806174 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2bd6e097-af15-41a1-9ab2-a4e79adef815-trusted-ca-bundle\") pod \"console-f9d7485db-9vd4d\" (UID: \"2bd6e097-af15-41a1-9ab2-a4e79adef815\") " pod="openshift-console/console-f9d7485db-9vd4d" Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.806191 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/6b7fddc4-f171-4c31-9bfb-9ffcce6ea5f0-metrics-tls\") pod \"dns-operator-744455d44c-xnnk5\" (UID: \"6b7fddc4-f171-4c31-9bfb-9ffcce6ea5f0\") " pod="openshift-dns-operator/dns-operator-744455d44c-xnnk5" Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.806220 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/2bd6e097-af15-41a1-9ab2-a4e79adef815-oauth-serving-cert\") pod \"console-f9d7485db-9vd4d\" (UID: \"2bd6e097-af15-41a1-9ab2-a4e79adef815\") " pod="openshift-console/console-f9d7485db-9vd4d" Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.806240 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e4e06c17-76a1-49b2-994b-bf53488b14a9-serving-cert\") pod \"authentication-operator-69f744f599-8q4sj\" (UID: \"e4e06c17-76a1-49b2-994b-bf53488b14a9\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-8q4sj" Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.806255 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b57mf\" (UniqueName: \"kubernetes.io/projected/e4e06c17-76a1-49b2-994b-bf53488b14a9-kube-api-access-b57mf\") pod \"authentication-operator-69f744f599-8q4sj\" (UID: \"e4e06c17-76a1-49b2-994b-bf53488b14a9\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-8q4sj" Jan 
27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.806270 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2f81e143-f570-4ea2-837d-f9a1dc205d9c-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-zpn7p\" (UID: \"2f81e143-f570-4ea2-837d-f9a1dc205d9c\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-zpn7p" Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.806286 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/f06b685a-8035-4bac-88d3-d092b6df21e4-encryption-config\") pod \"apiserver-7bbb656c7d-74rmt\" (UID: \"f06b685a-8035-4bac-88d3-d092b6df21e4\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-74rmt" Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.806306 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/0a656e98-1ed0-4b1a-8352-4038844a558a-etcd-ca\") pod \"etcd-operator-b45778765-hscjm\" (UID: \"0a656e98-1ed0-4b1a-8352-4038844a558a\") " pod="openshift-etcd-operator/etcd-operator-b45778765-hscjm" Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.806321 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/2bd6e097-af15-41a1-9ab2-a4e79adef815-console-serving-cert\") pod \"console-f9d7485db-9vd4d\" (UID: \"2bd6e097-af15-41a1-9ab2-a4e79adef815\") " pod="openshift-console/console-f9d7485db-9vd4d" Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.806335 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/6bd6880d-6581-4cca-8eb8-9acb80689e9e-encryption-config\") pod \"apiserver-76f77b778f-tqzdv\" (UID: \"6bd6880d-6581-4cca-8eb8-9acb80689e9e\") " pod="openshift-apiserver/apiserver-76f77b778f-tqzdv" Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.806357 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2f81e143-f570-4ea2-837d-f9a1dc205d9c-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-zpn7p\" (UID: \"2f81e143-f570-4ea2-837d-f9a1dc205d9c\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-zpn7p" Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.806374 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/84cfc910-d75a-4c74-9f5f-6ec4dbb5c708-audit-dir\") pod \"oauth-openshift-558db77b4-ltskb\" (UID: \"84cfc910-d75a-4c74-9f5f-6ec4dbb5c708\") " pod="openshift-authentication/oauth-openshift-558db77b4-ltskb" Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.806389 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0a656e98-1ed0-4b1a-8352-4038844a558a-serving-cert\") pod \"etcd-operator-b45778765-hscjm\" (UID: \"0a656e98-1ed0-4b1a-8352-4038844a558a\") " pod="openshift-etcd-operator/etcd-operator-b45778765-hscjm" Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.806404 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6bd6880d-6581-4cca-8eb8-9acb80689e9e-config\") pod 
\"apiserver-76f77b778f-tqzdv\" (UID: \"6bd6880d-6581-4cca-8eb8-9acb80689e9e\") " pod="openshift-apiserver/apiserver-76f77b778f-tqzdv" Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.806482 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f06b685a-8035-4bac-88d3-d092b6df21e4-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-74rmt\" (UID: \"f06b685a-8035-4bac-88d3-d092b6df21e4\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-74rmt" Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.806505 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/2bd6e097-af15-41a1-9ab2-a4e79adef815-service-ca\") pod \"console-f9d7485db-9vd4d\" (UID: \"2bd6e097-af15-41a1-9ab2-a4e79adef815\") " pod="openshift-console/console-f9d7485db-9vd4d" Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.806521 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/84cfc910-d75a-4c74-9f5f-6ec4dbb5c708-audit-policies\") pod \"oauth-openshift-558db77b4-ltskb\" (UID: \"84cfc910-d75a-4c74-9f5f-6ec4dbb5c708\") " pod="openshift-authentication/oauth-openshift-558db77b4-ltskb" Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.806536 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/2bd6e097-af15-41a1-9ab2-a4e79adef815-console-oauth-config\") pod \"console-f9d7485db-9vd4d\" (UID: \"2bd6e097-af15-41a1-9ab2-a4e79adef815\") " pod="openshift-console/console-f9d7485db-9vd4d" Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.806551 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/84cfc910-d75a-4c74-9f5f-6ec4dbb5c708-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-ltskb\" (UID: \"84cfc910-d75a-4c74-9f5f-6ec4dbb5c708\") " pod="openshift-authentication/oauth-openshift-558db77b4-ltskb" Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.806566 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/0a656e98-1ed0-4b1a-8352-4038844a558a-etcd-client\") pod \"etcd-operator-b45778765-hscjm\" (UID: \"0a656e98-1ed0-4b1a-8352-4038844a558a\") " pod="openshift-etcd-operator/etcd-operator-b45778765-hscjm" Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.806581 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6bd6880d-6581-4cca-8eb8-9acb80689e9e-trusted-ca-bundle\") pod \"apiserver-76f77b778f-tqzdv\" (UID: \"6bd6880d-6581-4cca-8eb8-9acb80689e9e\") " pod="openshift-apiserver/apiserver-76f77b778f-tqzdv" Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.806597 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/84cfc910-d75a-4c74-9f5f-6ec4dbb5c708-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-ltskb\" (UID: \"84cfc910-d75a-4c74-9f5f-6ec4dbb5c708\") " pod="openshift-authentication/oauth-openshift-558db77b4-ltskb" Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.806614 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"audit-dir\" (UniqueName: \"kubernetes.io/host-path/6bd6880d-6581-4cca-8eb8-9acb80689e9e-audit-dir\") pod \"apiserver-76f77b778f-tqzdv\" (UID: \"6bd6880d-6581-4cca-8eb8-9acb80689e9e\") " pod="openshift-apiserver/apiserver-76f77b778f-tqzdv" Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.806898 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f06b685a-8035-4bac-88d3-d092b6df21e4-audit-dir\") pod \"apiserver-7bbb656c7d-74rmt\" (UID: \"f06b685a-8035-4bac-88d3-d092b6df21e4\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-74rmt" Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.808204 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/53cc9731-1ede-4ad3-b2e7-730e605a1a21-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-znwgm\" (UID: \"53cc9731-1ede-4ad3-b2e7-730e605a1a21\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-znwgm" Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.808231 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2f81e143-f570-4ea2-837d-f9a1dc205d9c-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-zpn7p\" (UID: \"2f81e143-f570-4ea2-837d-f9a1dc205d9c\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-zpn7p" Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.808243 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/84cfc910-d75a-4c74-9f5f-6ec4dbb5c708-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-ltskb\" (UID: \"84cfc910-d75a-4c74-9f5f-6ec4dbb5c708\") " pod="openshift-authentication/oauth-openshift-558db77b4-ltskb" Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.808334 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/53cc9731-1ede-4ad3-b2e7-730e605a1a21-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-znwgm\" (UID: \"53cc9731-1ede-4ad3-b2e7-730e605a1a21\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-znwgm" Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.808378 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/84cfc910-d75a-4c74-9f5f-6ec4dbb5c708-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-ltskb\" (UID: \"84cfc910-d75a-4c74-9f5f-6ec4dbb5c708\") " pod="openshift-authentication/oauth-openshift-558db77b4-ltskb" Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.808410 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bzpnv\" (UniqueName: \"kubernetes.io/projected/2bd6e097-af15-41a1-9ab2-a4e79adef815-kube-api-access-bzpnv\") pod \"console-f9d7485db-9vd4d\" (UID: \"2bd6e097-af15-41a1-9ab2-a4e79adef815\") " pod="openshift-console/console-f9d7485db-9vd4d" Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.808428 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e4e06c17-76a1-49b2-994b-bf53488b14a9-trusted-ca-bundle\") pod 
\"authentication-operator-69f744f599-8q4sj\" (UID: \"e4e06c17-76a1-49b2-994b-bf53488b14a9\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-8q4sj" Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.808435 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.808444 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e4e06c17-76a1-49b2-994b-bf53488b14a9-service-ca-bundle\") pod \"authentication-operator-69f744f599-8q4sj\" (UID: \"e4e06c17-76a1-49b2-994b-bf53488b14a9\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-8q4sj" Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.808657 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/61cb7436-6c15-48a6-a8b8-006f5a52f338-trusted-ca\") pod \"ingress-operator-5b745b69d9-n869z\" (UID: \"61cb7436-6c15-48a6-a8b8-006f5a52f338\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-n869z" Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.808706 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hgg9c\" (UniqueName: \"kubernetes.io/projected/61cb7436-6c15-48a6-a8b8-006f5a52f338-kube-api-access-hgg9c\") pod \"ingress-operator-5b745b69d9-n869z\" (UID: \"61cb7436-6c15-48a6-a8b8-006f5a52f338\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-n869z" Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.808748 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vpnzb\" (UniqueName: \"kubernetes.io/projected/0a656e98-1ed0-4b1a-8352-4038844a558a-kube-api-access-vpnzb\") pod \"etcd-operator-b45778765-hscjm\" (UID: \"0a656e98-1ed0-4b1a-8352-4038844a558a\") " pod="openshift-etcd-operator/etcd-operator-b45778765-hscjm" Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.808773 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/6bd6880d-6581-4cca-8eb8-9acb80689e9e-image-import-ca\") pod \"apiserver-76f77b778f-tqzdv\" (UID: \"6bd6880d-6581-4cca-8eb8-9acb80689e9e\") " pod="openshift-apiserver/apiserver-76f77b778f-tqzdv" Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.808803 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-spr6t\" (UniqueName: \"kubernetes.io/projected/6b7fddc4-f171-4c31-9bfb-9ffcce6ea5f0-kube-api-access-spr6t\") pod \"dns-operator-744455d44c-xnnk5\" (UID: \"6b7fddc4-f171-4c31-9bfb-9ffcce6ea5f0\") " pod="openshift-dns-operator/dns-operator-744455d44c-xnnk5" Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.808832 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kds22\" (UniqueName: \"kubernetes.io/projected/f06b685a-8035-4bac-88d3-d092b6df21e4-kube-api-access-kds22\") pod \"apiserver-7bbb656c7d-74rmt\" (UID: \"f06b685a-8035-4bac-88d3-d092b6df21e4\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-74rmt" Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.808862 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6bd6880d-6581-4cca-8eb8-9acb80689e9e-serving-cert\") pod 
\"apiserver-76f77b778f-tqzdv\" (UID: \"6bd6880d-6581-4cca-8eb8-9acb80689e9e\") " pod="openshift-apiserver/apiserver-76f77b778f-tqzdv" Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.808887 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/f06b685a-8035-4bac-88d3-d092b6df21e4-audit-policies\") pod \"apiserver-7bbb656c7d-74rmt\" (UID: \"f06b685a-8035-4bac-88d3-d092b6df21e4\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-74rmt" Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.808914 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fdwjc\" (UniqueName: \"kubernetes.io/projected/6595832a-fc60-447b-826f-ba4eb83689fb-kube-api-access-fdwjc\") pod \"openshift-apiserver-operator-796bbdcf4f-lr7dh\" (UID: \"6595832a-fc60-447b-826f-ba4eb83689fb\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-lr7dh" Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.809162 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e4e06c17-76a1-49b2-994b-bf53488b14a9-service-ca-bundle\") pod \"authentication-operator-69f744f599-8q4sj\" (UID: \"e4e06c17-76a1-49b2-994b-bf53488b14a9\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-8q4sj" Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.810546 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/f06b685a-8035-4bac-88d3-d092b6df21e4-etcd-client\") pod \"apiserver-7bbb656c7d-74rmt\" (UID: \"f06b685a-8035-4bac-88d3-d092b6df21e4\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-74rmt" Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.810576 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e4e06c17-76a1-49b2-994b-bf53488b14a9-serving-cert\") pod \"authentication-operator-69f744f599-8q4sj\" (UID: \"e4e06c17-76a1-49b2-994b-bf53488b14a9\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-8q4sj" Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.811299 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/61cb7436-6c15-48a6-a8b8-006f5a52f338-trusted-ca\") pod \"ingress-operator-5b745b69d9-n869z\" (UID: \"61cb7436-6c15-48a6-a8b8-006f5a52f338\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-n869z" Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.811381 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/84cfc910-d75a-4c74-9f5f-6ec4dbb5c708-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-ltskb\" (UID: \"84cfc910-d75a-4c74-9f5f-6ec4dbb5c708\") " pod="openshift-authentication/oauth-openshift-558db77b4-ltskb" Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.811387 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/84cfc910-d75a-4c74-9f5f-6ec4dbb5c708-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-ltskb\" (UID: \"84cfc910-d75a-4c74-9f5f-6ec4dbb5c708\") " pod="openshift-authentication/oauth-openshift-558db77b4-ltskb" Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 
18:44:56.811510 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/6bd6880d-6581-4cca-8eb8-9acb80689e9e-node-pullsecrets\") pod \"apiserver-76f77b778f-tqzdv\" (UID: \"6bd6880d-6581-4cca-8eb8-9acb80689e9e\") " pod="openshift-apiserver/apiserver-76f77b778f-tqzdv" Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.812038 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e4e06c17-76a1-49b2-994b-bf53488b14a9-config\") pod \"authentication-operator-69f744f599-8q4sj\" (UID: \"e4e06c17-76a1-49b2-994b-bf53488b14a9\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-8q4sj" Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.812084 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/6bd6880d-6581-4cca-8eb8-9acb80689e9e-image-import-ca\") pod \"apiserver-76f77b778f-tqzdv\" (UID: \"6bd6880d-6581-4cca-8eb8-9acb80689e9e\") " pod="openshift-apiserver/apiserver-76f77b778f-tqzdv" Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.812404 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/6bd6880d-6581-4cca-8eb8-9acb80689e9e-audit\") pod \"apiserver-76f77b778f-tqzdv\" (UID: \"6bd6880d-6581-4cca-8eb8-9acb80689e9e\") " pod="openshift-apiserver/apiserver-76f77b778f-tqzdv" Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.812698 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/f06b685a-8035-4bac-88d3-d092b6df21e4-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-74rmt\" (UID: \"f06b685a-8035-4bac-88d3-d092b6df21e4\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-74rmt" Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.813042 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/84cfc910-d75a-4c74-9f5f-6ec4dbb5c708-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-ltskb\" (UID: \"84cfc910-d75a-4c74-9f5f-6ec4dbb5c708\") " pod="openshift-authentication/oauth-openshift-558db77b4-ltskb" Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.813446 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6595832a-fc60-447b-826f-ba4eb83689fb-config\") pod \"openshift-apiserver-operator-796bbdcf4f-lr7dh\" (UID: \"6595832a-fc60-447b-826f-ba4eb83689fb\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-lr7dh" Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.813945 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/2bd6e097-af15-41a1-9ab2-a4e79adef815-oauth-serving-cert\") pod \"console-f9d7485db-9vd4d\" (UID: \"2bd6e097-af15-41a1-9ab2-a4e79adef815\") " pod="openshift-console/console-f9d7485db-9vd4d" Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.814285 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f06b685a-8035-4bac-88d3-d092b6df21e4-serving-cert\") pod \"apiserver-7bbb656c7d-74rmt\" (UID: \"f06b685a-8035-4bac-88d3-d092b6df21e4\") " 
pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-74rmt" Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.814481 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/f06b685a-8035-4bac-88d3-d092b6df21e4-encryption-config\") pod \"apiserver-7bbb656c7d-74rmt\" (UID: \"f06b685a-8035-4bac-88d3-d092b6df21e4\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-74rmt" Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.814668 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/2bd6e097-af15-41a1-9ab2-a4e79adef815-console-oauth-config\") pod \"console-f9d7485db-9vd4d\" (UID: \"2bd6e097-af15-41a1-9ab2-a4e79adef815\") " pod="openshift-console/console-f9d7485db-9vd4d" Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.814804 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/84cfc910-d75a-4c74-9f5f-6ec4dbb5c708-audit-dir\") pod \"oauth-openshift-558db77b4-ltskb\" (UID: \"84cfc910-d75a-4c74-9f5f-6ec4dbb5c708\") " pod="openshift-authentication/oauth-openshift-558db77b4-ltskb" Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.814814 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6bd6880d-6581-4cca-8eb8-9acb80689e9e-serving-cert\") pod \"apiserver-76f77b778f-tqzdv\" (UID: \"6bd6880d-6581-4cca-8eb8-9acb80689e9e\") " pod="openshift-apiserver/apiserver-76f77b778f-tqzdv" Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.814910 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/61cb7436-6c15-48a6-a8b8-006f5a52f338-metrics-tls\") pod \"ingress-operator-5b745b69d9-n869z\" (UID: \"61cb7436-6c15-48a6-a8b8-006f5a52f338\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-n869z" Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.815260 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/f06b685a-8035-4bac-88d3-d092b6df21e4-audit-policies\") pod \"apiserver-7bbb656c7d-74rmt\" (UID: \"f06b685a-8035-4bac-88d3-d092b6df21e4\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-74rmt" Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.815310 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/6bd6880d-6581-4cca-8eb8-9acb80689e9e-audit-dir\") pod \"apiserver-76f77b778f-tqzdv\" (UID: \"6bd6880d-6581-4cca-8eb8-9acb80689e9e\") " pod="openshift-apiserver/apiserver-76f77b778f-tqzdv" Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.815311 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f06b685a-8035-4bac-88d3-d092b6df21e4-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-74rmt\" (UID: \"f06b685a-8035-4bac-88d3-d092b6df21e4\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-74rmt" Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.815838 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6bd6880d-6581-4cca-8eb8-9acb80689e9e-config\") pod \"apiserver-76f77b778f-tqzdv\" (UID: \"6bd6880d-6581-4cca-8eb8-9acb80689e9e\") " pod="openshift-apiserver/apiserver-76f77b778f-tqzdv" 
Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.815840 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/2bd6e097-af15-41a1-9ab2-a4e79adef815-service-ca\") pod \"console-f9d7485db-9vd4d\" (UID: \"2bd6e097-af15-41a1-9ab2-a4e79adef815\") " pod="openshift-console/console-f9d7485db-9vd4d" Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.816190 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/84cfc910-d75a-4c74-9f5f-6ec4dbb5c708-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-ltskb\" (UID: \"84cfc910-d75a-4c74-9f5f-6ec4dbb5c708\") " pod="openshift-authentication/oauth-openshift-558db77b4-ltskb" Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.816309 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/2bd6e097-af15-41a1-9ab2-a4e79adef815-console-serving-cert\") pod \"console-f9d7485db-9vd4d\" (UID: \"2bd6e097-af15-41a1-9ab2-a4e79adef815\") " pod="openshift-console/console-f9d7485db-9vd4d" Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.816846 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/84cfc910-d75a-4c74-9f5f-6ec4dbb5c708-audit-policies\") pod \"oauth-openshift-558db77b4-ltskb\" (UID: \"84cfc910-d75a-4c74-9f5f-6ec4dbb5c708\") " pod="openshift-authentication/oauth-openshift-558db77b4-ltskb" Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.816949 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/84cfc910-d75a-4c74-9f5f-6ec4dbb5c708-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-ltskb\" (UID: \"84cfc910-d75a-4c74-9f5f-6ec4dbb5c708\") " pod="openshift-authentication/oauth-openshift-558db77b4-ltskb" Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.816994 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/6bd6880d-6581-4cca-8eb8-9acb80689e9e-etcd-serving-ca\") pod \"apiserver-76f77b778f-tqzdv\" (UID: \"6bd6880d-6581-4cca-8eb8-9acb80689e9e\") " pod="openshift-apiserver/apiserver-76f77b778f-tqzdv" Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.817431 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/84cfc910-d75a-4c74-9f5f-6ec4dbb5c708-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-ltskb\" (UID: \"84cfc910-d75a-4c74-9f5f-6ec4dbb5c708\") " pod="openshift-authentication/oauth-openshift-558db77b4-ltskb" Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.817757 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2f81e143-f570-4ea2-837d-f9a1dc205d9c-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-zpn7p\" (UID: \"2f81e143-f570-4ea2-837d-f9a1dc205d9c\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-zpn7p" Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.817922 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/2bd6e097-af15-41a1-9ab2-a4e79adef815-trusted-ca-bundle\") pod \"console-f9d7485db-9vd4d\" (UID: \"2bd6e097-af15-41a1-9ab2-a4e79adef815\") " pod="openshift-console/console-f9d7485db-9vd4d" Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.818644 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e4e06c17-76a1-49b2-994b-bf53488b14a9-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-8q4sj\" (UID: \"e4e06c17-76a1-49b2-994b-bf53488b14a9\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-8q4sj" Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.818902 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/2bd6e097-af15-41a1-9ab2-a4e79adef815-console-config\") pod \"console-f9d7485db-9vd4d\" (UID: \"2bd6e097-af15-41a1-9ab2-a4e79adef815\") " pod="openshift-console/console-f9d7485db-9vd4d" Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.818923 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/6bd6880d-6581-4cca-8eb8-9acb80689e9e-etcd-client\") pod \"apiserver-76f77b778f-tqzdv\" (UID: \"6bd6880d-6581-4cca-8eb8-9acb80689e9e\") " pod="openshift-apiserver/apiserver-76f77b778f-tqzdv" Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.819191 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/84cfc910-d75a-4c74-9f5f-6ec4dbb5c708-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-ltskb\" (UID: \"84cfc910-d75a-4c74-9f5f-6ec4dbb5c708\") " pod="openshift-authentication/oauth-openshift-558db77b4-ltskb" Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.819421 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.819468 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/84cfc910-d75a-4c74-9f5f-6ec4dbb5c708-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-ltskb\" (UID: \"84cfc910-d75a-4c74-9f5f-6ec4dbb5c708\") " pod="openshift-authentication/oauth-openshift-558db77b4-ltskb" Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.819429 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/6bd6880d-6581-4cca-8eb8-9acb80689e9e-encryption-config\") pod \"apiserver-76f77b778f-tqzdv\" (UID: \"6bd6880d-6581-4cca-8eb8-9acb80689e9e\") " pod="openshift-apiserver/apiserver-76f77b778f-tqzdv" Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.820964 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/84cfc910-d75a-4c74-9f5f-6ec4dbb5c708-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-ltskb\" (UID: \"84cfc910-d75a-4c74-9f5f-6ec4dbb5c708\") " pod="openshift-authentication/oauth-openshift-558db77b4-ltskb" Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.821548 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/84cfc910-d75a-4c74-9f5f-6ec4dbb5c708-v4-0-config-system-session\") pod 
\"oauth-openshift-558db77b4-ltskb\" (UID: \"84cfc910-d75a-4c74-9f5f-6ec4dbb5c708\") " pod="openshift-authentication/oauth-openshift-558db77b4-ltskb" Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.821580 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/84cfc910-d75a-4c74-9f5f-6ec4dbb5c708-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-ltskb\" (UID: \"84cfc910-d75a-4c74-9f5f-6ec4dbb5c708\") " pod="openshift-authentication/oauth-openshift-558db77b4-ltskb" Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.822512 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6595832a-fc60-447b-826f-ba4eb83689fb-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-lr7dh\" (UID: \"6595832a-fc60-447b-826f-ba4eb83689fb\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-lr7dh" Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.823674 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6bd6880d-6581-4cca-8eb8-9acb80689e9e-trusted-ca-bundle\") pod \"apiserver-76f77b778f-tqzdv\" (UID: \"6bd6880d-6581-4cca-8eb8-9acb80689e9e\") " pod="openshift-apiserver/apiserver-76f77b778f-tqzdv" Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.839868 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.849053 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/53cc9731-1ede-4ad3-b2e7-730e605a1a21-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-znwgm\" (UID: \"53cc9731-1ede-4ad3-b2e7-730e605a1a21\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-znwgm" Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.860802 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.867503 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/0a656e98-1ed0-4b1a-8352-4038844a558a-etcd-service-ca\") pod \"etcd-operator-b45778765-hscjm\" (UID: \"0a656e98-1ed0-4b1a-8352-4038844a558a\") " pod="openshift-etcd-operator/etcd-operator-b45778765-hscjm" Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.879354 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.900880 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.919384 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.928653 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/53cc9731-1ede-4ad3-b2e7-730e605a1a21-config\") pod 
\"kube-storage-version-migrator-operator-b67b599dd-znwgm\" (UID: \"53cc9731-1ede-4ad3-b2e7-730e605a1a21\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-znwgm" Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.940003 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.949261 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0a656e98-1ed0-4b1a-8352-4038844a558a-serving-cert\") pod \"etcd-operator-b45778765-hscjm\" (UID: \"0a656e98-1ed0-4b1a-8352-4038844a558a\") " pod="openshift-etcd-operator/etcd-operator-b45778765-hscjm" Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.959736 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.980074 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Jan 27 18:44:56 crc kubenswrapper[4853]: I0127 18:44:56.999198 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Jan 27 18:44:57 crc kubenswrapper[4853]: I0127 18:44:57.009993 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/0a656e98-1ed0-4b1a-8352-4038844a558a-etcd-client\") pod \"etcd-operator-b45778765-hscjm\" (UID: \"0a656e98-1ed0-4b1a-8352-4038844a558a\") " pod="openshift-etcd-operator/etcd-operator-b45778765-hscjm" Jan 27 18:44:57 crc kubenswrapper[4853]: I0127 18:44:57.019797 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Jan 27 18:44:57 crc kubenswrapper[4853]: I0127 18:44:57.039788 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Jan 27 18:44:57 crc kubenswrapper[4853]: I0127 18:44:57.060486 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Jan 27 18:44:57 crc kubenswrapper[4853]: I0127 18:44:57.070049 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/0a656e98-1ed0-4b1a-8352-4038844a558a-etcd-ca\") pod \"etcd-operator-b45778765-hscjm\" (UID: \"0a656e98-1ed0-4b1a-8352-4038844a558a\") " pod="openshift-etcd-operator/etcd-operator-b45778765-hscjm" Jan 27 18:44:57 crc kubenswrapper[4853]: I0127 18:44:57.079347 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Jan 27 18:44:57 crc kubenswrapper[4853]: I0127 18:44:57.100097 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Jan 27 18:44:57 crc kubenswrapper[4853]: I0127 18:44:57.119816 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Jan 27 18:44:57 crc kubenswrapper[4853]: I0127 18:44:57.123168 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0a656e98-1ed0-4b1a-8352-4038844a558a-config\") pod \"etcd-operator-b45778765-hscjm\" (UID: 
\"0a656e98-1ed0-4b1a-8352-4038844a558a\") " pod="openshift-etcd-operator/etcd-operator-b45778765-hscjm" Jan 27 18:44:57 crc kubenswrapper[4853]: I0127 18:44:57.139920 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Jan 27 18:44:57 crc kubenswrapper[4853]: I0127 18:44:57.159980 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Jan 27 18:44:57 crc kubenswrapper[4853]: I0127 18:44:57.179918 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Jan 27 18:44:57 crc kubenswrapper[4853]: I0127 18:44:57.186708 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/6b7fddc4-f171-4c31-9bfb-9ffcce6ea5f0-metrics-tls\") pod \"dns-operator-744455d44c-xnnk5\" (UID: \"6b7fddc4-f171-4c31-9bfb-9ffcce6ea5f0\") " pod="openshift-dns-operator/dns-operator-744455d44c-xnnk5" Jan 27 18:44:57 crc kubenswrapper[4853]: I0127 18:44:57.199802 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Jan 27 18:44:57 crc kubenswrapper[4853]: I0127 18:44:57.239773 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Jan 27 18:44:57 crc kubenswrapper[4853]: I0127 18:44:57.260435 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Jan 27 18:44:57 crc kubenswrapper[4853]: I0127 18:44:57.280315 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Jan 27 18:44:57 crc kubenswrapper[4853]: I0127 18:44:57.299570 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Jan 27 18:44:57 crc kubenswrapper[4853]: I0127 18:44:57.339906 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Jan 27 18:44:57 crc kubenswrapper[4853]: I0127 18:44:57.359685 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Jan 27 18:44:57 crc kubenswrapper[4853]: I0127 18:44:57.380283 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Jan 27 18:44:57 crc kubenswrapper[4853]: I0127 18:44:57.399578 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Jan 27 18:44:57 crc kubenswrapper[4853]: I0127 18:44:57.420111 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Jan 27 18:44:57 crc kubenswrapper[4853]: I0127 18:44:57.439624 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Jan 27 18:44:57 crc kubenswrapper[4853]: I0127 18:44:57.459859 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Jan 27 18:44:57 crc kubenswrapper[4853]: I0127 18:44:57.480082 4853 reflector.go:368] Caches 
populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Jan 27 18:44:57 crc kubenswrapper[4853]: I0127 18:44:57.499991 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Jan 27 18:44:57 crc kubenswrapper[4853]: I0127 18:44:57.519316 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Jan 27 18:44:57 crc kubenswrapper[4853]: I0127 18:44:57.540316 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Jan 27 18:44:57 crc kubenswrapper[4853]: I0127 18:44:57.559698 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Jan 27 18:44:57 crc kubenswrapper[4853]: I0127 18:44:57.581005 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Jan 27 18:44:57 crc kubenswrapper[4853]: I0127 18:44:57.599439 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Jan 27 18:44:57 crc kubenswrapper[4853]: I0127 18:44:57.619631 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Jan 27 18:44:57 crc kubenswrapper[4853]: I0127 18:44:57.639360 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Jan 27 18:44:57 crc kubenswrapper[4853]: I0127 18:44:57.660157 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Jan 27 18:44:57 crc kubenswrapper[4853]: I0127 18:44:57.679849 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Jan 27 18:44:57 crc kubenswrapper[4853]: I0127 18:44:57.698042 4853 request.go:700] Waited for 1.008703033s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-service-ca/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0 Jan 27 18:44:57 crc kubenswrapper[4853]: I0127 18:44:57.700063 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Jan 27 18:44:57 crc kubenswrapper[4853]: I0127 18:44:57.719642 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Jan 27 18:44:57 crc kubenswrapper[4853]: I0127 18:44:57.739829 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Jan 27 18:44:57 crc kubenswrapper[4853]: I0127 18:44:57.759237 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Jan 27 18:44:57 crc kubenswrapper[4853]: I0127 18:44:57.779197 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Jan 27 18:44:57 crc kubenswrapper[4853]: I0127 18:44:57.799482 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Jan 27 18:44:57 crc kubenswrapper[4853]: I0127 18:44:57.819250 4853 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Jan 27 18:44:57 crc kubenswrapper[4853]: I0127 18:44:57.839503 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 27 18:44:57 crc kubenswrapper[4853]: I0127 18:44:57.860313 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 27 18:44:57 crc kubenswrapper[4853]: I0127 18:44:57.879671 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 27 18:44:57 crc kubenswrapper[4853]: I0127 18:44:57.900102 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 27 18:44:57 crc kubenswrapper[4853]: I0127 18:44:57.926061 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 27 18:44:57 crc kubenswrapper[4853]: I0127 18:44:57.939553 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 27 18:44:57 crc kubenswrapper[4853]: I0127 18:44:57.960210 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 27 18:44:57 crc kubenswrapper[4853]: I0127 18:44:57.979925 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Jan 27 18:44:57 crc kubenswrapper[4853]: I0127 18:44:57.999459 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Jan 27 18:44:58 crc kubenswrapper[4853]: I0127 18:44:58.019449 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Jan 27 18:44:58 crc kubenswrapper[4853]: I0127 18:44:58.040400 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Jan 27 18:44:58 crc kubenswrapper[4853]: I0127 18:44:58.059843 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Jan 27 18:44:58 crc kubenswrapper[4853]: I0127 18:44:58.080156 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Jan 27 18:44:58 crc kubenswrapper[4853]: I0127 18:44:58.100454 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Jan 27 18:44:58 crc kubenswrapper[4853]: I0127 18:44:58.135956 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7j8hv\" (UniqueName: \"kubernetes.io/projected/93cfa4e6-9e7c-4c17-a30f-e8d15f452be7-kube-api-access-7j8hv\") pod \"openshift-config-operator-7777fb866f-4mwhw\" (UID: \"93cfa4e6-9e7c-4c17-a30f-e8d15f452be7\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-4mwhw" Jan 27 18:44:58 crc kubenswrapper[4853]: I0127 18:44:58.154705 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t4tjx\" (UniqueName: \"kubernetes.io/projected/58200e7b-a0e9-47ba-8581-42878da87f40-kube-api-access-t4tjx\") pod \"route-controller-manager-6576b87f9c-mp44v\" (UID: \"58200e7b-a0e9-47ba-8581-42878da87f40\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-mp44v" Jan 27 18:44:58 crc 
kubenswrapper[4853]: I0127 18:44:58.160031 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Jan 27 18:44:58 crc kubenswrapper[4853]: I0127 18:44:58.179487 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Jan 27 18:44:58 crc kubenswrapper[4853]: I0127 18:44:58.199753 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Jan 27 18:44:58 crc kubenswrapper[4853]: I0127 18:44:58.219492 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Jan 27 18:44:58 crc kubenswrapper[4853]: I0127 18:44:58.245809 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Jan 27 18:44:58 crc kubenswrapper[4853]: I0127 18:44:58.259836 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Jan 27 18:44:58 crc kubenswrapper[4853]: I0127 18:44:58.280569 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Jan 27 18:44:58 crc kubenswrapper[4853]: I0127 18:44:58.300190 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Jan 27 18:44:58 crc kubenswrapper[4853]: I0127 18:44:58.335075 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xgtsj\" (UniqueName: \"kubernetes.io/projected/c16214d7-8024-43ad-8394-ee95539c3093-kube-api-access-xgtsj\") pod \"console-operator-58897d9998-zjcpp\" (UID: \"c16214d7-8024-43ad-8394-ee95539c3093\") " pod="openshift-console-operator/console-operator-58897d9998-zjcpp" Jan 27 18:44:58 crc kubenswrapper[4853]: I0127 18:44:58.343720 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-zjcpp" Jan 27 18:44:58 crc kubenswrapper[4853]: I0127 18:44:58.352459 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-mp44v" Jan 27 18:44:58 crc kubenswrapper[4853]: I0127 18:44:58.355326 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wxgwl\" (UniqueName: \"kubernetes.io/projected/b378b1b0-657f-420a-8666-86edfeb38a96-kube-api-access-wxgwl\") pod \"cluster-samples-operator-665b6dd947-r2vkh\" (UID: \"b378b1b0-657f-420a-8666-86edfeb38a96\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-r2vkh" Jan 27 18:44:58 crc kubenswrapper[4853]: I0127 18:44:58.359977 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Jan 27 18:44:58 crc kubenswrapper[4853]: I0127 18:44:58.381278 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Jan 27 18:44:58 crc kubenswrapper[4853]: I0127 18:44:58.393326 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-4mwhw" Jan 27 18:44:58 crc kubenswrapper[4853]: I0127 18:44:58.400175 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-r2vkh" Jan 27 18:44:58 crc kubenswrapper[4853]: I0127 18:44:58.400216 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 27 18:44:58 crc kubenswrapper[4853]: I0127 18:44:58.423396 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 27 18:44:58 crc kubenswrapper[4853]: I0127 18:44:58.448916 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Jan 27 18:44:58 crc kubenswrapper[4853]: I0127 18:44:58.460171 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Jan 27 18:44:58 crc kubenswrapper[4853]: I0127 18:44:58.481256 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Jan 27 18:44:58 crc kubenswrapper[4853]: I0127 18:44:58.499766 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Jan 27 18:44:58 crc kubenswrapper[4853]: I0127 18:44:58.524870 4853 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Jan 27 18:44:58 crc kubenswrapper[4853]: I0127 18:44:58.539836 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Jan 27 18:44:58 crc kubenswrapper[4853]: I0127 18:44:58.560734 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Jan 27 18:44:58 crc kubenswrapper[4853]: I0127 18:44:58.564102 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-zjcpp"] Jan 27 18:44:58 crc kubenswrapper[4853]: W0127 18:44:58.575563 4853 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc16214d7_8024_43ad_8394_ee95539c3093.slice/crio-7719a72f4dfe42cc151ad30de33d193336d4863a59b96bd9cfe05c258bc09c52 WatchSource:0}: Error finding container 7719a72f4dfe42cc151ad30de33d193336d4863a59b96bd9cfe05c258bc09c52: Status 404 returned error can't find the container with id 7719a72f4dfe42cc151ad30de33d193336d4863a59b96bd9cfe05c258bc09c52 Jan 27 18:44:58 crc kubenswrapper[4853]: I0127 18:44:58.579958 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Jan 27 18:44:58 crc kubenswrapper[4853]: I0127 18:44:58.586222 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-mp44v"] Jan 27 18:44:58 crc kubenswrapper[4853]: I0127 18:44:58.600225 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Jan 27 18:44:58 crc kubenswrapper[4853]: I0127 18:44:58.616373 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-r2vkh"] Jan 27 18:44:58 crc kubenswrapper[4853]: I0127 18:44:58.619924 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Jan 27 18:44:58 crc 
kubenswrapper[4853]: I0127 18:44:58.640148 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Jan 27 18:44:58 crc kubenswrapper[4853]: I0127 18:44:58.649086 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-4mwhw"] Jan 27 18:44:58 crc kubenswrapper[4853]: I0127 18:44:58.659730 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Jan 27 18:44:58 crc kubenswrapper[4853]: W0127 18:44:58.660755 4853 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod93cfa4e6_9e7c_4c17_a30f_e8d15f452be7.slice/crio-b499727734016b9f9edc2254629b722c82137312b809ab0476c01459d35c619a WatchSource:0}: Error finding container b499727734016b9f9edc2254629b722c82137312b809ab0476c01459d35c619a: Status 404 returned error can't find the container with id b499727734016b9f9edc2254629b722c82137312b809ab0476c01459d35c619a Jan 27 18:44:58 crc kubenswrapper[4853]: I0127 18:44:58.680558 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Jan 27 18:44:58 crc kubenswrapper[4853]: I0127 18:44:58.698504 4853 request.go:700] Waited for 1.889735078s due to client-side throttling, not priority and fairness, request: POST:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-storage-version-migrator-operator/serviceaccounts/kube-storage-version-migrator-operator/token Jan 27 18:44:58 crc kubenswrapper[4853]: I0127 18:44:58.717297 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-njpm9\" (UniqueName: \"kubernetes.io/projected/53cc9731-1ede-4ad3-b2e7-730e605a1a21-kube-api-access-njpm9\") pod \"kube-storage-version-migrator-operator-b67b599dd-znwgm\" (UID: \"53cc9731-1ede-4ad3-b2e7-730e605a1a21\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-znwgm" Jan 27 18:44:58 crc kubenswrapper[4853]: I0127 18:44:58.733415 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fdwjc\" (UniqueName: \"kubernetes.io/projected/6595832a-fc60-447b-826f-ba4eb83689fb-kube-api-access-fdwjc\") pod \"openshift-apiserver-operator-796bbdcf4f-lr7dh\" (UID: \"6595832a-fc60-447b-826f-ba4eb83689fb\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-lr7dh" Jan 27 18:44:58 crc kubenswrapper[4853]: I0127 18:44:58.754506 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b57mf\" (UniqueName: \"kubernetes.io/projected/e4e06c17-76a1-49b2-994b-bf53488b14a9-kube-api-access-b57mf\") pod \"authentication-operator-69f744f599-8q4sj\" (UID: \"e4e06c17-76a1-49b2-994b-bf53488b14a9\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-8q4sj" Jan 27 18:44:58 crc kubenswrapper[4853]: I0127 18:44:58.776744 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hgg9c\" (UniqueName: \"kubernetes.io/projected/61cb7436-6c15-48a6-a8b8-006f5a52f338-kube-api-access-hgg9c\") pod \"ingress-operator-5b745b69d9-n869z\" (UID: \"61cb7436-6c15-48a6-a8b8-006f5a52f338\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-n869z" Jan 27 18:44:58 crc kubenswrapper[4853]: I0127 18:44:58.794476 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: 
\"kubernetes.io/projected/61cb7436-6c15-48a6-a8b8-006f5a52f338-bound-sa-token\") pod \"ingress-operator-5b745b69d9-n869z\" (UID: \"61cb7436-6c15-48a6-a8b8-006f5a52f338\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-n869z" Jan 27 18:44:58 crc kubenswrapper[4853]: I0127 18:44:58.802462 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-8q4sj" Jan 27 18:44:58 crc kubenswrapper[4853]: I0127 18:44:58.816874 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vpnzb\" (UniqueName: \"kubernetes.io/projected/0a656e98-1ed0-4b1a-8352-4038844a558a-kube-api-access-vpnzb\") pod \"etcd-operator-b45778765-hscjm\" (UID: \"0a656e98-1ed0-4b1a-8352-4038844a558a\") " pod="openshift-etcd-operator/etcd-operator-b45778765-hscjm" Jan 27 18:44:58 crc kubenswrapper[4853]: I0127 18:44:58.835210 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4d5rk\" (UniqueName: \"kubernetes.io/projected/84cfc910-d75a-4c74-9f5f-6ec4dbb5c708-kube-api-access-4d5rk\") pod \"oauth-openshift-558db77b4-ltskb\" (UID: \"84cfc910-d75a-4c74-9f5f-6ec4dbb5c708\") " pod="openshift-authentication/oauth-openshift-558db77b4-ltskb" Jan 27 18:44:58 crc kubenswrapper[4853]: I0127 18:44:58.847531 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-mp44v" event={"ID":"58200e7b-a0e9-47ba-8581-42878da87f40","Type":"ContainerStarted","Data":"d783566aec88ec506250b521a4e991a13c79910737751d6209d7da083b3dc7bf"} Jan 27 18:44:58 crc kubenswrapper[4853]: I0127 18:44:58.847584 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-mp44v" event={"ID":"58200e7b-a0e9-47ba-8581-42878da87f40","Type":"ContainerStarted","Data":"83a0b5f4f8869561de9cc4d6190147f92f4a149e63547251fa9f6afa607820a4"} Jan 27 18:44:58 crc kubenswrapper[4853]: I0127 18:44:58.847867 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-mp44v" Jan 27 18:44:58 crc kubenswrapper[4853]: I0127 18:44:58.849042 4853 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-mp44v container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.6:8443/healthz\": dial tcp 10.217.0.6:8443: connect: connection refused" start-of-body= Jan 27 18:44:58 crc kubenswrapper[4853]: I0127 18:44:58.849102 4853 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-mp44v" podUID="58200e7b-a0e9-47ba-8581-42878da87f40" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.6:8443/healthz\": dial tcp 10.217.0.6:8443: connect: connection refused" Jan 27 18:44:58 crc kubenswrapper[4853]: I0127 18:44:58.851824 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-r2vkh" event={"ID":"b378b1b0-657f-420a-8666-86edfeb38a96","Type":"ContainerStarted","Data":"430c5e55ff20c7ee3eb8bac84069c4e0e60f0ae49842af043176751dd1963f2f"} Jan 27 18:44:58 crc kubenswrapper[4853]: I0127 18:44:58.851869 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-r2vkh" 
event={"ID":"b378b1b0-657f-420a-8666-86edfeb38a96","Type":"ContainerStarted","Data":"65edeaaefb5ad514159d63d2bc8b37c5b6aa50c037bb3e644d60f241b9336362"} Jan 27 18:44:58 crc kubenswrapper[4853]: I0127 18:44:58.853547 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-4mwhw" event={"ID":"93cfa4e6-9e7c-4c17-a30f-e8d15f452be7","Type":"ContainerStarted","Data":"7f50e245fedd542eb8996b6108b03879290b643aab6511fc55b4a7dd6d233e1f"} Jan 27 18:44:58 crc kubenswrapper[4853]: I0127 18:44:58.853577 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-4mwhw" event={"ID":"93cfa4e6-9e7c-4c17-a30f-e8d15f452be7","Type":"ContainerStarted","Data":"b499727734016b9f9edc2254629b722c82137312b809ab0476c01459d35c619a"} Jan 27 18:44:58 crc kubenswrapper[4853]: I0127 18:44:58.855221 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-zjcpp" event={"ID":"c16214d7-8024-43ad-8394-ee95539c3093","Type":"ContainerStarted","Data":"c1ad8ada6156e484fc4a16979b184a229fae51622146f525edde476d60874d19"} Jan 27 18:44:58 crc kubenswrapper[4853]: I0127 18:44:58.855254 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-zjcpp" event={"ID":"c16214d7-8024-43ad-8394-ee95539c3093","Type":"ContainerStarted","Data":"7719a72f4dfe42cc151ad30de33d193336d4863a59b96bd9cfe05c258bc09c52"} Jan 27 18:44:58 crc kubenswrapper[4853]: I0127 18:44:58.855880 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-zjcpp" Jan 27 18:44:58 crc kubenswrapper[4853]: I0127 18:44:58.858410 4853 patch_prober.go:28] interesting pod/console-operator-58897d9998-zjcpp container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.5:8443/readyz\": dial tcp 10.217.0.5:8443: connect: connection refused" start-of-body= Jan 27 18:44:58 crc kubenswrapper[4853]: I0127 18:44:58.858481 4853 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-zjcpp" podUID="c16214d7-8024-43ad-8394-ee95539c3093" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.5:8443/readyz\": dial tcp 10.217.0.5:8443: connect: connection refused" Jan 27 18:44:58 crc kubenswrapper[4853]: I0127 18:44:58.868095 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-lr7dh" Jan 27 18:44:58 crc kubenswrapper[4853]: I0127 18:44:58.868832 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6g64p\" (UniqueName: \"kubernetes.io/projected/be5a36ff-f665-4468-b7ae-8a443f0164e8-kube-api-access-6g64p\") pod \"downloads-7954f5f757-9gqxt\" (UID: \"be5a36ff-f665-4468-b7ae-8a443f0164e8\") " pod="openshift-console/downloads-7954f5f757-9gqxt" Jan 27 18:44:58 crc kubenswrapper[4853]: I0127 18:44:58.873297 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-spr6t\" (UniqueName: \"kubernetes.io/projected/6b7fddc4-f171-4c31-9bfb-9ffcce6ea5f0-kube-api-access-spr6t\") pod \"dns-operator-744455d44c-xnnk5\" (UID: \"6b7fddc4-f171-4c31-9bfb-9ffcce6ea5f0\") " pod="openshift-dns-operator/dns-operator-744455d44c-xnnk5" Jan 27 18:44:58 crc kubenswrapper[4853]: I0127 18:44:58.895101 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kds22\" (UniqueName: \"kubernetes.io/projected/f06b685a-8035-4bac-88d3-d092b6df21e4-kube-api-access-kds22\") pod \"apiserver-7bbb656c7d-74rmt\" (UID: \"f06b685a-8035-4bac-88d3-d092b6df21e4\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-74rmt" Jan 27 18:44:58 crc kubenswrapper[4853]: I0127 18:44:58.923976 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2f81e143-f570-4ea2-837d-f9a1dc205d9c-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-zpn7p\" (UID: \"2f81e143-f570-4ea2-837d-f9a1dc205d9c\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-zpn7p" Jan 27 18:44:58 crc kubenswrapper[4853]: I0127 18:44:58.945734 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7zdtw\" (UniqueName: \"kubernetes.io/projected/6bd6880d-6581-4cca-8eb8-9acb80689e9e-kube-api-access-7zdtw\") pod \"apiserver-76f77b778f-tqzdv\" (UID: \"6bd6880d-6581-4cca-8eb8-9acb80689e9e\") " pod="openshift-apiserver/apiserver-76f77b778f-tqzdv" Jan 27 18:44:58 crc kubenswrapper[4853]: I0127 18:44:58.957300 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bzpnv\" (UniqueName: \"kubernetes.io/projected/2bd6e097-af15-41a1-9ab2-a4e79adef815-kube-api-access-bzpnv\") pod \"console-f9d7485db-9vd4d\" (UID: \"2bd6e097-af15-41a1-9ab2-a4e79adef815\") " pod="openshift-console/console-f9d7485db-9vd4d" Jan 27 18:44:58 crc kubenswrapper[4853]: I0127 18:44:58.965074 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-n869z" Jan 27 18:44:58 crc kubenswrapper[4853]: I0127 18:44:58.974870 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-8q4sj"] Jan 27 18:44:58 crc kubenswrapper[4853]: I0127 18:44:58.975179 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-znwgm" Jan 27 18:44:58 crc kubenswrapper[4853]: I0127 18:44:58.991547 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-hscjm" Jan 27 18:44:59 crc kubenswrapper[4853]: I0127 18:44:59.000182 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-xnnk5" Jan 27 18:44:59 crc kubenswrapper[4853]: W0127 18:44:59.003237 4853 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode4e06c17_76a1_49b2_994b_bf53488b14a9.slice/crio-7d0673c86e6d828142ae7bdccbb27933b25c173503780f93e68dd937095f512e WatchSource:0}: Error finding container 7d0673c86e6d828142ae7bdccbb27933b25c173503780f93e68dd937095f512e: Status 404 returned error can't find the container with id 7d0673c86e6d828142ae7bdccbb27933b25c173503780f93e68dd937095f512e Jan 27 18:44:59 crc kubenswrapper[4853]: I0127 18:44:59.038836 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dgthx\" (UniqueName: \"kubernetes.io/projected/39a64c5e-945d-4be5-a1af-6c8ee6fa8ee0-kube-api-access-dgthx\") pod \"cluster-image-registry-operator-dc59b4c8b-nnmnh\" (UID: \"39a64c5e-945d-4be5-a1af-6c8ee6fa8ee0\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-nnmnh" Jan 27 18:44:59 crc kubenswrapper[4853]: I0127 18:44:59.038923 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a7c9b9f7-1d12-4e77-a47f-8cb601836611-trusted-ca\") pod \"image-registry-697d97f7c8-npp4j\" (UID: \"a7c9b9f7-1d12-4e77-a47f-8cb601836611\") " pod="openshift-image-registry/image-registry-697d97f7c8-npp4j" Jan 27 18:44:59 crc kubenswrapper[4853]: I0127 18:44:59.038948 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hmmwk\" (UniqueName: \"kubernetes.io/projected/80fdceac-6136-4c48-a96f-3243f5416b10-kube-api-access-hmmwk\") pod \"machine-approver-56656f9798-qpjt6\" (UID: \"80fdceac-6136-4c48-a96f-3243f5416b10\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-qpjt6" Jan 27 18:44:59 crc kubenswrapper[4853]: I0127 18:44:59.039054 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a7c9b9f7-1d12-4e77-a47f-8cb601836611-bound-sa-token\") pod \"image-registry-697d97f7c8-npp4j\" (UID: \"a7c9b9f7-1d12-4e77-a47f-8cb601836611\") " pod="openshift-image-registry/image-registry-697d97f7c8-npp4j" Jan 27 18:44:59 crc kubenswrapper[4853]: I0127 18:44:59.039162 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/80fdceac-6136-4c48-a96f-3243f5416b10-machine-approver-tls\") pod \"machine-approver-56656f9798-qpjt6\" (UID: \"80fdceac-6136-4c48-a96f-3243f5416b10\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-qpjt6" Jan 27 18:44:59 crc kubenswrapper[4853]: I0127 18:44:59.039259 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bb1a45ce-530f-4492-a7e2-9432e194001d-config\") pod \"machine-api-operator-5694c8668f-kmkjx\" (UID: \"bb1a45ce-530f-4492-a7e2-9432e194001d\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-kmkjx" Jan 27 18:44:59 crc kubenswrapper[4853]: I0127 18:44:59.039285 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/80fdceac-6136-4c48-a96f-3243f5416b10-config\") pod 
\"machine-approver-56656f9798-qpjt6\" (UID: \"80fdceac-6136-4c48-a96f-3243f5416b10\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-qpjt6" Jan 27 18:44:59 crc kubenswrapper[4853]: I0127 18:44:59.039352 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/bb1a45ce-530f-4492-a7e2-9432e194001d-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-kmkjx\" (UID: \"bb1a45ce-530f-4492-a7e2-9432e194001d\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-kmkjx" Jan 27 18:44:59 crc kubenswrapper[4853]: I0127 18:44:59.039374 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-scl6q\" (UniqueName: \"kubernetes.io/projected/a7c9b9f7-1d12-4e77-a47f-8cb601836611-kube-api-access-scl6q\") pod \"image-registry-697d97f7c8-npp4j\" (UID: \"a7c9b9f7-1d12-4e77-a47f-8cb601836611\") " pod="openshift-image-registry/image-registry-697d97f7c8-npp4j" Jan 27 18:44:59 crc kubenswrapper[4853]: I0127 18:44:59.039441 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/39a64c5e-945d-4be5-a1af-6c8ee6fa8ee0-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-nnmnh\" (UID: \"39a64c5e-945d-4be5-a1af-6c8ee6fa8ee0\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-nnmnh" Jan 27 18:44:59 crc kubenswrapper[4853]: I0127 18:44:59.039460 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/39a64c5e-945d-4be5-a1af-6c8ee6fa8ee0-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-nnmnh\" (UID: \"39a64c5e-945d-4be5-a1af-6c8ee6fa8ee0\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-nnmnh" Jan 27 18:44:59 crc kubenswrapper[4853]: I0127 18:44:59.039506 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/a7c9b9f7-1d12-4e77-a47f-8cb601836611-registry-tls\") pod \"image-registry-697d97f7c8-npp4j\" (UID: \"a7c9b9f7-1d12-4e77-a47f-8cb601836611\") " pod="openshift-image-registry/image-registry-697d97f7c8-npp4j" Jan 27 18:44:59 crc kubenswrapper[4853]: I0127 18:44:59.039525 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/a7c9b9f7-1d12-4e77-a47f-8cb601836611-ca-trust-extracted\") pod \"image-registry-697d97f7c8-npp4j\" (UID: \"a7c9b9f7-1d12-4e77-a47f-8cb601836611\") " pod="openshift-image-registry/image-registry-697d97f7c8-npp4j" Jan 27 18:44:59 crc kubenswrapper[4853]: I0127 18:44:59.039570 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-npp4j\" (UID: \"a7c9b9f7-1d12-4e77-a47f-8cb601836611\") " pod="openshift-image-registry/image-registry-697d97f7c8-npp4j" Jan 27 18:44:59 crc kubenswrapper[4853]: I0127 18:44:59.039613 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/2d8ed842-012e-42f9-b38e-c040f2e36ad6-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-pp2rw\" (UID: \"2d8ed842-012e-42f9-b38e-c040f2e36ad6\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-pp2rw" Jan 27 18:44:59 crc kubenswrapper[4853]: I0127 18:44:59.039659 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/a7c9b9f7-1d12-4e77-a47f-8cb601836611-registry-certificates\") pod \"image-registry-697d97f7c8-npp4j\" (UID: \"a7c9b9f7-1d12-4e77-a47f-8cb601836611\") " pod="openshift-image-registry/image-registry-697d97f7c8-npp4j" Jan 27 18:44:59 crc kubenswrapper[4853]: I0127 18:44:59.039679 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2d8ed842-012e-42f9-b38e-c040f2e36ad6-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-pp2rw\" (UID: \"2d8ed842-012e-42f9-b38e-c040f2e36ad6\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-pp2rw" Jan 27 18:44:59 crc kubenswrapper[4853]: I0127 18:44:59.039697 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/a7c9b9f7-1d12-4e77-a47f-8cb601836611-installation-pull-secrets\") pod \"image-registry-697d97f7c8-npp4j\" (UID: \"a7c9b9f7-1d12-4e77-a47f-8cb601836611\") " pod="openshift-image-registry/image-registry-697d97f7c8-npp4j" Jan 27 18:44:59 crc kubenswrapper[4853]: I0127 18:44:59.039981 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/04f33872-609b-4d47-ab31-3315051b1414-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-mlfr4\" (UID: \"04f33872-609b-4d47-ab31-3315051b1414\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-mlfr4" Jan 27 18:44:59 crc kubenswrapper[4853]: I0127 18:44:59.040002 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jzmh8\" (UniqueName: \"kubernetes.io/projected/04f33872-609b-4d47-ab31-3315051b1414-kube-api-access-jzmh8\") pod \"openshift-controller-manager-operator-756b6f6bc6-mlfr4\" (UID: \"04f33872-609b-4d47-ab31-3315051b1414\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-mlfr4" Jan 27 18:44:59 crc kubenswrapper[4853]: I0127 18:44:59.041315 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/bb1a45ce-530f-4492-a7e2-9432e194001d-images\") pod \"machine-api-operator-5694c8668f-kmkjx\" (UID: \"bb1a45ce-530f-4492-a7e2-9432e194001d\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-kmkjx" Jan 27 18:44:59 crc kubenswrapper[4853]: E0127 18:44:59.041405 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 18:44:59.541389021 +0000 UTC m=+142.003931985 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-npp4j" (UID: "a7c9b9f7-1d12-4e77-a47f-8cb601836611") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 18:44:59 crc kubenswrapper[4853]: I0127 18:44:59.041478 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/80fdceac-6136-4c48-a96f-3243f5416b10-auth-proxy-config\") pod \"machine-approver-56656f9798-qpjt6\" (UID: \"80fdceac-6136-4c48-a96f-3243f5416b10\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-qpjt6" Jan 27 18:44:59 crc kubenswrapper[4853]: I0127 18:44:59.041547 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z246l\" (UniqueName: \"kubernetes.io/projected/bb1a45ce-530f-4492-a7e2-9432e194001d-kube-api-access-z246l\") pod \"machine-api-operator-5694c8668f-kmkjx\" (UID: \"bb1a45ce-530f-4492-a7e2-9432e194001d\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-kmkjx" Jan 27 18:44:59 crc kubenswrapper[4853]: I0127 18:44:59.041575 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/39a64c5e-945d-4be5-a1af-6c8ee6fa8ee0-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-nnmnh\" (UID: \"39a64c5e-945d-4be5-a1af-6c8ee6fa8ee0\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-nnmnh" Jan 27 18:44:59 crc kubenswrapper[4853]: I0127 18:44:59.041660 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/04f33872-609b-4d47-ab31-3315051b1414-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-mlfr4\" (UID: \"04f33872-609b-4d47-ab31-3315051b1414\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-mlfr4" Jan 27 18:44:59 crc kubenswrapper[4853]: I0127 18:44:59.041739 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2d8ed842-012e-42f9-b38e-c040f2e36ad6-config\") pod \"kube-apiserver-operator-766d6c64bb-pp2rw\" (UID: \"2d8ed842-012e-42f9-b38e-c040f2e36ad6\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-pp2rw" Jan 27 18:44:59 crc kubenswrapper[4853]: I0127 18:44:59.052874 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-lr7dh"] Jan 27 18:44:59 crc kubenswrapper[4853]: I0127 18:44:59.070179 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-ltskb" Jan 27 18:44:59 crc kubenswrapper[4853]: I0127 18:44:59.080873 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-74rmt" Jan 27 18:44:59 crc kubenswrapper[4853]: W0127 18:44:59.095535 4853 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6595832a_fc60_447b_826f_ba4eb83689fb.slice/crio-33d58218f90965ae01dc53ae9666598f405ab7b99f70b83ab0afbcb63eec6240 WatchSource:0}: Error finding container 33d58218f90965ae01dc53ae9666598f405ab7b99f70b83ab0afbcb63eec6240: Status 404 returned error can't find the container with id 33d58218f90965ae01dc53ae9666598f405ab7b99f70b83ab0afbcb63eec6240 Jan 27 18:44:59 crc kubenswrapper[4853]: I0127 18:44:59.140458 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-9gqxt" Jan 27 18:44:59 crc kubenswrapper[4853]: I0127 18:44:59.144074 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 18:44:59 crc kubenswrapper[4853]: I0127 18:44:59.144414 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6e624c88-c6c1-4c35-985b-264173a9abcd-client-ca\") pod \"controller-manager-879f6c89f-kb26l\" (UID: \"6e624c88-c6c1-4c35-985b-264173a9abcd\") " pod="openshift-controller-manager/controller-manager-879f6c89f-kb26l" Jan 27 18:44:59 crc kubenswrapper[4853]: I0127 18:44:59.144443 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q7df8\" (UniqueName: \"kubernetes.io/projected/24efc8ab-a03a-411f-8441-454cae46ede9-kube-api-access-q7df8\") pod \"machine-config-controller-84d6567774-rm987\" (UID: \"24efc8ab-a03a-411f-8441-454cae46ede9\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-rm987" Jan 27 18:44:59 crc kubenswrapper[4853]: I0127 18:44:59.144469 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b6a41d3b-0671-4105-9f35-4d6c72074c5d-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-wnqmk\" (UID: \"b6a41d3b-0671-4105-9f35-4d6c72074c5d\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-wnqmk" Jan 27 18:44:59 crc kubenswrapper[4853]: I0127 18:44:59.144490 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/5a20718d-a359-4670-86a3-4f32a2b11f53-socket-dir\") pod \"csi-hostpathplugin-6l6nh\" (UID: \"5a20718d-a359-4670-86a3-4f32a2b11f53\") " pod="hostpath-provisioner/csi-hostpathplugin-6l6nh" Jan 27 18:44:59 crc kubenswrapper[4853]: I0127 18:44:59.144512 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/06c96982-0b5d-4214-9d42-1b06ff771366-node-bootstrap-token\") pod \"machine-config-server-cnjvr\" (UID: \"06c96982-0b5d-4214-9d42-1b06ff771366\") " pod="openshift-machine-config-operator/machine-config-server-cnjvr" Jan 27 18:44:59 crc kubenswrapper[4853]: I0127 18:44:59.144535 4853 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/98053767-f7fa-4a83-a094-a96482717baf-config\") pod \"service-ca-operator-777779d784-4kgbm\" (UID: \"98053767-f7fa-4a83-a094-a96482717baf\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-4kgbm" Jan 27 18:44:59 crc kubenswrapper[4853]: I0127 18:44:59.144595 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/80fdceac-6136-4c48-a96f-3243f5416b10-config\") pod \"machine-approver-56656f9798-qpjt6\" (UID: \"80fdceac-6136-4c48-a96f-3243f5416b10\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-qpjt6" Jan 27 18:44:59 crc kubenswrapper[4853]: I0127 18:44:59.144616 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/24efc8ab-a03a-411f-8441-454cae46ede9-proxy-tls\") pod \"machine-config-controller-84d6567774-rm987\" (UID: \"24efc8ab-a03a-411f-8441-454cae46ede9\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-rm987" Jan 27 18:44:59 crc kubenswrapper[4853]: I0127 18:44:59.144638 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/b0123ef8-ad21-45f5-b5d8-e491c9aa10dd-apiservice-cert\") pod \"packageserver-d55dfcdfc-st9cr\" (UID: \"b0123ef8-ad21-45f5-b5d8-e491c9aa10dd\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-st9cr" Jan 27 18:44:59 crc kubenswrapper[4853]: I0127 18:44:59.144669 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/06a480d9-6aba-4daa-8eb3-7d5e93beeef0-signing-key\") pod \"service-ca-9c57cc56f-hfgqg\" (UID: \"06a480d9-6aba-4daa-8eb3-7d5e93beeef0\") " pod="openshift-service-ca/service-ca-9c57cc56f-hfgqg" Jan 27 18:44:59 crc kubenswrapper[4853]: I0127 18:44:59.144765 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/39a64c5e-945d-4be5-a1af-6c8ee6fa8ee0-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-nnmnh\" (UID: \"39a64c5e-945d-4be5-a1af-6c8ee6fa8ee0\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-nnmnh" Jan 27 18:44:59 crc kubenswrapper[4853]: I0127 18:44:59.144786 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/39a64c5e-945d-4be5-a1af-6c8ee6fa8ee0-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-nnmnh\" (UID: \"39a64c5e-945d-4be5-a1af-6c8ee6fa8ee0\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-nnmnh" Jan 27 18:44:59 crc kubenswrapper[4853]: I0127 18:44:59.144810 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/06a480d9-6aba-4daa-8eb3-7d5e93beeef0-signing-cabundle\") pod \"service-ca-9c57cc56f-hfgqg\" (UID: \"06a480d9-6aba-4daa-8eb3-7d5e93beeef0\") " pod="openshift-service-ca/service-ca-9c57cc56f-hfgqg" Jan 27 18:44:59 crc kubenswrapper[4853]: I0127 18:44:59.144834 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: 
\"kubernetes.io/empty-dir/a7c9b9f7-1d12-4e77-a47f-8cb601836611-ca-trust-extracted\") pod \"image-registry-697d97f7c8-npp4j\" (UID: \"a7c9b9f7-1d12-4e77-a47f-8cb601836611\") " pod="openshift-image-registry/image-registry-697d97f7c8-npp4j" Jan 27 18:44:59 crc kubenswrapper[4853]: I0127 18:44:59.144856 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nxn9t\" (UniqueName: \"kubernetes.io/projected/ebff9743-c884-4057-9b26-505cb4b8dca7-kube-api-access-nxn9t\") pod \"dns-default-52vxj\" (UID: \"ebff9743-c884-4057-9b26-505cb4b8dca7\") " pod="openshift-dns/dns-default-52vxj" Jan 27 18:44:59 crc kubenswrapper[4853]: E0127 18:44:59.144891 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 18:44:59.644870379 +0000 UTC m=+142.107413262 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 18:44:59 crc kubenswrapper[4853]: I0127 18:44:59.144949 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-npp4j\" (UID: \"a7c9b9f7-1d12-4e77-a47f-8cb601836611\") " pod="openshift-image-registry/image-registry-697d97f7c8-npp4j" Jan 27 18:44:59 crc kubenswrapper[4853]: I0127 18:44:59.145029 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/0a683af2-9c78-4c3b-993f-f4b54b815f32-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-qbgzv\" (UID: \"0a683af2-9c78-4c3b-993f-f4b54b815f32\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-qbgzv" Jan 27 18:44:59 crc kubenswrapper[4853]: I0127 18:44:59.145141 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2d8ed842-012e-42f9-b38e-c040f2e36ad6-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-pp2rw\" (UID: \"2d8ed842-012e-42f9-b38e-c040f2e36ad6\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-pp2rw" Jan 27 18:44:59 crc kubenswrapper[4853]: I0127 18:44:59.145170 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/a7c9b9f7-1d12-4e77-a47f-8cb601836611-registry-certificates\") pod \"image-registry-697d97f7c8-npp4j\" (UID: \"a7c9b9f7-1d12-4e77-a47f-8cb601836611\") " pod="openshift-image-registry/image-registry-697d97f7c8-npp4j" Jan 27 18:44:59 crc kubenswrapper[4853]: I0127 18:44:59.145196 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2d8ed842-012e-42f9-b38e-c040f2e36ad6-kube-api-access\") pod 
\"kube-apiserver-operator-766d6c64bb-pp2rw\" (UID: \"2d8ed842-012e-42f9-b38e-c040f2e36ad6\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-pp2rw" Jan 27 18:44:59 crc kubenswrapper[4853]: I0127 18:44:59.145215 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/5a20718d-a359-4670-86a3-4f32a2b11f53-registration-dir\") pod \"csi-hostpathplugin-6l6nh\" (UID: \"5a20718d-a359-4670-86a3-4f32a2b11f53\") " pod="hostpath-provisioner/csi-hostpathplugin-6l6nh" Jan 27 18:44:59 crc kubenswrapper[4853]: I0127 18:44:59.145238 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c6ab690f-7c72-4a14-ab7c-90a0d63699a6-secret-volume\") pod \"collect-profiles-29492310-22ft5\" (UID: \"c6ab690f-7c72-4a14-ab7c-90a0d63699a6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492310-22ft5" Jan 27 18:44:59 crc kubenswrapper[4853]: I0127 18:44:59.145333 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/a7c9b9f7-1d12-4e77-a47f-8cb601836611-installation-pull-secrets\") pod \"image-registry-697d97f7c8-npp4j\" (UID: \"a7c9b9f7-1d12-4e77-a47f-8cb601836611\") " pod="openshift-image-registry/image-registry-697d97f7c8-npp4j" Jan 27 18:44:59 crc kubenswrapper[4853]: I0127 18:44:59.145377 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b6zhp\" (UniqueName: \"kubernetes.io/projected/13bcd7a7-769a-4324-964a-874eb1fbbd1e-kube-api-access-b6zhp\") pod \"catalog-operator-68c6474976-d6bzk\" (UID: \"13bcd7a7-769a-4324-964a-874eb1fbbd1e\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-d6bzk" Jan 27 18:44:59 crc kubenswrapper[4853]: I0127 18:44:59.145462 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/bb1a45ce-530f-4492-a7e2-9432e194001d-images\") pod \"machine-api-operator-5694c8668f-kmkjx\" (UID: \"bb1a45ce-530f-4492-a7e2-9432e194001d\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-kmkjx" Jan 27 18:44:59 crc kubenswrapper[4853]: E0127 18:44:59.145489 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 18:44:59.645472437 +0000 UTC m=+142.108015450 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-npp4j" (UID: "a7c9b9f7-1d12-4e77-a47f-8cb601836611") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 18:44:59 crc kubenswrapper[4853]: I0127 18:44:59.145519 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b6a41d3b-0671-4105-9f35-4d6c72074c5d-config\") pod \"kube-controller-manager-operator-78b949d7b-wnqmk\" (UID: \"b6a41d3b-0671-4105-9f35-4d6c72074c5d\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-wnqmk" Jan 27 18:44:59 crc kubenswrapper[4853]: I0127 18:44:59.145547 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8ldhg\" (UniqueName: \"kubernetes.io/projected/68d8bd87-80e1-4c90-8541-367d0a676f73-kube-api-access-8ldhg\") pod \"migrator-59844c95c7-gp2qn\" (UID: \"68d8bd87-80e1-4c90-8541-367d0a676f73\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-gp2qn" Jan 27 18:44:59 crc kubenswrapper[4853]: I0127 18:44:59.145569 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/98053767-f7fa-4a83-a094-a96482717baf-serving-cert\") pod \"service-ca-operator-777779d784-4kgbm\" (UID: \"98053767-f7fa-4a83-a094-a96482717baf\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-4kgbm" Jan 27 18:44:59 crc kubenswrapper[4853]: I0127 18:44:59.145605 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6e624c88-c6c1-4c35-985b-264173a9abcd-config\") pod \"controller-manager-879f6c89f-kb26l\" (UID: \"6e624c88-c6c1-4c35-985b-264173a9abcd\") " pod="openshift-controller-manager/controller-manager-879f6c89f-kb26l" Jan 27 18:44:59 crc kubenswrapper[4853]: I0127 18:44:59.145625 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/ebff9743-c884-4057-9b26-505cb4b8dca7-metrics-tls\") pod \"dns-default-52vxj\" (UID: \"ebff9743-c884-4057-9b26-505cb4b8dca7\") " pod="openshift-dns/dns-default-52vxj" Jan 27 18:44:59 crc kubenswrapper[4853]: I0127 18:44:59.145653 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z246l\" (UniqueName: \"kubernetes.io/projected/bb1a45ce-530f-4492-a7e2-9432e194001d-kube-api-access-z246l\") pod \"machine-api-operator-5694c8668f-kmkjx\" (UID: \"bb1a45ce-530f-4492-a7e2-9432e194001d\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-kmkjx" Jan 27 18:44:59 crc kubenswrapper[4853]: I0127 18:44:59.145677 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/39a64c5e-945d-4be5-a1af-6c8ee6fa8ee0-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-nnmnh\" (UID: \"39a64c5e-945d-4be5-a1af-6c8ee6fa8ee0\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-nnmnh" Jan 27 18:44:59 crc kubenswrapper[4853]: I0127 18:44:59.145699 4853 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6e624c88-c6c1-4c35-985b-264173a9abcd-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-kb26l\" (UID: \"6e624c88-c6c1-4c35-985b-264173a9abcd\") " pod="openshift-controller-manager/controller-manager-879f6c89f-kb26l" Jan 27 18:44:59 crc kubenswrapper[4853]: I0127 18:44:59.145773 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/93b9b4bd-fa71-40e1-a4f6-16099dd2c84c-srv-cert\") pod \"olm-operator-6b444d44fb-rrwhm\" (UID: \"93b9b4bd-fa71-40e1-a4f6-16099dd2c84c\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-rrwhm" Jan 27 18:44:59 crc kubenswrapper[4853]: I0127 18:44:59.145795 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/06c96982-0b5d-4214-9d42-1b06ff771366-certs\") pod \"machine-config-server-cnjvr\" (UID: \"06c96982-0b5d-4214-9d42-1b06ff771366\") " pod="openshift-machine-config-operator/machine-config-server-cnjvr" Jan 27 18:44:59 crc kubenswrapper[4853]: I0127 18:44:59.145836 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/04f33872-609b-4d47-ab31-3315051b1414-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-mlfr4\" (UID: \"04f33872-609b-4d47-ab31-3315051b1414\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-mlfr4" Jan 27 18:44:59 crc kubenswrapper[4853]: I0127 18:44:59.145905 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2d8ed842-012e-42f9-b38e-c040f2e36ad6-config\") pod \"kube-apiserver-operator-766d6c64bb-pp2rw\" (UID: \"2d8ed842-012e-42f9-b38e-c040f2e36ad6\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-pp2rw" Jan 27 18:44:59 crc kubenswrapper[4853]: I0127 18:44:59.145932 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cb428fe7-0d8c-4f25-b377-880388daf6aa-service-ca-bundle\") pod \"router-default-5444994796-fhmft\" (UID: \"cb428fe7-0d8c-4f25-b377-880388daf6aa\") " pod="openshift-ingress/router-default-5444994796-fhmft" Jan 27 18:44:59 crc kubenswrapper[4853]: I0127 18:44:59.145983 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/cb428fe7-0d8c-4f25-b377-880388daf6aa-default-certificate\") pod \"router-default-5444994796-fhmft\" (UID: \"cb428fe7-0d8c-4f25-b377-880388daf6aa\") " pod="openshift-ingress/router-default-5444994796-fhmft" Jan 27 18:44:59 crc kubenswrapper[4853]: I0127 18:44:59.146016 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s9gpm\" (UniqueName: \"kubernetes.io/projected/cb428fe7-0d8c-4f25-b377-880388daf6aa-kube-api-access-s9gpm\") pod \"router-default-5444994796-fhmft\" (UID: \"cb428fe7-0d8c-4f25-b377-880388daf6aa\") " pod="openshift-ingress/router-default-5444994796-fhmft" Jan 27 18:44:59 crc kubenswrapper[4853]: I0127 18:44:59.146037 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/13bcd7a7-769a-4324-964a-874eb1fbbd1e-profile-collector-cert\") pod \"catalog-operator-68c6474976-d6bzk\" (UID: \"13bcd7a7-769a-4324-964a-874eb1fbbd1e\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-d6bzk" Jan 27 18:44:59 crc kubenswrapper[4853]: I0127 18:44:59.146069 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xxpbj\" (UniqueName: \"kubernetes.io/projected/06a480d9-6aba-4daa-8eb3-7d5e93beeef0-kube-api-access-xxpbj\") pod \"service-ca-9c57cc56f-hfgqg\" (UID: \"06a480d9-6aba-4daa-8eb3-7d5e93beeef0\") " pod="openshift-service-ca/service-ca-9c57cc56f-hfgqg" Jan 27 18:44:59 crc kubenswrapper[4853]: I0127 18:44:59.146139 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f8glq\" (UniqueName: \"kubernetes.io/projected/5244d6c6-721d-44cf-8175-48408b3780b0-kube-api-access-f8glq\") pod \"control-plane-machine-set-operator-78cbb6b69f-t4wdl\" (UID: \"5244d6c6-721d-44cf-8175-48408b3780b0\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-t4wdl" Jan 27 18:44:59 crc kubenswrapper[4853]: I0127 18:44:59.146200 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/4434ecce-14db-446c-900b-3ebf84bbe25c-proxy-tls\") pod \"machine-config-operator-74547568cd-mbcwm\" (UID: \"4434ecce-14db-446c-900b-3ebf84bbe25c\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-mbcwm" Jan 27 18:44:59 crc kubenswrapper[4853]: I0127 18:44:59.146224 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/13bcd7a7-769a-4324-964a-874eb1fbbd1e-srv-cert\") pod \"catalog-operator-68c6474976-d6bzk\" (UID: \"13bcd7a7-769a-4324-964a-874eb1fbbd1e\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-d6bzk" Jan 27 18:44:59 crc kubenswrapper[4853]: I0127 18:44:59.146301 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/80fdceac-6136-4c48-a96f-3243f5416b10-machine-approver-tls\") pod \"machine-approver-56656f9798-qpjt6\" (UID: \"80fdceac-6136-4c48-a96f-3243f5416b10\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-qpjt6" Jan 27 18:44:59 crc kubenswrapper[4853]: I0127 18:44:59.146325 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/cb428fe7-0d8c-4f25-b377-880388daf6aa-stats-auth\") pod \"router-default-5444994796-fhmft\" (UID: \"cb428fe7-0d8c-4f25-b377-880388daf6aa\") " pod="openshift-ingress/router-default-5444994796-fhmft" Jan 27 18:44:59 crc kubenswrapper[4853]: I0127 18:44:59.146346 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/86af8168-4922-4d5d-adee-38d4d88d55ca-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-bxk4b\" (UID: \"86af8168-4922-4d5d-adee-38d4d88d55ca\") " pod="openshift-marketplace/marketplace-operator-79b997595-bxk4b" Jan 27 18:44:59 crc kubenswrapper[4853]: I0127 18:44:59.146366 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/86af8168-4922-4d5d-adee-38d4d88d55ca-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-bxk4b\" (UID: \"86af8168-4922-4d5d-adee-38d4d88d55ca\") " pod="openshift-marketplace/marketplace-operator-79b997595-bxk4b" Jan 27 18:44:59 crc kubenswrapper[4853]: I0127 18:44:59.146386 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/80fdceac-6136-4c48-a96f-3243f5416b10-config\") pod \"machine-approver-56656f9798-qpjt6\" (UID: \"80fdceac-6136-4c48-a96f-3243f5416b10\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-qpjt6" Jan 27 18:44:59 crc kubenswrapper[4853]: I0127 18:44:59.146399 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/bb1a45ce-530f-4492-a7e2-9432e194001d-images\") pod \"machine-api-operator-5694c8668f-kmkjx\" (UID: \"bb1a45ce-530f-4492-a7e2-9432e194001d\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-kmkjx" Jan 27 18:44:59 crc kubenswrapper[4853]: I0127 18:44:59.146436 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bb1a45ce-530f-4492-a7e2-9432e194001d-config\") pod \"machine-api-operator-5694c8668f-kmkjx\" (UID: \"bb1a45ce-530f-4492-a7e2-9432e194001d\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-kmkjx" Jan 27 18:44:59 crc kubenswrapper[4853]: I0127 18:44:59.147428 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2d8ed842-012e-42f9-b38e-c040f2e36ad6-config\") pod \"kube-apiserver-operator-766d6c64bb-pp2rw\" (UID: \"2d8ed842-012e-42f9-b38e-c040f2e36ad6\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-pp2rw" Jan 27 18:44:59 crc kubenswrapper[4853]: I0127 18:44:59.147984 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/bb1a45ce-530f-4492-a7e2-9432e194001d-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-kmkjx\" (UID: \"bb1a45ce-530f-4492-a7e2-9432e194001d\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-kmkjx" Jan 27 18:44:59 crc kubenswrapper[4853]: I0127 18:44:59.148022 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/5244d6c6-721d-44cf-8175-48408b3780b0-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-t4wdl\" (UID: \"5244d6c6-721d-44cf-8175-48408b3780b0\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-t4wdl" Jan 27 18:44:59 crc kubenswrapper[4853]: I0127 18:44:59.148047 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-scl6q\" (UniqueName: \"kubernetes.io/projected/a7c9b9f7-1d12-4e77-a47f-8cb601836611-kube-api-access-scl6q\") pod \"image-registry-697d97f7c8-npp4j\" (UID: \"a7c9b9f7-1d12-4e77-a47f-8cb601836611\") " pod="openshift-image-registry/image-registry-697d97f7c8-npp4j" Jan 27 18:44:59 crc kubenswrapper[4853]: I0127 18:44:59.148066 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: 
\"kubernetes.io/host-path/5a20718d-a359-4670-86a3-4f32a2b11f53-mountpoint-dir\") pod \"csi-hostpathplugin-6l6nh\" (UID: \"5a20718d-a359-4670-86a3-4f32a2b11f53\") " pod="hostpath-provisioner/csi-hostpathplugin-6l6nh" Jan 27 18:44:59 crc kubenswrapper[4853]: I0127 18:44:59.148101 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9dhh9\" (UniqueName: \"kubernetes.io/projected/59420df1-93c7-4908-aa3b-3f3c61efdb18-kube-api-access-9dhh9\") pod \"multus-admission-controller-857f4d67dd-ww5mj\" (UID: \"59420df1-93c7-4908-aa3b-3f3c61efdb18\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-ww5mj" Jan 27 18:44:59 crc kubenswrapper[4853]: I0127 18:44:59.148143 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/a7c9b9f7-1d12-4e77-a47f-8cb601836611-registry-tls\") pod \"image-registry-697d97f7c8-npp4j\" (UID: \"a7c9b9f7-1d12-4e77-a47f-8cb601836611\") " pod="openshift-image-registry/image-registry-697d97f7c8-npp4j" Jan 27 18:44:59 crc kubenswrapper[4853]: I0127 18:44:59.148165 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/b0123ef8-ad21-45f5-b5d8-e491c9aa10dd-webhook-cert\") pod \"packageserver-d55dfcdfc-st9cr\" (UID: \"b0123ef8-ad21-45f5-b5d8-e491c9aa10dd\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-st9cr" Jan 27 18:44:59 crc kubenswrapper[4853]: I0127 18:44:59.148183 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-79zv4\" (UniqueName: \"kubernetes.io/projected/5a20718d-a359-4670-86a3-4f32a2b11f53-kube-api-access-79zv4\") pod \"csi-hostpathplugin-6l6nh\" (UID: \"5a20718d-a359-4670-86a3-4f32a2b11f53\") " pod="hostpath-provisioner/csi-hostpathplugin-6l6nh" Jan 27 18:44:59 crc kubenswrapper[4853]: I0127 18:44:59.148258 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v7cbf\" (UniqueName: \"kubernetes.io/projected/0a683af2-9c78-4c3b-993f-f4b54b815f32-kube-api-access-v7cbf\") pod \"package-server-manager-789f6589d5-qbgzv\" (UID: \"0a683af2-9c78-4c3b-993f-f4b54b815f32\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-qbgzv" Jan 27 18:44:59 crc kubenswrapper[4853]: I0127 18:44:59.148318 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/a7c9b9f7-1d12-4e77-a47f-8cb601836611-ca-trust-extracted\") pod \"image-registry-697d97f7c8-npp4j\" (UID: \"a7c9b9f7-1d12-4e77-a47f-8cb601836611\") " pod="openshift-image-registry/image-registry-697d97f7c8-npp4j" Jan 27 18:44:59 crc kubenswrapper[4853]: I0127 18:44:59.148353 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4vvzj\" (UniqueName: \"kubernetes.io/projected/93b9b4bd-fa71-40e1-a4f6-16099dd2c84c-kube-api-access-4vvzj\") pod \"olm-operator-6b444d44fb-rrwhm\" (UID: \"93b9b4bd-fa71-40e1-a4f6-16099dd2c84c\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-rrwhm" Jan 27 18:44:59 crc kubenswrapper[4853]: I0127 18:44:59.148387 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x5jlv\" (UniqueName: 
\"kubernetes.io/projected/86af8168-4922-4d5d-adee-38d4d88d55ca-kube-api-access-x5jlv\") pod \"marketplace-operator-79b997595-bxk4b\" (UID: \"86af8168-4922-4d5d-adee-38d4d88d55ca\") " pod="openshift-marketplace/marketplace-operator-79b997595-bxk4b" Jan 27 18:44:59 crc kubenswrapper[4853]: I0127 18:44:59.148421 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/04f33872-609b-4d47-ab31-3315051b1414-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-mlfr4\" (UID: \"04f33872-609b-4d47-ab31-3315051b1414\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-mlfr4" Jan 27 18:44:59 crc kubenswrapper[4853]: I0127 18:44:59.148440 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jzmh8\" (UniqueName: \"kubernetes.io/projected/04f33872-609b-4d47-ab31-3315051b1414-kube-api-access-jzmh8\") pod \"openshift-controller-manager-operator-756b6f6bc6-mlfr4\" (UID: \"04f33872-609b-4d47-ab31-3315051b1414\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-mlfr4" Jan 27 18:44:59 crc kubenswrapper[4853]: I0127 18:44:59.148458 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/5a20718d-a359-4670-86a3-4f32a2b11f53-csi-data-dir\") pod \"csi-hostpathplugin-6l6nh\" (UID: \"5a20718d-a359-4670-86a3-4f32a2b11f53\") " pod="hostpath-provisioner/csi-hostpathplugin-6l6nh" Jan 27 18:44:59 crc kubenswrapper[4853]: I0127 18:44:59.148488 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/4434ecce-14db-446c-900b-3ebf84bbe25c-images\") pod \"machine-config-operator-74547568cd-mbcwm\" (UID: \"4434ecce-14db-446c-900b-3ebf84bbe25c\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-mbcwm" Jan 27 18:44:59 crc kubenswrapper[4853]: I0127 18:44:59.148528 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/80fdceac-6136-4c48-a96f-3243f5416b10-auth-proxy-config\") pod \"machine-approver-56656f9798-qpjt6\" (UID: \"80fdceac-6136-4c48-a96f-3243f5416b10\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-qpjt6" Jan 27 18:44:59 crc kubenswrapper[4853]: I0127 18:44:59.148551 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kkggn\" (UniqueName: \"kubernetes.io/projected/4434ecce-14db-446c-900b-3ebf84bbe25c-kube-api-access-kkggn\") pod \"machine-config-operator-74547568cd-mbcwm\" (UID: \"4434ecce-14db-446c-900b-3ebf84bbe25c\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-mbcwm" Jan 27 18:44:59 crc kubenswrapper[4853]: I0127 18:44:59.148571 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/b0123ef8-ad21-45f5-b5d8-e491c9aa10dd-tmpfs\") pod \"packageserver-d55dfcdfc-st9cr\" (UID: \"b0123ef8-ad21-45f5-b5d8-e491c9aa10dd\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-st9cr" Jan 27 18:44:59 crc kubenswrapper[4853]: I0127 18:44:59.152986 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: 
\"kubernetes.io/configmap/80fdceac-6136-4c48-a96f-3243f5416b10-auth-proxy-config\") pod \"machine-approver-56656f9798-qpjt6\" (UID: \"80fdceac-6136-4c48-a96f-3243f5416b10\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-qpjt6" Jan 27 18:44:59 crc kubenswrapper[4853]: I0127 18:44:59.153770 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2d8ed842-012e-42f9-b38e-c040f2e36ad6-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-pp2rw\" (UID: \"2d8ed842-012e-42f9-b38e-c040f2e36ad6\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-pp2rw" Jan 27 18:44:59 crc kubenswrapper[4853]: I0127 18:44:59.148736 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m4mfq\" (UniqueName: \"kubernetes.io/projected/70bdd69e-31db-4a48-811b-0f665647441a-kube-api-access-m4mfq\") pod \"ingress-canary-lddqg\" (UID: \"70bdd69e-31db-4a48-811b-0f665647441a\") " pod="openshift-ingress-canary/ingress-canary-lddqg" Jan 27 18:44:59 crc kubenswrapper[4853]: I0127 18:44:59.154334 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ebff9743-c884-4057-9b26-505cb4b8dca7-config-volume\") pod \"dns-default-52vxj\" (UID: \"ebff9743-c884-4057-9b26-505cb4b8dca7\") " pod="openshift-dns/dns-default-52vxj" Jan 27 18:44:59 crc kubenswrapper[4853]: I0127 18:44:59.154358 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6e624c88-c6c1-4c35-985b-264173a9abcd-serving-cert\") pod \"controller-manager-879f6c89f-kb26l\" (UID: \"6e624c88-c6c1-4c35-985b-264173a9abcd\") " pod="openshift-controller-manager/controller-manager-879f6c89f-kb26l" Jan 27 18:44:59 crc kubenswrapper[4853]: I0127 18:44:59.154375 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/4434ecce-14db-446c-900b-3ebf84bbe25c-auth-proxy-config\") pod \"machine-config-operator-74547568cd-mbcwm\" (UID: \"4434ecce-14db-446c-900b-3ebf84bbe25c\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-mbcwm" Jan 27 18:44:59 crc kubenswrapper[4853]: I0127 18:44:59.154395 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bgxkj\" (UniqueName: \"kubernetes.io/projected/98053767-f7fa-4a83-a094-a96482717baf-kube-api-access-bgxkj\") pod \"service-ca-operator-777779d784-4kgbm\" (UID: \"98053767-f7fa-4a83-a094-a96482717baf\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-4kgbm" Jan 27 18:44:59 crc kubenswrapper[4853]: I0127 18:44:59.154430 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/70bdd69e-31db-4a48-811b-0f665647441a-cert\") pod \"ingress-canary-lddqg\" (UID: \"70bdd69e-31db-4a48-811b-0f665647441a\") " pod="openshift-ingress-canary/ingress-canary-lddqg" Jan 27 18:44:59 crc kubenswrapper[4853]: I0127 18:44:59.154460 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/93b9b4bd-fa71-40e1-a4f6-16099dd2c84c-profile-collector-cert\") pod \"olm-operator-6b444d44fb-rrwhm\" (UID: 
\"93b9b4bd-fa71-40e1-a4f6-16099dd2c84c\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-rrwhm" Jan 27 18:44:59 crc kubenswrapper[4853]: I0127 18:44:59.154918 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/39a64c5e-945d-4be5-a1af-6c8ee6fa8ee0-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-nnmnh\" (UID: \"39a64c5e-945d-4be5-a1af-6c8ee6fa8ee0\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-nnmnh" Jan 27 18:44:59 crc kubenswrapper[4853]: I0127 18:44:59.155029 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/24efc8ab-a03a-411f-8441-454cae46ede9-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-rm987\" (UID: \"24efc8ab-a03a-411f-8441-454cae46ede9\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-rm987" Jan 27 18:44:59 crc kubenswrapper[4853]: I0127 18:44:59.155058 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c6ab690f-7c72-4a14-ab7c-90a0d63699a6-config-volume\") pod \"collect-profiles-29492310-22ft5\" (UID: \"c6ab690f-7c72-4a14-ab7c-90a0d63699a6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492310-22ft5" Jan 27 18:44:59 crc kubenswrapper[4853]: I0127 18:44:59.155075 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c5hg7\" (UniqueName: \"kubernetes.io/projected/c6ab690f-7c72-4a14-ab7c-90a0d63699a6-kube-api-access-c5hg7\") pod \"collect-profiles-29492310-22ft5\" (UID: \"c6ab690f-7c72-4a14-ab7c-90a0d63699a6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492310-22ft5" Jan 27 18:44:59 crc kubenswrapper[4853]: I0127 18:44:59.155105 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wbzzd\" (UniqueName: \"kubernetes.io/projected/b0123ef8-ad21-45f5-b5d8-e491c9aa10dd-kube-api-access-wbzzd\") pod \"packageserver-d55dfcdfc-st9cr\" (UID: \"b0123ef8-ad21-45f5-b5d8-e491c9aa10dd\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-st9cr" Jan 27 18:44:59 crc kubenswrapper[4853]: I0127 18:44:59.155493 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vzf49\" (UniqueName: \"kubernetes.io/projected/6e624c88-c6c1-4c35-985b-264173a9abcd-kube-api-access-vzf49\") pod \"controller-manager-879f6c89f-kb26l\" (UID: \"6e624c88-c6c1-4c35-985b-264173a9abcd\") " pod="openshift-controller-manager/controller-manager-879f6c89f-kb26l" Jan 27 18:44:59 crc kubenswrapper[4853]: I0127 18:44:59.155519 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/59420df1-93c7-4908-aa3b-3f3c61efdb18-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-ww5mj\" (UID: \"59420df1-93c7-4908-aa3b-3f3c61efdb18\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-ww5mj" Jan 27 18:44:59 crc kubenswrapper[4853]: I0127 18:44:59.155557 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dgthx\" (UniqueName: \"kubernetes.io/projected/39a64c5e-945d-4be5-a1af-6c8ee6fa8ee0-kube-api-access-dgthx\") pod 
\"cluster-image-registry-operator-dc59b4c8b-nnmnh\" (UID: \"39a64c5e-945d-4be5-a1af-6c8ee6fa8ee0\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-nnmnh" Jan 27 18:44:59 crc kubenswrapper[4853]: I0127 18:44:59.155577 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b6a41d3b-0671-4105-9f35-4d6c72074c5d-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-wnqmk\" (UID: \"b6a41d3b-0671-4105-9f35-4d6c72074c5d\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-wnqmk" Jan 27 18:44:59 crc kubenswrapper[4853]: I0127 18:44:59.155608 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a7c9b9f7-1d12-4e77-a47f-8cb601836611-trusted-ca\") pod \"image-registry-697d97f7c8-npp4j\" (UID: \"a7c9b9f7-1d12-4e77-a47f-8cb601836611\") " pod="openshift-image-registry/image-registry-697d97f7c8-npp4j" Jan 27 18:44:59 crc kubenswrapper[4853]: I0127 18:44:59.155626 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hmmwk\" (UniqueName: \"kubernetes.io/projected/80fdceac-6136-4c48-a96f-3243f5416b10-kube-api-access-hmmwk\") pod \"machine-approver-56656f9798-qpjt6\" (UID: \"80fdceac-6136-4c48-a96f-3243f5416b10\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-qpjt6" Jan 27 18:44:59 crc kubenswrapper[4853]: I0127 18:44:59.155643 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/5a20718d-a359-4670-86a3-4f32a2b11f53-plugins-dir\") pod \"csi-hostpathplugin-6l6nh\" (UID: \"5a20718d-a359-4670-86a3-4f32a2b11f53\") " pod="hostpath-provisioner/csi-hostpathplugin-6l6nh" Jan 27 18:44:59 crc kubenswrapper[4853]: I0127 18:44:59.155662 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a7c9b9f7-1d12-4e77-a47f-8cb601836611-bound-sa-token\") pod \"image-registry-697d97f7c8-npp4j\" (UID: \"a7c9b9f7-1d12-4e77-a47f-8cb601836611\") " pod="openshift-image-registry/image-registry-697d97f7c8-npp4j" Jan 27 18:44:59 crc kubenswrapper[4853]: I0127 18:44:59.155740 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/cb428fe7-0d8c-4f25-b377-880388daf6aa-metrics-certs\") pod \"router-default-5444994796-fhmft\" (UID: \"cb428fe7-0d8c-4f25-b377-880388daf6aa\") " pod="openshift-ingress/router-default-5444994796-fhmft" Jan 27 18:44:59 crc kubenswrapper[4853]: I0127 18:44:59.155762 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m9mrl\" (UniqueName: \"kubernetes.io/projected/06c96982-0b5d-4214-9d42-1b06ff771366-kube-api-access-m9mrl\") pod \"machine-config-server-cnjvr\" (UID: \"06c96982-0b5d-4214-9d42-1b06ff771366\") " pod="openshift-machine-config-operator/machine-config-server-cnjvr" Jan 27 18:44:59 crc kubenswrapper[4853]: I0127 18:44:59.156015 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/80fdceac-6136-4c48-a96f-3243f5416b10-machine-approver-tls\") pod \"machine-approver-56656f9798-qpjt6\" (UID: \"80fdceac-6136-4c48-a96f-3243f5416b10\") " 
pod="openshift-cluster-machine-approver/machine-approver-56656f9798-qpjt6" Jan 27 18:44:59 crc kubenswrapper[4853]: I0127 18:44:59.156773 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-9vd4d" Jan 27 18:44:59 crc kubenswrapper[4853]: I0127 18:44:59.157914 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bb1a45ce-530f-4492-a7e2-9432e194001d-config\") pod \"machine-api-operator-5694c8668f-kmkjx\" (UID: \"bb1a45ce-530f-4492-a7e2-9432e194001d\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-kmkjx" Jan 27 18:44:59 crc kubenswrapper[4853]: I0127 18:44:59.159339 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/04f33872-609b-4d47-ab31-3315051b1414-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-mlfr4\" (UID: \"04f33872-609b-4d47-ab31-3315051b1414\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-mlfr4" Jan 27 18:44:59 crc kubenswrapper[4853]: I0127 18:44:59.160613 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/a7c9b9f7-1d12-4e77-a47f-8cb601836611-installation-pull-secrets\") pod \"image-registry-697d97f7c8-npp4j\" (UID: \"a7c9b9f7-1d12-4e77-a47f-8cb601836611\") " pod="openshift-image-registry/image-registry-697d97f7c8-npp4j" Jan 27 18:44:59 crc kubenswrapper[4853]: I0127 18:44:59.161549 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a7c9b9f7-1d12-4e77-a47f-8cb601836611-trusted-ca\") pod \"image-registry-697d97f7c8-npp4j\" (UID: \"a7c9b9f7-1d12-4e77-a47f-8cb601836611\") " pod="openshift-image-registry/image-registry-697d97f7c8-npp4j" Jan 27 18:44:59 crc kubenswrapper[4853]: I0127 18:44:59.162210 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/04f33872-609b-4d47-ab31-3315051b1414-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-mlfr4\" (UID: \"04f33872-609b-4d47-ab31-3315051b1414\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-mlfr4" Jan 27 18:44:59 crc kubenswrapper[4853]: I0127 18:44:59.162299 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/a7c9b9f7-1d12-4e77-a47f-8cb601836611-registry-certificates\") pod \"image-registry-697d97f7c8-npp4j\" (UID: \"a7c9b9f7-1d12-4e77-a47f-8cb601836611\") " pod="openshift-image-registry/image-registry-697d97f7c8-npp4j" Jan 27 18:44:59 crc kubenswrapper[4853]: I0127 18:44:59.164928 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/a7c9b9f7-1d12-4e77-a47f-8cb601836611-registry-tls\") pod \"image-registry-697d97f7c8-npp4j\" (UID: \"a7c9b9f7-1d12-4e77-a47f-8cb601836611\") " pod="openshift-image-registry/image-registry-697d97f7c8-npp4j" Jan 27 18:44:59 crc kubenswrapper[4853]: I0127 18:44:59.165530 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/bb1a45ce-530f-4492-a7e2-9432e194001d-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-kmkjx\" (UID: \"bb1a45ce-530f-4492-a7e2-9432e194001d\") " 
pod="openshift-machine-api/machine-api-operator-5694c8668f-kmkjx" Jan 27 18:44:59 crc kubenswrapper[4853]: I0127 18:44:59.176957 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-tqzdv" Jan 27 18:44:59 crc kubenswrapper[4853]: I0127 18:44:59.180148 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/39a64c5e-945d-4be5-a1af-6c8ee6fa8ee0-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-nnmnh\" (UID: \"39a64c5e-945d-4be5-a1af-6c8ee6fa8ee0\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-nnmnh" Jan 27 18:44:59 crc kubenswrapper[4853]: I0127 18:44:59.188832 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/39a64c5e-945d-4be5-a1af-6c8ee6fa8ee0-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-nnmnh\" (UID: \"39a64c5e-945d-4be5-a1af-6c8ee6fa8ee0\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-nnmnh" Jan 27 18:44:59 crc kubenswrapper[4853]: I0127 18:44:59.195601 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-zpn7p" Jan 27 18:44:59 crc kubenswrapper[4853]: I0127 18:44:59.197531 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2d8ed842-012e-42f9-b38e-c040f2e36ad6-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-pp2rw\" (UID: \"2d8ed842-012e-42f9-b38e-c040f2e36ad6\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-pp2rw" Jan 27 18:44:59 crc kubenswrapper[4853]: I0127 18:44:59.215677 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-n869z"] Jan 27 18:44:59 crc kubenswrapper[4853]: I0127 18:44:59.220032 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z246l\" (UniqueName: \"kubernetes.io/projected/bb1a45ce-530f-4492-a7e2-9432e194001d-kube-api-access-z246l\") pod \"machine-api-operator-5694c8668f-kmkjx\" (UID: \"bb1a45ce-530f-4492-a7e2-9432e194001d\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-kmkjx" Jan 27 18:44:59 crc kubenswrapper[4853]: W0127 18:44:59.231047 4853 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod61cb7436_6c15_48a6_a8b8_006f5a52f338.slice/crio-7fbdba8dc4577f07e2629e5cf32861542aecb38e898e03c5a6183a4120c2428e WatchSource:0}: Error finding container 7fbdba8dc4577f07e2629e5cf32861542aecb38e898e03c5a6183a4120c2428e: Status 404 returned error can't find the container with id 7fbdba8dc4577f07e2629e5cf32861542aecb38e898e03c5a6183a4120c2428e Jan 27 18:44:59 crc kubenswrapper[4853]: I0127 18:44:59.235951 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jzmh8\" (UniqueName: \"kubernetes.io/projected/04f33872-609b-4d47-ab31-3315051b1414-kube-api-access-jzmh8\") pod \"openshift-controller-manager-operator-756b6f6bc6-mlfr4\" (UID: \"04f33872-609b-4d47-ab31-3315051b1414\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-mlfr4" Jan 27 18:44:59 crc kubenswrapper[4853]: I0127 18:44:59.246917 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-mlfr4" Jan 27 18:44:59 crc kubenswrapper[4853]: I0127 18:44:59.256836 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a7c9b9f7-1d12-4e77-a47f-8cb601836611-bound-sa-token\") pod \"image-registry-697d97f7c8-npp4j\" (UID: \"a7c9b9f7-1d12-4e77-a47f-8cb601836611\") " pod="openshift-image-registry/image-registry-697d97f7c8-npp4j" Jan 27 18:44:59 crc kubenswrapper[4853]: I0127 18:44:59.263326 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 18:44:59 crc kubenswrapper[4853]: I0127 18:44:59.265043 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/4434ecce-14db-446c-900b-3ebf84bbe25c-images\") pod \"machine-config-operator-74547568cd-mbcwm\" (UID: \"4434ecce-14db-446c-900b-3ebf84bbe25c\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-mbcwm" Jan 27 18:44:59 crc kubenswrapper[4853]: I0127 18:44:59.265082 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kkggn\" (UniqueName: \"kubernetes.io/projected/4434ecce-14db-446c-900b-3ebf84bbe25c-kube-api-access-kkggn\") pod \"machine-config-operator-74547568cd-mbcwm\" (UID: \"4434ecce-14db-446c-900b-3ebf84bbe25c\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-mbcwm" Jan 27 18:44:59 crc kubenswrapper[4853]: I0127 18:44:59.265100 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/b0123ef8-ad21-45f5-b5d8-e491c9aa10dd-tmpfs\") pod \"packageserver-d55dfcdfc-st9cr\" (UID: \"b0123ef8-ad21-45f5-b5d8-e491c9aa10dd\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-st9cr" Jan 27 18:44:59 crc kubenswrapper[4853]: I0127 18:44:59.265145 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m4mfq\" (UniqueName: \"kubernetes.io/projected/70bdd69e-31db-4a48-811b-0f665647441a-kube-api-access-m4mfq\") pod \"ingress-canary-lddqg\" (UID: \"70bdd69e-31db-4a48-811b-0f665647441a\") " pod="openshift-ingress-canary/ingress-canary-lddqg" Jan 27 18:44:59 crc kubenswrapper[4853]: I0127 18:44:59.265165 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ebff9743-c884-4057-9b26-505cb4b8dca7-config-volume\") pod \"dns-default-52vxj\" (UID: \"ebff9743-c884-4057-9b26-505cb4b8dca7\") " pod="openshift-dns/dns-default-52vxj" Jan 27 18:44:59 crc kubenswrapper[4853]: I0127 18:44:59.265181 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/4434ecce-14db-446c-900b-3ebf84bbe25c-auth-proxy-config\") pod \"machine-config-operator-74547568cd-mbcwm\" (UID: \"4434ecce-14db-446c-900b-3ebf84bbe25c\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-mbcwm" Jan 27 18:44:59 crc kubenswrapper[4853]: I0127 18:44:59.265197 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-bgxkj\" (UniqueName: \"kubernetes.io/projected/98053767-f7fa-4a83-a094-a96482717baf-kube-api-access-bgxkj\") pod \"service-ca-operator-777779d784-4kgbm\" (UID: \"98053767-f7fa-4a83-a094-a96482717baf\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-4kgbm" Jan 27 18:44:59 crc kubenswrapper[4853]: I0127 18:44:59.265215 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6e624c88-c6c1-4c35-985b-264173a9abcd-serving-cert\") pod \"controller-manager-879f6c89f-kb26l\" (UID: \"6e624c88-c6c1-4c35-985b-264173a9abcd\") " pod="openshift-controller-manager/controller-manager-879f6c89f-kb26l" Jan 27 18:44:59 crc kubenswrapper[4853]: I0127 18:44:59.265241 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/70bdd69e-31db-4a48-811b-0f665647441a-cert\") pod \"ingress-canary-lddqg\" (UID: \"70bdd69e-31db-4a48-811b-0f665647441a\") " pod="openshift-ingress-canary/ingress-canary-lddqg" Jan 27 18:44:59 crc kubenswrapper[4853]: I0127 18:44:59.265257 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/93b9b4bd-fa71-40e1-a4f6-16099dd2c84c-profile-collector-cert\") pod \"olm-operator-6b444d44fb-rrwhm\" (UID: \"93b9b4bd-fa71-40e1-a4f6-16099dd2c84c\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-rrwhm" Jan 27 18:44:59 crc kubenswrapper[4853]: I0127 18:44:59.265273 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/24efc8ab-a03a-411f-8441-454cae46ede9-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-rm987\" (UID: \"24efc8ab-a03a-411f-8441-454cae46ede9\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-rm987" Jan 27 18:44:59 crc kubenswrapper[4853]: I0127 18:44:59.265293 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c6ab690f-7c72-4a14-ab7c-90a0d63699a6-config-volume\") pod \"collect-profiles-29492310-22ft5\" (UID: \"c6ab690f-7c72-4a14-ab7c-90a0d63699a6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492310-22ft5" Jan 27 18:44:59 crc kubenswrapper[4853]: I0127 18:44:59.265308 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c5hg7\" (UniqueName: \"kubernetes.io/projected/c6ab690f-7c72-4a14-ab7c-90a0d63699a6-kube-api-access-c5hg7\") pod \"collect-profiles-29492310-22ft5\" (UID: \"c6ab690f-7c72-4a14-ab7c-90a0d63699a6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492310-22ft5" Jan 27 18:44:59 crc kubenswrapper[4853]: I0127 18:44:59.265328 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wbzzd\" (UniqueName: \"kubernetes.io/projected/b0123ef8-ad21-45f5-b5d8-e491c9aa10dd-kube-api-access-wbzzd\") pod \"packageserver-d55dfcdfc-st9cr\" (UID: \"b0123ef8-ad21-45f5-b5d8-e491c9aa10dd\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-st9cr" Jan 27 18:44:59 crc kubenswrapper[4853]: I0127 18:44:59.265351 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vzf49\" (UniqueName: \"kubernetes.io/projected/6e624c88-c6c1-4c35-985b-264173a9abcd-kube-api-access-vzf49\") pod 
\"controller-manager-879f6c89f-kb26l\" (UID: \"6e624c88-c6c1-4c35-985b-264173a9abcd\") " pod="openshift-controller-manager/controller-manager-879f6c89f-kb26l" Jan 27 18:44:59 crc kubenswrapper[4853]: I0127 18:44:59.265368 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/59420df1-93c7-4908-aa3b-3f3c61efdb18-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-ww5mj\" (UID: \"59420df1-93c7-4908-aa3b-3f3c61efdb18\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-ww5mj" Jan 27 18:44:59 crc kubenswrapper[4853]: I0127 18:44:59.265391 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b6a41d3b-0671-4105-9f35-4d6c72074c5d-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-wnqmk\" (UID: \"b6a41d3b-0671-4105-9f35-4d6c72074c5d\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-wnqmk" Jan 27 18:44:59 crc kubenswrapper[4853]: I0127 18:44:59.265419 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/5a20718d-a359-4670-86a3-4f32a2b11f53-plugins-dir\") pod \"csi-hostpathplugin-6l6nh\" (UID: \"5a20718d-a359-4670-86a3-4f32a2b11f53\") " pod="hostpath-provisioner/csi-hostpathplugin-6l6nh" Jan 27 18:44:59 crc kubenswrapper[4853]: I0127 18:44:59.265434 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/cb428fe7-0d8c-4f25-b377-880388daf6aa-metrics-certs\") pod \"router-default-5444994796-fhmft\" (UID: \"cb428fe7-0d8c-4f25-b377-880388daf6aa\") " pod="openshift-ingress/router-default-5444994796-fhmft" Jan 27 18:44:59 crc kubenswrapper[4853]: I0127 18:44:59.265451 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m9mrl\" (UniqueName: \"kubernetes.io/projected/06c96982-0b5d-4214-9d42-1b06ff771366-kube-api-access-m9mrl\") pod \"machine-config-server-cnjvr\" (UID: \"06c96982-0b5d-4214-9d42-1b06ff771366\") " pod="openshift-machine-config-operator/machine-config-server-cnjvr" Jan 27 18:44:59 crc kubenswrapper[4853]: I0127 18:44:59.265470 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6e624c88-c6c1-4c35-985b-264173a9abcd-client-ca\") pod \"controller-manager-879f6c89f-kb26l\" (UID: \"6e624c88-c6c1-4c35-985b-264173a9abcd\") " pod="openshift-controller-manager/controller-manager-879f6c89f-kb26l" Jan 27 18:44:59 crc kubenswrapper[4853]: I0127 18:44:59.265485 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q7df8\" (UniqueName: \"kubernetes.io/projected/24efc8ab-a03a-411f-8441-454cae46ede9-kube-api-access-q7df8\") pod \"machine-config-controller-84d6567774-rm987\" (UID: \"24efc8ab-a03a-411f-8441-454cae46ede9\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-rm987" Jan 27 18:44:59 crc kubenswrapper[4853]: I0127 18:44:59.265507 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b6a41d3b-0671-4105-9f35-4d6c72074c5d-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-wnqmk\" (UID: \"b6a41d3b-0671-4105-9f35-4d6c72074c5d\") " 
pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-wnqmk" Jan 27 18:44:59 crc kubenswrapper[4853]: I0127 18:44:59.265522 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/5a20718d-a359-4670-86a3-4f32a2b11f53-socket-dir\") pod \"csi-hostpathplugin-6l6nh\" (UID: \"5a20718d-a359-4670-86a3-4f32a2b11f53\") " pod="hostpath-provisioner/csi-hostpathplugin-6l6nh" Jan 27 18:44:59 crc kubenswrapper[4853]: I0127 18:44:59.265537 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/06c96982-0b5d-4214-9d42-1b06ff771366-node-bootstrap-token\") pod \"machine-config-server-cnjvr\" (UID: \"06c96982-0b5d-4214-9d42-1b06ff771366\") " pod="openshift-machine-config-operator/machine-config-server-cnjvr" Jan 27 18:44:59 crc kubenswrapper[4853]: I0127 18:44:59.265552 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/98053767-f7fa-4a83-a094-a96482717baf-config\") pod \"service-ca-operator-777779d784-4kgbm\" (UID: \"98053767-f7fa-4a83-a094-a96482717baf\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-4kgbm" Jan 27 18:44:59 crc kubenswrapper[4853]: I0127 18:44:59.265574 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/24efc8ab-a03a-411f-8441-454cae46ede9-proxy-tls\") pod \"machine-config-controller-84d6567774-rm987\" (UID: \"24efc8ab-a03a-411f-8441-454cae46ede9\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-rm987" Jan 27 18:44:59 crc kubenswrapper[4853]: I0127 18:44:59.265593 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/b0123ef8-ad21-45f5-b5d8-e491c9aa10dd-apiservice-cert\") pod \"packageserver-d55dfcdfc-st9cr\" (UID: \"b0123ef8-ad21-45f5-b5d8-e491c9aa10dd\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-st9cr" Jan 27 18:44:59 crc kubenswrapper[4853]: I0127 18:44:59.265611 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/06a480d9-6aba-4daa-8eb3-7d5e93beeef0-signing-key\") pod \"service-ca-9c57cc56f-hfgqg\" (UID: \"06a480d9-6aba-4daa-8eb3-7d5e93beeef0\") " pod="openshift-service-ca/service-ca-9c57cc56f-hfgqg" Jan 27 18:44:59 crc kubenswrapper[4853]: I0127 18:44:59.265631 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/06a480d9-6aba-4daa-8eb3-7d5e93beeef0-signing-cabundle\") pod \"service-ca-9c57cc56f-hfgqg\" (UID: \"06a480d9-6aba-4daa-8eb3-7d5e93beeef0\") " pod="openshift-service-ca/service-ca-9c57cc56f-hfgqg" Jan 27 18:44:59 crc kubenswrapper[4853]: I0127 18:44:59.265651 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nxn9t\" (UniqueName: \"kubernetes.io/projected/ebff9743-c884-4057-9b26-505cb4b8dca7-kube-api-access-nxn9t\") pod \"dns-default-52vxj\" (UID: \"ebff9743-c884-4057-9b26-505cb4b8dca7\") " pod="openshift-dns/dns-default-52vxj" Jan 27 18:44:59 crc kubenswrapper[4853]: I0127 18:44:59.265678 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/0a683af2-9c78-4c3b-993f-f4b54b815f32-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-qbgzv\" (UID: \"0a683af2-9c78-4c3b-993f-f4b54b815f32\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-qbgzv" Jan 27 18:44:59 crc kubenswrapper[4853]: I0127 18:44:59.265696 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c6ab690f-7c72-4a14-ab7c-90a0d63699a6-secret-volume\") pod \"collect-profiles-29492310-22ft5\" (UID: \"c6ab690f-7c72-4a14-ab7c-90a0d63699a6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492310-22ft5" Jan 27 18:44:59 crc kubenswrapper[4853]: I0127 18:44:59.265714 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/5a20718d-a359-4670-86a3-4f32a2b11f53-registration-dir\") pod \"csi-hostpathplugin-6l6nh\" (UID: \"5a20718d-a359-4670-86a3-4f32a2b11f53\") " pod="hostpath-provisioner/csi-hostpathplugin-6l6nh" Jan 27 18:44:59 crc kubenswrapper[4853]: I0127 18:44:59.265733 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b6zhp\" (UniqueName: \"kubernetes.io/projected/13bcd7a7-769a-4324-964a-874eb1fbbd1e-kube-api-access-b6zhp\") pod \"catalog-operator-68c6474976-d6bzk\" (UID: \"13bcd7a7-769a-4324-964a-874eb1fbbd1e\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-d6bzk" Jan 27 18:44:59 crc kubenswrapper[4853]: I0127 18:44:59.265754 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b6a41d3b-0671-4105-9f35-4d6c72074c5d-config\") pod \"kube-controller-manager-operator-78b949d7b-wnqmk\" (UID: \"b6a41d3b-0671-4105-9f35-4d6c72074c5d\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-wnqmk" Jan 27 18:44:59 crc kubenswrapper[4853]: I0127 18:44:59.265770 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8ldhg\" (UniqueName: \"kubernetes.io/projected/68d8bd87-80e1-4c90-8541-367d0a676f73-kube-api-access-8ldhg\") pod \"migrator-59844c95c7-gp2qn\" (UID: \"68d8bd87-80e1-4c90-8541-367d0a676f73\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-gp2qn" Jan 27 18:44:59 crc kubenswrapper[4853]: I0127 18:44:59.265785 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/98053767-f7fa-4a83-a094-a96482717baf-serving-cert\") pod \"service-ca-operator-777779d784-4kgbm\" (UID: \"98053767-f7fa-4a83-a094-a96482717baf\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-4kgbm" Jan 27 18:44:59 crc kubenswrapper[4853]: I0127 18:44:59.265804 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6e624c88-c6c1-4c35-985b-264173a9abcd-config\") pod \"controller-manager-879f6c89f-kb26l\" (UID: \"6e624c88-c6c1-4c35-985b-264173a9abcd\") " pod="openshift-controller-manager/controller-manager-879f6c89f-kb26l" Jan 27 18:44:59 crc kubenswrapper[4853]: I0127 18:44:59.265817 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/ebff9743-c884-4057-9b26-505cb4b8dca7-metrics-tls\") pod \"dns-default-52vxj\" (UID: \"ebff9743-c884-4057-9b26-505cb4b8dca7\") " 
pod="openshift-dns/dns-default-52vxj" Jan 27 18:44:59 crc kubenswrapper[4853]: I0127 18:44:59.265843 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6e624c88-c6c1-4c35-985b-264173a9abcd-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-kb26l\" (UID: \"6e624c88-c6c1-4c35-985b-264173a9abcd\") " pod="openshift-controller-manager/controller-manager-879f6c89f-kb26l" Jan 27 18:44:59 crc kubenswrapper[4853]: I0127 18:44:59.265865 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/06c96982-0b5d-4214-9d42-1b06ff771366-certs\") pod \"machine-config-server-cnjvr\" (UID: \"06c96982-0b5d-4214-9d42-1b06ff771366\") " pod="openshift-machine-config-operator/machine-config-server-cnjvr" Jan 27 18:44:59 crc kubenswrapper[4853]: I0127 18:44:59.265880 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/93b9b4bd-fa71-40e1-a4f6-16099dd2c84c-srv-cert\") pod \"olm-operator-6b444d44fb-rrwhm\" (UID: \"93b9b4bd-fa71-40e1-a4f6-16099dd2c84c\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-rrwhm" Jan 27 18:44:59 crc kubenswrapper[4853]: I0127 18:44:59.265907 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cb428fe7-0d8c-4f25-b377-880388daf6aa-service-ca-bundle\") pod \"router-default-5444994796-fhmft\" (UID: \"cb428fe7-0d8c-4f25-b377-880388daf6aa\") " pod="openshift-ingress/router-default-5444994796-fhmft" Jan 27 18:44:59 crc kubenswrapper[4853]: I0127 18:44:59.265925 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/cb428fe7-0d8c-4f25-b377-880388daf6aa-default-certificate\") pod \"router-default-5444994796-fhmft\" (UID: \"cb428fe7-0d8c-4f25-b377-880388daf6aa\") " pod="openshift-ingress/router-default-5444994796-fhmft" Jan 27 18:44:59 crc kubenswrapper[4853]: I0127 18:44:59.265939 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s9gpm\" (UniqueName: \"kubernetes.io/projected/cb428fe7-0d8c-4f25-b377-880388daf6aa-kube-api-access-s9gpm\") pod \"router-default-5444994796-fhmft\" (UID: \"cb428fe7-0d8c-4f25-b377-880388daf6aa\") " pod="openshift-ingress/router-default-5444994796-fhmft" Jan 27 18:44:59 crc kubenswrapper[4853]: I0127 18:44:59.265953 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/13bcd7a7-769a-4324-964a-874eb1fbbd1e-profile-collector-cert\") pod \"catalog-operator-68c6474976-d6bzk\" (UID: \"13bcd7a7-769a-4324-964a-874eb1fbbd1e\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-d6bzk" Jan 27 18:44:59 crc kubenswrapper[4853]: I0127 18:44:59.265970 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xxpbj\" (UniqueName: \"kubernetes.io/projected/06a480d9-6aba-4daa-8eb3-7d5e93beeef0-kube-api-access-xxpbj\") pod \"service-ca-9c57cc56f-hfgqg\" (UID: \"06a480d9-6aba-4daa-8eb3-7d5e93beeef0\") " pod="openshift-service-ca/service-ca-9c57cc56f-hfgqg" Jan 27 18:44:59 crc kubenswrapper[4853]: I0127 18:44:59.265987 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f8glq\" (UniqueName: 
\"kubernetes.io/projected/5244d6c6-721d-44cf-8175-48408b3780b0-kube-api-access-f8glq\") pod \"control-plane-machine-set-operator-78cbb6b69f-t4wdl\" (UID: \"5244d6c6-721d-44cf-8175-48408b3780b0\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-t4wdl" Jan 27 18:44:59 crc kubenswrapper[4853]: I0127 18:44:59.266003 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/4434ecce-14db-446c-900b-3ebf84bbe25c-proxy-tls\") pod \"machine-config-operator-74547568cd-mbcwm\" (UID: \"4434ecce-14db-446c-900b-3ebf84bbe25c\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-mbcwm" Jan 27 18:44:59 crc kubenswrapper[4853]: I0127 18:44:59.266020 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/13bcd7a7-769a-4324-964a-874eb1fbbd1e-srv-cert\") pod \"catalog-operator-68c6474976-d6bzk\" (UID: \"13bcd7a7-769a-4324-964a-874eb1fbbd1e\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-d6bzk" Jan 27 18:44:59 crc kubenswrapper[4853]: I0127 18:44:59.266038 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/cb428fe7-0d8c-4f25-b377-880388daf6aa-stats-auth\") pod \"router-default-5444994796-fhmft\" (UID: \"cb428fe7-0d8c-4f25-b377-880388daf6aa\") " pod="openshift-ingress/router-default-5444994796-fhmft" Jan 27 18:44:59 crc kubenswrapper[4853]: I0127 18:44:59.266056 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/86af8168-4922-4d5d-adee-38d4d88d55ca-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-bxk4b\" (UID: \"86af8168-4922-4d5d-adee-38d4d88d55ca\") " pod="openshift-marketplace/marketplace-operator-79b997595-bxk4b" Jan 27 18:44:59 crc kubenswrapper[4853]: I0127 18:44:59.266071 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/86af8168-4922-4d5d-adee-38d4d88d55ca-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-bxk4b\" (UID: \"86af8168-4922-4d5d-adee-38d4d88d55ca\") " pod="openshift-marketplace/marketplace-operator-79b997595-bxk4b" Jan 27 18:44:59 crc kubenswrapper[4853]: I0127 18:44:59.266093 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/5244d6c6-721d-44cf-8175-48408b3780b0-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-t4wdl\" (UID: \"5244d6c6-721d-44cf-8175-48408b3780b0\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-t4wdl" Jan 27 18:44:59 crc kubenswrapper[4853]: I0127 18:44:59.266106 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/4434ecce-14db-446c-900b-3ebf84bbe25c-auth-proxy-config\") pod \"machine-config-operator-74547568cd-mbcwm\" (UID: \"4434ecce-14db-446c-900b-3ebf84bbe25c\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-mbcwm" Jan 27 18:44:59 crc kubenswrapper[4853]: I0127 18:44:59.266139 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: 
\"kubernetes.io/host-path/5a20718d-a359-4670-86a3-4f32a2b11f53-mountpoint-dir\") pod \"csi-hostpathplugin-6l6nh\" (UID: \"5a20718d-a359-4670-86a3-4f32a2b11f53\") " pod="hostpath-provisioner/csi-hostpathplugin-6l6nh" Jan 27 18:44:59 crc kubenswrapper[4853]: I0127 18:44:59.266195 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/5a20718d-a359-4670-86a3-4f32a2b11f53-mountpoint-dir\") pod \"csi-hostpathplugin-6l6nh\" (UID: \"5a20718d-a359-4670-86a3-4f32a2b11f53\") " pod="hostpath-provisioner/csi-hostpathplugin-6l6nh" Jan 27 18:44:59 crc kubenswrapper[4853]: I0127 18:44:59.266217 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9dhh9\" (UniqueName: \"kubernetes.io/projected/59420df1-93c7-4908-aa3b-3f3c61efdb18-kube-api-access-9dhh9\") pod \"multus-admission-controller-857f4d67dd-ww5mj\" (UID: \"59420df1-93c7-4908-aa3b-3f3c61efdb18\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-ww5mj" Jan 27 18:44:59 crc kubenswrapper[4853]: I0127 18:44:59.266255 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/b0123ef8-ad21-45f5-b5d8-e491c9aa10dd-webhook-cert\") pod \"packageserver-d55dfcdfc-st9cr\" (UID: \"b0123ef8-ad21-45f5-b5d8-e491c9aa10dd\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-st9cr" Jan 27 18:44:59 crc kubenswrapper[4853]: I0127 18:44:59.266281 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-79zv4\" (UniqueName: \"kubernetes.io/projected/5a20718d-a359-4670-86a3-4f32a2b11f53-kube-api-access-79zv4\") pod \"csi-hostpathplugin-6l6nh\" (UID: \"5a20718d-a359-4670-86a3-4f32a2b11f53\") " pod="hostpath-provisioner/csi-hostpathplugin-6l6nh" Jan 27 18:44:59 crc kubenswrapper[4853]: I0127 18:44:59.266306 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v7cbf\" (UniqueName: \"kubernetes.io/projected/0a683af2-9c78-4c3b-993f-f4b54b815f32-kube-api-access-v7cbf\") pod \"package-server-manager-789f6589d5-qbgzv\" (UID: \"0a683af2-9c78-4c3b-993f-f4b54b815f32\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-qbgzv" Jan 27 18:44:59 crc kubenswrapper[4853]: I0127 18:44:59.266362 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4vvzj\" (UniqueName: \"kubernetes.io/projected/93b9b4bd-fa71-40e1-a4f6-16099dd2c84c-kube-api-access-4vvzj\") pod \"olm-operator-6b444d44fb-rrwhm\" (UID: \"93b9b4bd-fa71-40e1-a4f6-16099dd2c84c\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-rrwhm" Jan 27 18:44:59 crc kubenswrapper[4853]: I0127 18:44:59.266389 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x5jlv\" (UniqueName: \"kubernetes.io/projected/86af8168-4922-4d5d-adee-38d4d88d55ca-kube-api-access-x5jlv\") pod \"marketplace-operator-79b997595-bxk4b\" (UID: \"86af8168-4922-4d5d-adee-38d4d88d55ca\") " pod="openshift-marketplace/marketplace-operator-79b997595-bxk4b" Jan 27 18:44:59 crc kubenswrapper[4853]: E0127 18:44:59.266455 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-27 18:44:59.766431339 +0000 UTC m=+142.228974292 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 18:44:59 crc kubenswrapper[4853]: I0127 18:44:59.266511 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/5a20718d-a359-4670-86a3-4f32a2b11f53-csi-data-dir\") pod \"csi-hostpathplugin-6l6nh\" (UID: \"5a20718d-a359-4670-86a3-4f32a2b11f53\") " pod="hostpath-provisioner/csi-hostpathplugin-6l6nh" Jan 27 18:44:59 crc kubenswrapper[4853]: I0127 18:44:59.266728 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/5a20718d-a359-4670-86a3-4f32a2b11f53-csi-data-dir\") pod \"csi-hostpathplugin-6l6nh\" (UID: \"5a20718d-a359-4670-86a3-4f32a2b11f53\") " pod="hostpath-provisioner/csi-hostpathplugin-6l6nh" Jan 27 18:44:59 crc kubenswrapper[4853]: I0127 18:44:59.266850 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/b0123ef8-ad21-45f5-b5d8-e491c9aa10dd-tmpfs\") pod \"packageserver-d55dfcdfc-st9cr\" (UID: \"b0123ef8-ad21-45f5-b5d8-e491c9aa10dd\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-st9cr" Jan 27 18:44:59 crc kubenswrapper[4853]: I0127 18:44:59.267346 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/4434ecce-14db-446c-900b-3ebf84bbe25c-images\") pod \"machine-config-operator-74547568cd-mbcwm\" (UID: \"4434ecce-14db-446c-900b-3ebf84bbe25c\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-mbcwm" Jan 27 18:44:59 crc kubenswrapper[4853]: I0127 18:44:59.268007 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cb428fe7-0d8c-4f25-b377-880388daf6aa-service-ca-bundle\") pod \"router-default-5444994796-fhmft\" (UID: \"cb428fe7-0d8c-4f25-b377-880388daf6aa\") " pod="openshift-ingress/router-default-5444994796-fhmft" Jan 27 18:44:59 crc kubenswrapper[4853]: I0127 18:44:59.268735 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ebff9743-c884-4057-9b26-505cb4b8dca7-config-volume\") pod \"dns-default-52vxj\" (UID: \"ebff9743-c884-4057-9b26-505cb4b8dca7\") " pod="openshift-dns/dns-default-52vxj" Jan 27 18:44:59 crc kubenswrapper[4853]: I0127 18:44:59.271315 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6e624c88-c6c1-4c35-985b-264173a9abcd-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-kb26l\" (UID: \"6e624c88-c6c1-4c35-985b-264173a9abcd\") " pod="openshift-controller-manager/controller-manager-879f6c89f-kb26l" Jan 27 18:44:59 crc kubenswrapper[4853]: I0127 18:44:59.273281 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6e624c88-c6c1-4c35-985b-264173a9abcd-config\") pod 
\"controller-manager-879f6c89f-kb26l\" (UID: \"6e624c88-c6c1-4c35-985b-264173a9abcd\") " pod="openshift-controller-manager/controller-manager-879f6c89f-kb26l" Jan 27 18:44:59 crc kubenswrapper[4853]: I0127 18:44:59.279187 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/13bcd7a7-769a-4324-964a-874eb1fbbd1e-profile-collector-cert\") pod \"catalog-operator-68c6474976-d6bzk\" (UID: \"13bcd7a7-769a-4324-964a-874eb1fbbd1e\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-d6bzk" Jan 27 18:44:59 crc kubenswrapper[4853]: I0127 18:44:59.280971 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/cb428fe7-0d8c-4f25-b377-880388daf6aa-default-certificate\") pod \"router-default-5444994796-fhmft\" (UID: \"cb428fe7-0d8c-4f25-b377-880388daf6aa\") " pod="openshift-ingress/router-default-5444994796-fhmft" Jan 27 18:44:59 crc kubenswrapper[4853]: I0127 18:44:59.281089 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/b0123ef8-ad21-45f5-b5d8-e491c9aa10dd-webhook-cert\") pod \"packageserver-d55dfcdfc-st9cr\" (UID: \"b0123ef8-ad21-45f5-b5d8-e491c9aa10dd\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-st9cr" Jan 27 18:44:59 crc kubenswrapper[4853]: I0127 18:44:59.282631 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/98053767-f7fa-4a83-a094-a96482717baf-serving-cert\") pod \"service-ca-operator-777779d784-4kgbm\" (UID: \"98053767-f7fa-4a83-a094-a96482717baf\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-4kgbm" Jan 27 18:44:59 crc kubenswrapper[4853]: I0127 18:44:59.282912 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/5a20718d-a359-4670-86a3-4f32a2b11f53-registration-dir\") pod \"csi-hostpathplugin-6l6nh\" (UID: \"5a20718d-a359-4670-86a3-4f32a2b11f53\") " pod="hostpath-provisioner/csi-hostpathplugin-6l6nh" Jan 27 18:44:59 crc kubenswrapper[4853]: I0127 18:44:59.283086 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/4434ecce-14db-446c-900b-3ebf84bbe25c-proxy-tls\") pod \"machine-config-operator-74547568cd-mbcwm\" (UID: \"4434ecce-14db-446c-900b-3ebf84bbe25c\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-mbcwm" Jan 27 18:44:59 crc kubenswrapper[4853]: I0127 18:44:59.283708 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b6a41d3b-0671-4105-9f35-4d6c72074c5d-config\") pod \"kube-controller-manager-operator-78b949d7b-wnqmk\" (UID: \"b6a41d3b-0671-4105-9f35-4d6c72074c5d\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-wnqmk" Jan 27 18:44:59 crc kubenswrapper[4853]: I0127 18:44:59.284617 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6e624c88-c6c1-4c35-985b-264173a9abcd-serving-cert\") pod \"controller-manager-879f6c89f-kb26l\" (UID: \"6e624c88-c6c1-4c35-985b-264173a9abcd\") " pod="openshift-controller-manager/controller-manager-879f6c89f-kb26l" Jan 27 18:44:59 crc kubenswrapper[4853]: I0127 18:44:59.286658 4853 util.go:30] "No sandbox for pod 
can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-pp2rw" Jan 27 18:44:59 crc kubenswrapper[4853]: I0127 18:44:59.286966 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/98053767-f7fa-4a83-a094-a96482717baf-config\") pod \"service-ca-operator-777779d784-4kgbm\" (UID: \"98053767-f7fa-4a83-a094-a96482717baf\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-4kgbm" Jan 27 18:44:59 crc kubenswrapper[4853]: I0127 18:44:59.287159 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/59420df1-93c7-4908-aa3b-3f3c61efdb18-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-ww5mj\" (UID: \"59420df1-93c7-4908-aa3b-3f3c61efdb18\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-ww5mj" Jan 27 18:44:59 crc kubenswrapper[4853]: I0127 18:44:59.287279 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/0a683af2-9c78-4c3b-993f-f4b54b815f32-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-qbgzv\" (UID: \"0a683af2-9c78-4c3b-993f-f4b54b815f32\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-qbgzv" Jan 27 18:44:59 crc kubenswrapper[4853]: I0127 18:44:59.287458 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/ebff9743-c884-4057-9b26-505cb4b8dca7-metrics-tls\") pod \"dns-default-52vxj\" (UID: \"ebff9743-c884-4057-9b26-505cb4b8dca7\") " pod="openshift-dns/dns-default-52vxj" Jan 27 18:44:59 crc kubenswrapper[4853]: I0127 18:44:59.288272 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/5a20718d-a359-4670-86a3-4f32a2b11f53-plugins-dir\") pod \"csi-hostpathplugin-6l6nh\" (UID: \"5a20718d-a359-4670-86a3-4f32a2b11f53\") " pod="hostpath-provisioner/csi-hostpathplugin-6l6nh" Jan 27 18:44:59 crc kubenswrapper[4853]: I0127 18:44:59.288606 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/5a20718d-a359-4670-86a3-4f32a2b11f53-socket-dir\") pod \"csi-hostpathplugin-6l6nh\" (UID: \"5a20718d-a359-4670-86a3-4f32a2b11f53\") " pod="hostpath-provisioner/csi-hostpathplugin-6l6nh" Jan 27 18:44:59 crc kubenswrapper[4853]: I0127 18:44:59.288899 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6e624c88-c6c1-4c35-985b-264173a9abcd-client-ca\") pod \"controller-manager-879f6c89f-kb26l\" (UID: \"6e624c88-c6c1-4c35-985b-264173a9abcd\") " pod="openshift-controller-manager/controller-manager-879f6c89f-kb26l" Jan 27 18:44:59 crc kubenswrapper[4853]: I0127 18:44:59.289721 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c6ab690f-7c72-4a14-ab7c-90a0d63699a6-config-volume\") pod \"collect-profiles-29492310-22ft5\" (UID: \"c6ab690f-7c72-4a14-ab7c-90a0d63699a6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492310-22ft5" Jan 27 18:44:59 crc kubenswrapper[4853]: I0127 18:44:59.289749 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: 
\"kubernetes.io/configmap/06a480d9-6aba-4daa-8eb3-7d5e93beeef0-signing-cabundle\") pod \"service-ca-9c57cc56f-hfgqg\" (UID: \"06a480d9-6aba-4daa-8eb3-7d5e93beeef0\") " pod="openshift-service-ca/service-ca-9c57cc56f-hfgqg" Jan 27 18:44:59 crc kubenswrapper[4853]: I0127 18:44:59.290380 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/86af8168-4922-4d5d-adee-38d4d88d55ca-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-bxk4b\" (UID: \"86af8168-4922-4d5d-adee-38d4d88d55ca\") " pod="openshift-marketplace/marketplace-operator-79b997595-bxk4b" Jan 27 18:44:59 crc kubenswrapper[4853]: I0127 18:44:59.291057 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/24efc8ab-a03a-411f-8441-454cae46ede9-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-rm987\" (UID: \"24efc8ab-a03a-411f-8441-454cae46ede9\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-rm987" Jan 27 18:44:59 crc kubenswrapper[4853]: I0127 18:44:59.291114 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/93b9b4bd-fa71-40e1-a4f6-16099dd2c84c-srv-cert\") pod \"olm-operator-6b444d44fb-rrwhm\" (UID: \"93b9b4bd-fa71-40e1-a4f6-16099dd2c84c\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-rrwhm" Jan 27 18:44:59 crc kubenswrapper[4853]: I0127 18:44:59.291478 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/cb428fe7-0d8c-4f25-b377-880388daf6aa-metrics-certs\") pod \"router-default-5444994796-fhmft\" (UID: \"cb428fe7-0d8c-4f25-b377-880388daf6aa\") " pod="openshift-ingress/router-default-5444994796-fhmft" Jan 27 18:44:59 crc kubenswrapper[4853]: I0127 18:44:59.291686 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/70bdd69e-31db-4a48-811b-0f665647441a-cert\") pod \"ingress-canary-lddqg\" (UID: \"70bdd69e-31db-4a48-811b-0f665647441a\") " pod="openshift-ingress-canary/ingress-canary-lddqg" Jan 27 18:44:59 crc kubenswrapper[4853]: I0127 18:44:59.292340 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/b0123ef8-ad21-45f5-b5d8-e491c9aa10dd-apiservice-cert\") pod \"packageserver-d55dfcdfc-st9cr\" (UID: \"b0123ef8-ad21-45f5-b5d8-e491c9aa10dd\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-st9cr" Jan 27 18:44:59 crc kubenswrapper[4853]: I0127 18:44:59.292796 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c6ab690f-7c72-4a14-ab7c-90a0d63699a6-secret-volume\") pod \"collect-profiles-29492310-22ft5\" (UID: \"c6ab690f-7c72-4a14-ab7c-90a0d63699a6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492310-22ft5" Jan 27 18:44:59 crc kubenswrapper[4853]: I0127 18:44:59.293382 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/5244d6c6-721d-44cf-8175-48408b3780b0-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-t4wdl\" (UID: \"5244d6c6-721d-44cf-8175-48408b3780b0\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-t4wdl" Jan 27 18:44:59 
crc kubenswrapper[4853]: I0127 18:44:59.293749 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/06c96982-0b5d-4214-9d42-1b06ff771366-node-bootstrap-token\") pod \"machine-config-server-cnjvr\" (UID: \"06c96982-0b5d-4214-9d42-1b06ff771366\") " pod="openshift-machine-config-operator/machine-config-server-cnjvr" Jan 27 18:44:59 crc kubenswrapper[4853]: I0127 18:44:59.294935 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/06c96982-0b5d-4214-9d42-1b06ff771366-certs\") pod \"machine-config-server-cnjvr\" (UID: \"06c96982-0b5d-4214-9d42-1b06ff771366\") " pod="openshift-machine-config-operator/machine-config-server-cnjvr" Jan 27 18:44:59 crc kubenswrapper[4853]: I0127 18:44:59.295621 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dgthx\" (UniqueName: \"kubernetes.io/projected/39a64c5e-945d-4be5-a1af-6c8ee6fa8ee0-kube-api-access-dgthx\") pod \"cluster-image-registry-operator-dc59b4c8b-nnmnh\" (UID: \"39a64c5e-945d-4be5-a1af-6c8ee6fa8ee0\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-nnmnh" Jan 27 18:44:59 crc kubenswrapper[4853]: I0127 18:44:59.297569 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/93b9b4bd-fa71-40e1-a4f6-16099dd2c84c-profile-collector-cert\") pod \"olm-operator-6b444d44fb-rrwhm\" (UID: \"93b9b4bd-fa71-40e1-a4f6-16099dd2c84c\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-rrwhm" Jan 27 18:44:59 crc kubenswrapper[4853]: I0127 18:44:59.298023 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/13bcd7a7-769a-4324-964a-874eb1fbbd1e-srv-cert\") pod \"catalog-operator-68c6474976-d6bzk\" (UID: \"13bcd7a7-769a-4324-964a-874eb1fbbd1e\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-d6bzk" Jan 27 18:44:59 crc kubenswrapper[4853]: I0127 18:44:59.298451 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b6a41d3b-0671-4105-9f35-4d6c72074c5d-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-wnqmk\" (UID: \"b6a41d3b-0671-4105-9f35-4d6c72074c5d\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-wnqmk" Jan 27 18:44:59 crc kubenswrapper[4853]: I0127 18:44:59.300699 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/06a480d9-6aba-4daa-8eb3-7d5e93beeef0-signing-key\") pod \"service-ca-9c57cc56f-hfgqg\" (UID: \"06a480d9-6aba-4daa-8eb3-7d5e93beeef0\") " pod="openshift-service-ca/service-ca-9c57cc56f-hfgqg" Jan 27 18:44:59 crc kubenswrapper[4853]: I0127 18:44:59.300957 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/24efc8ab-a03a-411f-8441-454cae46ede9-proxy-tls\") pod \"machine-config-controller-84d6567774-rm987\" (UID: \"24efc8ab-a03a-411f-8441-454cae46ede9\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-rm987" Jan 27 18:44:59 crc kubenswrapper[4853]: I0127 18:44:59.302150 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/cb428fe7-0d8c-4f25-b377-880388daf6aa-stats-auth\") pod 
\"router-default-5444994796-fhmft\" (UID: \"cb428fe7-0d8c-4f25-b377-880388daf6aa\") " pod="openshift-ingress/router-default-5444994796-fhmft" Jan 27 18:44:59 crc kubenswrapper[4853]: I0127 18:44:59.304720 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/86af8168-4922-4d5d-adee-38d4d88d55ca-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-bxk4b\" (UID: \"86af8168-4922-4d5d-adee-38d4d88d55ca\") " pod="openshift-marketplace/marketplace-operator-79b997595-bxk4b" Jan 27 18:44:59 crc kubenswrapper[4853]: I0127 18:44:59.313562 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hmmwk\" (UniqueName: \"kubernetes.io/projected/80fdceac-6136-4c48-a96f-3243f5416b10-kube-api-access-hmmwk\") pod \"machine-approver-56656f9798-qpjt6\" (UID: \"80fdceac-6136-4c48-a96f-3243f5416b10\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-qpjt6" Jan 27 18:44:59 crc kubenswrapper[4853]: I0127 18:44:59.326049 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-hscjm"] Jan 27 18:44:59 crc kubenswrapper[4853]: I0127 18:44:59.327523 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-xnnk5"] Jan 27 18:44:59 crc kubenswrapper[4853]: I0127 18:44:59.333389 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-hdtbk" Jan 27 18:44:59 crc kubenswrapper[4853]: I0127 18:44:59.335735 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-scl6q\" (UniqueName: \"kubernetes.io/projected/a7c9b9f7-1d12-4e77-a47f-8cb601836611-kube-api-access-scl6q\") pod \"image-registry-697d97f7c8-npp4j\" (UID: \"a7c9b9f7-1d12-4e77-a47f-8cb601836611\") " pod="openshift-image-registry/image-registry-697d97f7c8-npp4j" Jan 27 18:44:59 crc kubenswrapper[4853]: I0127 18:44:59.361833 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nxn9t\" (UniqueName: \"kubernetes.io/projected/ebff9743-c884-4057-9b26-505cb4b8dca7-kube-api-access-nxn9t\") pod \"dns-default-52vxj\" (UID: \"ebff9743-c884-4057-9b26-505cb4b8dca7\") " pod="openshift-dns/dns-default-52vxj" Jan 27 18:44:59 crc kubenswrapper[4853]: I0127 18:44:59.367409 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-npp4j\" (UID: \"a7c9b9f7-1d12-4e77-a47f-8cb601836611\") " pod="openshift-image-registry/image-registry-697d97f7c8-npp4j" Jan 27 18:44:59 crc kubenswrapper[4853]: E0127 18:44:59.367994 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 18:44:59.867979351 +0000 UTC m=+142.330522234 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-npp4j" (UID: "a7c9b9f7-1d12-4e77-a47f-8cb601836611") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 18:44:59 crc kubenswrapper[4853]: W0127 18:44:59.379470 4853 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0a656e98_1ed0_4b1a_8352_4038844a558a.slice/crio-fc2f8453226ddc8e716c243966eacd4153e57b0078ea76a755b55e5de9a57c84 WatchSource:0}: Error finding container fc2f8453226ddc8e716c243966eacd4153e57b0078ea76a755b55e5de9a57c84: Status 404 returned error can't find the container with id fc2f8453226ddc8e716c243966eacd4153e57b0078ea76a755b55e5de9a57c84 Jan 27 18:44:59 crc kubenswrapper[4853]: I0127 18:44:59.382938 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bgxkj\" (UniqueName: \"kubernetes.io/projected/98053767-f7fa-4a83-a094-a96482717baf-kube-api-access-bgxkj\") pod \"service-ca-operator-777779d784-4kgbm\" (UID: \"98053767-f7fa-4a83-a094-a96482717baf\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-4kgbm" Jan 27 18:44:59 crc kubenswrapper[4853]: I0127 18:44:59.392326 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-kmkjx" Jan 27 18:44:59 crc kubenswrapper[4853]: I0127 18:44:59.395183 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-4kgbm" Jan 27 18:44:59 crc kubenswrapper[4853]: I0127 18:44:59.402936 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kkggn\" (UniqueName: \"kubernetes.io/projected/4434ecce-14db-446c-900b-3ebf84bbe25c-kube-api-access-kkggn\") pod \"machine-config-operator-74547568cd-mbcwm\" (UID: \"4434ecce-14db-446c-900b-3ebf84bbe25c\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-mbcwm" Jan 27 18:44:59 crc kubenswrapper[4853]: I0127 18:44:59.418765 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-znwgm"] Jan 27 18:44:59 crc kubenswrapper[4853]: I0127 18:44:59.424740 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x5jlv\" (UniqueName: \"kubernetes.io/projected/86af8168-4922-4d5d-adee-38d4d88d55ca-kube-api-access-x5jlv\") pod \"marketplace-operator-79b997595-bxk4b\" (UID: \"86af8168-4922-4d5d-adee-38d4d88d55ca\") " pod="openshift-marketplace/marketplace-operator-79b997595-bxk4b" Jan 27 18:44:59 crc kubenswrapper[4853]: I0127 18:44:59.426372 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-mbcwm" Jan 27 18:44:59 crc kubenswrapper[4853]: I0127 18:44:59.438797 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m4mfq\" (UniqueName: \"kubernetes.io/projected/70bdd69e-31db-4a48-811b-0f665647441a-kube-api-access-m4mfq\") pod \"ingress-canary-lddqg\" (UID: \"70bdd69e-31db-4a48-811b-0f665647441a\") " pod="openshift-ingress-canary/ingress-canary-lddqg" Jan 27 18:44:59 crc kubenswrapper[4853]: I0127 18:44:59.446171 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-lddqg" Jan 27 18:44:59 crc kubenswrapper[4853]: I0127 18:44:59.462110 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9dhh9\" (UniqueName: \"kubernetes.io/projected/59420df1-93c7-4908-aa3b-3f3c61efdb18-kube-api-access-9dhh9\") pod \"multus-admission-controller-857f4d67dd-ww5mj\" (UID: \"59420df1-93c7-4908-aa3b-3f3c61efdb18\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-ww5mj" Jan 27 18:44:59 crc kubenswrapper[4853]: I0127 18:44:59.472783 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 18:44:59 crc kubenswrapper[4853]: E0127 18:44:59.473910 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 18:44:59.973888889 +0000 UTC m=+142.436431782 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 18:44:59 crc kubenswrapper[4853]: I0127 18:44:59.482167 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s9gpm\" (UniqueName: \"kubernetes.io/projected/cb428fe7-0d8c-4f25-b377-880388daf6aa-kube-api-access-s9gpm\") pod \"router-default-5444994796-fhmft\" (UID: \"cb428fe7-0d8c-4f25-b377-880388daf6aa\") " pod="openshift-ingress/router-default-5444994796-fhmft" Jan 27 18:44:59 crc kubenswrapper[4853]: I0127 18:44:59.487835 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-nnmnh" Jan 27 18:44:59 crc kubenswrapper[4853]: I0127 18:44:59.488757 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-52vxj" Jan 27 18:44:59 crc kubenswrapper[4853]: I0127 18:44:59.501940 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v7cbf\" (UniqueName: \"kubernetes.io/projected/0a683af2-9c78-4c3b-993f-f4b54b815f32-kube-api-access-v7cbf\") pod \"package-server-manager-789f6589d5-qbgzv\" (UID: \"0a683af2-9c78-4c3b-993f-f4b54b815f32\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-qbgzv" Jan 27 18:44:59 crc kubenswrapper[4853]: I0127 18:44:59.520763 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-79zv4\" (UniqueName: \"kubernetes.io/projected/5a20718d-a359-4670-86a3-4f32a2b11f53-kube-api-access-79zv4\") pod \"csi-hostpathplugin-6l6nh\" (UID: \"5a20718d-a359-4670-86a3-4f32a2b11f53\") " pod="hostpath-provisioner/csi-hostpathplugin-6l6nh" Jan 27 18:44:59 crc kubenswrapper[4853]: I0127 18:44:59.539220 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xxpbj\" (UniqueName: \"kubernetes.io/projected/06a480d9-6aba-4daa-8eb3-7d5e93beeef0-kube-api-access-xxpbj\") pod \"service-ca-9c57cc56f-hfgqg\" (UID: \"06a480d9-6aba-4daa-8eb3-7d5e93beeef0\") " pod="openshift-service-ca/service-ca-9c57cc56f-hfgqg" Jan 27 18:44:59 crc kubenswrapper[4853]: I0127 18:44:59.554025 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c5hg7\" (UniqueName: \"kubernetes.io/projected/c6ab690f-7c72-4a14-ab7c-90a0d63699a6-kube-api-access-c5hg7\") pod \"collect-profiles-29492310-22ft5\" (UID: \"c6ab690f-7c72-4a14-ab7c-90a0d63699a6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492310-22ft5" Jan 27 18:44:59 crc kubenswrapper[4853]: I0127 18:44:59.575182 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-npp4j\" (UID: \"a7c9b9f7-1d12-4e77-a47f-8cb601836611\") " pod="openshift-image-registry/image-registry-697d97f7c8-npp4j" Jan 27 18:44:59 crc kubenswrapper[4853]: E0127 18:44:59.575559 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 18:45:00.075544583 +0000 UTC m=+142.538087466 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-npp4j" (UID: "a7c9b9f7-1d12-4e77-a47f-8cb601836611") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 18:44:59 crc kubenswrapper[4853]: I0127 18:44:59.582011 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f8glq\" (UniqueName: \"kubernetes.io/projected/5244d6c6-721d-44cf-8175-48408b3780b0-kube-api-access-f8glq\") pod \"control-plane-machine-set-operator-78cbb6b69f-t4wdl\" (UID: \"5244d6c6-721d-44cf-8175-48408b3780b0\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-t4wdl" Jan 27 18:44:59 crc kubenswrapper[4853]: I0127 18:44:59.599487 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4vvzj\" (UniqueName: \"kubernetes.io/projected/93b9b4bd-fa71-40e1-a4f6-16099dd2c84c-kube-api-access-4vvzj\") pod \"olm-operator-6b444d44fb-rrwhm\" (UID: \"93b9b4bd-fa71-40e1-a4f6-16099dd2c84c\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-rrwhm" Jan 27 18:44:59 crc kubenswrapper[4853]: I0127 18:44:59.611957 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-qpjt6" Jan 27 18:44:59 crc kubenswrapper[4853]: I0127 18:44:59.621347 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b6zhp\" (UniqueName: \"kubernetes.io/projected/13bcd7a7-769a-4324-964a-874eb1fbbd1e-kube-api-access-b6zhp\") pod \"catalog-operator-68c6474976-d6bzk\" (UID: \"13bcd7a7-769a-4324-964a-874eb1fbbd1e\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-d6bzk" Jan 27 18:44:59 crc kubenswrapper[4853]: I0127 18:44:59.635916 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8ldhg\" (UniqueName: \"kubernetes.io/projected/68d8bd87-80e1-4c90-8541-367d0a676f73-kube-api-access-8ldhg\") pod \"migrator-59844c95c7-gp2qn\" (UID: \"68d8bd87-80e1-4c90-8541-367d0a676f73\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-gp2qn" Jan 27 18:44:59 crc kubenswrapper[4853]: I0127 18:44:59.638211 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-rrwhm" Jan 27 18:44:59 crc kubenswrapper[4853]: I0127 18:44:59.653957 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-t4wdl" Jan 27 18:44:59 crc kubenswrapper[4853]: I0127 18:44:59.654915 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-ww5mj" Jan 27 18:44:59 crc kubenswrapper[4853]: I0127 18:44:59.661753 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-74rmt"] Jan 27 18:44:59 crc kubenswrapper[4853]: I0127 18:44:59.665496 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-ltskb"] Jan 27 18:44:59 crc kubenswrapper[4853]: I0127 18:44:59.668677 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wbzzd\" (UniqueName: \"kubernetes.io/projected/b0123ef8-ad21-45f5-b5d8-e491c9aa10dd-kube-api-access-wbzzd\") pod \"packageserver-d55dfcdfc-st9cr\" (UID: \"b0123ef8-ad21-45f5-b5d8-e491c9aa10dd\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-st9cr" Jan 27 18:44:59 crc kubenswrapper[4853]: I0127 18:44:59.671040 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-gp2qn" Jan 27 18:44:59 crc kubenswrapper[4853]: I0127 18:44:59.675864 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 18:44:59 crc kubenswrapper[4853]: E0127 18:44:59.676222 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 18:45:00.17620518 +0000 UTC m=+142.638748063 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 18:44:59 crc kubenswrapper[4853]: I0127 18:44:59.678562 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-hfgqg" Jan 27 18:44:59 crc kubenswrapper[4853]: I0127 18:44:59.685904 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-qbgzv" Jan 27 18:44:59 crc kubenswrapper[4853]: I0127 18:44:59.690780 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vzf49\" (UniqueName: \"kubernetes.io/projected/6e624c88-c6c1-4c35-985b-264173a9abcd-kube-api-access-vzf49\") pod \"controller-manager-879f6c89f-kb26l\" (UID: \"6e624c88-c6c1-4c35-985b-264173a9abcd\") " pod="openshift-controller-manager/controller-manager-879f6c89f-kb26l" Jan 27 18:44:59 crc kubenswrapper[4853]: I0127 18:44:59.699500 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m9mrl\" (UniqueName: \"kubernetes.io/projected/06c96982-0b5d-4214-9d42-1b06ff771366-kube-api-access-m9mrl\") pod \"machine-config-server-cnjvr\" (UID: \"06c96982-0b5d-4214-9d42-1b06ff771366\") " pod="openshift-machine-config-operator/machine-config-server-cnjvr" Jan 27 18:44:59 crc kubenswrapper[4853]: I0127 18:44:59.704417 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-kb26l" Jan 27 18:44:59 crc kubenswrapper[4853]: I0127 18:44:59.714359 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-fhmft" Jan 27 18:44:59 crc kubenswrapper[4853]: I0127 18:44:59.724889 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-bxk4b" Jan 27 18:44:59 crc kubenswrapper[4853]: I0127 18:44:59.739707 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492310-22ft5" Jan 27 18:44:59 crc kubenswrapper[4853]: I0127 18:44:59.747321 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q7df8\" (UniqueName: \"kubernetes.io/projected/24efc8ab-a03a-411f-8441-454cae46ede9-kube-api-access-q7df8\") pod \"machine-config-controller-84d6567774-rm987\" (UID: \"24efc8ab-a03a-411f-8441-454cae46ede9\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-rm987" Jan 27 18:44:59 crc kubenswrapper[4853]: I0127 18:44:59.758062 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-9vd4d"] Jan 27 18:44:59 crc kubenswrapper[4853]: I0127 18:44:59.772173 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-6l6nh" Jan 27 18:44:59 crc kubenswrapper[4853]: I0127 18:44:59.777218 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b6a41d3b-0671-4105-9f35-4d6c72074c5d-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-wnqmk\" (UID: \"b6a41d3b-0671-4105-9f35-4d6c72074c5d\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-wnqmk" Jan 27 18:44:59 crc kubenswrapper[4853]: I0127 18:44:59.777693 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-npp4j\" (UID: \"a7c9b9f7-1d12-4e77-a47f-8cb601836611\") " pod="openshift-image-registry/image-registry-697d97f7c8-npp4j" Jan 27 18:44:59 crc kubenswrapper[4853]: E0127 18:44:59.778228 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 18:45:00.278211515 +0000 UTC m=+142.740754398 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-npp4j" (UID: "a7c9b9f7-1d12-4e77-a47f-8cb601836611") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 18:44:59 crc kubenswrapper[4853]: I0127 18:44:59.783063 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-cnjvr" Jan 27 18:44:59 crc kubenswrapper[4853]: I0127 18:44:59.784538 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-9gqxt"] Jan 27 18:44:59 crc kubenswrapper[4853]: I0127 18:44:59.801225 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-zpn7p"] Jan 27 18:44:59 crc kubenswrapper[4853]: I0127 18:44:59.878524 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 18:44:59 crc kubenswrapper[4853]: E0127 18:44:59.878969 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 18:45:00.378954253 +0000 UTC m=+142.841497136 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 18:44:59 crc kubenswrapper[4853]: I0127 18:44:59.884382 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-ltskb" event={"ID":"84cfc910-d75a-4c74-9f5f-6ec4dbb5c708","Type":"ContainerStarted","Data":"a86dc6a11d163f749fe7d3ea19e8f851274ffbfeaba1cf11dfd5a741ad1c934d"} Jan 27 18:44:59 crc kubenswrapper[4853]: I0127 18:44:59.893874 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-n869z" event={"ID":"61cb7436-6c15-48a6-a8b8-006f5a52f338","Type":"ContainerStarted","Data":"ca723265b294dd94c223e574b4e3c79b8c3ca8d08015447c7f765bfe2a2556db"} Jan 27 18:44:59 crc kubenswrapper[4853]: I0127 18:44:59.893926 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-n869z" event={"ID":"61cb7436-6c15-48a6-a8b8-006f5a52f338","Type":"ContainerStarted","Data":"7fbdba8dc4577f07e2629e5cf32861542aecb38e898e03c5a6183a4120c2428e"} Jan 27 18:44:59 crc kubenswrapper[4853]: I0127 18:44:59.901239 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-qpjt6" event={"ID":"80fdceac-6136-4c48-a96f-3243f5416b10","Type":"ContainerStarted","Data":"ac109f5a5ed3b33391647259a3655cb6d66c406c04ce4ec35656c73b480f6357"} Jan 27 18:44:59 crc kubenswrapper[4853]: I0127 18:44:59.904812 4853 generic.go:334] "Generic (PLEG): container finished" podID="93cfa4e6-9e7c-4c17-a30f-e8d15f452be7" containerID="7f50e245fedd542eb8996b6108b03879290b643aab6511fc55b4a7dd6d233e1f" exitCode=0 Jan 27 18:44:59 crc kubenswrapper[4853]: I0127 18:44:59.904964 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-4mwhw" event={"ID":"93cfa4e6-9e7c-4c17-a30f-e8d15f452be7","Type":"ContainerDied","Data":"7f50e245fedd542eb8996b6108b03879290b643aab6511fc55b4a7dd6d233e1f"} Jan 27 18:44:59 crc kubenswrapper[4853]: I0127 18:44:59.910668 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-st9cr" Jan 27 18:44:59 crc kubenswrapper[4853]: I0127 18:44:59.920620 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-d6bzk" Jan 27 18:44:59 crc kubenswrapper[4853]: I0127 18:44:59.934215 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-wnqmk" Jan 27 18:44:59 crc kubenswrapper[4853]: W0127 18:44:59.935028 4853 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbe5a36ff_f665_4468_b7ae_8a443f0164e8.slice/crio-7eea5691f7dee20ee949db2548d058c8dad7f4c206be383773752efeb8928b1f WatchSource:0}: Error finding container 7eea5691f7dee20ee949db2548d058c8dad7f4c206be383773752efeb8928b1f: Status 404 returned error can't find the container with id 7eea5691f7dee20ee949db2548d058c8dad7f4c206be383773752efeb8928b1f Jan 27 18:44:59 crc kubenswrapper[4853]: I0127 18:44:59.947473 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-xnnk5" event={"ID":"6b7fddc4-f171-4c31-9bfb-9ffcce6ea5f0","Type":"ContainerStarted","Data":"2d720c1b92f093c6aaf02f4b41f454ad941a8d04d8c3023b65671e392508402f"} Jan 27 18:44:59 crc kubenswrapper[4853]: I0127 18:44:59.951708 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-pp2rw"] Jan 27 18:44:59 crc kubenswrapper[4853]: I0127 18:44:59.951761 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-mlfr4"] Jan 27 18:44:59 crc kubenswrapper[4853]: I0127 18:44:59.961534 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-tqzdv"] Jan 27 18:44:59 crc kubenswrapper[4853]: W0127 18:44:59.964967 4853 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2bd6e097_af15_41a1_9ab2_a4e79adef815.slice/crio-24fbfe2fbed34ea78798d7354d51dfd8fd3503ba9e7e208c46023f4f485f8c7d WatchSource:0}: Error finding container 24fbfe2fbed34ea78798d7354d51dfd8fd3503ba9e7e208c46023f4f485f8c7d: Status 404 returned error can't find the container with id 24fbfe2fbed34ea78798d7354d51dfd8fd3503ba9e7e208c46023f4f485f8c7d Jan 27 18:44:59 crc kubenswrapper[4853]: I0127 18:44:59.975584 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-hscjm" event={"ID":"0a656e98-1ed0-4b1a-8352-4038844a558a","Type":"ContainerStarted","Data":"fc2f8453226ddc8e716c243966eacd4153e57b0078ea76a755b55e5de9a57c84"} Jan 27 18:44:59 crc kubenswrapper[4853]: I0127 18:44:59.987217 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-npp4j\" (UID: \"a7c9b9f7-1d12-4e77-a47f-8cb601836611\") " pod="openshift-image-registry/image-registry-697d97f7c8-npp4j" Jan 27 18:44:59 crc kubenswrapper[4853]: E0127 18:44:59.987686 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 18:45:00.487670602 +0000 UTC m=+142.950213485 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-npp4j" (UID: "a7c9b9f7-1d12-4e77-a47f-8cb601836611") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 18:44:59 crc kubenswrapper[4853]: I0127 18:44:59.991620 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-r2vkh" event={"ID":"b378b1b0-657f-420a-8666-86edfeb38a96","Type":"ContainerStarted","Data":"6e36063fa7d10049a3c397b80789dc6bdbdfa2565381906c2f5ca3fc667e58e5"} Jan 27 18:45:00 crc kubenswrapper[4853]: I0127 18:45:00.030154 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-8q4sj" event={"ID":"e4e06c17-76a1-49b2-994b-bf53488b14a9","Type":"ContainerStarted","Data":"d0834ee46cdd915c5e16cd3415c753d1c78d2d6760a0734c10e4522769c337a8"} Jan 27 18:45:00 crc kubenswrapper[4853]: I0127 18:45:00.030199 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-8q4sj" event={"ID":"e4e06c17-76a1-49b2-994b-bf53488b14a9","Type":"ContainerStarted","Data":"7d0673c86e6d828142ae7bdccbb27933b25c173503780f93e68dd937095f512e"} Jan 27 18:45:00 crc kubenswrapper[4853]: I0127 18:45:00.031515 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-rm987" Jan 27 18:45:00 crc kubenswrapper[4853]: I0127 18:45:00.038961 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-mbcwm"] Jan 27 18:45:00 crc kubenswrapper[4853]: I0127 18:45:00.040032 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-4kgbm"] Jan 27 18:45:00 crc kubenswrapper[4853]: I0127 18:45:00.042076 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-znwgm" event={"ID":"53cc9731-1ede-4ad3-b2e7-730e605a1a21","Type":"ContainerStarted","Data":"90485e9f370154f252542e95e1eb375d69e21741220b9fae1630f88e2189ddc4"} Jan 27 18:45:00 crc kubenswrapper[4853]: I0127 18:45:00.047712 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-lr7dh" event={"ID":"6595832a-fc60-447b-826f-ba4eb83689fb","Type":"ContainerStarted","Data":"cbedd9e83a6b0ea038382c74f974cffa33009bc1f2d1b8aef6c68c57be40da83"} Jan 27 18:45:00 crc kubenswrapper[4853]: I0127 18:45:00.047785 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-lr7dh" event={"ID":"6595832a-fc60-447b-826f-ba4eb83689fb","Type":"ContainerStarted","Data":"33d58218f90965ae01dc53ae9666598f405ab7b99f70b83ab0afbcb63eec6240"} Jan 27 18:45:00 crc kubenswrapper[4853]: I0127 18:45:00.049805 4853 patch_prober.go:28] interesting pod/console-operator-58897d9998-zjcpp container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.5:8443/readyz\": dial tcp 10.217.0.5:8443: connect: connection refused" 
start-of-body= Jan 27 18:45:00 crc kubenswrapper[4853]: I0127 18:45:00.049856 4853 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-zjcpp" podUID="c16214d7-8024-43ad-8394-ee95539c3093" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.5:8443/readyz\": dial tcp 10.217.0.5:8443: connect: connection refused" Jan 27 18:45:00 crc kubenswrapper[4853]: I0127 18:45:00.050310 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-kmkjx"] Jan 27 18:45:00 crc kubenswrapper[4853]: I0127 18:45:00.056667 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-mp44v" Jan 27 18:45:00 crc kubenswrapper[4853]: I0127 18:45:00.089765 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 18:45:00 crc kubenswrapper[4853]: E0127 18:45:00.091425 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 18:45:00.591403397 +0000 UTC m=+143.053946360 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 18:45:00 crc kubenswrapper[4853]: I0127 18:45:00.170888 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492310-22ft5"] Jan 27 18:45:00 crc kubenswrapper[4853]: I0127 18:45:00.170928 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492325-nbfpk"] Jan 27 18:45:00 crc kubenswrapper[4853]: I0127 18:45:00.171583 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492325-nbfpk"] Jan 27 18:45:00 crc kubenswrapper[4853]: I0127 18:45:00.171674 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492325-nbfpk" Jan 27 18:45:00 crc kubenswrapper[4853]: I0127 18:45:00.191364 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-npp4j\" (UID: \"a7c9b9f7-1d12-4e77-a47f-8cb601836611\") " pod="openshift-image-registry/image-registry-697d97f7c8-npp4j" Jan 27 18:45:00 crc kubenswrapper[4853]: E0127 18:45:00.196302 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-01-27 18:45:00.696287535 +0000 UTC m=+143.158830418 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-npp4j" (UID: "a7c9b9f7-1d12-4e77-a47f-8cb601836611") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 18:45:00 crc kubenswrapper[4853]: I0127 18:45:00.282893 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-mp44v" podStartSLOduration=116.282869155 podStartE2EDuration="1m56.282869155s" podCreationTimestamp="2026-01-27 18:43:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:45:00.282109833 +0000 UTC m=+142.744652736" watchObservedRunningTime="2026-01-27 18:45:00.282869155 +0000 UTC m=+142.745412038" Jan 27 18:45:00 crc kubenswrapper[4853]: I0127 18:45:00.293057 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 18:45:00 crc kubenswrapper[4853]: I0127 18:45:00.293269 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kczwc\" (UniqueName: \"kubernetes.io/projected/41434dfd-3fc3-4184-a911-506620889ebe-kube-api-access-kczwc\") pod \"collect-profiles-29492325-nbfpk\" (UID: \"41434dfd-3fc3-4184-a911-506620889ebe\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492325-nbfpk" Jan 27 18:45:00 crc kubenswrapper[4853]: E0127 18:45:00.293320 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 18:45:00.793298256 +0000 UTC m=+143.255841149 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 18:45:00 crc kubenswrapper[4853]: I0127 18:45:00.293393 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-npp4j\" (UID: \"a7c9b9f7-1d12-4e77-a47f-8cb601836611\") " pod="openshift-image-registry/image-registry-697d97f7c8-npp4j" Jan 27 18:45:00 crc kubenswrapper[4853]: I0127 18:45:00.293425 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/41434dfd-3fc3-4184-a911-506620889ebe-config-volume\") pod \"collect-profiles-29492325-nbfpk\" (UID: \"41434dfd-3fc3-4184-a911-506620889ebe\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492325-nbfpk" Jan 27 18:45:00 crc kubenswrapper[4853]: I0127 18:45:00.293453 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/41434dfd-3fc3-4184-a911-506620889ebe-secret-volume\") pod \"collect-profiles-29492325-nbfpk\" (UID: \"41434dfd-3fc3-4184-a911-506620889ebe\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492325-nbfpk" Jan 27 18:45:00 crc kubenswrapper[4853]: E0127 18:45:00.293831 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 18:45:00.793819821 +0000 UTC m=+143.256362704 (durationBeforeRetry 500ms). 
Jan 27 18:45:00 crc kubenswrapper[4853]: E0127 18:45:00.293831 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 18:45:00.793819821 +0000 UTC m=+143.256362704 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-npp4j" (UID: "a7c9b9f7-1d12-4e77-a47f-8cb601836611") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 18:45:00 crc kubenswrapper[4853]: I0127 18:45:00.395638 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 27 18:45:00 crc kubenswrapper[4853]: I0127 18:45:00.396100 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/41434dfd-3fc3-4184-a911-506620889ebe-config-volume\") pod \"collect-profiles-29492325-nbfpk\" (UID: \"41434dfd-3fc3-4184-a911-506620889ebe\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492325-nbfpk"
Jan 27 18:45:00 crc kubenswrapper[4853]: I0127 18:45:00.396169 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/41434dfd-3fc3-4184-a911-506620889ebe-secret-volume\") pod \"collect-profiles-29492325-nbfpk\" (UID: \"41434dfd-3fc3-4184-a911-506620889ebe\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492325-nbfpk"
Jan 27 18:45:00 crc kubenswrapper[4853]: I0127 18:45:00.396314 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kczwc\" (UniqueName: \"kubernetes.io/projected/41434dfd-3fc3-4184-a911-506620889ebe-kube-api-access-kczwc\") pod \"collect-profiles-29492325-nbfpk\" (UID: \"41434dfd-3fc3-4184-a911-506620889ebe\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492325-nbfpk"
Jan 27 18:45:00 crc kubenswrapper[4853]: I0127 18:45:00.397607 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/41434dfd-3fc3-4184-a911-506620889ebe-config-volume\") pod \"collect-profiles-29492325-nbfpk\" (UID: \"41434dfd-3fc3-4184-a911-506620889ebe\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492325-nbfpk"
Jan 27 18:45:00 crc kubenswrapper[4853]: E0127 18:45:00.397743 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 18:45:00.897716171 +0000 UTC m=+143.360259054 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 18:45:00 crc kubenswrapper[4853]: I0127 18:45:00.409148 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/41434dfd-3fc3-4184-a911-506620889ebe-secret-volume\") pod \"collect-profiles-29492325-nbfpk\" (UID: \"41434dfd-3fc3-4184-a911-506620889ebe\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492325-nbfpk" Jan 27 18:45:00 crc kubenswrapper[4853]: I0127 18:45:00.416211 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-rrwhm"] Jan 27 18:45:00 crc kubenswrapper[4853]: I0127 18:45:00.419331 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kczwc\" (UniqueName: \"kubernetes.io/projected/41434dfd-3fc3-4184-a911-506620889ebe-kube-api-access-kczwc\") pod \"collect-profiles-29492325-nbfpk\" (UID: \"41434dfd-3fc3-4184-a911-506620889ebe\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492325-nbfpk" Jan 27 18:45:00 crc kubenswrapper[4853]: I0127 18:45:00.503374 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-npp4j\" (UID: \"a7c9b9f7-1d12-4e77-a47f-8cb601836611\") " pod="openshift-image-registry/image-registry-697d97f7c8-npp4j" Jan 27 18:45:00 crc kubenswrapper[4853]: I0127 18:45:00.503659 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-ww5mj"] Jan 27 18:45:00 crc kubenswrapper[4853]: E0127 18:45:00.503886 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 18:45:01.003872265 +0000 UTC m=+143.466415148 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-npp4j" (UID: "a7c9b9f7-1d12-4e77-a47f-8cb601836611") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 18:45:00 crc kubenswrapper[4853]: I0127 18:45:00.553487 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492325-nbfpk" Jan 27 18:45:00 crc kubenswrapper[4853]: I0127 18:45:00.597430 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-lddqg"] Jan 27 18:45:00 crc kubenswrapper[4853]: I0127 18:45:00.604486 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 18:45:00 crc kubenswrapper[4853]: E0127 18:45:00.604676 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 18:45:01.104660345 +0000 UTC m=+143.567203228 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 18:45:00 crc kubenswrapper[4853]: I0127 18:45:00.604801 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-npp4j\" (UID: \"a7c9b9f7-1d12-4e77-a47f-8cb601836611\") " pod="openshift-image-registry/image-registry-697d97f7c8-npp4j" Jan 27 18:45:00 crc kubenswrapper[4853]: E0127 18:45:00.605193 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 18:45:01.105185681 +0000 UTC m=+143.567728564 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-npp4j" (UID: "a7c9b9f7-1d12-4e77-a47f-8cb601836611") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 18:45:00 crc kubenswrapper[4853]: I0127 18:45:00.611672 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-nnmnh"] Jan 27 18:45:00 crc kubenswrapper[4853]: I0127 18:45:00.631442 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-6l6nh"] Jan 27 18:45:00 crc kubenswrapper[4853]: I0127 18:45:00.689613 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-zjcpp" podStartSLOduration=117.689594608 podStartE2EDuration="1m57.689594608s" podCreationTimestamp="2026-01-27 18:43:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:45:00.689025471 +0000 UTC m=+143.151568374" watchObservedRunningTime="2026-01-27 18:45:00.689594608 +0000 UTC m=+143.152137491" Jan 27 18:45:00 crc kubenswrapper[4853]: I0127 18:45:00.706346 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 18:45:00 crc kubenswrapper[4853]: E0127 18:45:00.706648 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 18:45:01.206621869 +0000 UTC m=+143.669164752 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 18:45:00 crc kubenswrapper[4853]: I0127 18:45:00.706828 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-npp4j\" (UID: \"a7c9b9f7-1d12-4e77-a47f-8cb601836611\") " pod="openshift-image-registry/image-registry-697d97f7c8-npp4j" Jan 27 18:45:00 crc kubenswrapper[4853]: E0127 18:45:00.707252 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 18:45:01.207236827 +0000 UTC m=+143.669779780 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-npp4j" (UID: "a7c9b9f7-1d12-4e77-a47f-8cb601836611") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 18:45:00 crc kubenswrapper[4853]: I0127 18:45:00.733171 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-52vxj"] Jan 27 18:45:00 crc kubenswrapper[4853]: I0127 18:45:00.735611 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-d6bzk"] Jan 27 18:45:00 crc kubenswrapper[4853]: I0127 18:45:00.737301 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-t4wdl"] Jan 27 18:45:00 crc kubenswrapper[4853]: I0127 18:45:00.738917 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-wnqmk"] Jan 27 18:45:00 crc kubenswrapper[4853]: I0127 18:45:00.740999 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492310-22ft5"] Jan 27 18:45:00 crc kubenswrapper[4853]: I0127 18:45:00.761233 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-gp2qn"] Jan 27 18:45:00 crc kubenswrapper[4853]: I0127 18:45:00.808257 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 18:45:00 crc kubenswrapper[4853]: E0127 18:45:00.808457 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 18:45:01.308430669 +0000 UTC m=+143.770973552 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 18:45:00 crc kubenswrapper[4853]: I0127 18:45:00.808567 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-npp4j\" (UID: \"a7c9b9f7-1d12-4e77-a47f-8cb601836611\") " pod="openshift-image-registry/image-registry-697d97f7c8-npp4j" Jan 27 18:45:00 crc kubenswrapper[4853]: E0127 18:45:00.808886 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-01-27 18:45:01.308875331 +0000 UTC m=+143.771418284 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-npp4j" (UID: "a7c9b9f7-1d12-4e77-a47f-8cb601836611") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 18:45:00 crc kubenswrapper[4853]: W0127 18:45:00.836069 4853 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6bd6880d_6581_4cca_8eb8_9acb80689e9e.slice/crio-8fb7fcde3a15430e12de5dfac1fa41a0eb15965edf0046be810f72d5f54c0d70 WatchSource:0}: Error finding container 8fb7fcde3a15430e12de5dfac1fa41a0eb15965edf0046be810f72d5f54c0d70: Status 404 returned error can't find the container with id 8fb7fcde3a15430e12de5dfac1fa41a0eb15965edf0046be810f72d5f54c0d70
Jan 27 18:45:00 crc kubenswrapper[4853]: W0127 18:45:00.859814 4853 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod93b9b4bd_fa71_40e1_a4f6_16099dd2c84c.slice/crio-d560857ee00dc6b566f20999759374585f09964def186161dee1a7f85fabbe6c WatchSource:0}: Error finding container d560857ee00dc6b566f20999759374585f09964def186161dee1a7f85fabbe6c: Status 404 returned error can't find the container with id d560857ee00dc6b566f20999759374585f09964def186161dee1a7f85fabbe6c
Jan 27 18:45:00 crc kubenswrapper[4853]: W0127 18:45:00.867252 4853 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod59420df1_93c7_4908_aa3b_3f3c61efdb18.slice/crio-aa7c482097d02ac7311581a6acbc5b32b2ef1b173529c0242b37d77c8d383f78 WatchSource:0}: Error finding container aa7c482097d02ac7311581a6acbc5b32b2ef1b173529c0242b37d77c8d383f78: Status 404 returned error can't find the container with id aa7c482097d02ac7311581a6acbc5b32b2ef1b173529c0242b37d77c8d383f78
Jan 27 18:45:00 crc kubenswrapper[4853]: W0127 18:45:00.876347 4853 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod39a64c5e_945d_4be5_a1af_6c8ee6fa8ee0.slice/crio-c3c06cbd35ebad9ea9d8f4651243081229f14975fb6ed2e1f1dad1139bbd9f49 WatchSource:0}: Error finding container c3c06cbd35ebad9ea9d8f4651243081229f14975fb6ed2e1f1dad1139bbd9f49: Status 404 returned error can't find the container with id c3c06cbd35ebad9ea9d8f4651243081229f14975fb6ed2e1f1dad1139bbd9f49
Jan 27 18:45:00 crc kubenswrapper[4853]: W0127 18:45:00.878975 4853 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb6a41d3b_0671_4105_9f35_4d6c72074c5d.slice/crio-25d40db8826a3fe720df941b552a94e69a7b623b47e96fa44d32885debe2206f WatchSource:0}: Error finding container 25d40db8826a3fe720df941b552a94e69a7b623b47e96fa44d32885debe2206f: Status 404 returned error can't find the container with id 25d40db8826a3fe720df941b552a94e69a7b623b47e96fa44d32885debe2206f
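The manager.go:1169 warnings above come from cAdvisor inside the kubelet: it notices a new cgroup under kubepods.slice before CRI-O reports the matching container, so the lookup returns 404. During a mass pod start this race is common and transient; the same container IDs show up as ContainerStarted events moments later. The cgroup names encode the pod UID and the runtime container ID. A small sketch that recovers both from one of the paths above, stdlib only, with parsing rules inferred from the systemd slice naming shown in this log:

// parse_cgroup.go - sketch: recover pod UID and container ID from a
// "Failed to process watch event" cgroup path.
package main

import (
	"fmt"
	"path"
	"strings"
)

func main() {
	p := "/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6bd6880d_6581_4cca_8eb8_9acb80689e9e.slice/crio-8fb7fcde3a15430e12de5dfac1fa41a0eb15965edf0046be810f72d5f54c0d70"

	// Leaf component is "crio-<container-id>".
	containerID := strings.TrimPrefix(path.Base(p), "crio-")

	// Parent slice encodes the pod UID with '_' standing in for '-'.
	parent := strings.TrimSuffix(path.Base(path.Dir(p)), ".slice")
	uid := parent
	if i := strings.Index(parent, "-pod"); i >= 0 {
		uid = parent[i+len("-pod"):]
	}
	uid = strings.ReplaceAll(uid, "_", "-")

	fmt.Println("pod UID:     ", uid)         // 6bd6880d-6581-4cca-8eb8-9acb80689e9e
	fmt.Println("container ID:", containerID) // 8fb7fcde3a15...
}

That pod UID matches the openshift-apiserver pod whose ContainerStarted event appears further down, which is why these warnings are noise rather than a failure.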
Jan 27 18:45:00 crc kubenswrapper[4853]: W0127 18:45:00.888335 4853 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5244d6c6_721d_44cf_8175_48408b3780b0.slice/crio-db94e9027ec490349e74d87aa0fe95a49ad7b6da4a568eb577eb9bf3f3fe175d WatchSource:0}: Error finding container db94e9027ec490349e74d87aa0fe95a49ad7b6da4a568eb577eb9bf3f3fe175d: Status 404 returned error can't find the container with id db94e9027ec490349e74d87aa0fe95a49ad7b6da4a568eb577eb9bf3f3fe175d
Jan 27 18:45:00 crc kubenswrapper[4853]: W0127 18:45:00.891303 4853 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podebff9743_c884_4057_9b26_505cb4b8dca7.slice/crio-92140c6ca964b5c3299082a20c4c0c767cd610643b57596e40a87e4271595a8a WatchSource:0}: Error finding container 92140c6ca964b5c3299082a20c4c0c767cd610643b57596e40a87e4271595a8a: Status 404 returned error can't find the container with id 92140c6ca964b5c3299082a20c4c0c767cd610643b57596e40a87e4271595a8a
Jan 27 18:45:00 crc kubenswrapper[4853]: W0127 18:45:00.892746 4853 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod13bcd7a7_769a_4324_964a_874eb1fbbd1e.slice/crio-83a90a04721ce55fff20df110b2c91ef4a1dd919e78e42cc08ef5cba5fe9161f WatchSource:0}: Error finding container 83a90a04721ce55fff20df110b2c91ef4a1dd919e78e42cc08ef5cba5fe9161f: Status 404 returned error can't find the container with id 83a90a04721ce55fff20df110b2c91ef4a1dd919e78e42cc08ef5cba5fe9161f
Jan 27 18:45:00 crc kubenswrapper[4853]: I0127 18:45:00.910739 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 27 18:45:00 crc kubenswrapper[4853]: E0127 18:45:00.911053 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 18:45:01.411039931 +0000 UTC m=+143.873582814 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 18:45:01 crc kubenswrapper[4853]: I0127 18:45:01.014191 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-npp4j\" (UID: \"a7c9b9f7-1d12-4e77-a47f-8cb601836611\") " pod="openshift-image-registry/image-registry-697d97f7c8-npp4j"
Jan 27 18:45:01 crc kubenswrapper[4853]: E0127 18:45:01.014589 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 18:45:01.51457173 +0000 UTC m=+143.977114673 (durationBeforeRetry 500ms).
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-npp4j" (UID: "a7c9b9f7-1d12-4e77-a47f-8cb601836611") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 18:45:01 crc kubenswrapper[4853]: I0127 18:45:01.071040 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-zpn7p" event={"ID":"2f81e143-f570-4ea2-837d-f9a1dc205d9c","Type":"ContainerStarted","Data":"233787400a4e810a64471a28c3cac40b844e5fd935835360a1ab6cfc0ef776e0"} Jan 27 18:45:01 crc kubenswrapper[4853]: I0127 18:45:01.085761 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-t4wdl" event={"ID":"5244d6c6-721d-44cf-8175-48408b3780b0","Type":"ContainerStarted","Data":"db94e9027ec490349e74d87aa0fe95a49ad7b6da4a568eb577eb9bf3f3fe175d"} Jan 27 18:45:01 crc kubenswrapper[4853]: I0127 18:45:01.089240 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-mbcwm" event={"ID":"4434ecce-14db-446c-900b-3ebf84bbe25c","Type":"ContainerStarted","Data":"bbfd49969a024889d21211be40d0d3e0667ed34abf6741cc8095c709eba9d53a"} Jan 27 18:45:01 crc kubenswrapper[4853]: I0127 18:45:01.116590 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-n869z" event={"ID":"61cb7436-6c15-48a6-a8b8-006f5a52f338","Type":"ContainerStarted","Data":"3cb8585265dc82fcbc9d8aa48eafaa289fdad579f28d205525c1a495aeb542a5"} Jan 27 18:45:01 crc kubenswrapper[4853]: I0127 18:45:01.119031 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 18:45:01 crc kubenswrapper[4853]: E0127 18:45:01.119593 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 18:45:01.619576192 +0000 UTC m=+144.082119075 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 18:45:01 crc kubenswrapper[4853]: I0127 18:45:01.181305 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-znwgm" event={"ID":"53cc9731-1ede-4ad3-b2e7-730e605a1a21","Type":"ContainerStarted","Data":"1c993d4de9c582c7ad488b7697a73de9abd838af93a23e8ec956292dd066b5c3"} Jan 27 18:45:01 crc kubenswrapper[4853]: I0127 18:45:01.184075 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29492310-22ft5" event={"ID":"c6ab690f-7c72-4a14-ab7c-90a0d63699a6","Type":"ContainerStarted","Data":"83a366c4d0cde31cce170a7b570925f2639e4c3e13781c9b9c345d6e4ab0fad9"} Jan 27 18:45:01 crc kubenswrapper[4853]: I0127 18:45:01.188475 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-tqzdv" event={"ID":"6bd6880d-6581-4cca-8eb8-9acb80689e9e","Type":"ContainerStarted","Data":"8fb7fcde3a15430e12de5dfac1fa41a0eb15965edf0046be810f72d5f54c0d70"} Jan 27 18:45:01 crc kubenswrapper[4853]: I0127 18:45:01.192148 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-74rmt" event={"ID":"f06b685a-8035-4bac-88d3-d092b6df21e4","Type":"ContainerStarted","Data":"8d3f27f31b92d737d00444aa042f2c3afdf121151c79bd3e9d7139af365d6e99"} Jan 27 18:45:01 crc kubenswrapper[4853]: I0127 18:45:01.194276 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-d6bzk" event={"ID":"13bcd7a7-769a-4324-964a-874eb1fbbd1e","Type":"ContainerStarted","Data":"83a90a04721ce55fff20df110b2c91ef4a1dd919e78e42cc08ef5cba5fe9161f"} Jan 27 18:45:01 crc kubenswrapper[4853]: I0127 18:45:01.202560 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-9gqxt" event={"ID":"be5a36ff-f665-4468-b7ae-8a443f0164e8","Type":"ContainerStarted","Data":"7eea5691f7dee20ee949db2548d058c8dad7f4c206be383773752efeb8928b1f"} Jan 27 18:45:01 crc kubenswrapper[4853]: I0127 18:45:01.208287 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-qbgzv"] Jan 27 18:45:01 crc kubenswrapper[4853]: I0127 18:45:01.222666 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-lddqg" event={"ID":"70bdd69e-31db-4a48-811b-0f665647441a","Type":"ContainerStarted","Data":"43a8ef283d9d1d54e06ab5f1ba5c3b4896878210f99aa727a6da10a2b063b038"} Jan 27 18:45:01 crc kubenswrapper[4853]: I0127 18:45:01.230374 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-npp4j\" (UID: \"a7c9b9f7-1d12-4e77-a47f-8cb601836611\") " pod="openshift-image-registry/image-registry-697d97f7c8-npp4j" Jan 27 18:45:01 crc kubenswrapper[4853]: E0127 18:45:01.230777 4853 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 18:45:01.730758982 +0000 UTC m=+144.193301915 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-npp4j" (UID: "a7c9b9f7-1d12-4e77-a47f-8cb601836611") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 18:45:01 crc kubenswrapper[4853]: I0127 18:45:01.231087 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-hscjm" event={"ID":"0a656e98-1ed0-4b1a-8352-4038844a558a","Type":"ContainerStarted","Data":"76012ceee65b16bc97135746795a4d420efbeb9bbf6b74154f43b93518166985"}
Jan 27 18:45:01 crc kubenswrapper[4853]: I0127 18:45:01.234331 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-9vd4d" event={"ID":"2bd6e097-af15-41a1-9ab2-a4e79adef815","Type":"ContainerStarted","Data":"24fbfe2fbed34ea78798d7354d51dfd8fd3503ba9e7e208c46023f4f485f8c7d"}
Jan 27 18:45:01 crc kubenswrapper[4853]: I0127 18:45:01.239958 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-pp2rw" event={"ID":"2d8ed842-012e-42f9-b38e-c040f2e36ad6","Type":"ContainerStarted","Data":"2a040431ce52b2c54b484c4ed07048ade6620a6da8b0f43238a96edae99dc51e"}
Jan 27 18:45:01 crc kubenswrapper[4853]: I0127 18:45:01.244874 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-6l6nh" event={"ID":"5a20718d-a359-4670-86a3-4f32a2b11f53","Type":"ContainerStarted","Data":"8ee6c6bb357b9d8df2fd0ead01bd40604cac60edacf52b227c8bd516a63f9ac1"}
Jan 27 18:45:01 crc kubenswrapper[4853]: I0127 18:45:01.246767 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-cnjvr" event={"ID":"06c96982-0b5d-4214-9d42-1b06ff771366","Type":"ContainerStarted","Data":"60bec95486292157be49f19a9efddc0aac6d0ea38b718715ba8b61c7ebbd1568"}
Jan 27 18:45:01 crc kubenswrapper[4853]: I0127 18:45:01.252647 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-rrwhm" event={"ID":"93b9b4bd-fa71-40e1-a4f6-16099dd2c84c","Type":"ContainerStarted","Data":"d560857ee00dc6b566f20999759374585f09964def186161dee1a7f85fabbe6c"}
Jan 27 18:45:01 crc kubenswrapper[4853]: I0127 18:45:01.265691 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-ltskb" event={"ID":"84cfc910-d75a-4c74-9f5f-6ec4dbb5c708","Type":"ContainerStarted","Data":"cb49e535b49e4bb6860e8f77e6583b3d87db0a49fd11490c4672ac7cc87f926a"}
Jan 27 18:45:01 crc kubenswrapper[4853]: I0127 18:45:01.270564 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-52vxj" event={"ID":"ebff9743-c884-4057-9b26-505cb4b8dca7","Type":"ContainerStarted","Data":"92140c6ca964b5c3299082a20c4c0c767cd610643b57596e40a87e4271595a8a"}
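The "SyncLoop (PLEG): event for pod" burst above is the Pod Lifecycle Event Generator relaying ContainerStarted notifications from the runtime back into the kubelet sync loop. The event={...} payload as printed here happens to be valid JSON (ID is the pod UID, Data a container or sandbox ID), so decoding one of these payloads is straightforward. A sketch using the etcd-operator event above; the struct and its field names mirror the printed shape only and are not kubelet's own types:

// pleg_event.go - sketch: decode the event payload from a PLEG log line.
package main

import (
	"encoding/json"
	"fmt"
)

type podLifecycleEvent struct {
	ID   string `json:"ID"`   // pod UID
	Type string `json:"Type"` // e.g. ContainerStarted
	Data string `json:"Data"` // container or sandbox ID
}

func main() {
	raw := `{"ID":"0a656e98-1ed0-4b1a-8352-4038844a558a","Type":"ContainerStarted","Data":"76012ceee65b16bc97135746795a4d420efbeb9bbf6b74154f43b93518166985"}`

	var ev podLifecycleEvent
	if err := json.Unmarshal([]byte(raw), &ev); err != nil {
		panic(err)
	}
	fmt.Printf("pod %s: %s (%s...)\n", ev.ID, ev.Type, ev.Data[:12])
}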
pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-mlfr4" event={"ID":"04f33872-609b-4d47-ab31-3315051b1414","Type":"ContainerStarted","Data":"96b6dfacd7ba0bc9346f2f654bf0eb24440de823f33d4dd6eec5386c7c19d281"} Jan 27 18:45:01 crc kubenswrapper[4853]: I0127 18:45:01.290580 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-xnnk5" event={"ID":"6b7fddc4-f171-4c31-9bfb-9ffcce6ea5f0","Type":"ContainerStarted","Data":"64cdfe82cca2d09292766b24fee01301ae83ce5bfe66d3af20f5199ad5128bc1"} Jan 27 18:45:01 crc kubenswrapper[4853]: I0127 18:45:01.296657 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-wnqmk" event={"ID":"b6a41d3b-0671-4105-9f35-4d6c72074c5d","Type":"ContainerStarted","Data":"25d40db8826a3fe720df941b552a94e69a7b623b47e96fa44d32885debe2206f"} Jan 27 18:45:01 crc kubenswrapper[4853]: I0127 18:45:01.298407 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-4kgbm" event={"ID":"98053767-f7fa-4a83-a094-a96482717baf","Type":"ContainerStarted","Data":"8d7dc6266f3cb3fa639006825394aeb0e5acaf56dc4cbbca15752ec55f1e2eb4"} Jan 27 18:45:01 crc kubenswrapper[4853]: I0127 18:45:01.303014 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-hfgqg"] Jan 27 18:45:01 crc kubenswrapper[4853]: I0127 18:45:01.303077 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-nnmnh" event={"ID":"39a64c5e-945d-4be5-a1af-6c8ee6fa8ee0","Type":"ContainerStarted","Data":"c3c06cbd35ebad9ea9d8f4651243081229f14975fb6ed2e1f1dad1139bbd9f49"} Jan 27 18:45:01 crc kubenswrapper[4853]: I0127 18:45:01.304873 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-kmkjx" event={"ID":"bb1a45ce-530f-4492-a7e2-9432e194001d","Type":"ContainerStarted","Data":"25994e10e1fbe3edfd2268ff927fb7d64946d9d88c2fb8505ce3e638f935c274"} Jan 27 18:45:01 crc kubenswrapper[4853]: I0127 18:45:01.306424 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-gp2qn" event={"ID":"68d8bd87-80e1-4c90-8541-367d0a676f73","Type":"ContainerStarted","Data":"5e87dc40644497cac0015b6cf536a6002632f8e0c1b7e6f2b0bd9082d2149575"} Jan 27 18:45:01 crc kubenswrapper[4853]: I0127 18:45:01.307696 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-fhmft" event={"ID":"cb428fe7-0d8c-4f25-b377-880388daf6aa","Type":"ContainerStarted","Data":"2c5a3bf7f345229cfe92e2b6c40307ef332fe5da28e21d584837192c63b3f4ef"} Jan 27 18:45:01 crc kubenswrapper[4853]: I0127 18:45:01.308658 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-ww5mj" event={"ID":"59420df1-93c7-4908-aa3b-3f3c61efdb18","Type":"ContainerStarted","Data":"aa7c482097d02ac7311581a6acbc5b32b2ef1b173529c0242b37d77c8d383f78"} Jan 27 18:45:01 crc kubenswrapper[4853]: I0127 18:45:01.310626 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-kb26l"] Jan 27 18:45:01 crc kubenswrapper[4853]: I0127 18:45:01.312964 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-rm987"] Jan 27 
18:45:01 crc kubenswrapper[4853]: I0127 18:45:01.333339 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 18:45:01 crc kubenswrapper[4853]: E0127 18:45:01.333517 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 18:45:01.833489738 +0000 UTC m=+144.296032621 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 18:45:01 crc kubenswrapper[4853]: I0127 18:45:01.333777 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-npp4j\" (UID: \"a7c9b9f7-1d12-4e77-a47f-8cb601836611\") " pod="openshift-image-registry/image-registry-697d97f7c8-npp4j" Jan 27 18:45:01 crc kubenswrapper[4853]: E0127 18:45:01.335843 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 18:45:01.835828675 +0000 UTC m=+144.298371618 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-npp4j" (UID: "a7c9b9f7-1d12-4e77-a47f-8cb601836611") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 18:45:01 crc kubenswrapper[4853]: I0127 18:45:01.336441 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-zjcpp" Jan 27 18:45:01 crc kubenswrapper[4853]: I0127 18:45:01.399383 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-bxk4b"] Jan 27 18:45:01 crc kubenswrapper[4853]: W0127 18:45:01.427164 4853 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0a683af2_9c78_4c3b_993f_f4b54b815f32.slice/crio-b6def8c75be7a73595aa9488f9e49d09dd019cf35e2fc60c14e54ad2d7e1bb0f WatchSource:0}: Error finding container b6def8c75be7a73595aa9488f9e49d09dd019cf35e2fc60c14e54ad2d7e1bb0f: Status 404 returned error can't find the container with id b6def8c75be7a73595aa9488f9e49d09dd019cf35e2fc60c14e54ad2d7e1bb0f Jan 27 18:45:01 crc kubenswrapper[4853]: I0127 18:45:01.438401 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 18:45:01 crc kubenswrapper[4853]: E0127 18:45:01.438760 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 18:45:01.938731756 +0000 UTC m=+144.401274689 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 18:45:01 crc kubenswrapper[4853]: I0127 18:45:01.439088 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-npp4j\" (UID: \"a7c9b9f7-1d12-4e77-a47f-8cb601836611\") " pod="openshift-image-registry/image-registry-697d97f7c8-npp4j" Jan 27 18:45:01 crc kubenswrapper[4853]: E0127 18:45:01.445672 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 18:45:01.945655066 +0000 UTC m=+144.408198019 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-npp4j" (UID: "a7c9b9f7-1d12-4e77-a47f-8cb601836611") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 18:45:01 crc kubenswrapper[4853]: W0127 18:45:01.454967 4853 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod24efc8ab_a03a_411f_8441_454cae46ede9.slice/crio-29ca65c3bff5e8d5e7d401c9feedaa7ba9bb075cfa11062a5ca422610c6054c8 WatchSource:0}: Error finding container 29ca65c3bff5e8d5e7d401c9feedaa7ba9bb075cfa11062a5ca422610c6054c8: Status 404 returned error can't find the container with id 29ca65c3bff5e8d5e7d401c9feedaa7ba9bb075cfa11062a5ca422610c6054c8
Jan 27 18:45:01 crc kubenswrapper[4853]: I0127 18:45:01.464908 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-znwgm" podStartSLOduration=118.464889592 podStartE2EDuration="1m58.464889592s" podCreationTimestamp="2026-01-27 18:43:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:45:01.462737789 +0000 UTC m=+143.925280672" watchObservedRunningTime="2026-01-27 18:45:01.464889592 +0000 UTC m=+143.927432475"
Jan 27 18:45:01 crc kubenswrapper[4853]: I0127 18:45:01.520975 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-st9cr"]
Jan 27 18:45:01 crc kubenswrapper[4853]: I0127 18:45:01.546356 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
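Each failed volume operation in this log is requeued by nestedpendingoperations with exponential backoff: "No retries permitted until ... (durationBeforeRetry 500ms)" is the first rung, and the delay doubles on repeated failures up to a cap of roughly two minutes in kubelet's sources. A sketch of the same retry pattern, expressed with the public k8s.io/apimachinery wait helpers rather than kubelet's internal exponentialbackoff package:

// retry_backoff.go - analogous sketch of the retry loop behind the
// "No retries permitted until ..." entries; not kubelet's own code.
package main

import (
	"errors"
	"fmt"
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
)

var errDriverNotRegistered = errors.New("driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers")

// mountDevice stands in for the failing MountVolume.MountDevice call.
func mountDevice(registered bool) error {
	if !registered {
		return errDriverNotRegistered
	}
	return nil
}

func main() {
	backoff := wait.Backoff{
		Duration: 500 * time.Millisecond, // matches durationBeforeRetry in the log
		Factor:   2.0,                    // delay doubles per failure
		Steps:    5,
	}
	attempt := 0
	err := wait.ExponentialBackoff(backoff, func() (bool, error) {
		attempt++
		registered := attempt >= 4 // pretend the driver registers after a few attempts
		if err := mountDevice(registered); err != nil {
			fmt.Printf("attempt %d: %v (retrying)\n", attempt, err)
			return false, nil // not done; back off and retry
		}
		return true, nil // done
	})
	fmt.Println("final:", err)
}

The condition func here simulates the driver registering after a few attempts, which is what eventually happens in this log once csi-hostpathplugin-6l6nh comes up.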
Jan 27 18:45:01 crc kubenswrapper[4853]: E0127 18:45:01.546812 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 18:45:02.046796356 +0000 UTC m=+144.509339239 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 18:45:01 crc kubenswrapper[4853]: I0127 18:45:01.648514 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-npp4j\" (UID: \"a7c9b9f7-1d12-4e77-a47f-8cb601836611\") " pod="openshift-image-registry/image-registry-697d97f7c8-npp4j"
Jan 27 18:45:01 crc kubenswrapper[4853]: E0127 18:45:01.649191 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 18:45:02.149170271 +0000 UTC m=+144.611713154 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-npp4j" (UID: "a7c9b9f7-1d12-4e77-a47f-8cb601836611") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 18:45:01 crc kubenswrapper[4853]: I0127 18:45:01.665210 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-8q4sj" podStartSLOduration=118.665184093 podStartE2EDuration="1m58.665184093s" podCreationTimestamp="2026-01-27 18:43:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:45:01.658990154 +0000 UTC m=+144.121533047" watchObservedRunningTime="2026-01-27 18:45:01.665184093 +0000 UTC m=+144.127726976"
Jan 27 18:45:01 crc kubenswrapper[4853]: I0127 18:45:01.665671 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-hscjm" podStartSLOduration=118.665664617 podStartE2EDuration="1m58.665664617s" podCreationTimestamp="2026-01-27 18:43:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:45:01.642459507 +0000 UTC m=+144.105002390" watchObservedRunningTime="2026-01-27 18:45:01.665664617 +0000 UTC m=+144.128207500"
Jan 27 18:45:01 crc kubenswrapper[4853]: I0127 18:45:01.749916 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 27 18:45:01 crc kubenswrapper[4853]: E0127 18:45:01.750579 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8
podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 18:45:02.250554578 +0000 UTC m=+144.713097451 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 18:45:01 crc kubenswrapper[4853]: I0127 18:45:01.750791 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-npp4j\" (UID: \"a7c9b9f7-1d12-4e77-a47f-8cb601836611\") " pod="openshift-image-registry/image-registry-697d97f7c8-npp4j" Jan 27 18:45:01 crc kubenswrapper[4853]: E0127 18:45:01.751323 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 18:45:02.25130814 +0000 UTC m=+144.713851023 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-npp4j" (UID: "a7c9b9f7-1d12-4e77-a47f-8cb601836611") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 18:45:01 crc kubenswrapper[4853]: I0127 18:45:01.757734 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-lr7dh" podStartSLOduration=118.757709935 podStartE2EDuration="1m58.757709935s" podCreationTimestamp="2026-01-27 18:43:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:45:01.706355622 +0000 UTC m=+144.168898505" watchObservedRunningTime="2026-01-27 18:45:01.757709935 +0000 UTC m=+144.220252828" Jan 27 18:45:01 crc kubenswrapper[4853]: I0127 18:45:01.835908 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492325-nbfpk"] Jan 27 18:45:01 crc kubenswrapper[4853]: I0127 18:45:01.852951 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 18:45:01 crc kubenswrapper[4853]: E0127 18:45:01.853313 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 18:45:02.353281634 +0000 UTC m=+144.815824557 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 18:45:01 crc kubenswrapper[4853]: I0127 18:45:01.853484 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-npp4j\" (UID: \"a7c9b9f7-1d12-4e77-a47f-8cb601836611\") " pod="openshift-image-registry/image-registry-697d97f7c8-npp4j" Jan 27 18:45:01 crc kubenswrapper[4853]: E0127 18:45:01.853853 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 18:45:02.35384469 +0000 UTC m=+144.816387573 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-npp4j" (UID: "a7c9b9f7-1d12-4e77-a47f-8cb601836611") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 18:45:01 crc kubenswrapper[4853]: I0127 18:45:01.877538 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-n869z" podStartSLOduration=118.877519654 podStartE2EDuration="1m58.877519654s" podCreationTimestamp="2026-01-27 18:43:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:45:01.876226676 +0000 UTC m=+144.338769569" watchObservedRunningTime="2026-01-27 18:45:01.877519654 +0000 UTC m=+144.340062537" Jan 27 18:45:01 crc kubenswrapper[4853]: I0127 18:45:01.955542 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 18:45:01 crc kubenswrapper[4853]: E0127 18:45:01.955982 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 18:45:02.455968559 +0000 UTC m=+144.918511442 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 18:45:02 crc kubenswrapper[4853]: I0127 18:45:02.001653 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-r2vkh" podStartSLOduration=119.001633857 podStartE2EDuration="1m59.001633857s" podCreationTimestamp="2026-01-27 18:43:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:45:01.958014198 +0000 UTC m=+144.420557091" watchObservedRunningTime="2026-01-27 18:45:02.001633857 +0000 UTC m=+144.464176740"
Jan 27 18:45:02 crc kubenswrapper[4853]: I0127 18:45:02.077630 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-npp4j\" (UID: \"a7c9b9f7-1d12-4e77-a47f-8cb601836611\") " pod="openshift-image-registry/image-registry-697d97f7c8-npp4j"
Jan 27 18:45:02 crc kubenswrapper[4853]: E0127 18:45:02.078197 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 18:45:02.578183277 +0000 UTC m=+145.040726160 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-npp4j" (UID: "a7c9b9f7-1d12-4e77-a47f-8cb601836611") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 18:45:02 crc kubenswrapper[4853]: I0127 18:45:02.179598 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
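The pod_startup_latency_tracker entries report podStartSLOduration as the watch-observed running time minus the pod creation timestamp. For the cluster-samples-operator entry above, 18:45:02.001633857 minus 18:43:03 is exactly 119.001633857s, reported alongside podStartE2EDuration "1m59.001633857s". A short check of that arithmetic, with values copied from the entry:

// slo_duration.go - sketch: reproduce the podStartSLOduration arithmetic.
package main

import (
	"fmt"
	"time"
)

func main() {
	// Layout with optional fractional seconds, matching the log's timestamp format.
	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"

	created, err := time.Parse(layout, "2026-01-27 18:43:03 +0000 UTC")
	if err != nil {
		panic(err)
	}
	observed, err := time.Parse(layout, "2026-01-27 18:45:02.001633857 +0000 UTC")
	if err != nil {
		panic(err)
	}

	d := observed.Sub(created)
	fmt.Println("podStartE2EDuration:", d)                  // 1m59.001633857s
	fmt.Printf("podStartSLOduration: %.9f\n", d.Seconds()) // 119.001633857
}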
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 18:45:02 crc kubenswrapper[4853]: I0127 18:45:02.281850 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-npp4j\" (UID: \"a7c9b9f7-1d12-4e77-a47f-8cb601836611\") " pod="openshift-image-registry/image-registry-697d97f7c8-npp4j" Jan 27 18:45:02 crc kubenswrapper[4853]: E0127 18:45:02.282309 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 18:45:02.78229024 +0000 UTC m=+145.244833133 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-npp4j" (UID: "a7c9b9f7-1d12-4e77-a47f-8cb601836611") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 18:45:02 crc kubenswrapper[4853]: I0127 18:45:02.338856 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-xnnk5" event={"ID":"6b7fddc4-f171-4c31-9bfb-9ffcce6ea5f0","Type":"ContainerStarted","Data":"571f0ff37dbba12d66598085d01e463a203ad1119e9dc02c2f1c80fa189bfa52"} Jan 27 18:45:02 crc kubenswrapper[4853]: I0127 18:45:02.354264 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-xnnk5" podStartSLOduration=119.354232807 podStartE2EDuration="1m59.354232807s" podCreationTimestamp="2026-01-27 18:43:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:45:02.353808465 +0000 UTC m=+144.816351348" watchObservedRunningTime="2026-01-27 18:45:02.354232807 +0000 UTC m=+144.816775700" Jan 27 18:45:02 crc kubenswrapper[4853]: I0127 18:45:02.361407 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-rrwhm" event={"ID":"93b9b4bd-fa71-40e1-a4f6-16099dd2c84c","Type":"ContainerStarted","Data":"098dbf6924d1c031b24fe6e421daa5cf87403ba0b938a2cdb2b47629212a2cc5"} Jan 27 18:45:02 crc kubenswrapper[4853]: I0127 18:45:02.362960 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-rrwhm" Jan 27 18:45:02 crc kubenswrapper[4853]: I0127 18:45:02.372395 4853 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-rrwhm container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.29:8443/healthz\": dial tcp 10.217.0.29:8443: connect: connection refused" start-of-body= Jan 27 18:45:02 
crc kubenswrapper[4853]: I0127 18:45:02.372460 4853 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-rrwhm" podUID="93b9b4bd-fa71-40e1-a4f6-16099dd2c84c" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.29:8443/healthz\": dial tcp 10.217.0.29:8443: connect: connection refused" Jan 27 18:45:02 crc kubenswrapper[4853]: I0127 18:45:02.388420 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 18:45:02 crc kubenswrapper[4853]: E0127 18:45:02.388777 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 18:45:02.888761064 +0000 UTC m=+145.351303937 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 18:45:02 crc kubenswrapper[4853]: I0127 18:45:02.394195 4853 generic.go:334] "Generic (PLEG): container finished" podID="f06b685a-8035-4bac-88d3-d092b6df21e4" containerID="27e814042106e330de3b76a5c78402dcdbad39ba2832d05767bd2c4a463c39d3" exitCode=0 Jan 27 18:45:02 crc kubenswrapper[4853]: I0127 18:45:02.395030 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-74rmt" event={"ID":"f06b685a-8035-4bac-88d3-d092b6df21e4","Type":"ContainerDied","Data":"27e814042106e330de3b76a5c78402dcdbad39ba2832d05767bd2c4a463c39d3"} Jan 27 18:45:02 crc kubenswrapper[4853]: I0127 18:45:02.436461 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-d6bzk" event={"ID":"13bcd7a7-769a-4324-964a-874eb1fbbd1e","Type":"ContainerStarted","Data":"086858db67efac87e7b18e467c6e8596612e112fd49cee592ba283c736f4343e"} Jan 27 18:45:02 crc kubenswrapper[4853]: I0127 18:45:02.437103 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-d6bzk" Jan 27 18:45:02 crc kubenswrapper[4853]: I0127 18:45:02.439660 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-mlfr4" event={"ID":"04f33872-609b-4d47-ab31-3315051b1414","Type":"ContainerStarted","Data":"c271eb133f6db0ce35e57430058e1ad8a283087c70d7a45d2391a052668bd9f1"} Jan 27 18:45:02 crc kubenswrapper[4853]: I0127 18:45:02.452475 4853 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-d6bzk container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.26:8443/healthz\": dial tcp 10.217.0.26:8443: connect: connection refused" start-of-body= Jan 27 18:45:02 crc kubenswrapper[4853]: I0127 
18:45:02.452542 4853 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-d6bzk" podUID="13bcd7a7-769a-4324-964a-874eb1fbbd1e" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.26:8443/healthz\": dial tcp 10.217.0.26:8443: connect: connection refused" Jan 27 18:45:02 crc kubenswrapper[4853]: I0127 18:45:02.468104 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-rrwhm" podStartSLOduration=119.468083274 podStartE2EDuration="1m59.468083274s" podCreationTimestamp="2026-01-27 18:43:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:45:02.414870148 +0000 UTC m=+144.877413041" watchObservedRunningTime="2026-01-27 18:45:02.468083274 +0000 UTC m=+144.930626157" Jan 27 18:45:02 crc kubenswrapper[4853]: I0127 18:45:02.473560 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29492310-22ft5" event={"ID":"c6ab690f-7c72-4a14-ab7c-90a0d63699a6","Type":"ContainerStarted","Data":"c39bd4e052f569c984a2afdc0d3df1dcd26b7cf317f64858404761e3ac4e8cf8"} Jan 27 18:45:02 crc kubenswrapper[4853]: I0127 18:45:02.473808 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-operator-lifecycle-manager/collect-profiles-29492310-22ft5" podUID="c6ab690f-7c72-4a14-ab7c-90a0d63699a6" containerName="collect-profiles" containerID="cri-o://c39bd4e052f569c984a2afdc0d3df1dcd26b7cf317f64858404761e3ac4e8cf8" gracePeriod=30 Jan 27 18:45:02 crc kubenswrapper[4853]: I0127 18:45:02.490710 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-npp4j\" (UID: \"a7c9b9f7-1d12-4e77-a47f-8cb601836611\") " pod="openshift-image-registry/image-registry-697d97f7c8-npp4j" Jan 27 18:45:02 crc kubenswrapper[4853]: E0127 18:45:02.492580 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 18:45:02.992562431 +0000 UTC m=+145.455105314 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-npp4j" (UID: "a7c9b9f7-1d12-4e77-a47f-8cb601836611") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 18:45:02 crc kubenswrapper[4853]: I0127 18:45:02.527476 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-d6bzk" podStartSLOduration=119.527452658 podStartE2EDuration="1m59.527452658s" podCreationTimestamp="2026-01-27 18:43:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:45:02.519690254 +0000 UTC m=+144.982233137" watchObservedRunningTime="2026-01-27 18:45:02.527452658 +0000 UTC m=+144.989995541" Jan 27 18:45:02 crc kubenswrapper[4853]: I0127 18:45:02.528473 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-zpn7p" event={"ID":"2f81e143-f570-4ea2-837d-f9a1dc205d9c","Type":"ContainerStarted","Data":"241a35887a96c06706d075781a2a88e0a7d7b9c33e6a426652e75966fa72e616"} Jan 27 18:45:02 crc kubenswrapper[4853]: I0127 18:45:02.565710 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-rm987" event={"ID":"24efc8ab-a03a-411f-8441-454cae46ede9","Type":"ContainerStarted","Data":"29ca65c3bff5e8d5e7d401c9feedaa7ba9bb075cfa11062a5ca422610c6054c8"} Jan 27 18:45:02 crc kubenswrapper[4853]: I0127 18:45:02.590666 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-st9cr" event={"ID":"b0123ef8-ad21-45f5-b5d8-e491c9aa10dd","Type":"ContainerStarted","Data":"fdc4553df608ef8d62074a72b8da59bfd9e7f8d629aee33fa7d7c25cb7a5f04b"} Jan 27 18:45:02 crc kubenswrapper[4853]: I0127 18:45:02.592469 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 18:45:02 crc kubenswrapper[4853]: E0127 18:45:02.593403 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 18:45:03.093386492 +0000 UTC m=+145.555929375 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 18:45:02 crc kubenswrapper[4853]: I0127 18:45:02.598826 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29492310-22ft5" podStartSLOduration=119.598807308 podStartE2EDuration="1m59.598807308s" podCreationTimestamp="2026-01-27 18:43:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:45:02.597915283 +0000 UTC m=+145.060458176" watchObservedRunningTime="2026-01-27 18:45:02.598807308 +0000 UTC m=+145.061350201" Jan 27 18:45:02 crc kubenswrapper[4853]: I0127 18:45:02.604874 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-mlfr4" podStartSLOduration=119.604856733 podStartE2EDuration="1m59.604856733s" podCreationTimestamp="2026-01-27 18:43:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:45:02.560542334 +0000 UTC m=+145.023085227" watchObservedRunningTime="2026-01-27 18:45:02.604856733 +0000 UTC m=+145.067399616" Jan 27 18:45:02 crc kubenswrapper[4853]: I0127 18:45:02.610994 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29492325-nbfpk" event={"ID":"41434dfd-3fc3-4184-a911-506620889ebe","Type":"ContainerStarted","Data":"70a5250002bbbf5d31662f298d2c7365b6767f01fa860c62f6e69a519d5f1b5c"} Jan 27 18:45:02 crc kubenswrapper[4853]: I0127 18:45:02.621332 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-lddqg" event={"ID":"70bdd69e-31db-4a48-811b-0f665647441a","Type":"ContainerStarted","Data":"008d1fb2c4d9c91a83ebdb175a6031f0e57e91f7b8c986c243c6195e088cf761"} Jan 27 18:45:02 crc kubenswrapper[4853]: I0127 18:45:02.645581 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-lddqg" podStartSLOduration=6.645561638 podStartE2EDuration="6.645561638s" podCreationTimestamp="2026-01-27 18:44:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:45:02.640802441 +0000 UTC m=+145.103345334" watchObservedRunningTime="2026-01-27 18:45:02.645561638 +0000 UTC m=+145.108104521" Jan 27 18:45:02 crc kubenswrapper[4853]: I0127 18:45:02.647202 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-zpn7p" podStartSLOduration=119.647191715 podStartE2EDuration="1m59.647191715s" podCreationTimestamp="2026-01-27 18:43:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:45:02.619788234 +0000 UTC m=+145.082331117" watchObservedRunningTime="2026-01-27 18:45:02.647191715 +0000 UTC m=+145.109734598" Jan 27 
18:45:02 crc kubenswrapper[4853]: I0127 18:45:02.657737 4853 generic.go:334] "Generic (PLEG): container finished" podID="6bd6880d-6581-4cca-8eb8-9acb80689e9e" containerID="94eb47ab8380e42a3360feb7f8617b63edaa6f5ad6f28c4655b6c977351330e7" exitCode=0 Jan 27 18:45:02 crc kubenswrapper[4853]: I0127 18:45:02.657820 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-tqzdv" event={"ID":"6bd6880d-6581-4cca-8eb8-9acb80689e9e","Type":"ContainerDied","Data":"94eb47ab8380e42a3360feb7f8617b63edaa6f5ad6f28c4655b6c977351330e7"} Jan 27 18:45:02 crc kubenswrapper[4853]: I0127 18:45:02.672566 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-hfgqg" event={"ID":"06a480d9-6aba-4daa-8eb3-7d5e93beeef0","Type":"ContainerStarted","Data":"46da0b318df5c076eaab30de266a33804a05dcba53336dcd390032f361ef90e0"} Jan 27 18:45:02 crc kubenswrapper[4853]: I0127 18:45:02.680945 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-kmkjx" event={"ID":"bb1a45ce-530f-4492-a7e2-9432e194001d","Type":"ContainerStarted","Data":"d92883203781288a7e362decf695c3fbe3135abb7ea430bb41ede69a4dada9bf"} Jan 27 18:45:02 crc kubenswrapper[4853]: I0127 18:45:02.697016 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-npp4j\" (UID: \"a7c9b9f7-1d12-4e77-a47f-8cb601836611\") " pod="openshift-image-registry/image-registry-697d97f7c8-npp4j" Jan 27 18:45:02 crc kubenswrapper[4853]: E0127 18:45:02.698721 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 18:45:03.198704173 +0000 UTC m=+145.661247136 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-npp4j" (UID: "a7c9b9f7-1d12-4e77-a47f-8cb601836611") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 18:45:02 crc kubenswrapper[4853]: I0127 18:45:02.705359 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-4mwhw" event={"ID":"93cfa4e6-9e7c-4c17-a30f-e8d15f452be7","Type":"ContainerStarted","Data":"d0cef16be241d40a1d79594927089ad0716d9da0a04f35d1099cfc3fbdf4b5bf"} Jan 27 18:45:02 crc kubenswrapper[4853]: I0127 18:45:02.717018 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-4mwhw" Jan 27 18:45:02 crc kubenswrapper[4853]: I0127 18:45:02.732316 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-qpjt6" event={"ID":"80fdceac-6136-4c48-a96f-3243f5416b10","Type":"ContainerStarted","Data":"adb2eebd6791c427928a46ff5e98e4e28e634dc83c340b3b235a60bd21319317"} Jan 27 18:45:02 crc kubenswrapper[4853]: I0127 18:45:02.739973 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-4kgbm" event={"ID":"98053767-f7fa-4a83-a094-a96482717baf","Type":"ContainerStarted","Data":"1cb2b78cbdf3d070894933e053666b90eb205a10909c0720d61175f64fd5ee25"} Jan 27 18:45:02 crc kubenswrapper[4853]: I0127 18:45:02.769149 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-4kgbm" podStartSLOduration=118.769130546 podStartE2EDuration="1m58.769130546s" podCreationTimestamp="2026-01-27 18:43:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:45:02.767395786 +0000 UTC m=+145.229938679" watchObservedRunningTime="2026-01-27 18:45:02.769130546 +0000 UTC m=+145.231673429" Jan 27 18:45:02 crc kubenswrapper[4853]: I0127 18:45:02.770069 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-4mwhw" podStartSLOduration=119.770062653 podStartE2EDuration="1m59.770062653s" podCreationTimestamp="2026-01-27 18:43:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:45:02.731597052 +0000 UTC m=+145.194139935" watchObservedRunningTime="2026-01-27 18:45:02.770062653 +0000 UTC m=+145.232605526" Jan 27 18:45:02 crc kubenswrapper[4853]: I0127 18:45:02.798468 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-qbgzv" event={"ID":"0a683af2-9c78-4c3b-993f-f4b54b815f32","Type":"ContainerStarted","Data":"b6def8c75be7a73595aa9488f9e49d09dd019cf35e2fc60c14e54ad2d7e1bb0f"} Jan 27 18:45:02 crc kubenswrapper[4853]: I0127 18:45:02.814501 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-bxk4b" 
event={"ID":"86af8168-4922-4d5d-adee-38d4d88d55ca","Type":"ContainerStarted","Data":"98cb5a80a2f9c010b9faaffc231bb8e25683f3f8342523a860b7c4ec5520fc9a"} Jan 27 18:45:02 crc kubenswrapper[4853]: I0127 18:45:02.815538 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-bxk4b" Jan 27 18:45:02 crc kubenswrapper[4853]: I0127 18:45:02.819555 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 18:45:02 crc kubenswrapper[4853]: E0127 18:45:02.819899 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 18:45:03.319867291 +0000 UTC m=+145.782410184 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 18:45:02 crc kubenswrapper[4853]: I0127 18:45:02.819975 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-npp4j\" (UID: \"a7c9b9f7-1d12-4e77-a47f-8cb601836611\") " pod="openshift-image-registry/image-registry-697d97f7c8-npp4j" Jan 27 18:45:02 crc kubenswrapper[4853]: E0127 18:45:02.820857 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 18:45:03.320844149 +0000 UTC m=+145.783387032 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-npp4j" (UID: "a7c9b9f7-1d12-4e77-a47f-8cb601836611") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 18:45:02 crc kubenswrapper[4853]: I0127 18:45:02.842459 4853 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-bxk4b container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.27:8080/healthz\": dial tcp 10.217.0.27:8080: connect: connection refused" start-of-body= Jan 27 18:45:02 crc kubenswrapper[4853]: I0127 18:45:02.842519 4853 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-bxk4b" podUID="86af8168-4922-4d5d-adee-38d4d88d55ca" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.27:8080/healthz\": dial tcp 10.217.0.27:8080: connect: connection refused" Jan 27 18:45:02 crc kubenswrapper[4853]: I0127 18:45:02.882953 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-bxk4b" podStartSLOduration=119.882931542 podStartE2EDuration="1m59.882931542s" podCreationTimestamp="2026-01-27 18:43:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:45:02.858827626 +0000 UTC m=+145.321370509" watchObservedRunningTime="2026-01-27 18:45:02.882931542 +0000 UTC m=+145.345474425" Jan 27 18:45:02 crc kubenswrapper[4853]: I0127 18:45:02.896282 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-9vd4d" event={"ID":"2bd6e097-af15-41a1-9ab2-a4e79adef815","Type":"ContainerStarted","Data":"2d614cf47ff72c2b9eb8ab143b180ed9addf2b99204dddbfab93868fc99f4be8"} Jan 27 18:45:02 crc kubenswrapper[4853]: I0127 18:45:02.909339 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-nnmnh" event={"ID":"39a64c5e-945d-4be5-a1af-6c8ee6fa8ee0","Type":"ContainerStarted","Data":"73471c12213a490ff173153d3f075e7b43f1019ffcd713c79c8aed0a28307642"} Jan 27 18:45:02 crc kubenswrapper[4853]: I0127 18:45:02.921620 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 18:45:02 crc kubenswrapper[4853]: E0127 18:45:02.921820 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 18:45:03.421789493 +0000 UTC m=+145.884332386 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 18:45:02 crc kubenswrapper[4853]: I0127 18:45:02.921971 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-npp4j\" (UID: \"a7c9b9f7-1d12-4e77-a47f-8cb601836611\") " pod="openshift-image-registry/image-registry-697d97f7c8-npp4j" Jan 27 18:45:02 crc kubenswrapper[4853]: E0127 18:45:02.923908 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 18:45:03.423894084 +0000 UTC m=+145.886436977 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-npp4j" (UID: "a7c9b9f7-1d12-4e77-a47f-8cb601836611") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 18:45:02 crc kubenswrapper[4853]: I0127 18:45:02.932666 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-9gqxt" event={"ID":"be5a36ff-f665-4468-b7ae-8a443f0164e8","Type":"ContainerStarted","Data":"3a1eea31544ed38d04aadd7ae4e33fabe20fb64d86520a64c7de7ed16c1ccac8"} Jan 27 18:45:02 crc kubenswrapper[4853]: I0127 18:45:02.933600 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-9gqxt" Jan 27 18:45:02 crc kubenswrapper[4853]: I0127 18:45:02.934702 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-9vd4d" podStartSLOduration=119.934687746 podStartE2EDuration="1m59.934687746s" podCreationTimestamp="2026-01-27 18:43:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:45:02.922749471 +0000 UTC m=+145.385292374" watchObservedRunningTime="2026-01-27 18:45:02.934687746 +0000 UTC m=+145.397230629" Jan 27 18:45:02 crc kubenswrapper[4853]: I0127 18:45:02.951029 4853 patch_prober.go:28] interesting pod/downloads-7954f5f757-9gqxt container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" start-of-body= Jan 27 18:45:02 crc kubenswrapper[4853]: I0127 18:45:02.951108 4853 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-9gqxt" podUID="be5a36ff-f665-4468-b7ae-8a443f0164e8" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" Jan 27 18:45:02 crc kubenswrapper[4853]: I0127 
18:45:02.979254 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-gp2qn" event={"ID":"68d8bd87-80e1-4c90-8541-367d0a676f73","Type":"ContainerStarted","Data":"6fc934f0d3057d8fc6edecf3261a470fc32a6bb86f22dd66c1064d0398e9da65"} Jan 27 18:45:02 crc kubenswrapper[4853]: I0127 18:45:02.991266 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-9gqxt" podStartSLOduration=119.991245769 podStartE2EDuration="1m59.991245769s" podCreationTimestamp="2026-01-27 18:43:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:45:02.990049424 +0000 UTC m=+145.452592297" watchObservedRunningTime="2026-01-27 18:45:02.991245769 +0000 UTC m=+145.453788652" Jan 27 18:45:02 crc kubenswrapper[4853]: I0127 18:45:02.991376 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-nnmnh" podStartSLOduration=119.991369912 podStartE2EDuration="1m59.991369912s" podCreationTimestamp="2026-01-27 18:43:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:45:02.946478196 +0000 UTC m=+145.409021079" watchObservedRunningTime="2026-01-27 18:45:02.991369912 +0000 UTC m=+145.453912795" Jan 27 18:45:03 crc kubenswrapper[4853]: I0127 18:45:03.003818 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-kb26l" event={"ID":"6e624c88-c6c1-4c35-985b-264173a9abcd","Type":"ContainerStarted","Data":"c1177332ed80c9af8e63362afbda95c6d2d6b0477c73b01728b0976da37e4c0a"} Jan 27 18:45:03 crc kubenswrapper[4853]: I0127 18:45:03.027702 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 18:45:03 crc kubenswrapper[4853]: E0127 18:45:03.027942 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 18:45:03.527901507 +0000 UTC m=+145.990444390 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 18:45:03 crc kubenswrapper[4853]: I0127 18:45:03.039453 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-npp4j\" (UID: \"a7c9b9f7-1d12-4e77-a47f-8cb601836611\") " pod="openshift-image-registry/image-registry-697d97f7c8-npp4j" Jan 27 18:45:03 crc kubenswrapper[4853]: E0127 18:45:03.039829 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 18:45:03.539812161 +0000 UTC m=+146.002355044 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-npp4j" (UID: "a7c9b9f7-1d12-4e77-a47f-8cb601836611") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 18:45:03 crc kubenswrapper[4853]: I0127 18:45:03.072970 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-ltskb" podStartSLOduration=120.072947498 podStartE2EDuration="2m0.072947498s" podCreationTimestamp="2026-01-27 18:43:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:45:03.061721974 +0000 UTC m=+145.524264857" watchObservedRunningTime="2026-01-27 18:45:03.072947498 +0000 UTC m=+145.535490391" Jan 27 18:45:03 crc kubenswrapper[4853]: I0127 18:45:03.118390 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-kb26l" podStartSLOduration=120.118365549 podStartE2EDuration="2m0.118365549s" podCreationTimestamp="2026-01-27 18:43:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:45:03.115435964 +0000 UTC m=+145.577978847" watchObservedRunningTime="2026-01-27 18:45:03.118365549 +0000 UTC m=+145.580908432" Jan 27 18:45:03 crc kubenswrapper[4853]: I0127 18:45:03.142364 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 18:45:03 crc kubenswrapper[4853]: E0127 18:45:03.144451 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 
podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 18:45:03.644429452 +0000 UTC m=+146.106972335 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 18:45:03 crc kubenswrapper[4853]: I0127 18:45:03.194002 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-lifecycle-manager_collect-profiles-29492310-22ft5_c6ab690f-7c72-4a14-ab7c-90a0d63699a6/collect-profiles/0.log" Jan 27 18:45:03 crc kubenswrapper[4853]: I0127 18:45:03.194072 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492310-22ft5" Jan 27 18:45:03 crc kubenswrapper[4853]: I0127 18:45:03.245429 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-npp4j\" (UID: \"a7c9b9f7-1d12-4e77-a47f-8cb601836611\") " pod="openshift-image-registry/image-registry-697d97f7c8-npp4j" Jan 27 18:45:03 crc kubenswrapper[4853]: E0127 18:45:03.245848 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 18:45:03.745832599 +0000 UTC m=+146.208375482 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-npp4j" (UID: "a7c9b9f7-1d12-4e77-a47f-8cb601836611") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 18:45:03 crc kubenswrapper[4853]: I0127 18:45:03.346901 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c6ab690f-7c72-4a14-ab7c-90a0d63699a6-secret-volume\") pod \"c6ab690f-7c72-4a14-ab7c-90a0d63699a6\" (UID: \"c6ab690f-7c72-4a14-ab7c-90a0d63699a6\") " Jan 27 18:45:03 crc kubenswrapper[4853]: I0127 18:45:03.347315 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c5hg7\" (UniqueName: \"kubernetes.io/projected/c6ab690f-7c72-4a14-ab7c-90a0d63699a6-kube-api-access-c5hg7\") pod \"c6ab690f-7c72-4a14-ab7c-90a0d63699a6\" (UID: \"c6ab690f-7c72-4a14-ab7c-90a0d63699a6\") " Jan 27 18:45:03 crc kubenswrapper[4853]: I0127 18:45:03.347429 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c6ab690f-7c72-4a14-ab7c-90a0d63699a6-config-volume\") pod \"c6ab690f-7c72-4a14-ab7c-90a0d63699a6\" (UID: \"c6ab690f-7c72-4a14-ab7c-90a0d63699a6\") " Jan 27 18:45:03 crc kubenswrapper[4853]: I0127 18:45:03.347554 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 18:45:03 crc kubenswrapper[4853]: E0127 18:45:03.347923 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 18:45:03.847904676 +0000 UTC m=+146.310447559 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 18:45:03 crc kubenswrapper[4853]: I0127 18:45:03.349562 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c6ab690f-7c72-4a14-ab7c-90a0d63699a6-config-volume" (OuterVolumeSpecName: "config-volume") pod "c6ab690f-7c72-4a14-ab7c-90a0d63699a6" (UID: "c6ab690f-7c72-4a14-ab7c-90a0d63699a6"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:45:03 crc kubenswrapper[4853]: I0127 18:45:03.358354 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c6ab690f-7c72-4a14-ab7c-90a0d63699a6-kube-api-access-c5hg7" (OuterVolumeSpecName: "kube-api-access-c5hg7") pod "c6ab690f-7c72-4a14-ab7c-90a0d63699a6" (UID: "c6ab690f-7c72-4a14-ab7c-90a0d63699a6"). InnerVolumeSpecName "kube-api-access-c5hg7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:45:03 crc kubenswrapper[4853]: I0127 18:45:03.360222 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c6ab690f-7c72-4a14-ab7c-90a0d63699a6-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "c6ab690f-7c72-4a14-ab7c-90a0d63699a6" (UID: "c6ab690f-7c72-4a14-ab7c-90a0d63699a6"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:45:03 crc kubenswrapper[4853]: I0127 18:45:03.450834 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-npp4j\" (UID: \"a7c9b9f7-1d12-4e77-a47f-8cb601836611\") " pod="openshift-image-registry/image-registry-697d97f7c8-npp4j" Jan 27 18:45:03 crc kubenswrapper[4853]: I0127 18:45:03.450985 4853 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c6ab690f-7c72-4a14-ab7c-90a0d63699a6-config-volume\") on node \"crc\" DevicePath \"\"" Jan 27 18:45:03 crc kubenswrapper[4853]: I0127 18:45:03.451001 4853 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c6ab690f-7c72-4a14-ab7c-90a0d63699a6-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 27 18:45:03 crc kubenswrapper[4853]: I0127 18:45:03.451014 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c5hg7\" (UniqueName: \"kubernetes.io/projected/c6ab690f-7c72-4a14-ab7c-90a0d63699a6-kube-api-access-c5hg7\") on node \"crc\" DevicePath \"\"" Jan 27 18:45:03 crc kubenswrapper[4853]: E0127 18:45:03.451301 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 18:45:03.951284451 +0000 UTC m=+146.413827334 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-npp4j" (UID: "a7c9b9f7-1d12-4e77-a47f-8cb601836611") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 18:45:03 crc kubenswrapper[4853]: I0127 18:45:03.552318 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 18:45:03 crc kubenswrapper[4853]: E0127 18:45:03.552513 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 18:45:04.052490123 +0000 UTC m=+146.515032996 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 18:45:03 crc kubenswrapper[4853]: I0127 18:45:03.552570 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-npp4j\" (UID: \"a7c9b9f7-1d12-4e77-a47f-8cb601836611\") " pod="openshift-image-registry/image-registry-697d97f7c8-npp4j" Jan 27 18:45:03 crc kubenswrapper[4853]: E0127 18:45:03.552898 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 18:45:04.052885404 +0000 UTC m=+146.515428287 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-npp4j" (UID: "a7c9b9f7-1d12-4e77-a47f-8cb601836611") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 18:45:03 crc kubenswrapper[4853]: I0127 18:45:03.657790 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 18:45:03 crc kubenswrapper[4853]: E0127 18:45:03.657981 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 18:45:04.157949028 +0000 UTC m=+146.620491911 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 18:45:03 crc kubenswrapper[4853]: I0127 18:45:03.658590 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-npp4j\" (UID: \"a7c9b9f7-1d12-4e77-a47f-8cb601836611\") " pod="openshift-image-registry/image-registry-697d97f7c8-npp4j" Jan 27 18:45:03 crc kubenswrapper[4853]: E0127 18:45:03.658898 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 18:45:04.158879564 +0000 UTC m=+146.621422447 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-npp4j" (UID: "a7c9b9f7-1d12-4e77-a47f-8cb601836611") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 18:45:03 crc kubenswrapper[4853]: I0127 18:45:03.759936 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 18:45:03 crc kubenswrapper[4853]: E0127 18:45:03.760079 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 18:45:04.260057786 +0000 UTC m=+146.722600669 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 18:45:03 crc kubenswrapper[4853]: I0127 18:45:03.760266 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-npp4j\" (UID: \"a7c9b9f7-1d12-4e77-a47f-8cb601836611\") " pod="openshift-image-registry/image-registry-697d97f7c8-npp4j" Jan 27 18:45:03 crc kubenswrapper[4853]: E0127 18:45:03.760587 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 18:45:04.260577551 +0000 UTC m=+146.723120434 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-npp4j" (UID: "a7c9b9f7-1d12-4e77-a47f-8cb601836611") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 18:45:03 crc kubenswrapper[4853]: I0127 18:45:03.861876 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 18:45:03 crc kubenswrapper[4853]: E0127 18:45:03.862236 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 18:45:04.362204665 +0000 UTC m=+146.824747548 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 18:45:03 crc kubenswrapper[4853]: I0127 18:45:03.862514 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-npp4j\" (UID: \"a7c9b9f7-1d12-4e77-a47f-8cb601836611\") " pod="openshift-image-registry/image-registry-697d97f7c8-npp4j" Jan 27 18:45:03 crc kubenswrapper[4853]: E0127 18:45:03.862901 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 18:45:04.362889655 +0000 UTC m=+146.825432588 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-npp4j" (UID: "a7c9b9f7-1d12-4e77-a47f-8cb601836611") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 18:45:03 crc kubenswrapper[4853]: I0127 18:45:03.963956 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 18:45:03 crc kubenswrapper[4853]: E0127 18:45:03.964200 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 18:45:04.464173519 +0000 UTC m=+146.926716402 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 18:45:03 crc kubenswrapper[4853]: I0127 18:45:03.964526 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-npp4j\" (UID: \"a7c9b9f7-1d12-4e77-a47f-8cb601836611\") " pod="openshift-image-registry/image-registry-697d97f7c8-npp4j" Jan 27 18:45:03 crc kubenswrapper[4853]: E0127 18:45:03.964939 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 18:45:04.46492363 +0000 UTC m=+146.927466513 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-npp4j" (UID: "a7c9b9f7-1d12-4e77-a47f-8cb601836611") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 18:45:04 crc kubenswrapper[4853]: I0127 18:45:04.018802 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-t4wdl" event={"ID":"5244d6c6-721d-44cf-8175-48408b3780b0","Type":"ContainerStarted","Data":"6e2e85b08b273dde6f44b2e2dab7231be50b72b3d48e41a7a0080b03c546e70e"} Jan 27 18:45:04 crc kubenswrapper[4853]: I0127 18:45:04.021090 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-74rmt" event={"ID":"f06b685a-8035-4bac-88d3-d092b6df21e4","Type":"ContainerStarted","Data":"21f2af0312c167b62dea6a2b37ea3d5c0045f0e9b0d8db5d6a22d6e2589a47aa"} Jan 27 18:45:04 crc kubenswrapper[4853]: I0127 18:45:04.023176 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-qbgzv" event={"ID":"0a683af2-9c78-4c3b-993f-f4b54b815f32","Type":"ContainerStarted","Data":"ea1a26db053f9f226ff03dfa128cae849e4201c0e1c02cf73b84f43ac4d20f89"} Jan 27 18:45:04 crc kubenswrapper[4853]: I0127 18:45:04.023207 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-qbgzv" event={"ID":"0a683af2-9c78-4c3b-993f-f4b54b815f32","Type":"ContainerStarted","Data":"ce5c53a361e35455f5b97c2958cc94faaa7d5e254258fb8354602f6d60495fb1"} Jan 27 18:45:04 crc kubenswrapper[4853]: I0127 18:45:04.023709 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-qbgzv" Jan 27 18:45:04 crc kubenswrapper[4853]: I0127 18:45:04.025305 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-cnjvr" event={"ID":"06c96982-0b5d-4214-9d42-1b06ff771366","Type":"ContainerStarted","Data":"60b96ffba3aae25d90c4b458d95bbc07ca056d29f6de8e75bf6a1fab5a6641e4"} Jan 27 18:45:04 crc kubenswrapper[4853]: I0127 18:45:04.027253 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-st9cr" event={"ID":"b0123ef8-ad21-45f5-b5d8-e491c9aa10dd","Type":"ContainerStarted","Data":"1f39e8b389e1c5234ec7fb0d1c129fe459eb55e249e2c811238bdb972527b060"} Jan 27 18:45:04 crc kubenswrapper[4853]: I0127 18:45:04.027944 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-st9cr" Jan 27 18:45:04 crc kubenswrapper[4853]: I0127 18:45:04.029631 4853 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-st9cr container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.25:5443/healthz\": dial tcp 10.217.0.25:5443: connect: connection refused" start-of-body= Jan 27 18:45:04 crc kubenswrapper[4853]: I0127 18:45:04.029676 4853 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-st9cr" podUID="b0123ef8-ad21-45f5-b5d8-e491c9aa10dd" 
containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.25:5443/healthz\": dial tcp 10.217.0.25:5443: connect: connection refused" Jan 27 18:45:04 crc kubenswrapper[4853]: I0127 18:45:04.030907 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-bxk4b" event={"ID":"86af8168-4922-4d5d-adee-38d4d88d55ca","Type":"ContainerStarted","Data":"3295b1a762b2d1b48fb2503245be19cad6111ea21dd6d3ca23f068622688bff8"} Jan 27 18:45:04 crc kubenswrapper[4853]: I0127 18:45:04.032131 4853 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-bxk4b container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.27:8080/healthz\": dial tcp 10.217.0.27:8080: connect: connection refused" start-of-body= Jan 27 18:45:04 crc kubenswrapper[4853]: I0127 18:45:04.032172 4853 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-bxk4b" podUID="86af8168-4922-4d5d-adee-38d4d88d55ca" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.27:8080/healthz\": dial tcp 10.217.0.27:8080: connect: connection refused" Jan 27 18:45:04 crc kubenswrapper[4853]: I0127 18:45:04.034014 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-ww5mj" event={"ID":"59420df1-93c7-4908-aa3b-3f3c61efdb18","Type":"ContainerStarted","Data":"b6e642206b2c14919a77d9795a51ebde1705c19a382cd98df473a7de88103258"} Jan 27 18:45:04 crc kubenswrapper[4853]: I0127 18:45:04.034050 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-ww5mj" event={"ID":"59420df1-93c7-4908-aa3b-3f3c61efdb18","Type":"ContainerStarted","Data":"12535ac3e951f5e3d92bf5d4b3725ff3db037bdfdbaa621d05d38165fca18ef2"} Jan 27 18:45:04 crc kubenswrapper[4853]: I0127 18:45:04.035745 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-gp2qn" event={"ID":"68d8bd87-80e1-4c90-8541-367d0a676f73","Type":"ContainerStarted","Data":"c066adf4e310fd759db98de5da7dcdcab10989de938e53edd6f8aa0e97f7cd8f"} Jan 27 18:45:04 crc kubenswrapper[4853]: I0127 18:45:04.037534 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-wnqmk" event={"ID":"b6a41d3b-0671-4105-9f35-4d6c72074c5d","Type":"ContainerStarted","Data":"a7368fa54df26d43a09d80d5427cd5d103406ef87195bb9c371f1831b794b398"} Jan 27 18:45:04 crc kubenswrapper[4853]: I0127 18:45:04.039527 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-hfgqg" event={"ID":"06a480d9-6aba-4daa-8eb3-7d5e93beeef0","Type":"ContainerStarted","Data":"df8b8cc978fcbcb25e4b8ee0618f4e184bd5f57514cfb8c7efa55a2e654af413"} Jan 27 18:45:04 crc kubenswrapper[4853]: I0127 18:45:04.047564 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-kmkjx" event={"ID":"bb1a45ce-530f-4492-a7e2-9432e194001d","Type":"ContainerStarted","Data":"15e6cd6d8a80db868ed46b212fdb79f6b3b8b5142b701a9fc181ebfe75352b5b"} Jan 27 18:45:04 crc kubenswrapper[4853]: I0127 18:45:04.049780 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-fhmft" 
event={"ID":"cb428fe7-0d8c-4f25-b377-880388daf6aa","Type":"ContainerStarted","Data":"98c8b58dc988248e53a042bbd995df4113dfa9e2fb9fb51fd3b30859bf0a41bd"} Jan 27 18:45:04 crc kubenswrapper[4853]: I0127 18:45:04.051661 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-52vxj" event={"ID":"ebff9743-c884-4057-9b26-505cb4b8dca7","Type":"ContainerStarted","Data":"6d7efa948fd0ffd9dc765331e2e7cdbc0d0f34678074fb6af870b0243a6c0f7f"} Jan 27 18:45:04 crc kubenswrapper[4853]: I0127 18:45:04.051702 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-52vxj" event={"ID":"ebff9743-c884-4057-9b26-505cb4b8dca7","Type":"ContainerStarted","Data":"395b3ceca007e7691f979713ed45e62aeca7a018b0ca05988aba7f5433e997b3"} Jan 27 18:45:04 crc kubenswrapper[4853]: I0127 18:45:04.051764 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-52vxj" Jan 27 18:45:04 crc kubenswrapper[4853]: I0127 18:45:04.053989 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-rm987" event={"ID":"24efc8ab-a03a-411f-8441-454cae46ede9","Type":"ContainerStarted","Data":"03ed7e11da29a8342023c91aae8b6ea190d23f2568b577e605cc484e61106423"} Jan 27 18:45:04 crc kubenswrapper[4853]: I0127 18:45:04.054027 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-rm987" event={"ID":"24efc8ab-a03a-411f-8441-454cae46ede9","Type":"ContainerStarted","Data":"bc502160fd0e28a7826c2faf7fe1bf29dc79bab68d3cc36115d65082e26175ef"} Jan 27 18:45:04 crc kubenswrapper[4853]: I0127 18:45:04.056811 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-tqzdv" event={"ID":"6bd6880d-6581-4cca-8eb8-9acb80689e9e","Type":"ContainerStarted","Data":"4af68c15b789d393bbe23359e662043d316ddd6553985f181c1888d3cf10743f"} Jan 27 18:45:04 crc kubenswrapper[4853]: I0127 18:45:04.056852 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-tqzdv" event={"ID":"6bd6880d-6581-4cca-8eb8-9acb80689e9e","Type":"ContainerStarted","Data":"7212e30158dcd90c2264ee69027f0f0b8e68d0bf3ae9e99270d5105fb0ce5c0f"} Jan 27 18:45:04 crc kubenswrapper[4853]: I0127 18:45:04.057816 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-6l6nh" event={"ID":"5a20718d-a359-4670-86a3-4f32a2b11f53","Type":"ContainerStarted","Data":"c3c52cc772298af9786aa4ac3e812246f9a7efed5b5a43f16d6b18b30bd47a5b"} Jan 27 18:45:04 crc kubenswrapper[4853]: I0127 18:45:04.058843 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-kb26l" event={"ID":"6e624c88-c6c1-4c35-985b-264173a9abcd","Type":"ContainerStarted","Data":"0d46340db223ae175b54107a3527edd22d50e0436dfc42103550e1ed0c551499"} Jan 27 18:45:04 crc kubenswrapper[4853]: I0127 18:45:04.059648 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-kb26l" Jan 27 18:45:04 crc kubenswrapper[4853]: I0127 18:45:04.060951 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-qpjt6" event={"ID":"80fdceac-6136-4c48-a96f-3243f5416b10","Type":"ContainerStarted","Data":"54101f6afb66637a3b16317ee9df741c9fc3e629e517b8753d2000b8ec823344"} Jan 27 18:45:04 crc kubenswrapper[4853]: I0127 
18:45:04.061456 4853 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-kb26l container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.42:8443/healthz\": dial tcp 10.217.0.42:8443: connect: connection refused" start-of-body= Jan 27 18:45:04 crc kubenswrapper[4853]: I0127 18:45:04.061489 4853 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-kb26l" podUID="6e624c88-c6c1-4c35-985b-264173a9abcd" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.42:8443/healthz\": dial tcp 10.217.0.42:8443: connect: connection refused" Jan 27 18:45:04 crc kubenswrapper[4853]: I0127 18:45:04.063018 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-mbcwm" event={"ID":"4434ecce-14db-446c-900b-3ebf84bbe25c","Type":"ContainerStarted","Data":"b225ccf2943b24829c394eee39a78849d79c23b5aedb8eb86fc0f4e9e9505407"} Jan 27 18:45:04 crc kubenswrapper[4853]: I0127 18:45:04.063052 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-mbcwm" event={"ID":"4434ecce-14db-446c-900b-3ebf84bbe25c","Type":"ContainerStarted","Data":"f0750afb8d6c5dac9609f113cf167ba1046befd42bfde55668be6b2f4a55fccc"} Jan 27 18:45:04 crc kubenswrapper[4853]: I0127 18:45:04.064591 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29492325-nbfpk" event={"ID":"41434dfd-3fc3-4184-a911-506620889ebe","Type":"ContainerStarted","Data":"08c14654c73a75d12afb5bd475fed280c17c7f14c57b1dc11239ecb9bd29aad6"} Jan 27 18:45:04 crc kubenswrapper[4853]: I0127 18:45:04.065150 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 18:45:04 crc kubenswrapper[4853]: E0127 18:45:04.065602 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 18:45:04.565586917 +0000 UTC m=+147.028129800 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 18:45:04 crc kubenswrapper[4853]: I0127 18:45:04.066910 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-lifecycle-manager_collect-profiles-29492310-22ft5_c6ab690f-7c72-4a14-ab7c-90a0d63699a6/collect-profiles/0.log" Jan 27 18:45:04 crc kubenswrapper[4853]: I0127 18:45:04.066948 4853 generic.go:334] "Generic (PLEG): container finished" podID="c6ab690f-7c72-4a14-ab7c-90a0d63699a6" containerID="c39bd4e052f569c984a2afdc0d3df1dcd26b7cf317f64858404761e3ac4e8cf8" exitCode=2 Jan 27 18:45:04 crc kubenswrapper[4853]: I0127 18:45:04.067002 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29492310-22ft5" event={"ID":"c6ab690f-7c72-4a14-ab7c-90a0d63699a6","Type":"ContainerDied","Data":"c39bd4e052f569c984a2afdc0d3df1dcd26b7cf317f64858404761e3ac4e8cf8"} Jan 27 18:45:04 crc kubenswrapper[4853]: I0127 18:45:04.067023 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29492310-22ft5" event={"ID":"c6ab690f-7c72-4a14-ab7c-90a0d63699a6","Type":"ContainerDied","Data":"83a366c4d0cde31cce170a7b570925f2639e4c3e13781c9b9c345d6e4ab0fad9"} Jan 27 18:45:04 crc kubenswrapper[4853]: I0127 18:45:04.067043 4853 scope.go:117] "RemoveContainer" containerID="c39bd4e052f569c984a2afdc0d3df1dcd26b7cf317f64858404761e3ac4e8cf8" Jan 27 18:45:04 crc kubenswrapper[4853]: I0127 18:45:04.067169 4853 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492310-22ft5" Jan 27 18:45:04 crc kubenswrapper[4853]: I0127 18:45:04.081043 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-ww5mj" podStartSLOduration=121.081028222 podStartE2EDuration="2m1.081028222s" podCreationTimestamp="2026-01-27 18:43:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:45:04.080517558 +0000 UTC m=+146.543060441" watchObservedRunningTime="2026-01-27 18:45:04.081028222 +0000 UTC m=+146.543571105" Jan 27 18:45:04 crc kubenswrapper[4853]: I0127 18:45:04.081922 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-74rmt" Jan 27 18:45:04 crc kubenswrapper[4853]: I0127 18:45:04.082248 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-74rmt" Jan 27 18:45:04 crc kubenswrapper[4853]: I0127 18:45:04.083606 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-t4wdl" podStartSLOduration=121.083599827 podStartE2EDuration="2m1.083599827s" podCreationTimestamp="2026-01-27 18:43:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:45:04.045815176 +0000 UTC m=+146.508358059" watchObservedRunningTime="2026-01-27 18:45:04.083599827 +0000 UTC m=+146.546142710" Jan 27 18:45:04 crc kubenswrapper[4853]: I0127 18:45:04.085305 4853 patch_prober.go:28] interesting pod/apiserver-7bbb656c7d-74rmt container/oauth-apiserver namespace/openshift-oauth-apiserver: Startup probe status=failure output="Get \"https://10.217.0.12:8443/livez\": dial tcp 10.217.0.12:8443: connect: connection refused" start-of-body= Jan 27 18:45:04 crc kubenswrapper[4853]: I0127 18:45:04.085355 4853 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-74rmt" podUID="f06b685a-8035-4bac-88d3-d092b6df21e4" containerName="oauth-apiserver" probeResult="failure" output="Get \"https://10.217.0.12:8443/livez\": dial tcp 10.217.0.12:8443: connect: connection refused" Jan 27 18:45:04 crc kubenswrapper[4853]: I0127 18:45:04.089180 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-pp2rw" event={"ID":"2d8ed842-012e-42f9-b38e-c040f2e36ad6","Type":"ContainerStarted","Data":"804e6001acd7bf94ea30b7ab555495d3fa5afa5156ecbe88e9c8d7b58fe42942"} Jan 27 18:45:04 crc kubenswrapper[4853]: I0127 18:45:04.093852 4853 patch_prober.go:28] interesting pod/downloads-7954f5f757-9gqxt container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" start-of-body= Jan 27 18:45:04 crc kubenswrapper[4853]: I0127 18:45:04.094283 4853 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-9gqxt" podUID="be5a36ff-f665-4468-b7ae-8a443f0164e8" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" Jan 27 18:45:04 crc kubenswrapper[4853]: I0127 18:45:04.108732 4853 scope.go:117] 
"RemoveContainer" containerID="c39bd4e052f569c984a2afdc0d3df1dcd26b7cf317f64858404761e3ac4e8cf8" Jan 27 18:45:04 crc kubenswrapper[4853]: I0127 18:45:04.108741 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-rrwhm" Jan 27 18:45:04 crc kubenswrapper[4853]: E0127 18:45:04.109834 4853 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c39bd4e052f569c984a2afdc0d3df1dcd26b7cf317f64858404761e3ac4e8cf8\": container with ID starting with c39bd4e052f569c984a2afdc0d3df1dcd26b7cf317f64858404761e3ac4e8cf8 not found: ID does not exist" containerID="c39bd4e052f569c984a2afdc0d3df1dcd26b7cf317f64858404761e3ac4e8cf8" Jan 27 18:45:04 crc kubenswrapper[4853]: I0127 18:45:04.109864 4853 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c39bd4e052f569c984a2afdc0d3df1dcd26b7cf317f64858404761e3ac4e8cf8"} err="failed to get container status \"c39bd4e052f569c984a2afdc0d3df1dcd26b7cf317f64858404761e3ac4e8cf8\": rpc error: code = NotFound desc = could not find container \"c39bd4e052f569c984a2afdc0d3df1dcd26b7cf317f64858404761e3ac4e8cf8\": container with ID starting with c39bd4e052f569c984a2afdc0d3df1dcd26b7cf317f64858404761e3ac4e8cf8 not found: ID does not exist" Jan 27 18:45:04 crc kubenswrapper[4853]: I0127 18:45:04.110850 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-wnqmk" podStartSLOduration=121.110835933 podStartE2EDuration="2m1.110835933s" podCreationTimestamp="2026-01-27 18:43:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:45:04.108356582 +0000 UTC m=+146.570899465" watchObservedRunningTime="2026-01-27 18:45:04.110835933 +0000 UTC m=+146.573378816" Jan 27 18:45:04 crc kubenswrapper[4853]: I0127 18:45:04.119204 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-d6bzk" Jan 27 18:45:04 crc kubenswrapper[4853]: I0127 18:45:04.119244 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-4mwhw" Jan 27 18:45:04 crc kubenswrapper[4853]: I0127 18:45:04.148431 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-qbgzv" podStartSLOduration=121.148416188 podStartE2EDuration="2m1.148416188s" podCreationTimestamp="2026-01-27 18:43:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:45:04.147376418 +0000 UTC m=+146.609919311" watchObservedRunningTime="2026-01-27 18:45:04.148416188 +0000 UTC m=+146.610959071" Jan 27 18:45:04 crc kubenswrapper[4853]: I0127 18:45:04.166999 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-npp4j\" (UID: \"a7c9b9f7-1d12-4e77-a47f-8cb601836611\") " pod="openshift-image-registry/image-registry-697d97f7c8-npp4j" Jan 27 18:45:04 crc kubenswrapper[4853]: E0127 18:45:04.171373 4853 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 18:45:04.67135806 +0000 UTC m=+147.133901013 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-npp4j" (UID: "a7c9b9f7-1d12-4e77-a47f-8cb601836611") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 18:45:04 crc kubenswrapper[4853]: I0127 18:45:04.182871 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-tqzdv" Jan 27 18:45:04 crc kubenswrapper[4853]: I0127 18:45:04.183111 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-tqzdv" Jan 27 18:45:04 crc kubenswrapper[4853]: I0127 18:45:04.201271 4853 patch_prober.go:28] interesting pod/apiserver-76f77b778f-tqzdv container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="Get \"https://10.217.0.20:8443/livez\": dial tcp 10.217.0.20:8443: connect: connection refused" start-of-body= Jan 27 18:45:04 crc kubenswrapper[4853]: I0127 18:45:04.201335 4853 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-76f77b778f-tqzdv" podUID="6bd6880d-6581-4cca-8eb8-9acb80689e9e" containerName="openshift-apiserver" probeResult="failure" output="Get \"https://10.217.0.20:8443/livez\": dial tcp 10.217.0.20:8443: connect: connection refused" Jan 27 18:45:04 crc kubenswrapper[4853]: I0127 18:45:04.210903 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-cnjvr" podStartSLOduration=8.210888242 podStartE2EDuration="8.210888242s" podCreationTimestamp="2026-01-27 18:44:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:45:04.209603325 +0000 UTC m=+146.672146208" watchObservedRunningTime="2026-01-27 18:45:04.210888242 +0000 UTC m=+146.673431125" Jan 27 18:45:04 crc kubenswrapper[4853]: I0127 18:45:04.211428 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-gp2qn" podStartSLOduration=121.211422407 podStartE2EDuration="2m1.211422407s" podCreationTimestamp="2026-01-27 18:43:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:45:04.182730079 +0000 UTC m=+146.645272962" watchObservedRunningTime="2026-01-27 18:45:04.211422407 +0000 UTC m=+146.673965290" Jan 27 18:45:04 crc kubenswrapper[4853]: I0127 18:45:04.243453 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-74rmt" podStartSLOduration=121.243428111 podStartE2EDuration="2m1.243428111s" podCreationTimestamp="2026-01-27 18:43:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:45:04.24060211 +0000 UTC m=+146.703145003" 
watchObservedRunningTime="2026-01-27 18:45:04.243428111 +0000 UTC m=+146.705970994" Jan 27 18:45:04 crc kubenswrapper[4853]: I0127 18:45:04.277668 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 18:45:04 crc kubenswrapper[4853]: E0127 18:45:04.278226 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 18:45:04.778198045 +0000 UTC m=+147.240740928 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 18:45:04 crc kubenswrapper[4853]: I0127 18:45:04.288178 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-npp4j\" (UID: \"a7c9b9f7-1d12-4e77-a47f-8cb601836611\") " pod="openshift-image-registry/image-registry-697d97f7c8-npp4j" Jan 27 18:45:04 crc kubenswrapper[4853]: I0127 18:45:04.289931 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-hfgqg" podStartSLOduration=120.289918974 podStartE2EDuration="2m0.289918974s" podCreationTimestamp="2026-01-27 18:43:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:45:04.289508952 +0000 UTC m=+146.752051835" watchObservedRunningTime="2026-01-27 18:45:04.289918974 +0000 UTC m=+146.752461867" Jan 27 18:45:04 crc kubenswrapper[4853]: E0127 18:45:04.291363 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 18:45:04.791347365 +0000 UTC m=+147.253890248 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-npp4j" (UID: "a7c9b9f7-1d12-4e77-a47f-8cb601836611") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 18:45:04 crc kubenswrapper[4853]: I0127 18:45:04.326085 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-st9cr" podStartSLOduration=121.326059007 podStartE2EDuration="2m1.326059007s" podCreationTimestamp="2026-01-27 18:43:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:45:04.31196052 +0000 UTC m=+146.774503423" watchObservedRunningTime="2026-01-27 18:45:04.326059007 +0000 UTC m=+146.788601890" Jan 27 18:45:04 crc kubenswrapper[4853]: I0127 18:45:04.344543 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-52vxj" podStartSLOduration=8.34452372 podStartE2EDuration="8.34452372s" podCreationTimestamp="2026-01-27 18:44:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:45:04.342559793 +0000 UTC m=+146.805102676" watchObservedRunningTime="2026-01-27 18:45:04.34452372 +0000 UTC m=+146.807066603" Jan 27 18:45:04 crc kubenswrapper[4853]: I0127 18:45:04.365671 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-rm987" podStartSLOduration=121.36564906 podStartE2EDuration="2m1.36564906s" podCreationTimestamp="2026-01-27 18:43:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:45:04.363680603 +0000 UTC m=+146.826223486" watchObservedRunningTime="2026-01-27 18:45:04.36564906 +0000 UTC m=+146.828191943" Jan 27 18:45:04 crc kubenswrapper[4853]: I0127 18:45:04.390340 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 18:45:04 crc kubenswrapper[4853]: E0127 18:45:04.390610 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 18:45:04.89059408 +0000 UTC m=+147.353136963 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 18:45:04 crc kubenswrapper[4853]: I0127 18:45:04.391004 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-npp4j\" (UID: \"a7c9b9f7-1d12-4e77-a47f-8cb601836611\") " pod="openshift-image-registry/image-registry-697d97f7c8-npp4j" Jan 27 18:45:04 crc kubenswrapper[4853]: E0127 18:45:04.391598 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 18:45:04.891587589 +0000 UTC m=+147.354130472 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-npp4j" (UID: "a7c9b9f7-1d12-4e77-a47f-8cb601836611") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 18:45:04 crc kubenswrapper[4853]: I0127 18:45:04.444169 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492310-22ft5"] Jan 27 18:45:04 crc kubenswrapper[4853]: I0127 18:45:04.456181 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492310-22ft5"] Jan 27 18:45:04 crc kubenswrapper[4853]: I0127 18:45:04.467559 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-pp2rw" podStartSLOduration=121.467539542 podStartE2EDuration="2m1.467539542s" podCreationTimestamp="2026-01-27 18:43:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:45:04.467055358 +0000 UTC m=+146.929598241" watchObservedRunningTime="2026-01-27 18:45:04.467539542 +0000 UTC m=+146.930082425" Jan 27 18:45:04 crc kubenswrapper[4853]: I0127 18:45:04.495439 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 18:45:04 crc kubenswrapper[4853]: E0127 18:45:04.496281 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 18:45:04.996261251 +0000 UTC m=+147.458804124 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 18:45:04 crc kubenswrapper[4853]: I0127 18:45:04.563865 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29492325-nbfpk" podStartSLOduration=4.563844402 podStartE2EDuration="4.563844402s" podCreationTimestamp="2026-01-27 18:45:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:45:04.514428385 +0000 UTC m=+146.976971278" watchObservedRunningTime="2026-01-27 18:45:04.563844402 +0000 UTC m=+147.026387285" Jan 27 18:45:04 crc kubenswrapper[4853]: I0127 18:45:04.597622 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-npp4j\" (UID: \"a7c9b9f7-1d12-4e77-a47f-8cb601836611\") " pod="openshift-image-registry/image-registry-697d97f7c8-npp4j" Jan 27 18:45:04 crc kubenswrapper[4853]: E0127 18:45:04.598289 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 18:45:05.098274046 +0000 UTC m=+147.560816939 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-npp4j" (UID: "a7c9b9f7-1d12-4e77-a47f-8cb601836611") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 18:45:04 crc kubenswrapper[4853]: I0127 18:45:04.601087 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-kmkjx" podStartSLOduration=121.601069607 podStartE2EDuration="2m1.601069607s" podCreationTimestamp="2026-01-27 18:43:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:45:04.599513022 +0000 UTC m=+147.062055935" watchObservedRunningTime="2026-01-27 18:45:04.601069607 +0000 UTC m=+147.063612490" Jan 27 18:45:04 crc kubenswrapper[4853]: I0127 18:45:04.699221 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 18:45:04 crc kubenswrapper[4853]: E0127 18:45:04.699383 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 18:45:05.199361795 +0000 UTC m=+147.661904678 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 18:45:04 crc kubenswrapper[4853]: I0127 18:45:04.700806 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-npp4j\" (UID: \"a7c9b9f7-1d12-4e77-a47f-8cb601836611\") " pod="openshift-image-registry/image-registry-697d97f7c8-npp4j" Jan 27 18:45:04 crc kubenswrapper[4853]: E0127 18:45:04.701168 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 18:45:05.201153386 +0000 UTC m=+147.663696269 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-npp4j" (UID: "a7c9b9f7-1d12-4e77-a47f-8cb601836611") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 18:45:04 crc kubenswrapper[4853]: I0127 18:45:04.717415 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-fhmft" Jan 27 18:45:04 crc kubenswrapper[4853]: I0127 18:45:04.723198 4853 patch_prober.go:28] interesting pod/router-default-5444994796-fhmft container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 27 18:45:04 crc kubenswrapper[4853]: [-]has-synced failed: reason withheld Jan 27 18:45:04 crc kubenswrapper[4853]: [+]process-running ok Jan 27 18:45:04 crc kubenswrapper[4853]: healthz check failed Jan 27 18:45:04 crc kubenswrapper[4853]: I0127 18:45:04.723264 4853 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-fhmft" podUID="cb428fe7-0d8c-4f25-b377-880388daf6aa" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 27 18:45:04 crc kubenswrapper[4853]: I0127 18:45:04.737817 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-fhmft" podStartSLOduration=121.737798785 podStartE2EDuration="2m1.737798785s" podCreationTimestamp="2026-01-27 18:43:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:45:04.694328349 +0000 UTC m=+147.156871232" watchObservedRunningTime="2026-01-27 18:45:04.737798785 +0000 UTC m=+147.200341678" Jan 27 18:45:04 crc kubenswrapper[4853]: I0127 18:45:04.738538 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-qpjt6" podStartSLOduration=121.738529556 podStartE2EDuration="2m1.738529556s" podCreationTimestamp="2026-01-27 18:43:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:45:04.73592597 +0000 UTC m=+147.198468853" watchObservedRunningTime="2026-01-27 18:45:04.738529556 +0000 UTC m=+147.201072439" Jan 27 18:45:04 crc kubenswrapper[4853]: I0127 18:45:04.805336 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 18:45:04 crc kubenswrapper[4853]: E0127 18:45:04.805714 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 18:45:05.305695945 +0000 UTC m=+147.768238828 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 18:45:04 crc kubenswrapper[4853]: I0127 18:45:04.826783 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-tqzdv" podStartSLOduration=121.826758673 podStartE2EDuration="2m1.826758673s" podCreationTimestamp="2026-01-27 18:43:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:45:04.804612623 +0000 UTC m=+147.267155506" watchObservedRunningTime="2026-01-27 18:45:04.826758673 +0000 UTC m=+147.289301556" Jan 27 18:45:04 crc kubenswrapper[4853]: I0127 18:45:04.906396 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-npp4j\" (UID: \"a7c9b9f7-1d12-4e77-a47f-8cb601836611\") " pod="openshift-image-registry/image-registry-697d97f7c8-npp4j" Jan 27 18:45:04 crc kubenswrapper[4853]: E0127 18:45:04.906847 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 18:45:05.406833355 +0000 UTC m=+147.869376238 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-npp4j" (UID: "a7c9b9f7-1d12-4e77-a47f-8cb601836611") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 18:45:05 crc kubenswrapper[4853]: I0127 18:45:05.008185 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 18:45:05 crc kubenswrapper[4853]: E0127 18:45:05.008548 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 18:45:05.508520691 +0000 UTC m=+147.971063614 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 18:45:05 crc kubenswrapper[4853]: I0127 18:45:05.094599 4853 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-bxk4b container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.27:8080/healthz\": dial tcp 10.217.0.27:8080: connect: connection refused" start-of-body= Jan 27 18:45:05 crc kubenswrapper[4853]: I0127 18:45:05.094663 4853 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-kb26l container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.42:8443/healthz\": dial tcp 10.217.0.42:8443: connect: connection refused" start-of-body= Jan 27 18:45:05 crc kubenswrapper[4853]: I0127 18:45:05.094890 4853 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-bxk4b" podUID="86af8168-4922-4d5d-adee-38d4d88d55ca" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.27:8080/healthz\": dial tcp 10.217.0.27:8080: connect: connection refused" Jan 27 18:45:05 crc kubenswrapper[4853]: I0127 18:45:05.094932 4853 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-kb26l" podUID="6e624c88-c6c1-4c35-985b-264173a9abcd" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.42:8443/healthz\": dial tcp 10.217.0.42:8443: connect: connection refused" Jan 27 18:45:05 crc kubenswrapper[4853]: I0127 18:45:05.094813 4853 patch_prober.go:28] interesting pod/downloads-7954f5f757-9gqxt container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" start-of-body= Jan 27 18:45:05 crc kubenswrapper[4853]: I0127 18:45:05.094978 4853 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-9gqxt" podUID="be5a36ff-f665-4468-b7ae-8a443f0164e8" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" Jan 27 18:45:05 crc kubenswrapper[4853]: I0127 18:45:05.115459 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-npp4j\" (UID: \"a7c9b9f7-1d12-4e77-a47f-8cb601836611\") " pod="openshift-image-registry/image-registry-697d97f7c8-npp4j" Jan 27 18:45:05 crc kubenswrapper[4853]: I0127 18:45:05.116541 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 18:45:05 crc kubenswrapper[4853]: I0127 18:45:05.116796 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 18:45:05 crc kubenswrapper[4853]: E0127 18:45:05.118293 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 18:45:05.618278649 +0000 UTC m=+148.080821532 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-npp4j" (UID: "a7c9b9f7-1d12-4e77-a47f-8cb601836611") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 18:45:05 crc kubenswrapper[4853]: I0127 18:45:05.128231 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 18:45:05 crc kubenswrapper[4853]: I0127 18:45:05.131540 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 18:45:05 crc kubenswrapper[4853]: I0127 18:45:05.218851 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 18:45:05 crc kubenswrapper[4853]: I0127 18:45:05.219057 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 18:45:05 crc kubenswrapper[4853]: I0127 18:45:05.219234 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 18:45:05 crc kubenswrapper[4853]: E0127 18:45:05.220177 4853 nestedpendingoperations.go:348] Operation 
for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 18:45:05.72015323 +0000 UTC m=+148.182696113 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 18:45:05 crc kubenswrapper[4853]: I0127 18:45:05.223534 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 18:45:05 crc kubenswrapper[4853]: I0127 18:45:05.240819 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 18:45:05 crc kubenswrapper[4853]: I0127 18:45:05.320777 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-npp4j\" (UID: \"a7c9b9f7-1d12-4e77-a47f-8cb601836611\") " pod="openshift-image-registry/image-registry-697d97f7c8-npp4j" Jan 27 18:45:05 crc kubenswrapper[4853]: E0127 18:45:05.321273 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 18:45:05.821252189 +0000 UTC m=+148.283795082 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-npp4j" (UID: "a7c9b9f7-1d12-4e77-a47f-8cb601836611") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 18:45:05 crc kubenswrapper[4853]: I0127 18:45:05.337664 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 18:45:05 crc kubenswrapper[4853]: I0127 18:45:05.353802 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 18:45:05 crc kubenswrapper[4853]: I0127 18:45:05.362166 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 18:45:05 crc kubenswrapper[4853]: I0127 18:45:05.421556 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 18:45:05 crc kubenswrapper[4853]: E0127 18:45:05.421786 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 18:45:05.921764181 +0000 UTC m=+148.384307064 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 18:45:05 crc kubenswrapper[4853]: I0127 18:45:05.422081 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-npp4j\" (UID: \"a7c9b9f7-1d12-4e77-a47f-8cb601836611\") " pod="openshift-image-registry/image-registry-697d97f7c8-npp4j" Jan 27 18:45:05 crc kubenswrapper[4853]: E0127 18:45:05.422391 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 18:45:05.922383448 +0000 UTC m=+148.384926331 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-npp4j" (UID: "a7c9b9f7-1d12-4e77-a47f-8cb601836611") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 18:45:05 crc kubenswrapper[4853]: I0127 18:45:05.523388 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 18:45:05 crc kubenswrapper[4853]: E0127 18:45:05.523666 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 18:45:06.023649942 +0000 UTC m=+148.486192825 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 18:45:05 crc kubenswrapper[4853]: I0127 18:45:05.538785 4853 csr.go:261] certificate signing request csr-g9p95 is approved, waiting to be issued Jan 27 18:45:05 crc kubenswrapper[4853]: I0127 18:45:05.541423 4853 patch_prober.go:28] interesting pod/machine-config-daemon-6gqj2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 18:45:05 crc kubenswrapper[4853]: I0127 18:45:05.541481 4853 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6gqj2" podUID="b8a89b1e-bef8-4cb7-930c-480d3125778c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 18:45:05 crc kubenswrapper[4853]: I0127 18:45:05.567632 4853 csr.go:257] certificate signing request csr-g9p95 is issued Jan 27 18:45:05 crc kubenswrapper[4853]: I0127 18:45:05.626995 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-npp4j\" (UID: \"a7c9b9f7-1d12-4e77-a47f-8cb601836611\") " pod="openshift-image-registry/image-registry-697d97f7c8-npp4j" Jan 27 18:45:05 crc kubenswrapper[4853]: E0127 18:45:05.627314 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 18:45:06.127301835 +0000 UTC m=+148.589844718 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-npp4j" (UID: "a7c9b9f7-1d12-4e77-a47f-8cb601836611") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 18:45:05 crc kubenswrapper[4853]: I0127 18:45:05.728232 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 18:45:05 crc kubenswrapper[4853]: E0127 18:45:05.728774 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 18:45:06.228757364 +0000 UTC m=+148.691300247 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 18:45:05 crc kubenswrapper[4853]: I0127 18:45:05.741350 4853 patch_prober.go:28] interesting pod/router-default-5444994796-fhmft container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 27 18:45:05 crc kubenswrapper[4853]: [-]has-synced failed: reason withheld Jan 27 18:45:05 crc kubenswrapper[4853]: [+]process-running ok Jan 27 18:45:05 crc kubenswrapper[4853]: healthz check failed Jan 27 18:45:05 crc kubenswrapper[4853]: I0127 18:45:05.741400 4853 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-fhmft" podUID="cb428fe7-0d8c-4f25-b377-880388daf6aa" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 27 18:45:05 crc kubenswrapper[4853]: I0127 18:45:05.829692 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-npp4j\" (UID: \"a7c9b9f7-1d12-4e77-a47f-8cb601836611\") " pod="openshift-image-registry/image-registry-697d97f7c8-npp4j" Jan 27 18:45:05 crc kubenswrapper[4853]: E0127 18:45:05.830023 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 18:45:06.330008287 +0000 UTC m=+148.792551180 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-npp4j" (UID: "a7c9b9f7-1d12-4e77-a47f-8cb601836611") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 18:45:05 crc kubenswrapper[4853]: I0127 18:45:05.857885 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-mbcwm" podStartSLOduration=122.857864731 podStartE2EDuration="2m2.857864731s" podCreationTimestamp="2026-01-27 18:43:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:45:04.827268858 +0000 UTC m=+147.289811761" watchObservedRunningTime="2026-01-27 18:45:05.857864731 +0000 UTC m=+148.320407614" Jan 27 18:45:05 crc kubenswrapper[4853]: I0127 18:45:05.858993 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-p74jj"] Jan 27 18:45:05 crc kubenswrapper[4853]: E0127 18:45:05.859247 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c6ab690f-7c72-4a14-ab7c-90a0d63699a6" containerName="collect-profiles" Jan 27 18:45:05 crc kubenswrapper[4853]: I0127 18:45:05.859268 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="c6ab690f-7c72-4a14-ab7c-90a0d63699a6" containerName="collect-profiles" Jan 27 18:45:05 crc kubenswrapper[4853]: I0127 18:45:05.859584 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="c6ab690f-7c72-4a14-ab7c-90a0d63699a6" containerName="collect-profiles" Jan 27 18:45:05 crc kubenswrapper[4853]: I0127 18:45:05.860490 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-p74jj" Jan 27 18:45:05 crc kubenswrapper[4853]: I0127 18:45:05.879766 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Jan 27 18:45:05 crc kubenswrapper[4853]: I0127 18:45:05.883326 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-p74jj"] Jan 27 18:45:05 crc kubenswrapper[4853]: I0127 18:45:05.934959 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-st9cr" Jan 27 18:45:05 crc kubenswrapper[4853]: I0127 18:45:05.935688 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 18:45:05 crc kubenswrapper[4853]: I0127 18:45:05.935824 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/81272aef-67fa-4c09-bf30-56fdfec7dd7b-utilities\") pod \"certified-operators-p74jj\" (UID: \"81272aef-67fa-4c09-bf30-56fdfec7dd7b\") " pod="openshift-marketplace/certified-operators-p74jj" Jan 27 18:45:05 crc kubenswrapper[4853]: I0127 18:45:05.935883 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2fwnw\" (UniqueName: \"kubernetes.io/projected/81272aef-67fa-4c09-bf30-56fdfec7dd7b-kube-api-access-2fwnw\") pod \"certified-operators-p74jj\" (UID: \"81272aef-67fa-4c09-bf30-56fdfec7dd7b\") " pod="openshift-marketplace/certified-operators-p74jj" Jan 27 18:45:05 crc kubenswrapper[4853]: I0127 18:45:05.935960 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/81272aef-67fa-4c09-bf30-56fdfec7dd7b-catalog-content\") pod \"certified-operators-p74jj\" (UID: \"81272aef-67fa-4c09-bf30-56fdfec7dd7b\") " pod="openshift-marketplace/certified-operators-p74jj" Jan 27 18:45:05 crc kubenswrapper[4853]: E0127 18:45:05.936061 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 18:45:06.436046139 +0000 UTC m=+148.898589022 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 18:45:06 crc kubenswrapper[4853]: I0127 18:45:06.043424 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/81272aef-67fa-4c09-bf30-56fdfec7dd7b-catalog-content\") pod \"certified-operators-p74jj\" (UID: \"81272aef-67fa-4c09-bf30-56fdfec7dd7b\") " pod="openshift-marketplace/certified-operators-p74jj" Jan 27 18:45:06 crc kubenswrapper[4853]: I0127 18:45:06.043468 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/81272aef-67fa-4c09-bf30-56fdfec7dd7b-utilities\") pod \"certified-operators-p74jj\" (UID: \"81272aef-67fa-4c09-bf30-56fdfec7dd7b\") " pod="openshift-marketplace/certified-operators-p74jj" Jan 27 18:45:06 crc kubenswrapper[4853]: I0127 18:45:06.043498 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-npp4j\" (UID: \"a7c9b9f7-1d12-4e77-a47f-8cb601836611\") " pod="openshift-image-registry/image-registry-697d97f7c8-npp4j" Jan 27 18:45:06 crc kubenswrapper[4853]: I0127 18:45:06.043520 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2fwnw\" (UniqueName: \"kubernetes.io/projected/81272aef-67fa-4c09-bf30-56fdfec7dd7b-kube-api-access-2fwnw\") pod \"certified-operators-p74jj\" (UID: \"81272aef-67fa-4c09-bf30-56fdfec7dd7b\") " pod="openshift-marketplace/certified-operators-p74jj" Jan 27 18:45:06 crc kubenswrapper[4853]: I0127 18:45:06.044493 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/81272aef-67fa-4c09-bf30-56fdfec7dd7b-catalog-content\") pod \"certified-operators-p74jj\" (UID: \"81272aef-67fa-4c09-bf30-56fdfec7dd7b\") " pod="openshift-marketplace/certified-operators-p74jj" Jan 27 18:45:06 crc kubenswrapper[4853]: I0127 18:45:06.044745 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/81272aef-67fa-4c09-bf30-56fdfec7dd7b-utilities\") pod \"certified-operators-p74jj\" (UID: \"81272aef-67fa-4c09-bf30-56fdfec7dd7b\") " pod="openshift-marketplace/certified-operators-p74jj" Jan 27 18:45:06 crc kubenswrapper[4853]: E0127 18:45:06.044974 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 18:45:06.544964373 +0000 UTC m=+149.007507256 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-npp4j" (UID: "a7c9b9f7-1d12-4e77-a47f-8cb601836611") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 18:45:06 crc kubenswrapper[4853]: I0127 18:45:06.098073 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-ck67k"] Jan 27 18:45:06 crc kubenswrapper[4853]: I0127 18:45:06.103929 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-ck67k" Jan 27 18:45:06 crc kubenswrapper[4853]: I0127 18:45:06.114026 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Jan 27 18:45:06 crc kubenswrapper[4853]: I0127 18:45:06.131091 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2fwnw\" (UniqueName: \"kubernetes.io/projected/81272aef-67fa-4c09-bf30-56fdfec7dd7b-kube-api-access-2fwnw\") pod \"certified-operators-p74jj\" (UID: \"81272aef-67fa-4c09-bf30-56fdfec7dd7b\") " pod="openshift-marketplace/certified-operators-p74jj" Jan 27 18:45:06 crc kubenswrapper[4853]: I0127 18:45:06.151966 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 18:45:06 crc kubenswrapper[4853]: I0127 18:45:06.152170 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bdf3b7ad-3545-4192-941a-862154002694-catalog-content\") pod \"community-operators-ck67k\" (UID: \"bdf3b7ad-3545-4192-941a-862154002694\") " pod="openshift-marketplace/community-operators-ck67k" Jan 27 18:45:06 crc kubenswrapper[4853]: I0127 18:45:06.152230 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bpgmb\" (UniqueName: \"kubernetes.io/projected/bdf3b7ad-3545-4192-941a-862154002694-kube-api-access-bpgmb\") pod \"community-operators-ck67k\" (UID: \"bdf3b7ad-3545-4192-941a-862154002694\") " pod="openshift-marketplace/community-operators-ck67k" Jan 27 18:45:06 crc kubenswrapper[4853]: I0127 18:45:06.152293 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bdf3b7ad-3545-4192-941a-862154002694-utilities\") pod \"community-operators-ck67k\" (UID: \"bdf3b7ad-3545-4192-941a-862154002694\") " pod="openshift-marketplace/community-operators-ck67k" Jan 27 18:45:06 crc kubenswrapper[4853]: E0127 18:45:06.152422 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 18:45:06.652405185 +0000 UTC m=+149.114948068 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 18:45:06 crc kubenswrapper[4853]: I0127 18:45:06.165567 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c6ab690f-7c72-4a14-ab7c-90a0d63699a6" path="/var/lib/kubelet/pods/c6ab690f-7c72-4a14-ab7c-90a0d63699a6/volumes" Jan 27 18:45:06 crc kubenswrapper[4853]: I0127 18:45:06.166227 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-6l6nh" event={"ID":"5a20718d-a359-4670-86a3-4f32a2b11f53","Type":"ContainerStarted","Data":"37305359c61d74704ef5b2955520576990923d61dbe6fef59815410e5a2db609"} Jan 27 18:45:06 crc kubenswrapper[4853]: I0127 18:45:06.166252 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-6l6nh" event={"ID":"5a20718d-a359-4670-86a3-4f32a2b11f53","Type":"ContainerStarted","Data":"631495907e135cd2a29588623a4a348515cfe94d98a608dcff9d3e8ce58916df"} Jan 27 18:45:06 crc kubenswrapper[4853]: I0127 18:45:06.168725 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-ck67k"] Jan 27 18:45:06 crc kubenswrapper[4853]: I0127 18:45:06.181680 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-kb26l" Jan 27 18:45:06 crc kubenswrapper[4853]: I0127 18:45:06.211112 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-p74jj" Jan 27 18:45:06 crc kubenswrapper[4853]: I0127 18:45:06.256774 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bdf3b7ad-3545-4192-941a-862154002694-utilities\") pod \"community-operators-ck67k\" (UID: \"bdf3b7ad-3545-4192-941a-862154002694\") " pod="openshift-marketplace/community-operators-ck67k" Jan 27 18:45:06 crc kubenswrapper[4853]: I0127 18:45:06.256994 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bdf3b7ad-3545-4192-941a-862154002694-catalog-content\") pod \"community-operators-ck67k\" (UID: \"bdf3b7ad-3545-4192-941a-862154002694\") " pod="openshift-marketplace/community-operators-ck67k" Jan 27 18:45:06 crc kubenswrapper[4853]: I0127 18:45:06.257060 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-npp4j\" (UID: \"a7c9b9f7-1d12-4e77-a47f-8cb601836611\") " pod="openshift-image-registry/image-registry-697d97f7c8-npp4j" Jan 27 18:45:06 crc kubenswrapper[4853]: I0127 18:45:06.257114 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bpgmb\" (UniqueName: \"kubernetes.io/projected/bdf3b7ad-3545-4192-941a-862154002694-kube-api-access-bpgmb\") pod \"community-operators-ck67k\" (UID: \"bdf3b7ad-3545-4192-941a-862154002694\") " pod="openshift-marketplace/community-operators-ck67k" Jan 27 18:45:06 crc kubenswrapper[4853]: I0127 18:45:06.259811 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bdf3b7ad-3545-4192-941a-862154002694-utilities\") pod \"community-operators-ck67k\" (UID: \"bdf3b7ad-3545-4192-941a-862154002694\") " pod="openshift-marketplace/community-operators-ck67k" Jan 27 18:45:06 crc kubenswrapper[4853]: I0127 18:45:06.260071 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bdf3b7ad-3545-4192-941a-862154002694-catalog-content\") pod \"community-operators-ck67k\" (UID: \"bdf3b7ad-3545-4192-941a-862154002694\") " pod="openshift-marketplace/community-operators-ck67k" Jan 27 18:45:06 crc kubenswrapper[4853]: E0127 18:45:06.260751 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 18:45:06.760735893 +0000 UTC m=+149.223278766 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-npp4j" (UID: "a7c9b9f7-1d12-4e77-a47f-8cb601836611") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 18:45:06 crc kubenswrapper[4853]: W0127 18:45:06.297454 4853 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3b6479f0_333b_4a96_9adf_2099afdc2447.slice/crio-5ccd2d3214a91238f47be50b4620c688de60fa87ed12b64ef26217047f37f991 WatchSource:0}: Error finding container 5ccd2d3214a91238f47be50b4620c688de60fa87ed12b64ef26217047f37f991: Status 404 returned error can't find the container with id 5ccd2d3214a91238f47be50b4620c688de60fa87ed12b64ef26217047f37f991 Jan 27 18:45:06 crc kubenswrapper[4853]: I0127 18:45:06.321212 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-6pflg"] Jan 27 18:45:06 crc kubenswrapper[4853]: I0127 18:45:06.322089 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-6pflg" Jan 27 18:45:06 crc kubenswrapper[4853]: I0127 18:45:06.343368 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bpgmb\" (UniqueName: \"kubernetes.io/projected/bdf3b7ad-3545-4192-941a-862154002694-kube-api-access-bpgmb\") pod \"community-operators-ck67k\" (UID: \"bdf3b7ad-3545-4192-941a-862154002694\") " pod="openshift-marketplace/community-operators-ck67k" Jan 27 18:45:06 crc kubenswrapper[4853]: I0127 18:45:06.352449 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-6pflg"] Jan 27 18:45:06 crc kubenswrapper[4853]: I0127 18:45:06.364643 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 18:45:06 crc kubenswrapper[4853]: E0127 18:45:06.364835 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 18:45:06.864807498 +0000 UTC m=+149.327350381 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 18:45:06 crc kubenswrapper[4853]: I0127 18:45:06.364898 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a442dc5b-e830-490b-8ad1-6a6606fea52b-catalog-content\") pod \"certified-operators-6pflg\" (UID: \"a442dc5b-e830-490b-8ad1-6a6606fea52b\") " pod="openshift-marketplace/certified-operators-6pflg" Jan 27 18:45:06 crc kubenswrapper[4853]: I0127 18:45:06.365196 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-95z5d\" (UniqueName: \"kubernetes.io/projected/a442dc5b-e830-490b-8ad1-6a6606fea52b-kube-api-access-95z5d\") pod \"certified-operators-6pflg\" (UID: \"a442dc5b-e830-490b-8ad1-6a6606fea52b\") " pod="openshift-marketplace/certified-operators-6pflg" Jan 27 18:45:06 crc kubenswrapper[4853]: I0127 18:45:06.365254 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a442dc5b-e830-490b-8ad1-6a6606fea52b-utilities\") pod \"certified-operators-6pflg\" (UID: \"a442dc5b-e830-490b-8ad1-6a6606fea52b\") " pod="openshift-marketplace/certified-operators-6pflg" Jan 27 18:45:06 crc kubenswrapper[4853]: I0127 18:45:06.365320 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-npp4j\" (UID: \"a7c9b9f7-1d12-4e77-a47f-8cb601836611\") " pod="openshift-image-registry/image-registry-697d97f7c8-npp4j" Jan 27 18:45:06 crc kubenswrapper[4853]: E0127 18:45:06.365632 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 18:45:06.865616531 +0000 UTC m=+149.328159414 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-npp4j" (UID: "a7c9b9f7-1d12-4e77-a47f-8cb601836611") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 18:45:06 crc kubenswrapper[4853]: I0127 18:45:06.448983 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-ck67k" Jan 27 18:45:06 crc kubenswrapper[4853]: I0127 18:45:06.466299 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 18:45:06 crc kubenswrapper[4853]: I0127 18:45:06.466336 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-8pfch"] Jan 27 18:45:06 crc kubenswrapper[4853]: I0127 18:45:06.466586 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a442dc5b-e830-490b-8ad1-6a6606fea52b-utilities\") pod \"certified-operators-6pflg\" (UID: \"a442dc5b-e830-490b-8ad1-6a6606fea52b\") " pod="openshift-marketplace/certified-operators-6pflg" Jan 27 18:45:06 crc kubenswrapper[4853]: I0127 18:45:06.466644 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a442dc5b-e830-490b-8ad1-6a6606fea52b-catalog-content\") pod \"certified-operators-6pflg\" (UID: \"a442dc5b-e830-490b-8ad1-6a6606fea52b\") " pod="openshift-marketplace/certified-operators-6pflg" Jan 27 18:45:06 crc kubenswrapper[4853]: I0127 18:45:06.466674 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-95z5d\" (UniqueName: \"kubernetes.io/projected/a442dc5b-e830-490b-8ad1-6a6606fea52b-kube-api-access-95z5d\") pod \"certified-operators-6pflg\" (UID: \"a442dc5b-e830-490b-8ad1-6a6606fea52b\") " pod="openshift-marketplace/certified-operators-6pflg" Jan 27 18:45:06 crc kubenswrapper[4853]: E0127 18:45:06.466997 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 18:45:06.966983558 +0000 UTC m=+149.429526441 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 18:45:06 crc kubenswrapper[4853]: I0127 18:45:06.467347 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a442dc5b-e830-490b-8ad1-6a6606fea52b-utilities\") pod \"certified-operators-6pflg\" (UID: \"a442dc5b-e830-490b-8ad1-6a6606fea52b\") " pod="openshift-marketplace/certified-operators-6pflg" Jan 27 18:45:06 crc kubenswrapper[4853]: I0127 18:45:06.467677 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a442dc5b-e830-490b-8ad1-6a6606fea52b-catalog-content\") pod \"certified-operators-6pflg\" (UID: \"a442dc5b-e830-490b-8ad1-6a6606fea52b\") " pod="openshift-marketplace/certified-operators-6pflg" Jan 27 18:45:06 crc kubenswrapper[4853]: I0127 18:45:06.474064 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8pfch" Jan 27 18:45:06 crc kubenswrapper[4853]: I0127 18:45:06.478549 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-8pfch"] Jan 27 18:45:06 crc kubenswrapper[4853]: I0127 18:45:06.501252 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-95z5d\" (UniqueName: \"kubernetes.io/projected/a442dc5b-e830-490b-8ad1-6a6606fea52b-kube-api-access-95z5d\") pod \"certified-operators-6pflg\" (UID: \"a442dc5b-e830-490b-8ad1-6a6606fea52b\") " pod="openshift-marketplace/certified-operators-6pflg" Jan 27 18:45:06 crc kubenswrapper[4853]: I0127 18:45:06.545887 4853 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock" Jan 27 18:45:06 crc kubenswrapper[4853]: I0127 18:45:06.567932 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ss6mn\" (UniqueName: \"kubernetes.io/projected/8bb82662-0739-432b-93b0-c5f1bc3ed268-kube-api-access-ss6mn\") pod \"community-operators-8pfch\" (UID: \"8bb82662-0739-432b-93b0-c5f1bc3ed268\") " pod="openshift-marketplace/community-operators-8pfch" Jan 27 18:45:06 crc kubenswrapper[4853]: I0127 18:45:06.567985 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8bb82662-0739-432b-93b0-c5f1bc3ed268-utilities\") pod \"community-operators-8pfch\" (UID: \"8bb82662-0739-432b-93b0-c5f1bc3ed268\") " pod="openshift-marketplace/community-operators-8pfch" Jan 27 18:45:06 crc kubenswrapper[4853]: I0127 18:45:06.568065 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-npp4j\" (UID: \"a7c9b9f7-1d12-4e77-a47f-8cb601836611\") " pod="openshift-image-registry/image-registry-697d97f7c8-npp4j" Jan 27 18:45:06 crc kubenswrapper[4853]: I0127 
18:45:06.568110 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8bb82662-0739-432b-93b0-c5f1bc3ed268-catalog-content\") pod \"community-operators-8pfch\" (UID: \"8bb82662-0739-432b-93b0-c5f1bc3ed268\") " pod="openshift-marketplace/community-operators-8pfch" Jan 27 18:45:06 crc kubenswrapper[4853]: E0127 18:45:06.569267 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 18:45:07.06923258 +0000 UTC m=+149.531775463 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-npp4j" (UID: "a7c9b9f7-1d12-4e77-a47f-8cb601836611") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 18:45:06 crc kubenswrapper[4853]: I0127 18:45:06.575720 4853 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2027-01-27 18:40:05 +0000 UTC, rotation deadline is 2026-12-19 08:59:26.798929723 +0000 UTC Jan 27 18:45:06 crc kubenswrapper[4853]: I0127 18:45:06.575756 4853 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 7814h14m20.223177315s for next certificate rotation Jan 27 18:45:06 crc kubenswrapper[4853]: E0127 18:45:06.671697 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 18:45:07.171677037 +0000 UTC m=+149.634219920 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 18:45:06 crc kubenswrapper[4853]: I0127 18:45:06.672219 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 18:45:06 crc kubenswrapper[4853]: I0127 18:45:06.672436 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8bb82662-0739-432b-93b0-c5f1bc3ed268-utilities\") pod \"community-operators-8pfch\" (UID: \"8bb82662-0739-432b-93b0-c5f1bc3ed268\") " pod="openshift-marketplace/community-operators-8pfch" Jan 27 18:45:06 crc kubenswrapper[4853]: I0127 18:45:06.672490 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-npp4j\" (UID: \"a7c9b9f7-1d12-4e77-a47f-8cb601836611\") " pod="openshift-image-registry/image-registry-697d97f7c8-npp4j" Jan 27 18:45:06 crc kubenswrapper[4853]: I0127 18:45:06.672512 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8bb82662-0739-432b-93b0-c5f1bc3ed268-catalog-content\") pod \"community-operators-8pfch\" (UID: \"8bb82662-0739-432b-93b0-c5f1bc3ed268\") " pod="openshift-marketplace/community-operators-8pfch" Jan 27 18:45:06 crc kubenswrapper[4853]: I0127 18:45:06.672598 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ss6mn\" (UniqueName: \"kubernetes.io/projected/8bb82662-0739-432b-93b0-c5f1bc3ed268-kube-api-access-ss6mn\") pod \"community-operators-8pfch\" (UID: \"8bb82662-0739-432b-93b0-c5f1bc3ed268\") " pod="openshift-marketplace/community-operators-8pfch" Jan 27 18:45:06 crc kubenswrapper[4853]: I0127 18:45:06.673830 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8bb82662-0739-432b-93b0-c5f1bc3ed268-utilities\") pod \"community-operators-8pfch\" (UID: \"8bb82662-0739-432b-93b0-c5f1bc3ed268\") " pod="openshift-marketplace/community-operators-8pfch" Jan 27 18:45:06 crc kubenswrapper[4853]: I0127 18:45:06.673975 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8bb82662-0739-432b-93b0-c5f1bc3ed268-catalog-content\") pod \"community-operators-8pfch\" (UID: \"8bb82662-0739-432b-93b0-c5f1bc3ed268\") " pod="openshift-marketplace/community-operators-8pfch" Jan 27 18:45:06 crc kubenswrapper[4853]: E0127 18:45:06.674318 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-01-27 18:45:07.174301923 +0000 UTC m=+149.636844886 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-npp4j" (UID: "a7c9b9f7-1d12-4e77-a47f-8cb601836611") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 18:45:06 crc kubenswrapper[4853]: I0127 18:45:06.705684 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ss6mn\" (UniqueName: \"kubernetes.io/projected/8bb82662-0739-432b-93b0-c5f1bc3ed268-kube-api-access-ss6mn\") pod \"community-operators-8pfch\" (UID: \"8bb82662-0739-432b-93b0-c5f1bc3ed268\") " pod="openshift-marketplace/community-operators-8pfch" Jan 27 18:45:06 crc kubenswrapper[4853]: I0127 18:45:06.708991 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-6pflg" Jan 27 18:45:06 crc kubenswrapper[4853]: I0127 18:45:06.721497 4853 patch_prober.go:28] interesting pod/router-default-5444994796-fhmft container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 27 18:45:06 crc kubenswrapper[4853]: [-]has-synced failed: reason withheld Jan 27 18:45:06 crc kubenswrapper[4853]: [+]process-running ok Jan 27 18:45:06 crc kubenswrapper[4853]: healthz check failed Jan 27 18:45:06 crc kubenswrapper[4853]: I0127 18:45:06.721560 4853 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-fhmft" podUID="cb428fe7-0d8c-4f25-b377-880388daf6aa" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 27 18:45:06 crc kubenswrapper[4853]: I0127 18:45:06.756604 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-p74jj"] Jan 27 18:45:06 crc kubenswrapper[4853]: I0127 18:45:06.775173 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 18:45:06 crc kubenswrapper[4853]: E0127 18:45:06.775571 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 18:45:07.275542606 +0000 UTC m=+149.738085489 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 18:45:06 crc kubenswrapper[4853]: I0127 18:45:06.832554 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-8pfch" Jan 27 18:45:06 crc kubenswrapper[4853]: I0127 18:45:06.878817 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-npp4j\" (UID: \"a7c9b9f7-1d12-4e77-a47f-8cb601836611\") " pod="openshift-image-registry/image-registry-697d97f7c8-npp4j" Jan 27 18:45:06 crc kubenswrapper[4853]: E0127 18:45:06.879147 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 18:45:07.379135657 +0000 UTC m=+149.841678540 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-npp4j" (UID: "a7c9b9f7-1d12-4e77-a47f-8cb601836611") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 18:45:06 crc kubenswrapper[4853]: I0127 18:45:06.884378 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-ck67k"] Jan 27 18:45:06 crc kubenswrapper[4853]: I0127 18:45:06.980903 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 18:45:06 crc kubenswrapper[4853]: E0127 18:45:06.981369 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 18:45:07.481351698 +0000 UTC m=+149.943894581 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 18:45:07 crc kubenswrapper[4853]: I0127 18:45:07.082647 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-npp4j\" (UID: \"a7c9b9f7-1d12-4e77-a47f-8cb601836611\") " pod="openshift-image-registry/image-registry-697d97f7c8-npp4j" Jan 27 18:45:07 crc kubenswrapper[4853]: E0127 18:45:07.082979 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-01-27 18:45:07.582964862 +0000 UTC m=+150.045507755 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-npp4j" (UID: "a7c9b9f7-1d12-4e77-a47f-8cb601836611") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 18:45:07 crc kubenswrapper[4853]: I0127 18:45:07.169265 4853 generic.go:334] "Generic (PLEG): container finished" podID="41434dfd-3fc3-4184-a911-506620889ebe" containerID="08c14654c73a75d12afb5bd475fed280c17c7f14c57b1dc11239ecb9bd29aad6" exitCode=0 Jan 27 18:45:07 crc kubenswrapper[4853]: I0127 18:45:07.169585 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29492325-nbfpk" event={"ID":"41434dfd-3fc3-4184-a911-506620889ebe","Type":"ContainerDied","Data":"08c14654c73a75d12afb5bd475fed280c17c7f14c57b1dc11239ecb9bd29aad6"} Jan 27 18:45:07 crc kubenswrapper[4853]: I0127 18:45:07.171142 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"7bc3a61683e1b23e4f6f6cb2fafde447ad1250e28db728a208afa8e89590326f"} Jan 27 18:45:07 crc kubenswrapper[4853]: I0127 18:45:07.171162 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"5ccd2d3214a91238f47be50b4620c688de60fa87ed12b64ef26217047f37f991"} Jan 27 18:45:07 crc kubenswrapper[4853]: I0127 18:45:07.171738 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 18:45:07 crc kubenswrapper[4853]: I0127 18:45:07.185150 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 18:45:07 crc kubenswrapper[4853]: E0127 18:45:07.186095 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 18:45:07.686068629 +0000 UTC m=+150.148611572 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 18:45:07 crc kubenswrapper[4853]: I0127 18:45:07.193532 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"a094b467a71a10df27798747cbd79d25b5f97a60633c64293796be96a70ee2e9"} Jan 27 18:45:07 crc kubenswrapper[4853]: I0127 18:45:07.193574 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"692a4caa47b3f580286ce7f9ecffc09f1dd81c3f82d621db8e3ed9f434de6864"} Jan 27 18:45:07 crc kubenswrapper[4853]: I0127 18:45:07.206455 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"1671da84b42e66e82f3535b0a0d29a4937539402c1b22c6862ad8e4b589afedc"} Jan 27 18:45:07 crc kubenswrapper[4853]: I0127 18:45:07.206506 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"4e70fdb6dee449de4d9d663efd629799eddeed8048d5feff6dc5292fb4c7dfbd"} Jan 27 18:45:07 crc kubenswrapper[4853]: I0127 18:45:07.234600 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-6l6nh" event={"ID":"5a20718d-a359-4670-86a3-4f32a2b11f53","Type":"ContainerStarted","Data":"208ec4afe1776e8646dea77cd56e0044ccf0ac118bb2180be064f36e2c0f0e06"} Jan 27 18:45:07 crc kubenswrapper[4853]: I0127 18:45:07.237382 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ck67k" event={"ID":"bdf3b7ad-3545-4192-941a-862154002694","Type":"ContainerStarted","Data":"ba2560ce0a3001a0d45e9e989f35b0492ff706401996fd784bf1eddba3fac83f"} Jan 27 18:45:07 crc kubenswrapper[4853]: I0127 18:45:07.237407 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ck67k" event={"ID":"bdf3b7ad-3545-4192-941a-862154002694","Type":"ContainerStarted","Data":"8ecbe4e887e908e5eb23d03cb5955712fd6c1d752d2a0dab0142762d21752172"} Jan 27 18:45:07 crc kubenswrapper[4853]: I0127 18:45:07.239176 4853 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 27 18:45:07 crc kubenswrapper[4853]: I0127 18:45:07.239691 4853 generic.go:334] "Generic (PLEG): container finished" podID="81272aef-67fa-4c09-bf30-56fdfec7dd7b" containerID="2c4bfc0e0328c1a63153e81b4c89617ba5656571e1f804b867aa7ce4ce2d0f59" exitCode=0 Jan 27 18:45:07 crc kubenswrapper[4853]: I0127 18:45:07.240197 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-p74jj" event={"ID":"81272aef-67fa-4c09-bf30-56fdfec7dd7b","Type":"ContainerDied","Data":"2c4bfc0e0328c1a63153e81b4c89617ba5656571e1f804b867aa7ce4ce2d0f59"} Jan 27 18:45:07 
crc kubenswrapper[4853]: I0127 18:45:07.240219 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-p74jj" event={"ID":"81272aef-67fa-4c09-bf30-56fdfec7dd7b","Type":"ContainerStarted","Data":"433803f6544dd3c6fddfdb38079d3dae8f39d1bfdaba06144ef98b89b6c10cc6"} Jan 27 18:45:07 crc kubenswrapper[4853]: I0127 18:45:07.272622 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-6pflg"] Jan 27 18:45:07 crc kubenswrapper[4853]: I0127 18:45:07.274394 4853 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2026-01-27T18:45:06.545911306Z","Handler":null,"Name":""} Jan 27 18:45:07 crc kubenswrapper[4853]: I0127 18:45:07.288012 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-npp4j\" (UID: \"a7c9b9f7-1d12-4e77-a47f-8cb601836611\") " pod="openshift-image-registry/image-registry-697d97f7c8-npp4j" Jan 27 18:45:07 crc kubenswrapper[4853]: E0127 18:45:07.289864 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 18:45:07.789845835 +0000 UTC m=+150.252388808 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-npp4j" (UID: "a7c9b9f7-1d12-4e77-a47f-8cb601836611") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 18:45:07 crc kubenswrapper[4853]: I0127 18:45:07.306161 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-6l6nh" podStartSLOduration=11.306134805 podStartE2EDuration="11.306134805s" podCreationTimestamp="2026-01-27 18:44:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:45:07.298250047 +0000 UTC m=+149.760792960" watchObservedRunningTime="2026-01-27 18:45:07.306134805 +0000 UTC m=+149.768677738" Jan 27 18:45:07 crc kubenswrapper[4853]: I0127 18:45:07.307316 4853 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0 Jan 27 18:45:07 crc kubenswrapper[4853]: I0127 18:45:07.307358 4853 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock Jan 27 18:45:07 crc kubenswrapper[4853]: I0127 18:45:07.331001 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-8pfch"] Jan 27 18:45:07 crc kubenswrapper[4853]: I0127 18:45:07.388740 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 18:45:07 crc kubenswrapper[4853]: I0127 18:45:07.415169 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 27 18:45:07 crc kubenswrapper[4853]: I0127 18:45:07.491846 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-npp4j\" (UID: \"a7c9b9f7-1d12-4e77-a47f-8cb601836611\") " pod="openshift-image-registry/image-registry-697d97f7c8-npp4j" Jan 27 18:45:07 crc kubenswrapper[4853]: I0127 18:45:07.495499 4853 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Jan 27 18:45:07 crc kubenswrapper[4853]: I0127 18:45:07.495542 4853 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-npp4j\" (UID: \"a7c9b9f7-1d12-4e77-a47f-8cb601836611\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-npp4j" Jan 27 18:45:07 crc kubenswrapper[4853]: I0127 18:45:07.559343 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-npp4j\" (UID: \"a7c9b9f7-1d12-4e77-a47f-8cb601836611\") " pod="openshift-image-registry/image-registry-697d97f7c8-npp4j" Jan 27 18:45:07 crc kubenswrapper[4853]: I0127 18:45:07.603540 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-npp4j" Jan 27 18:45:07 crc kubenswrapper[4853]: I0127 18:45:07.733690 4853 patch_prober.go:28] interesting pod/router-default-5444994796-fhmft container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 27 18:45:07 crc kubenswrapper[4853]: [-]has-synced failed: reason withheld Jan 27 18:45:07 crc kubenswrapper[4853]: [+]process-running ok Jan 27 18:45:07 crc kubenswrapper[4853]: healthz check failed Jan 27 18:45:07 crc kubenswrapper[4853]: I0127 18:45:07.733750 4853 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-fhmft" podUID="cb428fe7-0d8c-4f25-b377-880388daf6aa" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 27 18:45:07 crc kubenswrapper[4853]: I0127 18:45:07.929207 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-npp4j"] Jan 27 18:45:07 crc kubenswrapper[4853]: W0127 18:45:07.940960 4853 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda7c9b9f7_1d12_4e77_a47f_8cb601836611.slice/crio-c825e28713bdf55cd460511837a7f50426c0c510d1f5ad57b76ca682afa2fe81 WatchSource:0}: Error finding container c825e28713bdf55cd460511837a7f50426c0c510d1f5ad57b76ca682afa2fe81: Status 404 returned error can't find the container with id c825e28713bdf55cd460511837a7f50426c0c510d1f5ad57b76ca682afa2fe81 Jan 27 18:45:08 crc kubenswrapper[4853]: I0127 18:45:08.055283 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-chjrw"] Jan 27 18:45:08 crc kubenswrapper[4853]: I0127 18:45:08.056496 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-chjrw" Jan 27 18:45:08 crc kubenswrapper[4853]: I0127 18:45:08.061561 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 27 18:45:08 crc kubenswrapper[4853]: I0127 18:45:08.100017 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lfx28\" (UniqueName: \"kubernetes.io/projected/d6cf1fd9-633e-45c8-b007-051a740ff435-kube-api-access-lfx28\") pod \"redhat-marketplace-chjrw\" (UID: \"d6cf1fd9-633e-45c8-b007-051a740ff435\") " pod="openshift-marketplace/redhat-marketplace-chjrw" Jan 27 18:45:08 crc kubenswrapper[4853]: I0127 18:45:08.100084 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d6cf1fd9-633e-45c8-b007-051a740ff435-utilities\") pod \"redhat-marketplace-chjrw\" (UID: \"d6cf1fd9-633e-45c8-b007-051a740ff435\") " pod="openshift-marketplace/redhat-marketplace-chjrw" Jan 27 18:45:08 crc kubenswrapper[4853]: I0127 18:45:08.100134 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d6cf1fd9-633e-45c8-b007-051a740ff435-catalog-content\") pod \"redhat-marketplace-chjrw\" (UID: \"d6cf1fd9-633e-45c8-b007-051a740ff435\") " pod="openshift-marketplace/redhat-marketplace-chjrw" Jan 27 18:45:08 crc kubenswrapper[4853]: I0127 18:45:08.113782 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-chjrw"] Jan 27 18:45:08 crc kubenswrapper[4853]: I0127 18:45:08.124238 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes" Jan 27 18:45:08 crc kubenswrapper[4853]: I0127 18:45:08.201515 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d6cf1fd9-633e-45c8-b007-051a740ff435-utilities\") pod \"redhat-marketplace-chjrw\" (UID: \"d6cf1fd9-633e-45c8-b007-051a740ff435\") " pod="openshift-marketplace/redhat-marketplace-chjrw" Jan 27 18:45:08 crc kubenswrapper[4853]: I0127 18:45:08.201570 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d6cf1fd9-633e-45c8-b007-051a740ff435-catalog-content\") pod \"redhat-marketplace-chjrw\" (UID: \"d6cf1fd9-633e-45c8-b007-051a740ff435\") " pod="openshift-marketplace/redhat-marketplace-chjrw" Jan 27 18:45:08 crc kubenswrapper[4853]: I0127 18:45:08.201647 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lfx28\" (UniqueName: \"kubernetes.io/projected/d6cf1fd9-633e-45c8-b007-051a740ff435-kube-api-access-lfx28\") pod \"redhat-marketplace-chjrw\" (UID: \"d6cf1fd9-633e-45c8-b007-051a740ff435\") " pod="openshift-marketplace/redhat-marketplace-chjrw" Jan 27 18:45:08 crc kubenswrapper[4853]: I0127 18:45:08.202322 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d6cf1fd9-633e-45c8-b007-051a740ff435-utilities\") pod \"redhat-marketplace-chjrw\" (UID: \"d6cf1fd9-633e-45c8-b007-051a740ff435\") " pod="openshift-marketplace/redhat-marketplace-chjrw" Jan 27 18:45:08 crc kubenswrapper[4853]: I0127 
18:45:08.202528 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d6cf1fd9-633e-45c8-b007-051a740ff435-catalog-content\") pod \"redhat-marketplace-chjrw\" (UID: \"d6cf1fd9-633e-45c8-b007-051a740ff435\") " pod="openshift-marketplace/redhat-marketplace-chjrw" Jan 27 18:45:08 crc kubenswrapper[4853]: I0127 18:45:08.226513 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lfx28\" (UniqueName: \"kubernetes.io/projected/d6cf1fd9-633e-45c8-b007-051a740ff435-kube-api-access-lfx28\") pod \"redhat-marketplace-chjrw\" (UID: \"d6cf1fd9-633e-45c8-b007-051a740ff435\") " pod="openshift-marketplace/redhat-marketplace-chjrw" Jan 27 18:45:08 crc kubenswrapper[4853]: I0127 18:45:08.245974 4853 generic.go:334] "Generic (PLEG): container finished" podID="a442dc5b-e830-490b-8ad1-6a6606fea52b" containerID="066761b9a06cda8570b3019548ff33ae9591182ab7d99cd6a3cb280197168abe" exitCode=0 Jan 27 18:45:08 crc kubenswrapper[4853]: I0127 18:45:08.246040 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6pflg" event={"ID":"a442dc5b-e830-490b-8ad1-6a6606fea52b","Type":"ContainerDied","Data":"066761b9a06cda8570b3019548ff33ae9591182ab7d99cd6a3cb280197168abe"} Jan 27 18:45:08 crc kubenswrapper[4853]: I0127 18:45:08.246075 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6pflg" event={"ID":"a442dc5b-e830-490b-8ad1-6a6606fea52b","Type":"ContainerStarted","Data":"c15c999011ede48e043ec36e6f37d0206b69f723c7c2857166bab5a232406b96"} Jan 27 18:45:08 crc kubenswrapper[4853]: I0127 18:45:08.249875 4853 generic.go:334] "Generic (PLEG): container finished" podID="bdf3b7ad-3545-4192-941a-862154002694" containerID="ba2560ce0a3001a0d45e9e989f35b0492ff706401996fd784bf1eddba3fac83f" exitCode=0 Jan 27 18:45:08 crc kubenswrapper[4853]: I0127 18:45:08.249907 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ck67k" event={"ID":"bdf3b7ad-3545-4192-941a-862154002694","Type":"ContainerDied","Data":"ba2560ce0a3001a0d45e9e989f35b0492ff706401996fd784bf1eddba3fac83f"} Jan 27 18:45:08 crc kubenswrapper[4853]: I0127 18:45:08.252888 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-npp4j" event={"ID":"a7c9b9f7-1d12-4e77-a47f-8cb601836611","Type":"ContainerStarted","Data":"b03cfc440ce8b5d3556f49c2bf0afde636559945bdd8eb455d4744b9d20b86cc"} Jan 27 18:45:08 crc kubenswrapper[4853]: I0127 18:45:08.252931 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-npp4j" event={"ID":"a7c9b9f7-1d12-4e77-a47f-8cb601836611","Type":"ContainerStarted","Data":"c825e28713bdf55cd460511837a7f50426c0c510d1f5ad57b76ca682afa2fe81"} Jan 27 18:45:08 crc kubenswrapper[4853]: I0127 18:45:08.253628 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-npp4j" Jan 27 18:45:08 crc kubenswrapper[4853]: I0127 18:45:08.255265 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 27 18:45:08 crc kubenswrapper[4853]: I0127 18:45:08.255776 4853 generic.go:334] "Generic (PLEG): container finished" podID="8bb82662-0739-432b-93b0-c5f1bc3ed268" containerID="517754bd0b507f4f4bd3ef8a74b4be1f376c19629f4f376548b119c6d1b1ef97" exitCode=0 Jan 27 18:45:08 crc kubenswrapper[4853]: 
I0127 18:45:08.257283 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8pfch" event={"ID":"8bb82662-0739-432b-93b0-c5f1bc3ed268","Type":"ContainerDied","Data":"517754bd0b507f4f4bd3ef8a74b4be1f376c19629f4f376548b119c6d1b1ef97"} Jan 27 18:45:08 crc kubenswrapper[4853]: I0127 18:45:08.257309 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8pfch" event={"ID":"8bb82662-0739-432b-93b0-c5f1bc3ed268","Type":"ContainerStarted","Data":"9c89fa2d24305310c99dfe064228cd2a51eebee0d40d21438078536660cc600d"} Jan 27 18:45:08 crc kubenswrapper[4853]: I0127 18:45:08.257385 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 27 18:45:08 crc kubenswrapper[4853]: I0127 18:45:08.267849 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n" Jan 27 18:45:08 crc kubenswrapper[4853]: I0127 18:45:08.268176 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Jan 27 18:45:08 crc kubenswrapper[4853]: I0127 18:45:08.269941 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 27 18:45:08 crc kubenswrapper[4853]: I0127 18:45:08.313477 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/bf10ee07-95a0-407c-9660-242d8fd5bbd9-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"bf10ee07-95a0-407c-9660-242d8fd5bbd9\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 27 18:45:08 crc kubenswrapper[4853]: I0127 18:45:08.318720 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/bf10ee07-95a0-407c-9660-242d8fd5bbd9-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"bf10ee07-95a0-407c-9660-242d8fd5bbd9\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 27 18:45:08 crc kubenswrapper[4853]: I0127 18:45:08.357234 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-npp4j" podStartSLOduration=125.357217922 podStartE2EDuration="2m5.357217922s" podCreationTimestamp="2026-01-27 18:43:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:45:08.35682059 +0000 UTC m=+150.819363473" watchObservedRunningTime="2026-01-27 18:45:08.357217922 +0000 UTC m=+150.819760805" Jan 27 18:45:08 crc kubenswrapper[4853]: I0127 18:45:08.374395 4853 util.go:30] "No sandbox for pod can be found. 
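Need to start a new one" pod="openshift-marketplace/redhat-marketplace-chjrw"

The pod_startup_latency_tracker entry above reduces to simple time arithmetic: both pulling timestamps are the zero time (no image pull was observed), so podStartSLOduration is just watchObservedRunningTime minus podCreationTimestamp. A few lines of Go reproduce the 125.357217922s figure, with the timestamps copied from the log:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        // podCreationTimestamp and watchObservedRunningTime from the log line above.
        created, err := time.Parse(time.RFC3339, "2026-01-27T18:43:03Z")
        if err != nil {
            panic(err)
        }
        observed, err := time.Parse(time.RFC3339Nano, "2026-01-27T18:45:08.357217922Z")
        if err != nil {
            panic(err)
        }
        // Prints 2m5.357217922s, matching podStartE2EDuration="2m5.357217922s".
        fmt.Println(observed.Sub(created))
    }

The two-minute figure is large only because the image-registry pod was created at 18:43:03 and had to wait for its PVC; the kubelet tracks it against the pod start SLO regardless of the cause.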
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-chjrw" Jan 27 18:45:08 crc kubenswrapper[4853]: I0127 18:45:08.420771 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/bf10ee07-95a0-407c-9660-242d8fd5bbd9-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"bf10ee07-95a0-407c-9660-242d8fd5bbd9\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 27 18:45:08 crc kubenswrapper[4853]: I0127 18:45:08.420837 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/bf10ee07-95a0-407c-9660-242d8fd5bbd9-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"bf10ee07-95a0-407c-9660-242d8fd5bbd9\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 27 18:45:08 crc kubenswrapper[4853]: I0127 18:45:08.421834 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/bf10ee07-95a0-407c-9660-242d8fd5bbd9-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"bf10ee07-95a0-407c-9660-242d8fd5bbd9\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 27 18:45:08 crc kubenswrapper[4853]: I0127 18:45:08.441410 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/bf10ee07-95a0-407c-9660-242d8fd5bbd9-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"bf10ee07-95a0-407c-9660-242d8fd5bbd9\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 27 18:45:08 crc kubenswrapper[4853]: I0127 18:45:08.455378 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-vwxfb"] Jan 27 18:45:08 crc kubenswrapper[4853]: I0127 18:45:08.456333 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vwxfb" Jan 27 18:45:08 crc kubenswrapper[4853]: I0127 18:45:08.473739 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-vwxfb"] Jan 27 18:45:08 crc kubenswrapper[4853]: I0127 18:45:08.617410 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 27 18:45:08 crc kubenswrapper[4853]: I0127 18:45:08.624033 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aa41f430-60c0-4d83-96bc-ac2a6aa2dde1-catalog-content\") pod \"redhat-marketplace-vwxfb\" (UID: \"aa41f430-60c0-4d83-96bc-ac2a6aa2dde1\") " pod="openshift-marketplace/redhat-marketplace-vwxfb" Jan 27 18:45:08 crc kubenswrapper[4853]: I0127 18:45:08.624101 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d6r8w\" (UniqueName: \"kubernetes.io/projected/aa41f430-60c0-4d83-96bc-ac2a6aa2dde1-kube-api-access-d6r8w\") pod \"redhat-marketplace-vwxfb\" (UID: \"aa41f430-60c0-4d83-96bc-ac2a6aa2dde1\") " pod="openshift-marketplace/redhat-marketplace-vwxfb" Jan 27 18:45:08 crc kubenswrapper[4853]: I0127 18:45:08.624135 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aa41f430-60c0-4d83-96bc-ac2a6aa2dde1-utilities\") pod \"redhat-marketplace-vwxfb\" (UID: \"aa41f430-60c0-4d83-96bc-ac2a6aa2dde1\") " pod="openshift-marketplace/redhat-marketplace-vwxfb" Jan 27 18:45:08 crc kubenswrapper[4853]: I0127 18:45:08.722353 4853 patch_prober.go:28] interesting pod/router-default-5444994796-fhmft container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 27 18:45:08 crc kubenswrapper[4853]: [-]has-synced failed: reason withheld Jan 27 18:45:08 crc kubenswrapper[4853]: [+]process-running ok Jan 27 18:45:08 crc kubenswrapper[4853]: healthz check failed Jan 27 18:45:08 crc kubenswrapper[4853]: I0127 18:45:08.722649 4853 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-fhmft" podUID="cb428fe7-0d8c-4f25-b377-880388daf6aa" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 27 18:45:08 crc kubenswrapper[4853]: I0127 18:45:08.725731 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d6r8w\" (UniqueName: \"kubernetes.io/projected/aa41f430-60c0-4d83-96bc-ac2a6aa2dde1-kube-api-access-d6r8w\") pod \"redhat-marketplace-vwxfb\" (UID: \"aa41f430-60c0-4d83-96bc-ac2a6aa2dde1\") " pod="openshift-marketplace/redhat-marketplace-vwxfb" Jan 27 18:45:08 crc kubenswrapper[4853]: I0127 18:45:08.726074 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aa41f430-60c0-4d83-96bc-ac2a6aa2dde1-utilities\") pod \"redhat-marketplace-vwxfb\" (UID: \"aa41f430-60c0-4d83-96bc-ac2a6aa2dde1\") " pod="openshift-marketplace/redhat-marketplace-vwxfb" Jan 27 18:45:08 crc kubenswrapper[4853]: I0127 18:45:08.726628 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aa41f430-60c0-4d83-96bc-ac2a6aa2dde1-utilities\") pod \"redhat-marketplace-vwxfb\" (UID: \"aa41f430-60c0-4d83-96bc-ac2a6aa2dde1\") " pod="openshift-marketplace/redhat-marketplace-vwxfb" Jan 27 18:45:08 crc kubenswrapper[4853]: I0127 18:45:08.726739 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/aa41f430-60c0-4d83-96bc-ac2a6aa2dde1-catalog-content\") pod \"redhat-marketplace-vwxfb\" (UID: \"aa41f430-60c0-4d83-96bc-ac2a6aa2dde1\") " pod="openshift-marketplace/redhat-marketplace-vwxfb" Jan 27 18:45:08 crc kubenswrapper[4853]: I0127 18:45:08.726978 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aa41f430-60c0-4d83-96bc-ac2a6aa2dde1-catalog-content\") pod \"redhat-marketplace-vwxfb\" (UID: \"aa41f430-60c0-4d83-96bc-ac2a6aa2dde1\") " pod="openshift-marketplace/redhat-marketplace-vwxfb" Jan 27 18:45:08 crc kubenswrapper[4853]: I0127 18:45:08.746271 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492325-nbfpk" Jan 27 18:45:08 crc kubenswrapper[4853]: I0127 18:45:08.752646 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d6r8w\" (UniqueName: \"kubernetes.io/projected/aa41f430-60c0-4d83-96bc-ac2a6aa2dde1-kube-api-access-d6r8w\") pod \"redhat-marketplace-vwxfb\" (UID: \"aa41f430-60c0-4d83-96bc-ac2a6aa2dde1\") " pod="openshift-marketplace/redhat-marketplace-vwxfb" Jan 27 18:45:08 crc kubenswrapper[4853]: I0127 18:45:08.776177 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vwxfb" Jan 27 18:45:08 crc kubenswrapper[4853]: I0127 18:45:08.929781 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/41434dfd-3fc3-4184-a911-506620889ebe-config-volume\") pod \"41434dfd-3fc3-4184-a911-506620889ebe\" (UID: \"41434dfd-3fc3-4184-a911-506620889ebe\") " Jan 27 18:45:08 crc kubenswrapper[4853]: I0127 18:45:08.929830 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/41434dfd-3fc3-4184-a911-506620889ebe-secret-volume\") pod \"41434dfd-3fc3-4184-a911-506620889ebe\" (UID: \"41434dfd-3fc3-4184-a911-506620889ebe\") " Jan 27 18:45:08 crc kubenswrapper[4853]: I0127 18:45:08.929865 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kczwc\" (UniqueName: \"kubernetes.io/projected/41434dfd-3fc3-4184-a911-506620889ebe-kube-api-access-kczwc\") pod \"41434dfd-3fc3-4184-a911-506620889ebe\" (UID: \"41434dfd-3fc3-4184-a911-506620889ebe\") " Jan 27 18:45:08 crc kubenswrapper[4853]: I0127 18:45:08.932601 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/41434dfd-3fc3-4184-a911-506620889ebe-config-volume" (OuterVolumeSpecName: "config-volume") pod "41434dfd-3fc3-4184-a911-506620889ebe" (UID: "41434dfd-3fc3-4184-a911-506620889ebe"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:45:08 crc kubenswrapper[4853]: I0127 18:45:08.939953 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/41434dfd-3fc3-4184-a911-506620889ebe-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "41434dfd-3fc3-4184-a911-506620889ebe" (UID: "41434dfd-3fc3-4184-a911-506620889ebe"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:45:08 crc kubenswrapper[4853]: I0127 18:45:08.940745 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/41434dfd-3fc3-4184-a911-506620889ebe-kube-api-access-kczwc" (OuterVolumeSpecName: "kube-api-access-kczwc") pod "41434dfd-3fc3-4184-a911-506620889ebe" (UID: "41434dfd-3fc3-4184-a911-506620889ebe"). InnerVolumeSpecName "kube-api-access-kczwc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:45:08 crc kubenswrapper[4853]: I0127 18:45:08.947726 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 27 18:45:08 crc kubenswrapper[4853]: I0127 18:45:08.990290 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-chjrw"] Jan 27 18:45:09 crc kubenswrapper[4853]: I0127 18:45:09.033054 4853 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/41434dfd-3fc3-4184-a911-506620889ebe-config-volume\") on node \"crc\" DevicePath \"\"" Jan 27 18:45:09 crc kubenswrapper[4853]: I0127 18:45:09.033086 4853 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/41434dfd-3fc3-4184-a911-506620889ebe-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 27 18:45:09 crc kubenswrapper[4853]: I0127 18:45:09.033100 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kczwc\" (UniqueName: \"kubernetes.io/projected/41434dfd-3fc3-4184-a911-506620889ebe-kube-api-access-kczwc\") on node \"crc\" DevicePath \"\"" Jan 27 18:45:09 crc kubenswrapper[4853]: I0127 18:45:09.062994 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-vz77p"] Jan 27 18:45:09 crc kubenswrapper[4853]: E0127 18:45:09.063421 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="41434dfd-3fc3-4184-a911-506620889ebe" containerName="collect-profiles" Jan 27 18:45:09 crc kubenswrapper[4853]: I0127 18:45:09.063436 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="41434dfd-3fc3-4184-a911-506620889ebe" containerName="collect-profiles" Jan 27 18:45:09 crc kubenswrapper[4853]: I0127 18:45:09.063585 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="41434dfd-3fc3-4184-a911-506620889ebe" containerName="collect-profiles" Jan 27 18:45:09 crc kubenswrapper[4853]: I0127 18:45:09.068271 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-vz77p" Jan 27 18:45:09 crc kubenswrapper[4853]: I0127 18:45:09.068932 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-vz77p"] Jan 27 18:45:09 crc kubenswrapper[4853]: I0127 18:45:09.070892 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 27 18:45:09 crc kubenswrapper[4853]: I0127 18:45:09.071445 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-ltskb" Jan 27 18:45:09 crc kubenswrapper[4853]: I0127 18:45:09.090405 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-ltskb" Jan 27 18:45:09 crc kubenswrapper[4853]: I0127 18:45:09.102869 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-74rmt" Jan 27 18:45:09 crc kubenswrapper[4853]: I0127 18:45:09.115872 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-74rmt" Jan 27 18:45:09 crc kubenswrapper[4853]: I0127 18:45:09.143687 4853 patch_prober.go:28] interesting pod/downloads-7954f5f757-9gqxt container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" start-of-body= Jan 27 18:45:09 crc kubenswrapper[4853]: I0127 18:45:09.144004 4853 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-9gqxt" podUID="be5a36ff-f665-4468-b7ae-8a443f0164e8" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" Jan 27 18:45:09 crc kubenswrapper[4853]: I0127 18:45:09.143700 4853 patch_prober.go:28] interesting pod/downloads-7954f5f757-9gqxt container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" start-of-body= Jan 27 18:45:09 crc kubenswrapper[4853]: I0127 18:45:09.144243 4853 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-9gqxt" podUID="be5a36ff-f665-4468-b7ae-8a443f0164e8" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" Jan 27 18:45:09 crc kubenswrapper[4853]: I0127 18:45:09.159069 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-9vd4d" Jan 27 18:45:09 crc kubenswrapper[4853]: I0127 18:45:09.159143 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-9vd4d" Jan 27 18:45:09 crc kubenswrapper[4853]: I0127 18:45:09.192072 4853 patch_prober.go:28] interesting pod/console-f9d7485db-9vd4d container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.13:8443/health\": dial tcp 10.217.0.13:8443: connect: connection refused" start-of-body= Jan 27 18:45:09 crc kubenswrapper[4853]: I0127 18:45:09.192139 4853 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-9vd4d" podUID="2bd6e097-af15-41a1-9ab2-a4e79adef815" containerName="console" 
probeResult="failure" output="Get \"https://10.217.0.13:8443/health\": dial tcp 10.217.0.13:8443: connect: connection refused" Jan 27 18:45:09 crc kubenswrapper[4853]: I0127 18:45:09.217747 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-tqzdv" Jan 27 18:45:09 crc kubenswrapper[4853]: I0127 18:45:09.230028 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-tqzdv" Jan 27 18:45:09 crc kubenswrapper[4853]: I0127 18:45:09.234677 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h9694\" (UniqueName: \"kubernetes.io/projected/8fc489cf-508e-445d-ba19-4aeea8afee8c-kube-api-access-h9694\") pod \"redhat-operators-vz77p\" (UID: \"8fc489cf-508e-445d-ba19-4aeea8afee8c\") " pod="openshift-marketplace/redhat-operators-vz77p" Jan 27 18:45:09 crc kubenswrapper[4853]: I0127 18:45:09.234796 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8fc489cf-508e-445d-ba19-4aeea8afee8c-catalog-content\") pod \"redhat-operators-vz77p\" (UID: \"8fc489cf-508e-445d-ba19-4aeea8afee8c\") " pod="openshift-marketplace/redhat-operators-vz77p" Jan 27 18:45:09 crc kubenswrapper[4853]: I0127 18:45:09.234928 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8fc489cf-508e-445d-ba19-4aeea8afee8c-utilities\") pod \"redhat-operators-vz77p\" (UID: \"8fc489cf-508e-445d-ba19-4aeea8afee8c\") " pod="openshift-marketplace/redhat-operators-vz77p" Jan 27 18:45:09 crc kubenswrapper[4853]: I0127 18:45:09.277553 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-vwxfb"] Jan 27 18:45:09 crc kubenswrapper[4853]: I0127 18:45:09.340176 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-chjrw" event={"ID":"d6cf1fd9-633e-45c8-b007-051a740ff435","Type":"ContainerStarted","Data":"5ce35b3994b69b3455867bba37ae487fecb7fda1c38151b594d22c64ea8109de"} Jan 27 18:45:09 crc kubenswrapper[4853]: I0127 18:45:09.340221 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-chjrw" event={"ID":"d6cf1fd9-633e-45c8-b007-051a740ff435","Type":"ContainerStarted","Data":"5745beb1602d8b2dc1452626fbe6201ccff605c2aae47216fd5876e8cea8ad3e"} Jan 27 18:45:09 crc kubenswrapper[4853]: I0127 18:45:09.341392 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8fc489cf-508e-445d-ba19-4aeea8afee8c-utilities\") pod \"redhat-operators-vz77p\" (UID: \"8fc489cf-508e-445d-ba19-4aeea8afee8c\") " pod="openshift-marketplace/redhat-operators-vz77p" Jan 27 18:45:09 crc kubenswrapper[4853]: I0127 18:45:09.341533 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h9694\" (UniqueName: \"kubernetes.io/projected/8fc489cf-508e-445d-ba19-4aeea8afee8c-kube-api-access-h9694\") pod \"redhat-operators-vz77p\" (UID: \"8fc489cf-508e-445d-ba19-4aeea8afee8c\") " pod="openshift-marketplace/redhat-operators-vz77p" Jan 27 18:45:09 crc kubenswrapper[4853]: I0127 18:45:09.341579 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/8fc489cf-508e-445d-ba19-4aeea8afee8c-catalog-content\") pod \"redhat-operators-vz77p\" (UID: \"8fc489cf-508e-445d-ba19-4aeea8afee8c\") " pod="openshift-marketplace/redhat-operators-vz77p" Jan 27 18:45:09 crc kubenswrapper[4853]: I0127 18:45:09.343645 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8fc489cf-508e-445d-ba19-4aeea8afee8c-utilities\") pod \"redhat-operators-vz77p\" (UID: \"8fc489cf-508e-445d-ba19-4aeea8afee8c\") " pod="openshift-marketplace/redhat-operators-vz77p" Jan 27 18:45:09 crc kubenswrapper[4853]: I0127 18:45:09.344430 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8fc489cf-508e-445d-ba19-4aeea8afee8c-catalog-content\") pod \"redhat-operators-vz77p\" (UID: \"8fc489cf-508e-445d-ba19-4aeea8afee8c\") " pod="openshift-marketplace/redhat-operators-vz77p" Jan 27 18:45:09 crc kubenswrapper[4853]: I0127 18:45:09.368031 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29492325-nbfpk" event={"ID":"41434dfd-3fc3-4184-a911-506620889ebe","Type":"ContainerDied","Data":"70a5250002bbbf5d31662f298d2c7365b6767f01fa860c62f6e69a519d5f1b5c"} Jan 27 18:45:09 crc kubenswrapper[4853]: I0127 18:45:09.368109 4853 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="70a5250002bbbf5d31662f298d2c7365b6767f01fa860c62f6e69a519d5f1b5c" Jan 27 18:45:09 crc kubenswrapper[4853]: I0127 18:45:09.368243 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492325-nbfpk" Jan 27 18:45:09 crc kubenswrapper[4853]: I0127 18:45:09.400379 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"bf10ee07-95a0-407c-9660-242d8fd5bbd9","Type":"ContainerStarted","Data":"36591cce9a22bea00eff2d3b8498a058332311699e9de9776dcca1884f66ba39"} Jan 27 18:45:09 crc kubenswrapper[4853]: W0127 18:45:09.403533 4853 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podaa41f430_60c0_4d83_96bc_ac2a6aa2dde1.slice/crio-45516a61e3397a28ca9d6b7c6d6f6e6f1e033e92f25e63c922b3c800e620cafc WatchSource:0}: Error finding container 45516a61e3397a28ca9d6b7c6d6f6e6f1e033e92f25e63c922b3c800e620cafc: Status 404 returned error can't find the container with id 45516a61e3397a28ca9d6b7c6d6f6e6f1e033e92f25e63c922b3c800e620cafc Jan 27 18:45:09 crc kubenswrapper[4853]: I0127 18:45:09.414034 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h9694\" (UniqueName: \"kubernetes.io/projected/8fc489cf-508e-445d-ba19-4aeea8afee8c-kube-api-access-h9694\") pod \"redhat-operators-vz77p\" (UID: \"8fc489cf-508e-445d-ba19-4aeea8afee8c\") " pod="openshift-marketplace/redhat-operators-vz77p" Jan 27 18:45:09 crc kubenswrapper[4853]: I0127 18:45:09.420196 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-vz77p" Jan 27 18:45:09 crc kubenswrapper[4853]: I0127 18:45:09.497449 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-752jj"] Jan 27 18:45:09 crc kubenswrapper[4853]: I0127 18:45:09.498504 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-752jj" Jan 27 18:45:09 crc kubenswrapper[4853]: I0127 18:45:09.540950 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-752jj"] Jan 27 18:45:09 crc kubenswrapper[4853]: I0127 18:45:09.645782 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5b61ecec-2b42-40ef-b2c5-d719cc45ab64-utilities\") pod \"redhat-operators-752jj\" (UID: \"5b61ecec-2b42-40ef-b2c5-d719cc45ab64\") " pod="openshift-marketplace/redhat-operators-752jj" Jan 27 18:45:09 crc kubenswrapper[4853]: I0127 18:45:09.645876 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sptgm\" (UniqueName: \"kubernetes.io/projected/5b61ecec-2b42-40ef-b2c5-d719cc45ab64-kube-api-access-sptgm\") pod \"redhat-operators-752jj\" (UID: \"5b61ecec-2b42-40ef-b2c5-d719cc45ab64\") " pod="openshift-marketplace/redhat-operators-752jj" Jan 27 18:45:09 crc kubenswrapper[4853]: I0127 18:45:09.645897 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5b61ecec-2b42-40ef-b2c5-d719cc45ab64-catalog-content\") pod \"redhat-operators-752jj\" (UID: \"5b61ecec-2b42-40ef-b2c5-d719cc45ab64\") " pod="openshift-marketplace/redhat-operators-752jj" Jan 27 18:45:09 crc kubenswrapper[4853]: I0127 18:45:09.715169 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-fhmft" Jan 27 18:45:09 crc kubenswrapper[4853]: I0127 18:45:09.719715 4853 patch_prober.go:28] interesting pod/router-default-5444994796-fhmft container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 27 18:45:09 crc kubenswrapper[4853]: [-]has-synced failed: reason withheld Jan 27 18:45:09 crc kubenswrapper[4853]: [+]process-running ok Jan 27 18:45:09 crc kubenswrapper[4853]: healthz check failed Jan 27 18:45:09 crc kubenswrapper[4853]: I0127 18:45:09.719774 4853 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-fhmft" podUID="cb428fe7-0d8c-4f25-b377-880388daf6aa" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 27 18:45:09 crc kubenswrapper[4853]: I0127 18:45:09.736641 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-bxk4b" Jan 27 18:45:09 crc kubenswrapper[4853]: I0127 18:45:09.746836 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sptgm\" (UniqueName: \"kubernetes.io/projected/5b61ecec-2b42-40ef-b2c5-d719cc45ab64-kube-api-access-sptgm\") pod \"redhat-operators-752jj\" (UID: \"5b61ecec-2b42-40ef-b2c5-d719cc45ab64\") " pod="openshift-marketplace/redhat-operators-752jj" Jan 27 18:45:09 crc kubenswrapper[4853]: I0127 18:45:09.746881 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5b61ecec-2b42-40ef-b2c5-d719cc45ab64-catalog-content\") pod \"redhat-operators-752jj\" (UID: \"5b61ecec-2b42-40ef-b2c5-d719cc45ab64\") " pod="openshift-marketplace/redhat-operators-752jj" Jan 27 18:45:09 crc kubenswrapper[4853]: I0127 18:45:09.746932 
4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5b61ecec-2b42-40ef-b2c5-d719cc45ab64-utilities\") pod \"redhat-operators-752jj\" (UID: \"5b61ecec-2b42-40ef-b2c5-d719cc45ab64\") " pod="openshift-marketplace/redhat-operators-752jj" Jan 27 18:45:09 crc kubenswrapper[4853]: I0127 18:45:09.747662 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5b61ecec-2b42-40ef-b2c5-d719cc45ab64-utilities\") pod \"redhat-operators-752jj\" (UID: \"5b61ecec-2b42-40ef-b2c5-d719cc45ab64\") " pod="openshift-marketplace/redhat-operators-752jj" Jan 27 18:45:09 crc kubenswrapper[4853]: I0127 18:45:09.773138 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sptgm\" (UniqueName: \"kubernetes.io/projected/5b61ecec-2b42-40ef-b2c5-d719cc45ab64-kube-api-access-sptgm\") pod \"redhat-operators-752jj\" (UID: \"5b61ecec-2b42-40ef-b2c5-d719cc45ab64\") " pod="openshift-marketplace/redhat-operators-752jj" Jan 27 18:45:09 crc kubenswrapper[4853]: I0127 18:45:09.794629 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5b61ecec-2b42-40ef-b2c5-d719cc45ab64-catalog-content\") pod \"redhat-operators-752jj\" (UID: \"5b61ecec-2b42-40ef-b2c5-d719cc45ab64\") " pod="openshift-marketplace/redhat-operators-752jj" Jan 27 18:45:09 crc kubenswrapper[4853]: I0127 18:45:09.869409 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-752jj" Jan 27 18:45:09 crc kubenswrapper[4853]: I0127 18:45:09.895778 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-vz77p"] Jan 27 18:45:10 crc kubenswrapper[4853]: I0127 18:45:10.181407 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-752jj"] Jan 27 18:45:10 crc kubenswrapper[4853]: I0127 18:45:10.410173 4853 generic.go:334] "Generic (PLEG): container finished" podID="d6cf1fd9-633e-45c8-b007-051a740ff435" containerID="5ce35b3994b69b3455867bba37ae487fecb7fda1c38151b594d22c64ea8109de" exitCode=0 Jan 27 18:45:10 crc kubenswrapper[4853]: I0127 18:45:10.410357 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-chjrw" event={"ID":"d6cf1fd9-633e-45c8-b007-051a740ff435","Type":"ContainerDied","Data":"5ce35b3994b69b3455867bba37ae487fecb7fda1c38151b594d22c64ea8109de"} Jan 27 18:45:10 crc kubenswrapper[4853]: I0127 18:45:10.412628 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-752jj" event={"ID":"5b61ecec-2b42-40ef-b2c5-d719cc45ab64","Type":"ContainerStarted","Data":"cd84f4eb7564ebb8ecd831a8d9000fe2d48b988c50d2b8918bcb9b7205bcfea6"} Jan 27 18:45:10 crc kubenswrapper[4853]: I0127 18:45:10.412670 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-752jj" event={"ID":"5b61ecec-2b42-40ef-b2c5-d719cc45ab64","Type":"ContainerStarted","Data":"437d194752a4964ccbdcb6d8bb8773e8069569b226f3cd707cfd786ad0519a43"} Jan 27 18:45:10 crc kubenswrapper[4853]: I0127 18:45:10.415965 4853 generic.go:334] "Generic (PLEG): container finished" podID="aa41f430-60c0-4d83-96bc-ac2a6aa2dde1" containerID="1eb5cebd532853b23144982fc54bb8a1193498761d2378313ef7af66c94905d7" exitCode=0 Jan 27 18:45:10 crc kubenswrapper[4853]: I0127 18:45:10.416014 4853 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vwxfb" event={"ID":"aa41f430-60c0-4d83-96bc-ac2a6aa2dde1","Type":"ContainerDied","Data":"1eb5cebd532853b23144982fc54bb8a1193498761d2378313ef7af66c94905d7"} Jan 27 18:45:10 crc kubenswrapper[4853]: I0127 18:45:10.416045 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vwxfb" event={"ID":"aa41f430-60c0-4d83-96bc-ac2a6aa2dde1","Type":"ContainerStarted","Data":"45516a61e3397a28ca9d6b7c6d6f6e6f1e033e92f25e63c922b3c800e620cafc"} Jan 27 18:45:10 crc kubenswrapper[4853]: I0127 18:45:10.437512 4853 generic.go:334] "Generic (PLEG): container finished" podID="8fc489cf-508e-445d-ba19-4aeea8afee8c" containerID="bbe3e1278ee2ecb3603760ea6727963aa21d8230b843ae13fd40aed4bdd7e0b7" exitCode=0 Jan 27 18:45:10 crc kubenswrapper[4853]: I0127 18:45:10.437610 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vz77p" event={"ID":"8fc489cf-508e-445d-ba19-4aeea8afee8c","Type":"ContainerDied","Data":"bbe3e1278ee2ecb3603760ea6727963aa21d8230b843ae13fd40aed4bdd7e0b7"} Jan 27 18:45:10 crc kubenswrapper[4853]: I0127 18:45:10.437635 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vz77p" event={"ID":"8fc489cf-508e-445d-ba19-4aeea8afee8c","Type":"ContainerStarted","Data":"529863dfe0ab72840a1f449dbd5a7b42da75cb124e984c88641a5ac80bf13024"} Jan 27 18:45:10 crc kubenswrapper[4853]: I0127 18:45:10.443204 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"bf10ee07-95a0-407c-9660-242d8fd5bbd9","Type":"ContainerStarted","Data":"f338adfe7643b242b6c8f27a1c9272999714971a106a7409ce66f0a438b5bcd1"} Jan 27 18:45:10 crc kubenswrapper[4853]: I0127 18:45:10.472137 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/revision-pruner-9-crc" podStartSLOduration=2.4720998 podStartE2EDuration="2.4720998s" podCreationTimestamp="2026-01-27 18:45:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:45:10.469278489 +0000 UTC m=+152.931821372" watchObservedRunningTime="2026-01-27 18:45:10.4720998 +0000 UTC m=+152.934642683" Jan 27 18:45:10 crc kubenswrapper[4853]: I0127 18:45:10.718804 4853 patch_prober.go:28] interesting pod/router-default-5444994796-fhmft container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 27 18:45:10 crc kubenswrapper[4853]: [-]has-synced failed: reason withheld Jan 27 18:45:10 crc kubenswrapper[4853]: [+]process-running ok Jan 27 18:45:10 crc kubenswrapper[4853]: healthz check failed Jan 27 18:45:10 crc kubenswrapper[4853]: I0127 18:45:10.718875 4853 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-fhmft" podUID="cb428fe7-0d8c-4f25-b377-880388daf6aa" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 27 18:45:11 crc kubenswrapper[4853]: I0127 18:45:11.451107 4853 generic.go:334] "Generic (PLEG): container finished" podID="5b61ecec-2b42-40ef-b2c5-d719cc45ab64" containerID="cd84f4eb7564ebb8ecd831a8d9000fe2d48b988c50d2b8918bcb9b7205bcfea6" exitCode=0 Jan 27 18:45:11 crc kubenswrapper[4853]: I0127 18:45:11.451185 4853 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-marketplace/redhat-operators-752jj" event={"ID":"5b61ecec-2b42-40ef-b2c5-d719cc45ab64","Type":"ContainerDied","Data":"cd84f4eb7564ebb8ecd831a8d9000fe2d48b988c50d2b8918bcb9b7205bcfea6"} Jan 27 18:45:11 crc kubenswrapper[4853]: I0127 18:45:11.459436 4853 generic.go:334] "Generic (PLEG): container finished" podID="bf10ee07-95a0-407c-9660-242d8fd5bbd9" containerID="f338adfe7643b242b6c8f27a1c9272999714971a106a7409ce66f0a438b5bcd1" exitCode=0 Jan 27 18:45:11 crc kubenswrapper[4853]: I0127 18:45:11.459478 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"bf10ee07-95a0-407c-9660-242d8fd5bbd9","Type":"ContainerDied","Data":"f338adfe7643b242b6c8f27a1c9272999714971a106a7409ce66f0a438b5bcd1"} Jan 27 18:45:11 crc kubenswrapper[4853]: I0127 18:45:11.717831 4853 patch_prober.go:28] interesting pod/router-default-5444994796-fhmft container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 27 18:45:11 crc kubenswrapper[4853]: [-]has-synced failed: reason withheld Jan 27 18:45:11 crc kubenswrapper[4853]: [+]process-running ok Jan 27 18:45:11 crc kubenswrapper[4853]: healthz check failed Jan 27 18:45:11 crc kubenswrapper[4853]: I0127 18:45:11.717886 4853 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-fhmft" podUID="cb428fe7-0d8c-4f25-b377-880388daf6aa" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 27 18:45:12 crc kubenswrapper[4853]: I0127 18:45:12.728328 4853 patch_prober.go:28] interesting pod/router-default-5444994796-fhmft container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 27 18:45:12 crc kubenswrapper[4853]: [-]has-synced failed: reason withheld Jan 27 18:45:12 crc kubenswrapper[4853]: [+]process-running ok Jan 27 18:45:12 crc kubenswrapper[4853]: healthz check failed Jan 27 18:45:12 crc kubenswrapper[4853]: I0127 18:45:12.728599 4853 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-fhmft" podUID="cb428fe7-0d8c-4f25-b377-880388daf6aa" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 27 18:45:12 crc kubenswrapper[4853]: I0127 18:45:12.860451 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 27 18:45:12 crc kubenswrapper[4853]: I0127 18:45:12.861432 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 27 18:45:12 crc kubenswrapper[4853]: I0127 18:45:12.864405 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Jan 27 18:45:12 crc kubenswrapper[4853]: I0127 18:45:12.864651 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Jan 27 18:45:12 crc kubenswrapper[4853]: I0127 18:45:12.871988 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 27 18:45:12 crc kubenswrapper[4853]: I0127 18:45:12.911967 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/09e14ddf-5acc-4b5d-b468-ae4df790b8e3-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"09e14ddf-5acc-4b5d-b468-ae4df790b8e3\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 27 18:45:12 crc kubenswrapper[4853]: I0127 18:45:12.912031 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/09e14ddf-5acc-4b5d-b468-ae4df790b8e3-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"09e14ddf-5acc-4b5d-b468-ae4df790b8e3\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 27 18:45:13 crc kubenswrapper[4853]: I0127 18:45:13.013779 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/09e14ddf-5acc-4b5d-b468-ae4df790b8e3-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"09e14ddf-5acc-4b5d-b468-ae4df790b8e3\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 27 18:45:13 crc kubenswrapper[4853]: I0127 18:45:13.013881 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/09e14ddf-5acc-4b5d-b468-ae4df790b8e3-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"09e14ddf-5acc-4b5d-b468-ae4df790b8e3\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 27 18:45:13 crc kubenswrapper[4853]: I0127 18:45:13.014216 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/09e14ddf-5acc-4b5d-b468-ae4df790b8e3-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"09e14ddf-5acc-4b5d-b468-ae4df790b8e3\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 27 18:45:13 crc kubenswrapper[4853]: I0127 18:45:13.047433 4853 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 27 18:45:13 crc kubenswrapper[4853]: I0127 18:45:13.048375 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/09e14ddf-5acc-4b5d-b468-ae4df790b8e3-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"09e14ddf-5acc-4b5d-b468-ae4df790b8e3\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 27 18:45:13 crc kubenswrapper[4853]: I0127 18:45:13.116635 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/bf10ee07-95a0-407c-9660-242d8fd5bbd9-kubelet-dir\") pod \"bf10ee07-95a0-407c-9660-242d8fd5bbd9\" (UID: \"bf10ee07-95a0-407c-9660-242d8fd5bbd9\") " Jan 27 18:45:13 crc kubenswrapper[4853]: I0127 18:45:13.116702 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/bf10ee07-95a0-407c-9660-242d8fd5bbd9-kube-api-access\") pod \"bf10ee07-95a0-407c-9660-242d8fd5bbd9\" (UID: \"bf10ee07-95a0-407c-9660-242d8fd5bbd9\") " Jan 27 18:45:13 crc kubenswrapper[4853]: I0127 18:45:13.116737 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bf10ee07-95a0-407c-9660-242d8fd5bbd9-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "bf10ee07-95a0-407c-9660-242d8fd5bbd9" (UID: "bf10ee07-95a0-407c-9660-242d8fd5bbd9"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 18:45:13 crc kubenswrapper[4853]: I0127 18:45:13.116981 4853 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/bf10ee07-95a0-407c-9660-242d8fd5bbd9-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 27 18:45:13 crc kubenswrapper[4853]: I0127 18:45:13.123329 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf10ee07-95a0-407c-9660-242d8fd5bbd9-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "bf10ee07-95a0-407c-9660-242d8fd5bbd9" (UID: "bf10ee07-95a0-407c-9660-242d8fd5bbd9"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:45:13 crc kubenswrapper[4853]: I0127 18:45:13.208176 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 27 18:45:13 crc kubenswrapper[4853]: I0127 18:45:13.218415 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/bf10ee07-95a0-407c-9660-242d8fd5bbd9-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 27 18:45:13 crc kubenswrapper[4853]: I0127 18:45:13.487144 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"bf10ee07-95a0-407c-9660-242d8fd5bbd9","Type":"ContainerDied","Data":"36591cce9a22bea00eff2d3b8498a058332311699e9de9776dcca1884f66ba39"} Jan 27 18:45:13 crc kubenswrapper[4853]: I0127 18:45:13.487194 4853 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="36591cce9a22bea00eff2d3b8498a058332311699e9de9776dcca1884f66ba39" Jan 27 18:45:13 crc kubenswrapper[4853]: I0127 18:45:13.487255 4853 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 27 18:45:13 crc kubenswrapper[4853]: I0127 18:45:13.677809 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 27 18:45:13 crc kubenswrapper[4853]: I0127 18:45:13.719437 4853 patch_prober.go:28] interesting pod/router-default-5444994796-fhmft container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 27 18:45:13 crc kubenswrapper[4853]: [-]has-synced failed: reason withheld Jan 27 18:45:13 crc kubenswrapper[4853]: [+]process-running ok Jan 27 18:45:13 crc kubenswrapper[4853]: healthz check failed Jan 27 18:45:13 crc kubenswrapper[4853]: I0127 18:45:13.719497 4853 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-fhmft" podUID="cb428fe7-0d8c-4f25-b377-880388daf6aa" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 27 18:45:13 crc kubenswrapper[4853]: W0127 18:45:13.719904 4853 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod09e14ddf_5acc_4b5d_b468_ae4df790b8e3.slice/crio-50da1f01be7265a9e2708b95dfa455dff9d99bab6784d5ec6903caa4eaa30e75 WatchSource:0}: Error finding container 50da1f01be7265a9e2708b95dfa455dff9d99bab6784d5ec6903caa4eaa30e75: Status 404 returned error can't find the container with id 50da1f01be7265a9e2708b95dfa455dff9d99bab6784d5ec6903caa4eaa30e75 Jan 27 18:45:14 crc kubenswrapper[4853]: I0127 18:45:14.492985 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-52vxj" Jan 27 18:45:14 crc kubenswrapper[4853]: I0127 18:45:14.497177 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"09e14ddf-5acc-4b5d-b468-ae4df790b8e3","Type":"ContainerStarted","Data":"50da1f01be7265a9e2708b95dfa455dff9d99bab6784d5ec6903caa4eaa30e75"} Jan 27 18:45:14 crc kubenswrapper[4853]: I0127 18:45:14.717791 4853 patch_prober.go:28] interesting pod/router-default-5444994796-fhmft container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 27 18:45:14 crc kubenswrapper[4853]: [-]has-synced failed: reason withheld Jan 27 18:45:14 crc kubenswrapper[4853]: [+]process-running ok Jan 27 18:45:14 crc kubenswrapper[4853]: healthz check failed Jan 27 18:45:14 crc kubenswrapper[4853]: I0127 18:45:14.718058 4853 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-fhmft" podUID="cb428fe7-0d8c-4f25-b377-880388daf6aa" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 27 18:45:15 crc kubenswrapper[4853]: I0127 18:45:15.517868 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"09e14ddf-5acc-4b5d-b468-ae4df790b8e3","Type":"ContainerStarted","Data":"6a11e99ed02f79da0b7db16728b110e547a692fd24e4a77e956495e30b7e3d58"} Jan 27 18:45:15 crc kubenswrapper[4853]: I0127 18:45:15.547536 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-8-crc" podStartSLOduration=3.547516404 podStartE2EDuration="3.547516404s" podCreationTimestamp="2026-01-27 18:45:12 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:45:15.531573654 +0000 UTC m=+157.994116557" watchObservedRunningTime="2026-01-27 18:45:15.547516404 +0000 UTC m=+158.010059287" Jan 27 18:45:15 crc kubenswrapper[4853]: I0127 18:45:15.720484 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-fhmft" Jan 27 18:45:15 crc kubenswrapper[4853]: I0127 18:45:15.722861 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-fhmft" Jan 27 18:45:16 crc kubenswrapper[4853]: I0127 18:45:16.541947 4853 generic.go:334] "Generic (PLEG): container finished" podID="09e14ddf-5acc-4b5d-b468-ae4df790b8e3" containerID="6a11e99ed02f79da0b7db16728b110e547a692fd24e4a77e956495e30b7e3d58" exitCode=0 Jan 27 18:45:16 crc kubenswrapper[4853]: I0127 18:45:16.542025 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"09e14ddf-5acc-4b5d-b468-ae4df790b8e3","Type":"ContainerDied","Data":"6a11e99ed02f79da0b7db16728b110e547a692fd24e4a77e956495e30b7e3d58"} Jan 27 18:45:19 crc kubenswrapper[4853]: I0127 18:45:19.142265 4853 patch_prober.go:28] interesting pod/downloads-7954f5f757-9gqxt container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" start-of-body= Jan 27 18:45:19 crc kubenswrapper[4853]: I0127 18:45:19.142837 4853 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-9gqxt" podUID="be5a36ff-f665-4468-b7ae-8a443f0164e8" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" Jan 27 18:45:19 crc kubenswrapper[4853]: I0127 18:45:19.144617 4853 patch_prober.go:28] interesting pod/downloads-7954f5f757-9gqxt container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" start-of-body= Jan 27 18:45:19 crc kubenswrapper[4853]: I0127 18:45:19.144660 4853 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-9gqxt" podUID="be5a36ff-f665-4468-b7ae-8a443f0164e8" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" Jan 27 18:45:19 crc kubenswrapper[4853]: I0127 18:45:19.163029 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-9vd4d" Jan 27 18:45:19 crc kubenswrapper[4853]: I0127 18:45:19.167394 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-9vd4d" Jan 27 18:45:22 crc kubenswrapper[4853]: I0127 18:45:22.081374 4853 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 27 18:45:22 crc kubenswrapper[4853]: I0127 18:45:22.203837 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/09e14ddf-5acc-4b5d-b468-ae4df790b8e3-kubelet-dir\") pod \"09e14ddf-5acc-4b5d-b468-ae4df790b8e3\" (UID: \"09e14ddf-5acc-4b5d-b468-ae4df790b8e3\") " Jan 27 18:45:22 crc kubenswrapper[4853]: I0127 18:45:22.203970 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/09e14ddf-5acc-4b5d-b468-ae4df790b8e3-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "09e14ddf-5acc-4b5d-b468-ae4df790b8e3" (UID: "09e14ddf-5acc-4b5d-b468-ae4df790b8e3"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 18:45:22 crc kubenswrapper[4853]: I0127 18:45:22.203995 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/09e14ddf-5acc-4b5d-b468-ae4df790b8e3-kube-api-access\") pod \"09e14ddf-5acc-4b5d-b468-ae4df790b8e3\" (UID: \"09e14ddf-5acc-4b5d-b468-ae4df790b8e3\") " Jan 27 18:45:22 crc kubenswrapper[4853]: I0127 18:45:22.204252 4853 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/09e14ddf-5acc-4b5d-b468-ae4df790b8e3-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 27 18:45:22 crc kubenswrapper[4853]: I0127 18:45:22.225781 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09e14ddf-5acc-4b5d-b468-ae4df790b8e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "09e14ddf-5acc-4b5d-b468-ae4df790b8e3" (UID: "09e14ddf-5acc-4b5d-b468-ae4df790b8e3"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:45:22 crc kubenswrapper[4853]: I0127 18:45:22.305917 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/09e14ddf-5acc-4b5d-b468-ae4df790b8e3-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 27 18:45:22 crc kubenswrapper[4853]: I0127 18:45:22.593466 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"09e14ddf-5acc-4b5d-b468-ae4df790b8e3","Type":"ContainerDied","Data":"50da1f01be7265a9e2708b95dfa455dff9d99bab6784d5ec6903caa4eaa30e75"} Jan 27 18:45:22 crc kubenswrapper[4853]: I0127 18:45:22.593502 4853 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="50da1f01be7265a9e2708b95dfa455dff9d99bab6784d5ec6903caa4eaa30e75" Jan 27 18:45:22 crc kubenswrapper[4853]: I0127 18:45:22.593517 4853 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 27 18:45:24 crc kubenswrapper[4853]: I0127 18:45:24.387594 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-kb26l"] Jan 27 18:45:24 crc kubenswrapper[4853]: I0127 18:45:24.388011 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-kb26l" podUID="6e624c88-c6c1-4c35-985b-264173a9abcd" containerName="controller-manager" containerID="cri-o://0d46340db223ae175b54107a3527edd22d50e0436dfc42103550e1ed0c551499" gracePeriod=30 Jan 27 18:45:24 crc kubenswrapper[4853]: I0127 18:45:24.396774 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-mp44v"] Jan 27 18:45:24 crc kubenswrapper[4853]: I0127 18:45:24.397311 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-mp44v" podUID="58200e7b-a0e9-47ba-8581-42878da87f40" containerName="route-controller-manager" containerID="cri-o://d783566aec88ec506250b521a4e991a13c79910737751d6209d7da083b3dc7bf" gracePeriod=30 Jan 27 18:45:25 crc kubenswrapper[4853]: I0127 18:45:25.545460 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/29407244-fbfe-4d37-a33e-7d59df1c22fd-metrics-certs\") pod \"network-metrics-daemon-wdzg4\" (UID: \"29407244-fbfe-4d37-a33e-7d59df1c22fd\") " pod="openshift-multus/network-metrics-daemon-wdzg4" Jan 27 18:45:25 crc kubenswrapper[4853]: I0127 18:45:25.551639 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/29407244-fbfe-4d37-a33e-7d59df1c22fd-metrics-certs\") pod \"network-metrics-daemon-wdzg4\" (UID: \"29407244-fbfe-4d37-a33e-7d59df1c22fd\") " pod="openshift-multus/network-metrics-daemon-wdzg4" Jan 27 18:45:25 crc kubenswrapper[4853]: I0127 18:45:25.630050 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-wdzg4" Jan 27 18:45:27 crc kubenswrapper[4853]: I0127 18:45:27.608794 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-npp4j" Jan 27 18:45:28 crc kubenswrapper[4853]: I0127 18:45:28.353650 4853 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-mp44v container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.6:8443/healthz\": dial tcp 10.217.0.6:8443: connect: connection refused" start-of-body= Jan 27 18:45:28 crc kubenswrapper[4853]: I0127 18:45:28.353965 4853 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-mp44v" podUID="58200e7b-a0e9-47ba-8581-42878da87f40" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.6:8443/healthz\": dial tcp 10.217.0.6:8443: connect: connection refused" Jan 27 18:45:29 crc kubenswrapper[4853]: I0127 18:45:29.147886 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-9gqxt" Jan 27 18:45:30 crc kubenswrapper[4853]: I0127 18:45:30.706503 4853 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-kb26l container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.42:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 18:45:30 crc kubenswrapper[4853]: I0127 18:45:30.706573 4853 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-kb26l" podUID="6e624c88-c6c1-4c35-985b-264173a9abcd" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.42:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 27 18:45:35 crc kubenswrapper[4853]: I0127 18:45:35.541264 4853 patch_prober.go:28] interesting pod/machine-config-daemon-6gqj2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 18:45:35 crc kubenswrapper[4853]: I0127 18:45:35.541593 4853 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6gqj2" podUID="b8a89b1e-bef8-4cb7-930c-480d3125778c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 18:45:37 crc kubenswrapper[4853]: I0127 18:45:37.675863 4853 generic.go:334] "Generic (PLEG): container finished" podID="58200e7b-a0e9-47ba-8581-42878da87f40" containerID="d783566aec88ec506250b521a4e991a13c79910737751d6209d7da083b3dc7bf" exitCode=0 Jan 27 18:45:37 crc kubenswrapper[4853]: I0127 18:45:37.675942 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-mp44v" event={"ID":"58200e7b-a0e9-47ba-8581-42878da87f40","Type":"ContainerDied","Data":"d783566aec88ec506250b521a4e991a13c79910737751d6209d7da083b3dc7bf"} Jan 27 18:45:37 crc kubenswrapper[4853]: I0127 18:45:37.678060 4853 generic.go:334] 
"Generic (PLEG): container finished" podID="6e624c88-c6c1-4c35-985b-264173a9abcd" containerID="0d46340db223ae175b54107a3527edd22d50e0436dfc42103550e1ed0c551499" exitCode=0 Jan 27 18:45:37 crc kubenswrapper[4853]: I0127 18:45:37.678087 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-kb26l" event={"ID":"6e624c88-c6c1-4c35-985b-264173a9abcd","Type":"ContainerDied","Data":"0d46340db223ae175b54107a3527edd22d50e0436dfc42103550e1ed0c551499"} Jan 27 18:45:37 crc kubenswrapper[4853]: E0127 18:45:37.761497 4853 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Jan 27 18:45:37 crc kubenswrapper[4853]: E0127 18:45:37.761676 4853 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bpgmb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-ck67k_openshift-marketplace(bdf3b7ad-3545-4192-941a-862154002694): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 27 18:45:37 crc kubenswrapper[4853]: E0127 18:45:37.762769 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-ck67k" podUID="bdf3b7ad-3545-4192-941a-862154002694" Jan 27 18:45:38 crc kubenswrapper[4853]: I0127 18:45:38.353652 4853 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-mp44v container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.6:8443/healthz\": dial tcp 10.217.0.6:8443: connect: connection refused" start-of-body= Jan 27 18:45:38 crc kubenswrapper[4853]: 
I0127 18:45:38.353719 4853 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-mp44v" podUID="58200e7b-a0e9-47ba-8581-42878da87f40" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.6:8443/healthz\": dial tcp 10.217.0.6:8443: connect: connection refused" Jan 27 18:45:39 crc kubenswrapper[4853]: I0127 18:45:39.692082 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-qbgzv" Jan 27 18:45:39 crc kubenswrapper[4853]: I0127 18:45:39.705825 4853 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-kb26l container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.42:8443/healthz\": dial tcp 10.217.0.42:8443: connect: connection refused" start-of-body= Jan 27 18:45:39 crc kubenswrapper[4853]: I0127 18:45:39.705876 4853 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-kb26l" podUID="6e624c88-c6c1-4c35-985b-264173a9abcd" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.42:8443/healthz\": dial tcp 10.217.0.42:8443: connect: connection refused" Jan 27 18:45:45 crc kubenswrapper[4853]: E0127 18:45:45.280148 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-ck67k" podUID="bdf3b7ad-3545-4192-941a-862154002694" Jan 27 18:45:45 crc kubenswrapper[4853]: I0127 18:45:45.368026 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 18:45:47 crc kubenswrapper[4853]: E0127 18:45:47.676434 4853 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Jan 27 18:45:47 crc kubenswrapper[4853]: E0127 18:45:47.676592 4853 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-sptgm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-752jj_openshift-marketplace(5b61ecec-2b42-40ef-b2c5-d719cc45ab64): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 27 18:45:47 crc kubenswrapper[4853]: E0127 18:45:47.678293 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-752jj" podUID="5b61ecec-2b42-40ef-b2c5-d719cc45ab64" Jan 27 18:45:48 crc kubenswrapper[4853]: I0127 18:45:48.353757 4853 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-mp44v container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.6:8443/healthz\": dial tcp 10.217.0.6:8443: connect: connection refused" start-of-body= Jan 27 18:45:48 crc kubenswrapper[4853]: I0127 18:45:48.353825 4853 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-mp44v" podUID="58200e7b-a0e9-47ba-8581-42878da87f40" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.6:8443/healthz\": dial tcp 10.217.0.6:8443: connect: connection refused" Jan 27 18:45:48 crc kubenswrapper[4853]: I0127 18:45:48.650470 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 27 18:45:48 crc kubenswrapper[4853]: E0127 18:45:48.650722 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bf10ee07-95a0-407c-9660-242d8fd5bbd9" containerName="pruner" Jan 27 18:45:48 crc kubenswrapper[4853]: I0127 18:45:48.650739 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="bf10ee07-95a0-407c-9660-242d8fd5bbd9" containerName="pruner" Jan 27 18:45:48 crc kubenswrapper[4853]: E0127 18:45:48.650762 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="09e14ddf-5acc-4b5d-b468-ae4df790b8e3" containerName="pruner" Jan 27 18:45:48 crc kubenswrapper[4853]: I0127 
18:45:48.650769 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="09e14ddf-5acc-4b5d-b468-ae4df790b8e3" containerName="pruner" Jan 27 18:45:48 crc kubenswrapper[4853]: I0127 18:45:48.650876 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="bf10ee07-95a0-407c-9660-242d8fd5bbd9" containerName="pruner" Jan 27 18:45:48 crc kubenswrapper[4853]: I0127 18:45:48.650899 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="09e14ddf-5acc-4b5d-b468-ae4df790b8e3" containerName="pruner" Jan 27 18:45:48 crc kubenswrapper[4853]: I0127 18:45:48.651325 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 27 18:45:48 crc kubenswrapper[4853]: I0127 18:45:48.653493 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Jan 27 18:45:48 crc kubenswrapper[4853]: I0127 18:45:48.654341 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Jan 27 18:45:48 crc kubenswrapper[4853]: I0127 18:45:48.664920 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 27 18:45:48 crc kubenswrapper[4853]: E0127 18:45:48.708907 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-752jj" podUID="5b61ecec-2b42-40ef-b2c5-d719cc45ab64" Jan 27 18:45:48 crc kubenswrapper[4853]: I0127 18:45:48.744495 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5a823121-5107-4a97-a876-604a1cbd7ff9-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"5a823121-5107-4a97-a876-604a1cbd7ff9\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 27 18:45:48 crc kubenswrapper[4853]: I0127 18:45:48.744593 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5a823121-5107-4a97-a876-604a1cbd7ff9-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"5a823121-5107-4a97-a876-604a1cbd7ff9\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 27 18:45:48 crc kubenswrapper[4853]: E0127 18:45:48.765675 4853 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Jan 27 18:45:48 crc kubenswrapper[4853]: E0127 18:45:48.765844 4853 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ss6mn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-8pfch_openshift-marketplace(8bb82662-0739-432b-93b0-c5f1bc3ed268): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 27 18:45:48 crc kubenswrapper[4853]: E0127 18:45:48.767053 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-8pfch" podUID="8bb82662-0739-432b-93b0-c5f1bc3ed268" Jan 27 18:45:48 crc kubenswrapper[4853]: I0127 18:45:48.846154 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5a823121-5107-4a97-a876-604a1cbd7ff9-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"5a823121-5107-4a97-a876-604a1cbd7ff9\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 27 18:45:48 crc kubenswrapper[4853]: I0127 18:45:48.846592 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5a823121-5107-4a97-a876-604a1cbd7ff9-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"5a823121-5107-4a97-a876-604a1cbd7ff9\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 27 18:45:48 crc kubenswrapper[4853]: I0127 18:45:48.846740 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5a823121-5107-4a97-a876-604a1cbd7ff9-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"5a823121-5107-4a97-a876-604a1cbd7ff9\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 27 18:45:48 crc kubenswrapper[4853]: E0127 18:45:48.863252 4853 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Jan 27 18:45:48 crc kubenswrapper[4853]: E0127 18:45:48.863403 4853 kuberuntime_manager.go:1274] "Unhandled Error" err="init 
container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lfx28,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-chjrw_openshift-marketplace(d6cf1fd9-633e-45c8-b007-051a740ff435): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 27 18:45:48 crc kubenswrapper[4853]: I0127 18:45:48.864311 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5a823121-5107-4a97-a876-604a1cbd7ff9-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"5a823121-5107-4a97-a876-604a1cbd7ff9\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 27 18:45:48 crc kubenswrapper[4853]: E0127 18:45:48.864833 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-chjrw" podUID="d6cf1fd9-633e-45c8-b007-051a740ff435" Jan 27 18:45:48 crc kubenswrapper[4853]: I0127 18:45:48.983399 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 27 18:45:49 crc kubenswrapper[4853]: I0127 18:45:49.705405 4853 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-kb26l container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.42:8443/healthz\": dial tcp 10.217.0.42:8443: connect: connection refused" start-of-body= Jan 27 18:45:49 crc kubenswrapper[4853]: I0127 18:45:49.705737 4853 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-kb26l" podUID="6e624c88-c6c1-4c35-985b-264173a9abcd" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.42:8443/healthz\": dial tcp 10.217.0.42:8443: connect: connection refused" Jan 27 18:45:50 crc kubenswrapper[4853]: E0127 18:45:50.195773 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-8pfch" podUID="8bb82662-0739-432b-93b0-c5f1bc3ed268" Jan 27 18:45:50 crc kubenswrapper[4853]: E0127 18:45:50.195773 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-chjrw" podUID="d6cf1fd9-633e-45c8-b007-051a740ff435" Jan 27 18:45:50 crc kubenswrapper[4853]: E0127 18:45:50.316605 4853 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Jan 27 18:45:50 crc kubenswrapper[4853]: E0127 18:45:50.317090 4853 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2fwnw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-p74jj_openshift-marketplace(81272aef-67fa-4c09-bf30-56fdfec7dd7b): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 27 18:45:50 crc kubenswrapper[4853]: E0127 18:45:50.318227 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-p74jj" podUID="81272aef-67fa-4c09-bf30-56fdfec7dd7b" Jan 27 18:45:50 crc kubenswrapper[4853]: E0127 18:45:50.364537 4853 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Jan 27 18:45:50 crc kubenswrapper[4853]: E0127 18:45:50.364688 4853 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-h9694,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-vz77p_openshift-marketplace(8fc489cf-508e-445d-ba19-4aeea8afee8c): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 27 18:45:50 crc kubenswrapper[4853]: E0127 18:45:50.365871 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-vz77p" podUID="8fc489cf-508e-445d-ba19-4aeea8afee8c" Jan 27 18:45:50 crc kubenswrapper[4853]: E0127 18:45:50.411893 4853 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Jan 27 18:45:50 crc kubenswrapper[4853]: E0127 18:45:50.412042 4853 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-95z5d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-6pflg_openshift-marketplace(a442dc5b-e830-490b-8ad1-6a6606fea52b): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 27 18:45:50 crc kubenswrapper[4853]: E0127 18:45:50.413233 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-6pflg" podUID="a442dc5b-e830-490b-8ad1-6a6606fea52b" Jan 27 18:45:50 crc kubenswrapper[4853]: E0127 18:45:50.463513 4853 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Jan 27 18:45:50 crc kubenswrapper[4853]: E0127 18:45:50.463685 4853 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-d6r8w,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-vwxfb_openshift-marketplace(aa41f430-60c0-4d83-96bc-ac2a6aa2dde1): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 27 18:45:50 crc kubenswrapper[4853]: E0127 18:45:50.464894 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-vwxfb" podUID="aa41f430-60c0-4d83-96bc-ac2a6aa2dde1" Jan 27 18:45:50 crc kubenswrapper[4853]: I0127 18:45:50.482473 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-kb26l" Jan 27 18:45:50 crc kubenswrapper[4853]: I0127 18:45:50.488578 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-wdzg4"] Jan 27 18:45:50 crc kubenswrapper[4853]: I0127 18:45:50.491141 4853 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-mp44v" Jan 27 18:45:50 crc kubenswrapper[4853]: W0127 18:45:50.496227 4853 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod29407244_fbfe_4d37_a33e_7d59df1c22fd.slice/crio-93c2b99bcc5e03d5791740ae0fa4cab2be1e7053dfb88b46a850a6ef7097dddd WatchSource:0}: Error finding container 93c2b99bcc5e03d5791740ae0fa4cab2be1e7053dfb88b46a850a6ef7097dddd: Status 404 returned error can't find the container with id 93c2b99bcc5e03d5791740ae0fa4cab2be1e7053dfb88b46a850a6ef7097dddd Jan 27 18:45:50 crc kubenswrapper[4853]: I0127 18:45:50.525577 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-767fcbd767-fj52r"] Jan 27 18:45:50 crc kubenswrapper[4853]: E0127 18:45:50.525831 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6e624c88-c6c1-4c35-985b-264173a9abcd" containerName="controller-manager" Jan 27 18:45:50 crc kubenswrapper[4853]: I0127 18:45:50.525843 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="6e624c88-c6c1-4c35-985b-264173a9abcd" containerName="controller-manager" Jan 27 18:45:50 crc kubenswrapper[4853]: E0127 18:45:50.525857 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="58200e7b-a0e9-47ba-8581-42878da87f40" containerName="route-controller-manager" Jan 27 18:45:50 crc kubenswrapper[4853]: I0127 18:45:50.525866 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="58200e7b-a0e9-47ba-8581-42878da87f40" containerName="route-controller-manager" Jan 27 18:45:50 crc kubenswrapper[4853]: I0127 18:45:50.525973 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="6e624c88-c6c1-4c35-985b-264173a9abcd" containerName="controller-manager" Jan 27 18:45:50 crc kubenswrapper[4853]: I0127 18:45:50.525994 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="58200e7b-a0e9-47ba-8581-42878da87f40" containerName="route-controller-manager" Jan 27 18:45:50 crc kubenswrapper[4853]: I0127 18:45:50.526511 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-767fcbd767-fj52r" Jan 27 18:45:50 crc kubenswrapper[4853]: I0127 18:45:50.528205 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-767fcbd767-fj52r"] Jan 27 18:45:50 crc kubenswrapper[4853]: I0127 18:45:50.566998 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6e624c88-c6c1-4c35-985b-264173a9abcd-proxy-ca-bundles\") pod \"6e624c88-c6c1-4c35-985b-264173a9abcd\" (UID: \"6e624c88-c6c1-4c35-985b-264173a9abcd\") " Jan 27 18:45:50 crc kubenswrapper[4853]: I0127 18:45:50.567045 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6e624c88-c6c1-4c35-985b-264173a9abcd-serving-cert\") pod \"6e624c88-c6c1-4c35-985b-264173a9abcd\" (UID: \"6e624c88-c6c1-4c35-985b-264173a9abcd\") " Jan 27 18:45:50 crc kubenswrapper[4853]: I0127 18:45:50.567082 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/58200e7b-a0e9-47ba-8581-42878da87f40-config\") pod \"58200e7b-a0e9-47ba-8581-42878da87f40\" (UID: \"58200e7b-a0e9-47ba-8581-42878da87f40\") " Jan 27 18:45:50 crc kubenswrapper[4853]: I0127 18:45:50.567141 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6e624c88-c6c1-4c35-985b-264173a9abcd-config\") pod \"6e624c88-c6c1-4c35-985b-264173a9abcd\" (UID: \"6e624c88-c6c1-4c35-985b-264173a9abcd\") " Jan 27 18:45:50 crc kubenswrapper[4853]: I0127 18:45:50.567874 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6e624c88-c6c1-4c35-985b-264173a9abcd-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "6e624c88-c6c1-4c35-985b-264173a9abcd" (UID: "6e624c88-c6c1-4c35-985b-264173a9abcd"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:45:50 crc kubenswrapper[4853]: I0127 18:45:50.567925 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/58200e7b-a0e9-47ba-8581-42878da87f40-config" (OuterVolumeSpecName: "config") pod "58200e7b-a0e9-47ba-8581-42878da87f40" (UID: "58200e7b-a0e9-47ba-8581-42878da87f40"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:45:50 crc kubenswrapper[4853]: I0127 18:45:50.567949 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6e624c88-c6c1-4c35-985b-264173a9abcd-config" (OuterVolumeSpecName: "config") pod "6e624c88-c6c1-4c35-985b-264173a9abcd" (UID: "6e624c88-c6c1-4c35-985b-264173a9abcd"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:45:50 crc kubenswrapper[4853]: I0127 18:45:50.568017 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6e624c88-c6c1-4c35-985b-264173a9abcd-client-ca\") pod \"6e624c88-c6c1-4c35-985b-264173a9abcd\" (UID: \"6e624c88-c6c1-4c35-985b-264173a9abcd\") " Jan 27 18:45:50 crc kubenswrapper[4853]: I0127 18:45:50.568046 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t4tjx\" (UniqueName: \"kubernetes.io/projected/58200e7b-a0e9-47ba-8581-42878da87f40-kube-api-access-t4tjx\") pod \"58200e7b-a0e9-47ba-8581-42878da87f40\" (UID: \"58200e7b-a0e9-47ba-8581-42878da87f40\") " Jan 27 18:45:50 crc kubenswrapper[4853]: I0127 18:45:50.568586 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6e624c88-c6c1-4c35-985b-264173a9abcd-client-ca" (OuterVolumeSpecName: "client-ca") pod "6e624c88-c6c1-4c35-985b-264173a9abcd" (UID: "6e624c88-c6c1-4c35-985b-264173a9abcd"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:45:50 crc kubenswrapper[4853]: I0127 18:45:50.568736 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/58200e7b-a0e9-47ba-8581-42878da87f40-serving-cert\") pod \"58200e7b-a0e9-47ba-8581-42878da87f40\" (UID: \"58200e7b-a0e9-47ba-8581-42878da87f40\") " Jan 27 18:45:50 crc kubenswrapper[4853]: I0127 18:45:50.568769 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/58200e7b-a0e9-47ba-8581-42878da87f40-client-ca\") pod \"58200e7b-a0e9-47ba-8581-42878da87f40\" (UID: \"58200e7b-a0e9-47ba-8581-42878da87f40\") " Jan 27 18:45:50 crc kubenswrapper[4853]: I0127 18:45:50.569237 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vzf49\" (UniqueName: \"kubernetes.io/projected/6e624c88-c6c1-4c35-985b-264173a9abcd-kube-api-access-vzf49\") pod \"6e624c88-c6c1-4c35-985b-264173a9abcd\" (UID: \"6e624c88-c6c1-4c35-985b-264173a9abcd\") " Jan 27 18:45:50 crc kubenswrapper[4853]: I0127 18:45:50.569310 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/58200e7b-a0e9-47ba-8581-42878da87f40-client-ca" (OuterVolumeSpecName: "client-ca") pod "58200e7b-a0e9-47ba-8581-42878da87f40" (UID: "58200e7b-a0e9-47ba-8581-42878da87f40"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:45:50 crc kubenswrapper[4853]: I0127 18:45:50.569533 4853 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/58200e7b-a0e9-47ba-8581-42878da87f40-config\") on node \"crc\" DevicePath \"\"" Jan 27 18:45:50 crc kubenswrapper[4853]: I0127 18:45:50.569554 4853 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6e624c88-c6c1-4c35-985b-264173a9abcd-config\") on node \"crc\" DevicePath \"\"" Jan 27 18:45:50 crc kubenswrapper[4853]: I0127 18:45:50.569564 4853 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6e624c88-c6c1-4c35-985b-264173a9abcd-client-ca\") on node \"crc\" DevicePath \"\"" Jan 27 18:45:50 crc kubenswrapper[4853]: I0127 18:45:50.569574 4853 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/58200e7b-a0e9-47ba-8581-42878da87f40-client-ca\") on node \"crc\" DevicePath \"\"" Jan 27 18:45:50 crc kubenswrapper[4853]: I0127 18:45:50.569586 4853 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6e624c88-c6c1-4c35-985b-264173a9abcd-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 27 18:45:50 crc kubenswrapper[4853]: I0127 18:45:50.572694 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6e624c88-c6c1-4c35-985b-264173a9abcd-kube-api-access-vzf49" (OuterVolumeSpecName: "kube-api-access-vzf49") pod "6e624c88-c6c1-4c35-985b-264173a9abcd" (UID: "6e624c88-c6c1-4c35-985b-264173a9abcd"). InnerVolumeSpecName "kube-api-access-vzf49". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:45:50 crc kubenswrapper[4853]: I0127 18:45:50.572836 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6e624c88-c6c1-4c35-985b-264173a9abcd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6e624c88-c6c1-4c35-985b-264173a9abcd" (UID: "6e624c88-c6c1-4c35-985b-264173a9abcd"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:45:50 crc kubenswrapper[4853]: I0127 18:45:50.572910 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/58200e7b-a0e9-47ba-8581-42878da87f40-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "58200e7b-a0e9-47ba-8581-42878da87f40" (UID: "58200e7b-a0e9-47ba-8581-42878da87f40"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:45:50 crc kubenswrapper[4853]: I0127 18:45:50.572939 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/58200e7b-a0e9-47ba-8581-42878da87f40-kube-api-access-t4tjx" (OuterVolumeSpecName: "kube-api-access-t4tjx") pod "58200e7b-a0e9-47ba-8581-42878da87f40" (UID: "58200e7b-a0e9-47ba-8581-42878da87f40"). InnerVolumeSpecName "kube-api-access-t4tjx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:45:50 crc kubenswrapper[4853]: I0127 18:45:50.668332 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 27 18:45:50 crc kubenswrapper[4853]: I0127 18:45:50.670730 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1ec16618-1da7-467d-82fe-92d01c07ffdc-config\") pod \"controller-manager-767fcbd767-fj52r\" (UID: \"1ec16618-1da7-467d-82fe-92d01c07ffdc\") " pod="openshift-controller-manager/controller-manager-767fcbd767-fj52r" Jan 27 18:45:50 crc kubenswrapper[4853]: I0127 18:45:50.670772 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1ec16618-1da7-467d-82fe-92d01c07ffdc-client-ca\") pod \"controller-manager-767fcbd767-fj52r\" (UID: \"1ec16618-1da7-467d-82fe-92d01c07ffdc\") " pod="openshift-controller-manager/controller-manager-767fcbd767-fj52r" Jan 27 18:45:50 crc kubenswrapper[4853]: I0127 18:45:50.670846 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1ec16618-1da7-467d-82fe-92d01c07ffdc-serving-cert\") pod \"controller-manager-767fcbd767-fj52r\" (UID: \"1ec16618-1da7-467d-82fe-92d01c07ffdc\") " pod="openshift-controller-manager/controller-manager-767fcbd767-fj52r" Jan 27 18:45:50 crc kubenswrapper[4853]: I0127 18:45:50.670919 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1ec16618-1da7-467d-82fe-92d01c07ffdc-proxy-ca-bundles\") pod \"controller-manager-767fcbd767-fj52r\" (UID: \"1ec16618-1da7-467d-82fe-92d01c07ffdc\") " pod="openshift-controller-manager/controller-manager-767fcbd767-fj52r" Jan 27 18:45:50 crc kubenswrapper[4853]: I0127 18:45:50.670964 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cbc2z\" (UniqueName: \"kubernetes.io/projected/1ec16618-1da7-467d-82fe-92d01c07ffdc-kube-api-access-cbc2z\") pod \"controller-manager-767fcbd767-fj52r\" (UID: \"1ec16618-1da7-467d-82fe-92d01c07ffdc\") " pod="openshift-controller-manager/controller-manager-767fcbd767-fj52r" Jan 27 18:45:50 crc kubenswrapper[4853]: I0127 18:45:50.671055 4853 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/58200e7b-a0e9-47ba-8581-42878da87f40-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 18:45:50 crc kubenswrapper[4853]: I0127 18:45:50.671073 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vzf49\" (UniqueName: \"kubernetes.io/projected/6e624c88-c6c1-4c35-985b-264173a9abcd-kube-api-access-vzf49\") on node \"crc\" DevicePath \"\"" Jan 27 18:45:50 crc kubenswrapper[4853]: I0127 18:45:50.671086 4853 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6e624c88-c6c1-4c35-985b-264173a9abcd-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 18:45:50 crc kubenswrapper[4853]: I0127 18:45:50.671097 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t4tjx\" (UniqueName: \"kubernetes.io/projected/58200e7b-a0e9-47ba-8581-42878da87f40-kube-api-access-t4tjx\") on node \"crc\" DevicePath \"\"" Jan 27 18:45:50 crc kubenswrapper[4853]: W0127 
18:45:50.677683 4853 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod5a823121_5107_4a97_a876_604a1cbd7ff9.slice/crio-c3b8aba6b62d930414f5bc1f5cba9f43e00059cabc5e16753a44bc6e1e7b1945 WatchSource:0}: Error finding container c3b8aba6b62d930414f5bc1f5cba9f43e00059cabc5e16753a44bc6e1e7b1945: Status 404 returned error can't find the container with id c3b8aba6b62d930414f5bc1f5cba9f43e00059cabc5e16753a44bc6e1e7b1945 Jan 27 18:45:50 crc kubenswrapper[4853]: I0127 18:45:50.736937 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-kb26l" event={"ID":"6e624c88-c6c1-4c35-985b-264173a9abcd","Type":"ContainerDied","Data":"c1177332ed80c9af8e63362afbda95c6d2d6b0477c73b01728b0976da37e4c0a"} Jan 27 18:45:50 crc kubenswrapper[4853]: I0127 18:45:50.737417 4853 scope.go:117] "RemoveContainer" containerID="0d46340db223ae175b54107a3527edd22d50e0436dfc42103550e1ed0c551499" Jan 27 18:45:50 crc kubenswrapper[4853]: I0127 18:45:50.737175 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-kb26l" Jan 27 18:45:50 crc kubenswrapper[4853]: I0127 18:45:50.739537 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-mp44v" Jan 27 18:45:50 crc kubenswrapper[4853]: I0127 18:45:50.740344 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-mp44v" event={"ID":"58200e7b-a0e9-47ba-8581-42878da87f40","Type":"ContainerDied","Data":"83a0b5f4f8869561de9cc4d6190147f92f4a149e63547251fa9f6afa607820a4"} Jan 27 18:45:50 crc kubenswrapper[4853]: I0127 18:45:50.745788 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"5a823121-5107-4a97-a876-604a1cbd7ff9","Type":"ContainerStarted","Data":"c3b8aba6b62d930414f5bc1f5cba9f43e00059cabc5e16753a44bc6e1e7b1945"} Jan 27 18:45:50 crc kubenswrapper[4853]: I0127 18:45:50.747923 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-wdzg4" event={"ID":"29407244-fbfe-4d37-a33e-7d59df1c22fd","Type":"ContainerStarted","Data":"13f5cd47a036339d2b193584dda794c9b04282ca77c299dee1091434f8aabcd8"} Jan 27 18:45:50 crc kubenswrapper[4853]: I0127 18:45:50.747956 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-wdzg4" event={"ID":"29407244-fbfe-4d37-a33e-7d59df1c22fd","Type":"ContainerStarted","Data":"93c2b99bcc5e03d5791740ae0fa4cab2be1e7053dfb88b46a850a6ef7097dddd"} Jan 27 18:45:50 crc kubenswrapper[4853]: I0127 18:45:50.772092 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1ec16618-1da7-467d-82fe-92d01c07ffdc-proxy-ca-bundles\") pod \"controller-manager-767fcbd767-fj52r\" (UID: \"1ec16618-1da7-467d-82fe-92d01c07ffdc\") " pod="openshift-controller-manager/controller-manager-767fcbd767-fj52r" Jan 27 18:45:50 crc kubenswrapper[4853]: I0127 18:45:50.772157 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cbc2z\" (UniqueName: \"kubernetes.io/projected/1ec16618-1da7-467d-82fe-92d01c07ffdc-kube-api-access-cbc2z\") pod \"controller-manager-767fcbd767-fj52r\" (UID: \"1ec16618-1da7-467d-82fe-92d01c07ffdc\") " 
pod="openshift-controller-manager/controller-manager-767fcbd767-fj52r" Jan 27 18:45:50 crc kubenswrapper[4853]: I0127 18:45:50.772188 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1ec16618-1da7-467d-82fe-92d01c07ffdc-config\") pod \"controller-manager-767fcbd767-fj52r\" (UID: \"1ec16618-1da7-467d-82fe-92d01c07ffdc\") " pod="openshift-controller-manager/controller-manager-767fcbd767-fj52r" Jan 27 18:45:50 crc kubenswrapper[4853]: I0127 18:45:50.772206 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1ec16618-1da7-467d-82fe-92d01c07ffdc-client-ca\") pod \"controller-manager-767fcbd767-fj52r\" (UID: \"1ec16618-1da7-467d-82fe-92d01c07ffdc\") " pod="openshift-controller-manager/controller-manager-767fcbd767-fj52r" Jan 27 18:45:50 crc kubenswrapper[4853]: I0127 18:45:50.772235 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1ec16618-1da7-467d-82fe-92d01c07ffdc-serving-cert\") pod \"controller-manager-767fcbd767-fj52r\" (UID: \"1ec16618-1da7-467d-82fe-92d01c07ffdc\") " pod="openshift-controller-manager/controller-manager-767fcbd767-fj52r" Jan 27 18:45:50 crc kubenswrapper[4853]: I0127 18:45:50.774285 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1ec16618-1da7-467d-82fe-92d01c07ffdc-config\") pod \"controller-manager-767fcbd767-fj52r\" (UID: \"1ec16618-1da7-467d-82fe-92d01c07ffdc\") " pod="openshift-controller-manager/controller-manager-767fcbd767-fj52r" Jan 27 18:45:50 crc kubenswrapper[4853]: I0127 18:45:50.777077 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1ec16618-1da7-467d-82fe-92d01c07ffdc-client-ca\") pod \"controller-manager-767fcbd767-fj52r\" (UID: \"1ec16618-1da7-467d-82fe-92d01c07ffdc\") " pod="openshift-controller-manager/controller-manager-767fcbd767-fj52r" Jan 27 18:45:50 crc kubenswrapper[4853]: I0127 18:45:50.778517 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1ec16618-1da7-467d-82fe-92d01c07ffdc-proxy-ca-bundles\") pod \"controller-manager-767fcbd767-fj52r\" (UID: \"1ec16618-1da7-467d-82fe-92d01c07ffdc\") " pod="openshift-controller-manager/controller-manager-767fcbd767-fj52r" Jan 27 18:45:50 crc kubenswrapper[4853]: I0127 18:45:50.780423 4853 scope.go:117] "RemoveContainer" containerID="d783566aec88ec506250b521a4e991a13c79910737751d6209d7da083b3dc7bf" Jan 27 18:45:50 crc kubenswrapper[4853]: E0127 18:45:50.780450 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-6pflg" podUID="a442dc5b-e830-490b-8ad1-6a6606fea52b" Jan 27 18:45:50 crc kubenswrapper[4853]: E0127 18:45:50.780634 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-p74jj" podUID="81272aef-67fa-4c09-bf30-56fdfec7dd7b" Jan 27 18:45:50 crc kubenswrapper[4853]: E0127 18:45:50.782786 
Jan 27 18:45:50 crc kubenswrapper[4853]: E0127 18:45:50.782786 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-vz77p" podUID="8fc489cf-508e-445d-ba19-4aeea8afee8c"
Jan 27 18:45:50 crc kubenswrapper[4853]: E0127 18:45:50.782899 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-vwxfb" podUID="aa41f430-60c0-4d83-96bc-ac2a6aa2dde1"
Jan 27 18:45:50 crc kubenswrapper[4853]: I0127 18:45:50.783579 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1ec16618-1da7-467d-82fe-92d01c07ffdc-serving-cert\") pod \"controller-manager-767fcbd767-fj52r\" (UID: \"1ec16618-1da7-467d-82fe-92d01c07ffdc\") " pod="openshift-controller-manager/controller-manager-767fcbd767-fj52r"
Jan 27 18:45:50 crc kubenswrapper[4853]: I0127 18:45:50.796997 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cbc2z\" (UniqueName: \"kubernetes.io/projected/1ec16618-1da7-467d-82fe-92d01c07ffdc-kube-api-access-cbc2z\") pod \"controller-manager-767fcbd767-fj52r\" (UID: \"1ec16618-1da7-467d-82fe-92d01c07ffdc\") " pod="openshift-controller-manager/controller-manager-767fcbd767-fj52r"
Jan 27 18:45:50 crc kubenswrapper[4853]: I0127 18:45:50.806921 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-mp44v"]
Jan 27 18:45:50 crc kubenswrapper[4853]: I0127 18:45:50.809997 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-mp44v"]
Jan 27 18:45:50 crc kubenswrapper[4853]: I0127 18:45:50.836214 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-kb26l"]
Jan 27 18:45:50 crc kubenswrapper[4853]: I0127 18:45:50.839420 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-kb26l"]
Jan 27 18:45:50 crc kubenswrapper[4853]: I0127 18:45:50.845640 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-767fcbd767-fj52r"
Jan 27 18:45:51 crc kubenswrapper[4853]: I0127 18:45:51.036316 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-767fcbd767-fj52r"]
Jan 27 18:45:51 crc kubenswrapper[4853]: W0127 18:45:51.044353 4853 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1ec16618_1da7_467d_82fe_92d01c07ffdc.slice/crio-1dfc8fa943e1c15cfbc656f144e02bb9e52c3920d68fb9d569ff3e393b6fe625 WatchSource:0}: Error finding container 1dfc8fa943e1c15cfbc656f144e02bb9e52c3920d68fb9d569ff3e393b6fe625: Status 404 returned error can't find the container with id 1dfc8fa943e1c15cfbc656f144e02bb9e52c3920d68fb9d569ff3e393b6fe625
Jan 27 18:45:51 crc kubenswrapper[4853]: I0127 18:45:51.761989 4853 generic.go:334] "Generic (PLEG): container finished" podID="5a823121-5107-4a97-a876-604a1cbd7ff9" containerID="9904ad6a64e5c3c98ea8050ff11830e41ade948f3d459360b090991b5d495b31" exitCode=0
Jan 27 18:45:51 crc kubenswrapper[4853]: I0127 18:45:51.762060 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"5a823121-5107-4a97-a876-604a1cbd7ff9","Type":"ContainerDied","Data":"9904ad6a64e5c3c98ea8050ff11830e41ade948f3d459360b090991b5d495b31"}
Jan 27 18:45:51 crc kubenswrapper[4853]: I0127 18:45:51.764269 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-767fcbd767-fj52r" event={"ID":"1ec16618-1da7-467d-82fe-92d01c07ffdc","Type":"ContainerStarted","Data":"efb4e04340fedc4bf582be44297dd1d021d3cb9148b4b22beb96240327ec4f55"}
Jan 27 18:45:51 crc kubenswrapper[4853]: I0127 18:45:51.764296 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-767fcbd767-fj52r" event={"ID":"1ec16618-1da7-467d-82fe-92d01c07ffdc","Type":"ContainerStarted","Data":"1dfc8fa943e1c15cfbc656f144e02bb9e52c3920d68fb9d569ff3e393b6fe625"}
Jan 27 18:45:51 crc kubenswrapper[4853]: I0127 18:45:51.765107 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-767fcbd767-fj52r"
Jan 27 18:45:51 crc kubenswrapper[4853]: I0127 18:45:51.768113 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-wdzg4" event={"ID":"29407244-fbfe-4d37-a33e-7d59df1c22fd","Type":"ContainerStarted","Data":"dca1527de84d7828a08c4784ea9a2138867e52e37a4f6a47625b14b11bc689e1"}
Jan 27 18:45:51 crc kubenswrapper[4853]: I0127 18:45:51.770053 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-767fcbd767-fj52r"
Jan 27 18:45:51 crc kubenswrapper[4853]: I0127 18:45:51.791426 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-767fcbd767-fj52r" podStartSLOduration=7.791403453 podStartE2EDuration="7.791403453s" podCreationTimestamp="2026-01-27 18:45:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:45:51.787061178 +0000 UTC m=+194.249604061" watchObservedRunningTime="2026-01-27 18:45:51.791403453 +0000 UTC m=+194.253946336"
Jan 27 18:45:51 crc kubenswrapper[4853]: I0127 18:45:51.807686 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-wdzg4" podStartSLOduration=168.807672083 podStartE2EDuration="2m48.807672083s" podCreationTimestamp="2026-01-27 18:43:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:45:51.805521521 +0000 UTC m=+194.268064404" watchObservedRunningTime="2026-01-27 18:45:51.807672083 +0000 UTC m=+194.270214966"
Jan 27 18:45:52 crc kubenswrapper[4853]: I0127 18:45:52.121256 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="58200e7b-a0e9-47ba-8581-42878da87f40" path="/var/lib/kubelet/pods/58200e7b-a0e9-47ba-8581-42878da87f40/volumes"
Jan 27 18:45:52 crc kubenswrapper[4853]: I0127 18:45:52.123384 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6e624c88-c6c1-4c35-985b-264173a9abcd" path="/var/lib/kubelet/pods/6e624c88-c6c1-4c35-985b-264173a9abcd/volumes"
Jan 27 18:45:53 crc kubenswrapper[4853]: I0127 18:45:53.003757 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc"
Jan 27 18:45:53 crc kubenswrapper[4853]: I0127 18:45:53.101523 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5a823121-5107-4a97-a876-604a1cbd7ff9-kubelet-dir\") pod \"5a823121-5107-4a97-a876-604a1cbd7ff9\" (UID: \"5a823121-5107-4a97-a876-604a1cbd7ff9\") "
Jan 27 18:45:53 crc kubenswrapper[4853]: I0127 18:45:53.101575 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5a823121-5107-4a97-a876-604a1cbd7ff9-kube-api-access\") pod \"5a823121-5107-4a97-a876-604a1cbd7ff9\" (UID: \"5a823121-5107-4a97-a876-604a1cbd7ff9\") "
Jan 27 18:45:53 crc kubenswrapper[4853]: I0127 18:45:53.101703 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5a823121-5107-4a97-a876-604a1cbd7ff9-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "5a823121-5107-4a97-a876-604a1cbd7ff9" (UID: "5a823121-5107-4a97-a876-604a1cbd7ff9"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 27 18:45:53 crc kubenswrapper[4853]: I0127 18:45:53.102035 4853 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5a823121-5107-4a97-a876-604a1cbd7ff9-kubelet-dir\") on node \"crc\" DevicePath \"\""
Jan 27 18:45:53 crc kubenswrapper[4853]: I0127 18:45:53.109625 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5a823121-5107-4a97-a876-604a1cbd7ff9-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "5a823121-5107-4a97-a876-604a1cbd7ff9" (UID: "5a823121-5107-4a97-a876-604a1cbd7ff9"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 18:45:53 crc kubenswrapper[4853]: I0127 18:45:53.203348 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5a823121-5107-4a97-a876-604a1cbd7ff9-kube-api-access\") on node \"crc\" DevicePath \"\""
Jan 27 18:45:53 crc kubenswrapper[4853]: I0127 18:45:53.306292 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-58c86fd5c-rp76q"]
Jan 27 18:45:53 crc kubenswrapper[4853]: E0127 18:45:53.306814 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5a823121-5107-4a97-a876-604a1cbd7ff9" containerName="pruner"
Jan 27 18:45:53 crc kubenswrapper[4853]: I0127 18:45:53.306827 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="5a823121-5107-4a97-a876-604a1cbd7ff9" containerName="pruner"
Jan 27 18:45:53 crc kubenswrapper[4853]: I0127 18:45:53.306962 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="5a823121-5107-4a97-a876-604a1cbd7ff9" containerName="pruner"
Jan 27 18:45:53 crc kubenswrapper[4853]: I0127 18:45:53.307569 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-58c86fd5c-rp76q"
Jan 27 18:45:53 crc kubenswrapper[4853]: I0127 18:45:53.310378 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert"
Jan 27 18:45:53 crc kubenswrapper[4853]: I0127 18:45:53.310622 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt"
Jan 27 18:45:53 crc kubenswrapper[4853]: I0127 18:45:53.310660 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2"
Jan 27 18:45:53 crc kubenswrapper[4853]: I0127 18:45:53.310737 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca"
Jan 27 18:45:53 crc kubenswrapper[4853]: I0127 18:45:53.311637 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt"
Jan 27 18:45:53 crc kubenswrapper[4853]: I0127 18:45:53.311793 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config"
Jan 27 18:45:53 crc kubenswrapper[4853]: I0127 18:45:53.325857 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-58c86fd5c-rp76q"]
Jan 27 18:45:53 crc kubenswrapper[4853]: I0127 18:45:53.405839 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2a7064ac-0ea9-445e-a77c-d8960b0126e4-config\") pod \"route-controller-manager-58c86fd5c-rp76q\" (UID: \"2a7064ac-0ea9-445e-a77c-d8960b0126e4\") " pod="openshift-route-controller-manager/route-controller-manager-58c86fd5c-rp76q"
pod="openshift-route-controller-manager/route-controller-manager-58c86fd5c-rp76q" Jan 27 18:45:53 crc kubenswrapper[4853]: I0127 18:45:53.405963 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2a7064ac-0ea9-445e-a77c-d8960b0126e4-serving-cert\") pod \"route-controller-manager-58c86fd5c-rp76q\" (UID: \"2a7064ac-0ea9-445e-a77c-d8960b0126e4\") " pod="openshift-route-controller-manager/route-controller-manager-58c86fd5c-rp76q" Jan 27 18:45:53 crc kubenswrapper[4853]: I0127 18:45:53.405991 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2a7064ac-0ea9-445e-a77c-d8960b0126e4-client-ca\") pod \"route-controller-manager-58c86fd5c-rp76q\" (UID: \"2a7064ac-0ea9-445e-a77c-d8960b0126e4\") " pod="openshift-route-controller-manager/route-controller-manager-58c86fd5c-rp76q" Jan 27 18:45:53 crc kubenswrapper[4853]: I0127 18:45:53.507604 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2a7064ac-0ea9-445e-a77c-d8960b0126e4-config\") pod \"route-controller-manager-58c86fd5c-rp76q\" (UID: \"2a7064ac-0ea9-445e-a77c-d8960b0126e4\") " pod="openshift-route-controller-manager/route-controller-manager-58c86fd5c-rp76q" Jan 27 18:45:53 crc kubenswrapper[4853]: I0127 18:45:53.507696 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pdd2s\" (UniqueName: \"kubernetes.io/projected/2a7064ac-0ea9-445e-a77c-d8960b0126e4-kube-api-access-pdd2s\") pod \"route-controller-manager-58c86fd5c-rp76q\" (UID: \"2a7064ac-0ea9-445e-a77c-d8960b0126e4\") " pod="openshift-route-controller-manager/route-controller-manager-58c86fd5c-rp76q" Jan 27 18:45:53 crc kubenswrapper[4853]: I0127 18:45:53.507734 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2a7064ac-0ea9-445e-a77c-d8960b0126e4-serving-cert\") pod \"route-controller-manager-58c86fd5c-rp76q\" (UID: \"2a7064ac-0ea9-445e-a77c-d8960b0126e4\") " pod="openshift-route-controller-manager/route-controller-manager-58c86fd5c-rp76q" Jan 27 18:45:53 crc kubenswrapper[4853]: I0127 18:45:53.507762 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2a7064ac-0ea9-445e-a77c-d8960b0126e4-client-ca\") pod \"route-controller-manager-58c86fd5c-rp76q\" (UID: \"2a7064ac-0ea9-445e-a77c-d8960b0126e4\") " pod="openshift-route-controller-manager/route-controller-manager-58c86fd5c-rp76q" Jan 27 18:45:53 crc kubenswrapper[4853]: I0127 18:45:53.508761 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2a7064ac-0ea9-445e-a77c-d8960b0126e4-client-ca\") pod \"route-controller-manager-58c86fd5c-rp76q\" (UID: \"2a7064ac-0ea9-445e-a77c-d8960b0126e4\") " pod="openshift-route-controller-manager/route-controller-manager-58c86fd5c-rp76q" Jan 27 18:45:53 crc kubenswrapper[4853]: I0127 18:45:53.508931 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2a7064ac-0ea9-445e-a77c-d8960b0126e4-config\") pod \"route-controller-manager-58c86fd5c-rp76q\" (UID: \"2a7064ac-0ea9-445e-a77c-d8960b0126e4\") " pod="openshift-route-controller-manager/route-controller-manager-58c86fd5c-rp76q" Jan 27 
18:45:53 crc kubenswrapper[4853]: I0127 18:45:53.513073 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2a7064ac-0ea9-445e-a77c-d8960b0126e4-serving-cert\") pod \"route-controller-manager-58c86fd5c-rp76q\" (UID: \"2a7064ac-0ea9-445e-a77c-d8960b0126e4\") " pod="openshift-route-controller-manager/route-controller-manager-58c86fd5c-rp76q" Jan 27 18:45:53 crc kubenswrapper[4853]: I0127 18:45:53.527840 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pdd2s\" (UniqueName: \"kubernetes.io/projected/2a7064ac-0ea9-445e-a77c-d8960b0126e4-kube-api-access-pdd2s\") pod \"route-controller-manager-58c86fd5c-rp76q\" (UID: \"2a7064ac-0ea9-445e-a77c-d8960b0126e4\") " pod="openshift-route-controller-manager/route-controller-manager-58c86fd5c-rp76q" Jan 27 18:45:53 crc kubenswrapper[4853]: I0127 18:45:53.635040 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-58c86fd5c-rp76q" Jan 27 18:45:53 crc kubenswrapper[4853]: I0127 18:45:53.785677 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 27 18:45:53 crc kubenswrapper[4853]: I0127 18:45:53.786196 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"5a823121-5107-4a97-a876-604a1cbd7ff9","Type":"ContainerDied","Data":"c3b8aba6b62d930414f5bc1f5cba9f43e00059cabc5e16753a44bc6e1e7b1945"} Jan 27 18:45:53 crc kubenswrapper[4853]: I0127 18:45:53.786244 4853 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c3b8aba6b62d930414f5bc1f5cba9f43e00059cabc5e16753a44bc6e1e7b1945" Jan 27 18:45:53 crc kubenswrapper[4853]: I0127 18:45:53.831622 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-58c86fd5c-rp76q"] Jan 27 18:45:53 crc kubenswrapper[4853]: I0127 18:45:53.852735 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 27 18:45:53 crc kubenswrapper[4853]: I0127 18:45:53.854520 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 27 18:45:53 crc kubenswrapper[4853]: I0127 18:45:53.859450 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Jan 27 18:45:53 crc kubenswrapper[4853]: I0127 18:45:53.861653 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Jan 27 18:45:53 crc kubenswrapper[4853]: I0127 18:45:53.870405 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 27 18:45:53 crc kubenswrapper[4853]: I0127 18:45:53.913188 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/cf6208ed-203e-4fa0-8575-f593041cbc69-kubelet-dir\") pod \"installer-9-crc\" (UID: \"cf6208ed-203e-4fa0-8575-f593041cbc69\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 27 18:45:53 crc kubenswrapper[4853]: I0127 18:45:53.913250 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/cf6208ed-203e-4fa0-8575-f593041cbc69-var-lock\") pod \"installer-9-crc\" (UID: \"cf6208ed-203e-4fa0-8575-f593041cbc69\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 27 18:45:53 crc kubenswrapper[4853]: I0127 18:45:53.913308 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/cf6208ed-203e-4fa0-8575-f593041cbc69-kube-api-access\") pod \"installer-9-crc\" (UID: \"cf6208ed-203e-4fa0-8575-f593041cbc69\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 27 18:45:54 crc kubenswrapper[4853]: I0127 18:45:54.014367 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/cf6208ed-203e-4fa0-8575-f593041cbc69-kubelet-dir\") pod \"installer-9-crc\" (UID: \"cf6208ed-203e-4fa0-8575-f593041cbc69\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 27 18:45:54 crc kubenswrapper[4853]: I0127 18:45:54.014434 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/cf6208ed-203e-4fa0-8575-f593041cbc69-var-lock\") pod \"installer-9-crc\" (UID: \"cf6208ed-203e-4fa0-8575-f593041cbc69\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 27 18:45:54 crc kubenswrapper[4853]: I0127 18:45:54.014485 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/cf6208ed-203e-4fa0-8575-f593041cbc69-kube-api-access\") pod \"installer-9-crc\" (UID: \"cf6208ed-203e-4fa0-8575-f593041cbc69\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 27 18:45:54 crc kubenswrapper[4853]: I0127 18:45:54.014489 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/cf6208ed-203e-4fa0-8575-f593041cbc69-var-lock\") pod \"installer-9-crc\" (UID: \"cf6208ed-203e-4fa0-8575-f593041cbc69\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 27 18:45:54 crc kubenswrapper[4853]: I0127 18:45:54.014482 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/cf6208ed-203e-4fa0-8575-f593041cbc69-kubelet-dir\") pod \"installer-9-crc\" (UID: 
\"cf6208ed-203e-4fa0-8575-f593041cbc69\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 27 18:45:54 crc kubenswrapper[4853]: I0127 18:45:54.033935 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/cf6208ed-203e-4fa0-8575-f593041cbc69-kube-api-access\") pod \"installer-9-crc\" (UID: \"cf6208ed-203e-4fa0-8575-f593041cbc69\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 27 18:45:54 crc kubenswrapper[4853]: I0127 18:45:54.190593 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 27 18:45:54 crc kubenswrapper[4853]: I0127 18:45:54.650103 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 27 18:45:54 crc kubenswrapper[4853]: W0127 18:45:54.661934 4853 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-podcf6208ed_203e_4fa0_8575_f593041cbc69.slice/crio-eafa2f2f311b5eec04f21483ccac0fc6f07385388df15b896e775d379a24f5fc WatchSource:0}: Error finding container eafa2f2f311b5eec04f21483ccac0fc6f07385388df15b896e775d379a24f5fc: Status 404 returned error can't find the container with id eafa2f2f311b5eec04f21483ccac0fc6f07385388df15b896e775d379a24f5fc Jan 27 18:45:54 crc kubenswrapper[4853]: I0127 18:45:54.791717 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-58c86fd5c-rp76q" event={"ID":"2a7064ac-0ea9-445e-a77c-d8960b0126e4","Type":"ContainerStarted","Data":"fe73107c38891ee8eb9e38d7f6bdec3420487c2ea86a9d2fc2298e224a3f054b"} Jan 27 18:45:54 crc kubenswrapper[4853]: I0127 18:45:54.791761 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-58c86fd5c-rp76q" event={"ID":"2a7064ac-0ea9-445e-a77c-d8960b0126e4","Type":"ContainerStarted","Data":"34e712a1105d575a127856ce5ec1d08b18f6ef84d0379deee91db5542f92552b"} Jan 27 18:45:54 crc kubenswrapper[4853]: I0127 18:45:54.792110 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-58c86fd5c-rp76q" Jan 27 18:45:54 crc kubenswrapper[4853]: I0127 18:45:54.792976 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"cf6208ed-203e-4fa0-8575-f593041cbc69","Type":"ContainerStarted","Data":"eafa2f2f311b5eec04f21483ccac0fc6f07385388df15b896e775d379a24f5fc"} Jan 27 18:45:54 crc kubenswrapper[4853]: I0127 18:45:54.798270 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-58c86fd5c-rp76q" Jan 27 18:45:54 crc kubenswrapper[4853]: I0127 18:45:54.809864 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-58c86fd5c-rp76q" podStartSLOduration=10.8098441 podStartE2EDuration="10.8098441s" podCreationTimestamp="2026-01-27 18:45:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:45:54.806921728 +0000 UTC m=+197.269464611" watchObservedRunningTime="2026-01-27 18:45:54.8098441 +0000 UTC m=+197.272386983" Jan 27 18:45:55 crc kubenswrapper[4853]: I0127 18:45:55.800183 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" 
event={"ID":"cf6208ed-203e-4fa0-8575-f593041cbc69","Type":"ContainerStarted","Data":"60496495b629bcbbcf0f7f9087ca93e690ee49760ce9f3e3e5ff32d951d36f9f"} Jan 27 18:45:55 crc kubenswrapper[4853]: I0127 18:45:55.815903 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-9-crc" podStartSLOduration=2.815888351 podStartE2EDuration="2.815888351s" podCreationTimestamp="2026-01-27 18:45:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:45:55.813710099 +0000 UTC m=+198.276252982" watchObservedRunningTime="2026-01-27 18:45:55.815888351 +0000 UTC m=+198.278431234" Jan 27 18:45:58 crc kubenswrapper[4853]: I0127 18:45:58.820915 4853 generic.go:334] "Generic (PLEG): container finished" podID="bdf3b7ad-3545-4192-941a-862154002694" containerID="297f43caef6ac9df030327cc94feecc229a79f511cd91d6dcf4013df9070632b" exitCode=0 Jan 27 18:45:58 crc kubenswrapper[4853]: I0127 18:45:58.821031 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ck67k" event={"ID":"bdf3b7ad-3545-4192-941a-862154002694","Type":"ContainerDied","Data":"297f43caef6ac9df030327cc94feecc229a79f511cd91d6dcf4013df9070632b"} Jan 27 18:45:59 crc kubenswrapper[4853]: I0127 18:45:59.830403 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ck67k" event={"ID":"bdf3b7ad-3545-4192-941a-862154002694","Type":"ContainerStarted","Data":"d3a74321129291ab923303682d43d24ad99ae4b013da74c710ace8aeb5c80209"} Jan 27 18:45:59 crc kubenswrapper[4853]: I0127 18:45:59.856662 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-ck67k" podStartSLOduration=1.743351461 podStartE2EDuration="53.856643481s" podCreationTimestamp="2026-01-27 18:45:06 +0000 UTC" firstStartedPulling="2026-01-27 18:45:07.238884074 +0000 UTC m=+149.701426957" lastFinishedPulling="2026-01-27 18:45:59.352176094 +0000 UTC m=+201.814718977" observedRunningTime="2026-01-27 18:45:59.852415732 +0000 UTC m=+202.314958635" watchObservedRunningTime="2026-01-27 18:45:59.856643481 +0000 UTC m=+202.319186364" Jan 27 18:46:03 crc kubenswrapper[4853]: I0127 18:46:03.856850 4853 generic.go:334] "Generic (PLEG): container finished" podID="8bb82662-0739-432b-93b0-c5f1bc3ed268" containerID="eb33bb3327a8312fefc5922484033c0a6453950bd036dfe311e4100d716e1f9c" exitCode=0 Jan 27 18:46:03 crc kubenswrapper[4853]: I0127 18:46:03.856924 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8pfch" event={"ID":"8bb82662-0739-432b-93b0-c5f1bc3ed268","Type":"ContainerDied","Data":"eb33bb3327a8312fefc5922484033c0a6453950bd036dfe311e4100d716e1f9c"} Jan 27 18:46:04 crc kubenswrapper[4853]: I0127 18:46:04.864488 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vz77p" event={"ID":"8fc489cf-508e-445d-ba19-4aeea8afee8c","Type":"ContainerStarted","Data":"b17ab6daa8bf6d7c69d7898906157eb86f2be9c909d172401fedd76ffde2f25c"} Jan 27 18:46:05 crc kubenswrapper[4853]: I0127 18:46:05.541448 4853 patch_prober.go:28] interesting pod/machine-config-daemon-6gqj2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 18:46:05 crc kubenswrapper[4853]: I0127 
18:46:05.541517 4853 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6gqj2" podUID="b8a89b1e-bef8-4cb7-930c-480d3125778c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 18:46:05 crc kubenswrapper[4853]: I0127 18:46:05.541572 4853 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-6gqj2" Jan 27 18:46:05 crc kubenswrapper[4853]: I0127 18:46:05.542204 4853 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"36ec3ff8ba2e6f89b4c9a8177dc986ce198013a4b5392fc600a191a9d854241a"} pod="openshift-machine-config-operator/machine-config-daemon-6gqj2" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 27 18:46:05 crc kubenswrapper[4853]: I0127 18:46:05.542274 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-6gqj2" podUID="b8a89b1e-bef8-4cb7-930c-480d3125778c" containerName="machine-config-daemon" containerID="cri-o://36ec3ff8ba2e6f89b4c9a8177dc986ce198013a4b5392fc600a191a9d854241a" gracePeriod=600 Jan 27 18:46:05 crc kubenswrapper[4853]: I0127 18:46:05.870842 4853 generic.go:334] "Generic (PLEG): container finished" podID="8fc489cf-508e-445d-ba19-4aeea8afee8c" containerID="b17ab6daa8bf6d7c69d7898906157eb86f2be9c909d172401fedd76ffde2f25c" exitCode=0 Jan 27 18:46:05 crc kubenswrapper[4853]: I0127 18:46:05.870994 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vz77p" event={"ID":"8fc489cf-508e-445d-ba19-4aeea8afee8c","Type":"ContainerDied","Data":"b17ab6daa8bf6d7c69d7898906157eb86f2be9c909d172401fedd76ffde2f25c"} Jan 27 18:46:05 crc kubenswrapper[4853]: I0127 18:46:05.874970 4853 generic.go:334] "Generic (PLEG): container finished" podID="b8a89b1e-bef8-4cb7-930c-480d3125778c" containerID="36ec3ff8ba2e6f89b4c9a8177dc986ce198013a4b5392fc600a191a9d854241a" exitCode=0 Jan 27 18:46:05 crc kubenswrapper[4853]: I0127 18:46:05.875010 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6gqj2" event={"ID":"b8a89b1e-bef8-4cb7-930c-480d3125778c","Type":"ContainerDied","Data":"36ec3ff8ba2e6f89b4c9a8177dc986ce198013a4b5392fc600a191a9d854241a"} Jan 27 18:46:06 crc kubenswrapper[4853]: I0127 18:46:06.450017 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-ck67k" Jan 27 18:46:06 crc kubenswrapper[4853]: I0127 18:46:06.450439 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-ck67k" Jan 27 18:46:06 crc kubenswrapper[4853]: I0127 18:46:06.882565 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6gqj2" event={"ID":"b8a89b1e-bef8-4cb7-930c-480d3125778c","Type":"ContainerStarted","Data":"c86af0fa16b8eb47abbf6eaa7c300570ebb612ee72008ab7450dc2bae5e201f2"} Jan 27 18:46:06 crc kubenswrapper[4853]: I0127 18:46:06.884243 4853 generic.go:334] "Generic (PLEG): container finished" podID="d6cf1fd9-633e-45c8-b007-051a740ff435" containerID="efedee8e116b7f4b30a5bea959e50f5f6a01d3e619d48e9ba45c7b3cf8108006" exitCode=0 Jan 27 18:46:06 crc 
kubenswrapper[4853]: I0127 18:46:06.884304 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-chjrw" event={"ID":"d6cf1fd9-633e-45c8-b007-051a740ff435","Type":"ContainerDied","Data":"efedee8e116b7f4b30a5bea959e50f5f6a01d3e619d48e9ba45c7b3cf8108006"} Jan 27 18:46:06 crc kubenswrapper[4853]: I0127 18:46:06.886945 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-752jj" event={"ID":"5b61ecec-2b42-40ef-b2c5-d719cc45ab64","Type":"ContainerStarted","Data":"ef6077856b245f84d303957eab4c30030d5426f1fb6326120cedb8704518ab67"} Jan 27 18:46:06 crc kubenswrapper[4853]: I0127 18:46:06.889827 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8pfch" event={"ID":"8bb82662-0739-432b-93b0-c5f1bc3ed268","Type":"ContainerStarted","Data":"1b765d232abc7ab29eef15331e9e4e3f5601a5cc4b0be31beb5f0a2c97ff5f82"} Jan 27 18:46:06 crc kubenswrapper[4853]: I0127 18:46:06.952620 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-8pfch" podStartSLOduration=3.290358418 podStartE2EDuration="1m0.952603154s" podCreationTimestamp="2026-01-27 18:45:06 +0000 UTC" firstStartedPulling="2026-01-27 18:45:08.261283652 +0000 UTC m=+150.723826545" lastFinishedPulling="2026-01-27 18:46:05.923528398 +0000 UTC m=+208.386071281" observedRunningTime="2026-01-27 18:46:06.950459864 +0000 UTC m=+209.413002747" watchObservedRunningTime="2026-01-27 18:46:06.952603154 +0000 UTC m=+209.415146037" Jan 27 18:46:06 crc kubenswrapper[4853]: I0127 18:46:06.989281 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-ck67k" Jan 27 18:46:07 crc kubenswrapper[4853]: I0127 18:46:07.043783 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-ck67k" Jan 27 18:46:07 crc kubenswrapper[4853]: I0127 18:46:07.898064 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vz77p" event={"ID":"8fc489cf-508e-445d-ba19-4aeea8afee8c","Type":"ContainerStarted","Data":"4349c7e838986001ccb370b67833b0fb8fc25ee79a84bc9cb42f509044c70b20"} Jan 27 18:46:07 crc kubenswrapper[4853]: I0127 18:46:07.902567 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-p74jj" event={"ID":"81272aef-67fa-4c09-bf30-56fdfec7dd7b","Type":"ContainerStarted","Data":"7cdacbe95301b2962805efbf942ff66b51d4c0cfed6f7932e82ab731bb09fc3d"} Jan 27 18:46:07 crc kubenswrapper[4853]: I0127 18:46:07.904468 4853 generic.go:334] "Generic (PLEG): container finished" podID="5b61ecec-2b42-40ef-b2c5-d719cc45ab64" containerID="ef6077856b245f84d303957eab4c30030d5426f1fb6326120cedb8704518ab67" exitCode=0 Jan 27 18:46:07 crc kubenswrapper[4853]: I0127 18:46:07.904555 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-752jj" event={"ID":"5b61ecec-2b42-40ef-b2c5-d719cc45ab64","Type":"ContainerDied","Data":"ef6077856b245f84d303957eab4c30030d5426f1fb6326120cedb8704518ab67"} Jan 27 18:46:07 crc kubenswrapper[4853]: I0127 18:46:07.920883 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-vz77p" podStartSLOduration=2.723891083 podStartE2EDuration="58.920864514s" podCreationTimestamp="2026-01-27 18:45:09 +0000 UTC" firstStartedPulling="2026-01-27 18:45:10.439831559 +0000 UTC 
m=+152.902374442" lastFinishedPulling="2026-01-27 18:46:06.63680499 +0000 UTC m=+209.099347873" observedRunningTime="2026-01-27 18:46:07.920475523 +0000 UTC m=+210.383018416" watchObservedRunningTime="2026-01-27 18:46:07.920864514 +0000 UTC m=+210.383407397" Jan 27 18:46:08 crc kubenswrapper[4853]: I0127 18:46:08.910553 4853 generic.go:334] "Generic (PLEG): container finished" podID="81272aef-67fa-4c09-bf30-56fdfec7dd7b" containerID="7cdacbe95301b2962805efbf942ff66b51d4c0cfed6f7932e82ab731bb09fc3d" exitCode=0 Jan 27 18:46:08 crc kubenswrapper[4853]: I0127 18:46:08.910720 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-p74jj" event={"ID":"81272aef-67fa-4c09-bf30-56fdfec7dd7b","Type":"ContainerDied","Data":"7cdacbe95301b2962805efbf942ff66b51d4c0cfed6f7932e82ab731bb09fc3d"} Jan 27 18:46:09 crc kubenswrapper[4853]: I0127 18:46:09.421371 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-vz77p" Jan 27 18:46:09 crc kubenswrapper[4853]: I0127 18:46:09.421424 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-vz77p" Jan 27 18:46:10 crc kubenswrapper[4853]: I0127 18:46:10.470183 4853 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-vz77p" podUID="8fc489cf-508e-445d-ba19-4aeea8afee8c" containerName="registry-server" probeResult="failure" output=< Jan 27 18:46:10 crc kubenswrapper[4853]: timeout: failed to connect service ":50051" within 1s Jan 27 18:46:10 crc kubenswrapper[4853]: > Jan 27 18:46:16 crc kubenswrapper[4853]: I0127 18:46:16.018496 4853 generic.go:334] "Generic (PLEG): container finished" podID="a442dc5b-e830-490b-8ad1-6a6606fea52b" containerID="8c3aca04783c51d204b480d9f53b1a254a451cf3fe7c1f9b6edcc3fdf458d4d6" exitCode=0 Jan 27 18:46:16 crc kubenswrapper[4853]: I0127 18:46:16.018549 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6pflg" event={"ID":"a442dc5b-e830-490b-8ad1-6a6606fea52b","Type":"ContainerDied","Data":"8c3aca04783c51d204b480d9f53b1a254a451cf3fe7c1f9b6edcc3fdf458d4d6"} Jan 27 18:46:16 crc kubenswrapper[4853]: I0127 18:46:16.022576 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-chjrw" event={"ID":"d6cf1fd9-633e-45c8-b007-051a740ff435","Type":"ContainerStarted","Data":"571e9508d2976939513b3ffa1941f5e2601ba967981e9f77072abae9a5820c33"} Jan 27 18:46:16 crc kubenswrapper[4853]: I0127 18:46:16.024970 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-752jj" event={"ID":"5b61ecec-2b42-40ef-b2c5-d719cc45ab64","Type":"ContainerStarted","Data":"97e225e321180d2d3d03e15916750160844e868919e814d54c73cf4ea43e6089"} Jan 27 18:46:16 crc kubenswrapper[4853]: I0127 18:46:16.833407 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-8pfch" Jan 27 18:46:16 crc kubenswrapper[4853]: I0127 18:46:16.833464 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-8pfch" Jan 27 18:46:16 crc kubenswrapper[4853]: I0127 18:46:16.884687 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-8pfch" Jan 27 18:46:17 crc kubenswrapper[4853]: I0127 18:46:17.074493 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-marketplace/community-operators-8pfch" Jan 27 18:46:17 crc kubenswrapper[4853]: I0127 18:46:17.078097 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-752jj" podStartSLOduration=5.193763229 podStartE2EDuration="1m8.078076632s" podCreationTimestamp="2026-01-27 18:45:09 +0000 UTC" firstStartedPulling="2026-01-27 18:45:10.414006113 +0000 UTC m=+152.876548986" lastFinishedPulling="2026-01-27 18:46:13.298319506 +0000 UTC m=+215.760862389" observedRunningTime="2026-01-27 18:46:17.055540679 +0000 UTC m=+219.518083552" watchObservedRunningTime="2026-01-27 18:46:17.078076632 +0000 UTC m=+219.540619515" Jan 27 18:46:17 crc kubenswrapper[4853]: I0127 18:46:17.078327 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-chjrw" podStartSLOduration=8.304005626 podStartE2EDuration="1m9.078321259s" podCreationTimestamp="2026-01-27 18:45:08 +0000 UTC" firstStartedPulling="2026-01-27 18:45:09.359449196 +0000 UTC m=+151.821992079" lastFinishedPulling="2026-01-27 18:46:10.133764809 +0000 UTC m=+212.596307712" observedRunningTime="2026-01-27 18:46:17.07443654 +0000 UTC m=+219.536979443" watchObservedRunningTime="2026-01-27 18:46:17.078321259 +0000 UTC m=+219.540864142" Jan 27 18:46:18 crc kubenswrapper[4853]: I0127 18:46:18.375455 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-chjrw" Jan 27 18:46:18 crc kubenswrapper[4853]: I0127 18:46:18.375831 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-chjrw" Jan 27 18:46:18 crc kubenswrapper[4853]: I0127 18:46:18.414836 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-chjrw" Jan 27 18:46:19 crc kubenswrapper[4853]: I0127 18:46:19.457438 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-8pfch"] Jan 27 18:46:19 crc kubenswrapper[4853]: I0127 18:46:19.458046 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-8pfch" podUID="8bb82662-0739-432b-93b0-c5f1bc3ed268" containerName="registry-server" containerID="cri-o://1b765d232abc7ab29eef15331e9e4e3f5601a5cc4b0be31beb5f0a2c97ff5f82" gracePeriod=2 Jan 27 18:46:19 crc kubenswrapper[4853]: I0127 18:46:19.490325 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-vz77p" Jan 27 18:46:19 crc kubenswrapper[4853]: I0127 18:46:19.536496 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-vz77p" Jan 27 18:46:19 crc kubenswrapper[4853]: I0127 18:46:19.869955 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-752jj" Jan 27 18:46:19 crc kubenswrapper[4853]: I0127 18:46:19.870058 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-752jj" Jan 27 18:46:20 crc kubenswrapper[4853]: I0127 18:46:20.048203 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-p74jj" event={"ID":"81272aef-67fa-4c09-bf30-56fdfec7dd7b","Type":"ContainerStarted","Data":"b1c23ac34259cf3dcd69218e17baa02d64f2b8f68bc7b19ce1cea5151d1d9f23"} Jan 27 18:46:20 crc kubenswrapper[4853]: I0127 
18:46:20.089856 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-chjrw" Jan 27 18:46:20 crc kubenswrapper[4853]: I0127 18:46:20.105740 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-p74jj" podStartSLOduration=4.099453131 podStartE2EDuration="1m15.105720653s" podCreationTimestamp="2026-01-27 18:45:05 +0000 UTC" firstStartedPulling="2026-01-27 18:45:07.24255817 +0000 UTC m=+149.705101053" lastFinishedPulling="2026-01-27 18:46:18.248825692 +0000 UTC m=+220.711368575" observedRunningTime="2026-01-27 18:46:20.071772379 +0000 UTC m=+222.534315262" watchObservedRunningTime="2026-01-27 18:46:20.105720653 +0000 UTC m=+222.568263526" Jan 27 18:46:20 crc kubenswrapper[4853]: I0127 18:46:20.934607 4853 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-752jj" podUID="5b61ecec-2b42-40ef-b2c5-d719cc45ab64" containerName="registry-server" probeResult="failure" output=< Jan 27 18:46:20 crc kubenswrapper[4853]: timeout: failed to connect service ":50051" within 1s Jan 27 18:46:20 crc kubenswrapper[4853]: > Jan 27 18:46:21 crc kubenswrapper[4853]: I0127 18:46:21.054394 4853 generic.go:334] "Generic (PLEG): container finished" podID="8bb82662-0739-432b-93b0-c5f1bc3ed268" containerID="1b765d232abc7ab29eef15331e9e4e3f5601a5cc4b0be31beb5f0a2c97ff5f82" exitCode=0 Jan 27 18:46:21 crc kubenswrapper[4853]: I0127 18:46:21.054475 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8pfch" event={"ID":"8bb82662-0739-432b-93b0-c5f1bc3ed268","Type":"ContainerDied","Data":"1b765d232abc7ab29eef15331e9e4e3f5601a5cc4b0be31beb5f0a2c97ff5f82"} Jan 27 18:46:21 crc kubenswrapper[4853]: I0127 18:46:21.874789 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8pfch" Jan 27 18:46:21 crc kubenswrapper[4853]: I0127 18:46:21.973428 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ss6mn\" (UniqueName: \"kubernetes.io/projected/8bb82662-0739-432b-93b0-c5f1bc3ed268-kube-api-access-ss6mn\") pod \"8bb82662-0739-432b-93b0-c5f1bc3ed268\" (UID: \"8bb82662-0739-432b-93b0-c5f1bc3ed268\") " Jan 27 18:46:21 crc kubenswrapper[4853]: I0127 18:46:21.973584 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8bb82662-0739-432b-93b0-c5f1bc3ed268-utilities\") pod \"8bb82662-0739-432b-93b0-c5f1bc3ed268\" (UID: \"8bb82662-0739-432b-93b0-c5f1bc3ed268\") " Jan 27 18:46:21 crc kubenswrapper[4853]: I0127 18:46:21.973786 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8bb82662-0739-432b-93b0-c5f1bc3ed268-catalog-content\") pod \"8bb82662-0739-432b-93b0-c5f1bc3ed268\" (UID: \"8bb82662-0739-432b-93b0-c5f1bc3ed268\") " Jan 27 18:46:21 crc kubenswrapper[4853]: I0127 18:46:21.974530 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8bb82662-0739-432b-93b0-c5f1bc3ed268-utilities" (OuterVolumeSpecName: "utilities") pod "8bb82662-0739-432b-93b0-c5f1bc3ed268" (UID: "8bb82662-0739-432b-93b0-c5f1bc3ed268"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 18:46:21 crc kubenswrapper[4853]: I0127 18:46:21.993396 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8bb82662-0739-432b-93b0-c5f1bc3ed268-kube-api-access-ss6mn" (OuterVolumeSpecName: "kube-api-access-ss6mn") pod "8bb82662-0739-432b-93b0-c5f1bc3ed268" (UID: "8bb82662-0739-432b-93b0-c5f1bc3ed268"). InnerVolumeSpecName "kube-api-access-ss6mn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:46:22 crc kubenswrapper[4853]: I0127 18:46:22.062997 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8pfch" event={"ID":"8bb82662-0739-432b-93b0-c5f1bc3ed268","Type":"ContainerDied","Data":"9c89fa2d24305310c99dfe064228cd2a51eebee0d40d21438078536660cc600d"} Jan 27 18:46:22 crc kubenswrapper[4853]: I0127 18:46:22.063076 4853 scope.go:117] "RemoveContainer" containerID="1b765d232abc7ab29eef15331e9e4e3f5601a5cc4b0be31beb5f0a2c97ff5f82" Jan 27 18:46:22 crc kubenswrapper[4853]: I0127 18:46:22.063075 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8pfch" Jan 27 18:46:22 crc kubenswrapper[4853]: I0127 18:46:22.075758 4853 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8bb82662-0739-432b-93b0-c5f1bc3ed268-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 18:46:22 crc kubenswrapper[4853]: I0127 18:46:22.075793 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ss6mn\" (UniqueName: \"kubernetes.io/projected/8bb82662-0739-432b-93b0-c5f1bc3ed268-kube-api-access-ss6mn\") on node \"crc\" DevicePath \"\"" Jan 27 18:46:22 crc kubenswrapper[4853]: I0127 18:46:22.959620 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8bb82662-0739-432b-93b0-c5f1bc3ed268-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8bb82662-0739-432b-93b0-c5f1bc3ed268" (UID: "8bb82662-0739-432b-93b0-c5f1bc3ed268"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 18:46:22 crc kubenswrapper[4853]: I0127 18:46:22.968623 4853 scope.go:117] "RemoveContainer" containerID="eb33bb3327a8312fefc5922484033c0a6453950bd036dfe311e4100d716e1f9c" Jan 27 18:46:22 crc kubenswrapper[4853]: I0127 18:46:22.990499 4853 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8bb82662-0739-432b-93b0-c5f1bc3ed268-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 18:46:22 crc kubenswrapper[4853]: I0127 18:46:22.994084 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-8pfch"] Jan 27 18:46:22 crc kubenswrapper[4853]: I0127 18:46:22.997554 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-8pfch"] Jan 27 18:46:24 crc kubenswrapper[4853]: I0127 18:46:24.121905 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8bb82662-0739-432b-93b0-c5f1bc3ed268" path="/var/lib/kubelet/pods/8bb82662-0739-432b-93b0-c5f1bc3ed268/volumes" Jan 27 18:46:24 crc kubenswrapper[4853]: I0127 18:46:24.386869 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-767fcbd767-fj52r"] Jan 27 18:46:24 crc kubenswrapper[4853]: I0127 18:46:24.387102 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-767fcbd767-fj52r" podUID="1ec16618-1da7-467d-82fe-92d01c07ffdc" containerName="controller-manager" containerID="cri-o://efb4e04340fedc4bf582be44297dd1d021d3cb9148b4b22beb96240327ec4f55" gracePeriod=30 Jan 27 18:46:24 crc kubenswrapper[4853]: I0127 18:46:24.512705 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-58c86fd5c-rp76q"] Jan 27 18:46:24 crc kubenswrapper[4853]: I0127 18:46:24.512900 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-58c86fd5c-rp76q" podUID="2a7064ac-0ea9-445e-a77c-d8960b0126e4" containerName="route-controller-manager" containerID="cri-o://fe73107c38891ee8eb9e38d7f6bdec3420487c2ea86a9d2fc2298e224a3f054b" gracePeriod=30 Jan 27 18:46:25 crc kubenswrapper[4853]: I0127 18:46:25.087448 4853 generic.go:334] "Generic (PLEG): container finished" podID="1ec16618-1da7-467d-82fe-92d01c07ffdc" containerID="efb4e04340fedc4bf582be44297dd1d021d3cb9148b4b22beb96240327ec4f55" exitCode=0 Jan 27 18:46:25 crc kubenswrapper[4853]: I0127 18:46:25.087501 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-767fcbd767-fj52r" event={"ID":"1ec16618-1da7-467d-82fe-92d01c07ffdc","Type":"ContainerDied","Data":"efb4e04340fedc4bf582be44297dd1d021d3cb9148b4b22beb96240327ec4f55"} Jan 27 18:46:26 crc kubenswrapper[4853]: I0127 18:46:26.094460 4853 generic.go:334] "Generic (PLEG): container finished" podID="2a7064ac-0ea9-445e-a77c-d8960b0126e4" containerID="fe73107c38891ee8eb9e38d7f6bdec3420487c2ea86a9d2fc2298e224a3f054b" exitCode=0 Jan 27 18:46:26 crc kubenswrapper[4853]: I0127 18:46:26.094544 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-58c86fd5c-rp76q" event={"ID":"2a7064ac-0ea9-445e-a77c-d8960b0126e4","Type":"ContainerDied","Data":"fe73107c38891ee8eb9e38d7f6bdec3420487c2ea86a9d2fc2298e224a3f054b"} Jan 27 18:46:26 crc kubenswrapper[4853]: I0127 18:46:26.211717 
4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-p74jj" Jan 27 18:46:26 crc kubenswrapper[4853]: I0127 18:46:26.211795 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-p74jj" Jan 27 18:46:26 crc kubenswrapper[4853]: I0127 18:46:26.270329 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-p74jj" Jan 27 18:46:26 crc kubenswrapper[4853]: I0127 18:46:26.294840 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-767fcbd767-fj52r" Jan 27 18:46:26 crc kubenswrapper[4853]: I0127 18:46:26.324310 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-859bc87fd4-kspvl"] Jan 27 18:46:26 crc kubenswrapper[4853]: E0127 18:46:26.324558 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8bb82662-0739-432b-93b0-c5f1bc3ed268" containerName="extract-content" Jan 27 18:46:26 crc kubenswrapper[4853]: I0127 18:46:26.324576 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="8bb82662-0739-432b-93b0-c5f1bc3ed268" containerName="extract-content" Jan 27 18:46:26 crc kubenswrapper[4853]: E0127 18:46:26.324591 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8bb82662-0739-432b-93b0-c5f1bc3ed268" containerName="extract-utilities" Jan 27 18:46:26 crc kubenswrapper[4853]: I0127 18:46:26.324599 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="8bb82662-0739-432b-93b0-c5f1bc3ed268" containerName="extract-utilities" Jan 27 18:46:26 crc kubenswrapper[4853]: E0127 18:46:26.324611 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8bb82662-0739-432b-93b0-c5f1bc3ed268" containerName="registry-server" Jan 27 18:46:26 crc kubenswrapper[4853]: I0127 18:46:26.324620 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="8bb82662-0739-432b-93b0-c5f1bc3ed268" containerName="registry-server" Jan 27 18:46:26 crc kubenswrapper[4853]: E0127 18:46:26.324630 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1ec16618-1da7-467d-82fe-92d01c07ffdc" containerName="controller-manager" Jan 27 18:46:26 crc kubenswrapper[4853]: I0127 18:46:26.324637 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="1ec16618-1da7-467d-82fe-92d01c07ffdc" containerName="controller-manager" Jan 27 18:46:26 crc kubenswrapper[4853]: I0127 18:46:26.324762 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="8bb82662-0739-432b-93b0-c5f1bc3ed268" containerName="registry-server" Jan 27 18:46:26 crc kubenswrapper[4853]: I0127 18:46:26.324783 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="1ec16618-1da7-467d-82fe-92d01c07ffdc" containerName="controller-manager" Jan 27 18:46:26 crc kubenswrapper[4853]: I0127 18:46:26.325149 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-859bc87fd4-kspvl" Jan 27 18:46:26 crc kubenswrapper[4853]: I0127 18:46:26.334639 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-859bc87fd4-kspvl"] Jan 27 18:46:26 crc kubenswrapper[4853]: I0127 18:46:26.432564 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1ec16618-1da7-467d-82fe-92d01c07ffdc-config\") pod \"1ec16618-1da7-467d-82fe-92d01c07ffdc\" (UID: \"1ec16618-1da7-467d-82fe-92d01c07ffdc\") " Jan 27 18:46:26 crc kubenswrapper[4853]: I0127 18:46:26.432634 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1ec16618-1da7-467d-82fe-92d01c07ffdc-client-ca\") pod \"1ec16618-1da7-467d-82fe-92d01c07ffdc\" (UID: \"1ec16618-1da7-467d-82fe-92d01c07ffdc\") " Jan 27 18:46:26 crc kubenswrapper[4853]: I0127 18:46:26.432659 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1ec16618-1da7-467d-82fe-92d01c07ffdc-proxy-ca-bundles\") pod \"1ec16618-1da7-467d-82fe-92d01c07ffdc\" (UID: \"1ec16618-1da7-467d-82fe-92d01c07ffdc\") " Jan 27 18:46:26 crc kubenswrapper[4853]: I0127 18:46:26.432696 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1ec16618-1da7-467d-82fe-92d01c07ffdc-serving-cert\") pod \"1ec16618-1da7-467d-82fe-92d01c07ffdc\" (UID: \"1ec16618-1da7-467d-82fe-92d01c07ffdc\") " Jan 27 18:46:26 crc kubenswrapper[4853]: I0127 18:46:26.432757 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cbc2z\" (UniqueName: \"kubernetes.io/projected/1ec16618-1da7-467d-82fe-92d01c07ffdc-kube-api-access-cbc2z\") pod \"1ec16618-1da7-467d-82fe-92d01c07ffdc\" (UID: \"1ec16618-1da7-467d-82fe-92d01c07ffdc\") " Jan 27 18:46:26 crc kubenswrapper[4853]: I0127 18:46:26.432969 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a567feea-5f51-4a2b-910e-37850da7cdfe-serving-cert\") pod \"controller-manager-859bc87fd4-kspvl\" (UID: \"a567feea-5f51-4a2b-910e-37850da7cdfe\") " pod="openshift-controller-manager/controller-manager-859bc87fd4-kspvl" Jan 27 18:46:26 crc kubenswrapper[4853]: I0127 18:46:26.433054 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n5bms\" (UniqueName: \"kubernetes.io/projected/a567feea-5f51-4a2b-910e-37850da7cdfe-kube-api-access-n5bms\") pod \"controller-manager-859bc87fd4-kspvl\" (UID: \"a567feea-5f51-4a2b-910e-37850da7cdfe\") " pod="openshift-controller-manager/controller-manager-859bc87fd4-kspvl" Jan 27 18:46:26 crc kubenswrapper[4853]: I0127 18:46:26.433341 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1ec16618-1da7-467d-82fe-92d01c07ffdc-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "1ec16618-1da7-467d-82fe-92d01c07ffdc" (UID: "1ec16618-1da7-467d-82fe-92d01c07ffdc"). InnerVolumeSpecName "proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:46:26 crc kubenswrapper[4853]: I0127 18:46:26.433370 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1ec16618-1da7-467d-82fe-92d01c07ffdc-client-ca" (OuterVolumeSpecName: "client-ca") pod "1ec16618-1da7-467d-82fe-92d01c07ffdc" (UID: "1ec16618-1da7-467d-82fe-92d01c07ffdc"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:46:26 crc kubenswrapper[4853]: I0127 18:46:26.433386 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a567feea-5f51-4a2b-910e-37850da7cdfe-client-ca\") pod \"controller-manager-859bc87fd4-kspvl\" (UID: \"a567feea-5f51-4a2b-910e-37850da7cdfe\") " pod="openshift-controller-manager/controller-manager-859bc87fd4-kspvl" Jan 27 18:46:26 crc kubenswrapper[4853]: I0127 18:46:26.433470 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a567feea-5f51-4a2b-910e-37850da7cdfe-config\") pod \"controller-manager-859bc87fd4-kspvl\" (UID: \"a567feea-5f51-4a2b-910e-37850da7cdfe\") " pod="openshift-controller-manager/controller-manager-859bc87fd4-kspvl" Jan 27 18:46:26 crc kubenswrapper[4853]: I0127 18:46:26.433468 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1ec16618-1da7-467d-82fe-92d01c07ffdc-config" (OuterVolumeSpecName: "config") pod "1ec16618-1da7-467d-82fe-92d01c07ffdc" (UID: "1ec16618-1da7-467d-82fe-92d01c07ffdc"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:46:26 crc kubenswrapper[4853]: I0127 18:46:26.433643 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a567feea-5f51-4a2b-910e-37850da7cdfe-proxy-ca-bundles\") pod \"controller-manager-859bc87fd4-kspvl\" (UID: \"a567feea-5f51-4a2b-910e-37850da7cdfe\") " pod="openshift-controller-manager/controller-manager-859bc87fd4-kspvl" Jan 27 18:46:26 crc kubenswrapper[4853]: I0127 18:46:26.433771 4853 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1ec16618-1da7-467d-82fe-92d01c07ffdc-config\") on node \"crc\" DevicePath \"\"" Jan 27 18:46:26 crc kubenswrapper[4853]: I0127 18:46:26.433796 4853 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1ec16618-1da7-467d-82fe-92d01c07ffdc-client-ca\") on node \"crc\" DevicePath \"\"" Jan 27 18:46:26 crc kubenswrapper[4853]: I0127 18:46:26.433809 4853 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1ec16618-1da7-467d-82fe-92d01c07ffdc-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 27 18:46:26 crc kubenswrapper[4853]: I0127 18:46:26.439635 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1ec16618-1da7-467d-82fe-92d01c07ffdc-kube-api-access-cbc2z" (OuterVolumeSpecName: "kube-api-access-cbc2z") pod "1ec16618-1da7-467d-82fe-92d01c07ffdc" (UID: "1ec16618-1da7-467d-82fe-92d01c07ffdc"). InnerVolumeSpecName "kube-api-access-cbc2z". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:46:26 crc kubenswrapper[4853]: I0127 18:46:26.439858 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1ec16618-1da7-467d-82fe-92d01c07ffdc-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1ec16618-1da7-467d-82fe-92d01c07ffdc" (UID: "1ec16618-1da7-467d-82fe-92d01c07ffdc"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:46:26 crc kubenswrapper[4853]: I0127 18:46:26.534763 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n5bms\" (UniqueName: \"kubernetes.io/projected/a567feea-5f51-4a2b-910e-37850da7cdfe-kube-api-access-n5bms\") pod \"controller-manager-859bc87fd4-kspvl\" (UID: \"a567feea-5f51-4a2b-910e-37850da7cdfe\") " pod="openshift-controller-manager/controller-manager-859bc87fd4-kspvl" Jan 27 18:46:26 crc kubenswrapper[4853]: I0127 18:46:26.534835 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a567feea-5f51-4a2b-910e-37850da7cdfe-client-ca\") pod \"controller-manager-859bc87fd4-kspvl\" (UID: \"a567feea-5f51-4a2b-910e-37850da7cdfe\") " pod="openshift-controller-manager/controller-manager-859bc87fd4-kspvl" Jan 27 18:46:26 crc kubenswrapper[4853]: I0127 18:46:26.534861 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a567feea-5f51-4a2b-910e-37850da7cdfe-config\") pod \"controller-manager-859bc87fd4-kspvl\" (UID: \"a567feea-5f51-4a2b-910e-37850da7cdfe\") " pod="openshift-controller-manager/controller-manager-859bc87fd4-kspvl" Jan 27 18:46:26 crc kubenswrapper[4853]: I0127 18:46:26.534905 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a567feea-5f51-4a2b-910e-37850da7cdfe-proxy-ca-bundles\") pod \"controller-manager-859bc87fd4-kspvl\" (UID: \"a567feea-5f51-4a2b-910e-37850da7cdfe\") " pod="openshift-controller-manager/controller-manager-859bc87fd4-kspvl" Jan 27 18:46:26 crc kubenswrapper[4853]: I0127 18:46:26.534938 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a567feea-5f51-4a2b-910e-37850da7cdfe-serving-cert\") pod \"controller-manager-859bc87fd4-kspvl\" (UID: \"a567feea-5f51-4a2b-910e-37850da7cdfe\") " pod="openshift-controller-manager/controller-manager-859bc87fd4-kspvl" Jan 27 18:46:26 crc kubenswrapper[4853]: I0127 18:46:26.535023 4853 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1ec16618-1da7-467d-82fe-92d01c07ffdc-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 18:46:26 crc kubenswrapper[4853]: I0127 18:46:26.535040 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cbc2z\" (UniqueName: \"kubernetes.io/projected/1ec16618-1da7-467d-82fe-92d01c07ffdc-kube-api-access-cbc2z\") on node \"crc\" DevicePath \"\"" Jan 27 18:46:26 crc kubenswrapper[4853]: I0127 18:46:26.536209 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a567feea-5f51-4a2b-910e-37850da7cdfe-client-ca\") pod \"controller-manager-859bc87fd4-kspvl\" (UID: \"a567feea-5f51-4a2b-910e-37850da7cdfe\") " pod="openshift-controller-manager/controller-manager-859bc87fd4-kspvl" Jan 27 18:46:26 crc 
kubenswrapper[4853]: I0127 18:46:26.536520 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a567feea-5f51-4a2b-910e-37850da7cdfe-config\") pod \"controller-manager-859bc87fd4-kspvl\" (UID: \"a567feea-5f51-4a2b-910e-37850da7cdfe\") " pod="openshift-controller-manager/controller-manager-859bc87fd4-kspvl" Jan 27 18:46:26 crc kubenswrapper[4853]: I0127 18:46:26.536827 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a567feea-5f51-4a2b-910e-37850da7cdfe-proxy-ca-bundles\") pod \"controller-manager-859bc87fd4-kspvl\" (UID: \"a567feea-5f51-4a2b-910e-37850da7cdfe\") " pod="openshift-controller-manager/controller-manager-859bc87fd4-kspvl" Jan 27 18:46:26 crc kubenswrapper[4853]: I0127 18:46:26.540536 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a567feea-5f51-4a2b-910e-37850da7cdfe-serving-cert\") pod \"controller-manager-859bc87fd4-kspvl\" (UID: \"a567feea-5f51-4a2b-910e-37850da7cdfe\") " pod="openshift-controller-manager/controller-manager-859bc87fd4-kspvl" Jan 27 18:46:26 crc kubenswrapper[4853]: I0127 18:46:26.555727 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n5bms\" (UniqueName: \"kubernetes.io/projected/a567feea-5f51-4a2b-910e-37850da7cdfe-kube-api-access-n5bms\") pod \"controller-manager-859bc87fd4-kspvl\" (UID: \"a567feea-5f51-4a2b-910e-37850da7cdfe\") " pod="openshift-controller-manager/controller-manager-859bc87fd4-kspvl" Jan 27 18:46:26 crc kubenswrapper[4853]: I0127 18:46:26.640320 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-859bc87fd4-kspvl" Jan 27 18:46:26 crc kubenswrapper[4853]: I0127 18:46:26.922619 4853 scope.go:117] "RemoveContainer" containerID="517754bd0b507f4f4bd3ef8a74b4be1f376c19629f4f376548b119c6d1b1ef97" Jan 27 18:46:27 crc kubenswrapper[4853]: I0127 18:46:27.101258 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-767fcbd767-fj52r" event={"ID":"1ec16618-1da7-467d-82fe-92d01c07ffdc","Type":"ContainerDied","Data":"1dfc8fa943e1c15cfbc656f144e02bb9e52c3920d68fb9d569ff3e393b6fe625"} Jan 27 18:46:27 crc kubenswrapper[4853]: I0127 18:46:27.101364 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-767fcbd767-fj52r" Jan 27 18:46:27 crc kubenswrapper[4853]: I0127 18:46:27.128574 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-767fcbd767-fj52r"] Jan 27 18:46:27 crc kubenswrapper[4853]: I0127 18:46:27.133429 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-767fcbd767-fj52r"] Jan 27 18:46:27 crc kubenswrapper[4853]: I0127 18:46:27.144893 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-p74jj" Jan 27 18:46:27 crc kubenswrapper[4853]: I0127 18:46:27.958683 4853 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-58c86fd5c-rp76q" Jan 27 18:46:28 crc kubenswrapper[4853]: I0127 18:46:28.054488 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2a7064ac-0ea9-445e-a77c-d8960b0126e4-client-ca\") pod \"2a7064ac-0ea9-445e-a77c-d8960b0126e4\" (UID: \"2a7064ac-0ea9-445e-a77c-d8960b0126e4\") " Jan 27 18:46:28 crc kubenswrapper[4853]: I0127 18:46:28.054629 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pdd2s\" (UniqueName: \"kubernetes.io/projected/2a7064ac-0ea9-445e-a77c-d8960b0126e4-kube-api-access-pdd2s\") pod \"2a7064ac-0ea9-445e-a77c-d8960b0126e4\" (UID: \"2a7064ac-0ea9-445e-a77c-d8960b0126e4\") " Jan 27 18:46:28 crc kubenswrapper[4853]: I0127 18:46:28.054665 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2a7064ac-0ea9-445e-a77c-d8960b0126e4-serving-cert\") pod \"2a7064ac-0ea9-445e-a77c-d8960b0126e4\" (UID: \"2a7064ac-0ea9-445e-a77c-d8960b0126e4\") " Jan 27 18:46:28 crc kubenswrapper[4853]: I0127 18:46:28.054681 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2a7064ac-0ea9-445e-a77c-d8960b0126e4-config\") pod \"2a7064ac-0ea9-445e-a77c-d8960b0126e4\" (UID: \"2a7064ac-0ea9-445e-a77c-d8960b0126e4\") " Jan 27 18:46:28 crc kubenswrapper[4853]: I0127 18:46:28.056340 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2a7064ac-0ea9-445e-a77c-d8960b0126e4-client-ca" (OuterVolumeSpecName: "client-ca") pod "2a7064ac-0ea9-445e-a77c-d8960b0126e4" (UID: "2a7064ac-0ea9-445e-a77c-d8960b0126e4"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:46:28 crc kubenswrapper[4853]: I0127 18:46:28.056678 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2a7064ac-0ea9-445e-a77c-d8960b0126e4-config" (OuterVolumeSpecName: "config") pod "2a7064ac-0ea9-445e-a77c-d8960b0126e4" (UID: "2a7064ac-0ea9-445e-a77c-d8960b0126e4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:46:28 crc kubenswrapper[4853]: I0127 18:46:28.062384 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2a7064ac-0ea9-445e-a77c-d8960b0126e4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "2a7064ac-0ea9-445e-a77c-d8960b0126e4" (UID: "2a7064ac-0ea9-445e-a77c-d8960b0126e4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:46:28 crc kubenswrapper[4853]: I0127 18:46:28.062485 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2a7064ac-0ea9-445e-a77c-d8960b0126e4-kube-api-access-pdd2s" (OuterVolumeSpecName: "kube-api-access-pdd2s") pod "2a7064ac-0ea9-445e-a77c-d8960b0126e4" (UID: "2a7064ac-0ea9-445e-a77c-d8960b0126e4"). InnerVolumeSpecName "kube-api-access-pdd2s". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:46:28 crc kubenswrapper[4853]: I0127 18:46:28.115306 4853 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-58c86fd5c-rp76q" Jan 27 18:46:28 crc kubenswrapper[4853]: I0127 18:46:28.120011 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1ec16618-1da7-467d-82fe-92d01c07ffdc" path="/var/lib/kubelet/pods/1ec16618-1da7-467d-82fe-92d01c07ffdc/volumes" Jan 27 18:46:28 crc kubenswrapper[4853]: I0127 18:46:28.120627 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-58c86fd5c-rp76q" event={"ID":"2a7064ac-0ea9-445e-a77c-d8960b0126e4","Type":"ContainerDied","Data":"34e712a1105d575a127856ce5ec1d08b18f6ef84d0379deee91db5542f92552b"} Jan 27 18:46:28 crc kubenswrapper[4853]: I0127 18:46:28.148885 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-58c86fd5c-rp76q"] Jan 27 18:46:28 crc kubenswrapper[4853]: I0127 18:46:28.151802 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-58c86fd5c-rp76q"] Jan 27 18:46:28 crc kubenswrapper[4853]: I0127 18:46:28.155907 4853 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2a7064ac-0ea9-445e-a77c-d8960b0126e4-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 18:46:28 crc kubenswrapper[4853]: I0127 18:46:28.155979 4853 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2a7064ac-0ea9-445e-a77c-d8960b0126e4-config\") on node \"crc\" DevicePath \"\"" Jan 27 18:46:28 crc kubenswrapper[4853]: I0127 18:46:28.155994 4853 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2a7064ac-0ea9-445e-a77c-d8960b0126e4-client-ca\") on node \"crc\" DevicePath \"\"" Jan 27 18:46:28 crc kubenswrapper[4853]: I0127 18:46:28.156009 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pdd2s\" (UniqueName: \"kubernetes.io/projected/2a7064ac-0ea9-445e-a77c-d8960b0126e4-kube-api-access-pdd2s\") on node \"crc\" DevicePath \"\"" Jan 27 18:46:28 crc kubenswrapper[4853]: I0127 18:46:28.406149 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-655d4d6c6d-tn75v"] Jan 27 18:46:28 crc kubenswrapper[4853]: E0127 18:46:28.406376 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2a7064ac-0ea9-445e-a77c-d8960b0126e4" containerName="route-controller-manager" Jan 27 18:46:28 crc kubenswrapper[4853]: I0127 18:46:28.406387 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="2a7064ac-0ea9-445e-a77c-d8960b0126e4" containerName="route-controller-manager" Jan 27 18:46:28 crc kubenswrapper[4853]: I0127 18:46:28.406494 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="2a7064ac-0ea9-445e-a77c-d8960b0126e4" containerName="route-controller-manager" Jan 27 18:46:28 crc kubenswrapper[4853]: I0127 18:46:28.406843 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-655d4d6c6d-tn75v" Jan 27 18:46:28 crc kubenswrapper[4853]: I0127 18:46:28.408628 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 27 18:46:28 crc kubenswrapper[4853]: I0127 18:46:28.410629 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 27 18:46:28 crc kubenswrapper[4853]: I0127 18:46:28.411765 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 27 18:46:28 crc kubenswrapper[4853]: I0127 18:46:28.411929 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 27 18:46:28 crc kubenswrapper[4853]: I0127 18:46:28.412050 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 27 18:46:28 crc kubenswrapper[4853]: I0127 18:46:28.412200 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 27 18:46:28 crc kubenswrapper[4853]: I0127 18:46:28.418160 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-655d4d6c6d-tn75v"] Jan 27 18:46:28 crc kubenswrapper[4853]: I0127 18:46:28.500242 4853 scope.go:117] "RemoveContainer" containerID="efb4e04340fedc4bf582be44297dd1d021d3cb9148b4b22beb96240327ec4f55" Jan 27 18:46:28 crc kubenswrapper[4853]: I0127 18:46:28.562755 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kqfzz\" (UniqueName: \"kubernetes.io/projected/682557db-b9bc-4004-a882-d2120d0ad94d-kube-api-access-kqfzz\") pod \"route-controller-manager-655d4d6c6d-tn75v\" (UID: \"682557db-b9bc-4004-a882-d2120d0ad94d\") " pod="openshift-route-controller-manager/route-controller-manager-655d4d6c6d-tn75v" Jan 27 18:46:28 crc kubenswrapper[4853]: I0127 18:46:28.562802 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/682557db-b9bc-4004-a882-d2120d0ad94d-config\") pod \"route-controller-manager-655d4d6c6d-tn75v\" (UID: \"682557db-b9bc-4004-a882-d2120d0ad94d\") " pod="openshift-route-controller-manager/route-controller-manager-655d4d6c6d-tn75v" Jan 27 18:46:28 crc kubenswrapper[4853]: I0127 18:46:28.562834 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/682557db-b9bc-4004-a882-d2120d0ad94d-serving-cert\") pod \"route-controller-manager-655d4d6c6d-tn75v\" (UID: \"682557db-b9bc-4004-a882-d2120d0ad94d\") " pod="openshift-route-controller-manager/route-controller-manager-655d4d6c6d-tn75v" Jan 27 18:46:28 crc kubenswrapper[4853]: I0127 18:46:28.563046 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/682557db-b9bc-4004-a882-d2120d0ad94d-client-ca\") pod \"route-controller-manager-655d4d6c6d-tn75v\" (UID: \"682557db-b9bc-4004-a882-d2120d0ad94d\") " pod="openshift-route-controller-manager/route-controller-manager-655d4d6c6d-tn75v" Jan 27 18:46:28 crc kubenswrapper[4853]: I0127 18:46:28.600262 4853 scope.go:117] "RemoveContainer" 
containerID="fe73107c38891ee8eb9e38d7f6bdec3420487c2ea86a9d2fc2298e224a3f054b" Jan 27 18:46:28 crc kubenswrapper[4853]: I0127 18:46:28.664854 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/682557db-b9bc-4004-a882-d2120d0ad94d-client-ca\") pod \"route-controller-manager-655d4d6c6d-tn75v\" (UID: \"682557db-b9bc-4004-a882-d2120d0ad94d\") " pod="openshift-route-controller-manager/route-controller-manager-655d4d6c6d-tn75v" Jan 27 18:46:28 crc kubenswrapper[4853]: I0127 18:46:28.665383 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kqfzz\" (UniqueName: \"kubernetes.io/projected/682557db-b9bc-4004-a882-d2120d0ad94d-kube-api-access-kqfzz\") pod \"route-controller-manager-655d4d6c6d-tn75v\" (UID: \"682557db-b9bc-4004-a882-d2120d0ad94d\") " pod="openshift-route-controller-manager/route-controller-manager-655d4d6c6d-tn75v" Jan 27 18:46:28 crc kubenswrapper[4853]: I0127 18:46:28.665412 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/682557db-b9bc-4004-a882-d2120d0ad94d-config\") pod \"route-controller-manager-655d4d6c6d-tn75v\" (UID: \"682557db-b9bc-4004-a882-d2120d0ad94d\") " pod="openshift-route-controller-manager/route-controller-manager-655d4d6c6d-tn75v" Jan 27 18:46:28 crc kubenswrapper[4853]: I0127 18:46:28.665443 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/682557db-b9bc-4004-a882-d2120d0ad94d-serving-cert\") pod \"route-controller-manager-655d4d6c6d-tn75v\" (UID: \"682557db-b9bc-4004-a882-d2120d0ad94d\") " pod="openshift-route-controller-manager/route-controller-manager-655d4d6c6d-tn75v" Jan 27 18:46:28 crc kubenswrapper[4853]: I0127 18:46:28.666753 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/682557db-b9bc-4004-a882-d2120d0ad94d-client-ca\") pod \"route-controller-manager-655d4d6c6d-tn75v\" (UID: \"682557db-b9bc-4004-a882-d2120d0ad94d\") " pod="openshift-route-controller-manager/route-controller-manager-655d4d6c6d-tn75v" Jan 27 18:46:28 crc kubenswrapper[4853]: I0127 18:46:28.668738 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/682557db-b9bc-4004-a882-d2120d0ad94d-config\") pod \"route-controller-manager-655d4d6c6d-tn75v\" (UID: \"682557db-b9bc-4004-a882-d2120d0ad94d\") " pod="openshift-route-controller-manager/route-controller-manager-655d4d6c6d-tn75v" Jan 27 18:46:28 crc kubenswrapper[4853]: I0127 18:46:28.676856 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/682557db-b9bc-4004-a882-d2120d0ad94d-serving-cert\") pod \"route-controller-manager-655d4d6c6d-tn75v\" (UID: \"682557db-b9bc-4004-a882-d2120d0ad94d\") " pod="openshift-route-controller-manager/route-controller-manager-655d4d6c6d-tn75v" Jan 27 18:46:28 crc kubenswrapper[4853]: I0127 18:46:28.685288 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kqfzz\" (UniqueName: \"kubernetes.io/projected/682557db-b9bc-4004-a882-d2120d0ad94d-kube-api-access-kqfzz\") pod \"route-controller-manager-655d4d6c6d-tn75v\" (UID: \"682557db-b9bc-4004-a882-d2120d0ad94d\") " pod="openshift-route-controller-manager/route-controller-manager-655d4d6c6d-tn75v" Jan 27 18:46:28 crc 
kubenswrapper[4853]: I0127 18:46:28.727714 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-655d4d6c6d-tn75v" Jan 27 18:46:28 crc kubenswrapper[4853]: I0127 18:46:28.735493 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-859bc87fd4-kspvl"] Jan 27 18:46:28 crc kubenswrapper[4853]: W0127 18:46:28.743583 4853 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda567feea_5f51_4a2b_910e_37850da7cdfe.slice/crio-d03d51c208e9dc4fc5e655a8100fc7863407251a8cb81bc628fa4eda416bb825 WatchSource:0}: Error finding container d03d51c208e9dc4fc5e655a8100fc7863407251a8cb81bc628fa4eda416bb825: Status 404 returned error can't find the container with id d03d51c208e9dc4fc5e655a8100fc7863407251a8cb81bc628fa4eda416bb825 Jan 27 18:46:28 crc kubenswrapper[4853]: I0127 18:46:28.895980 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-ltskb"] Jan 27 18:46:29 crc kubenswrapper[4853]: I0127 18:46:29.120429 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-859bc87fd4-kspvl" event={"ID":"a567feea-5f51-4a2b-910e-37850da7cdfe","Type":"ContainerStarted","Data":"d03d51c208e9dc4fc5e655a8100fc7863407251a8cb81bc628fa4eda416bb825"} Jan 27 18:46:29 crc kubenswrapper[4853]: I0127 18:46:29.181296 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-655d4d6c6d-tn75v"] Jan 27 18:46:29 crc kubenswrapper[4853]: I0127 18:46:29.917714 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-752jj" Jan 27 18:46:29 crc kubenswrapper[4853]: I0127 18:46:29.971786 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-752jj" Jan 27 18:46:30 crc kubenswrapper[4853]: I0127 18:46:30.126631 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2a7064ac-0ea9-445e-a77c-d8960b0126e4" path="/var/lib/kubelet/pods/2a7064ac-0ea9-445e-a77c-d8960b0126e4/volumes" Jan 27 18:46:30 crc kubenswrapper[4853]: I0127 18:46:30.131155 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6pflg" event={"ID":"a442dc5b-e830-490b-8ad1-6a6606fea52b","Type":"ContainerStarted","Data":"c6628e44eabb7208ef73800dbc80762643db9112b7ef9236a2f9d25865b7af20"} Jan 27 18:46:30 crc kubenswrapper[4853]: I0127 18:46:30.132478 4853 generic.go:334] "Generic (PLEG): container finished" podID="aa41f430-60c0-4d83-96bc-ac2a6aa2dde1" containerID="e14a2396fb1956ff243f59dacf2e664aea23bc918ac6fe8cd5284c1cc0384a85" exitCode=0 Jan 27 18:46:30 crc kubenswrapper[4853]: I0127 18:46:30.132546 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vwxfb" event={"ID":"aa41f430-60c0-4d83-96bc-ac2a6aa2dde1","Type":"ContainerDied","Data":"e14a2396fb1956ff243f59dacf2e664aea23bc918ac6fe8cd5284c1cc0384a85"} Jan 27 18:46:30 crc kubenswrapper[4853]: I0127 18:46:30.134067 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-655d4d6c6d-tn75v" event={"ID":"682557db-b9bc-4004-a882-d2120d0ad94d","Type":"ContainerStarted","Data":"6988a3d7761a06d1ce9f05c22299e8ecd76e734392523ee4e17991180802ba8a"} Jan 27 18:46:30 crc 
kubenswrapper[4853]: I0127 18:46:30.134097 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-655d4d6c6d-tn75v" event={"ID":"682557db-b9bc-4004-a882-d2120d0ad94d","Type":"ContainerStarted","Data":"3ffc27c9b1dfb9fc5a639d651578d92cb86217098dfc928ce8b7f2422379eba2"} Jan 27 18:46:30 crc kubenswrapper[4853]: I0127 18:46:30.136071 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-859bc87fd4-kspvl" event={"ID":"a567feea-5f51-4a2b-910e-37850da7cdfe","Type":"ContainerStarted","Data":"2e0db576ada91682860112af8219949523f222fcb0abac8cb42c0386a28e24bb"} Jan 27 18:46:31 crc kubenswrapper[4853]: I0127 18:46:31.141452 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-859bc87fd4-kspvl" Jan 27 18:46:31 crc kubenswrapper[4853]: I0127 18:46:31.146076 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-859bc87fd4-kspvl" Jan 27 18:46:31 crc kubenswrapper[4853]: I0127 18:46:31.162673 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-859bc87fd4-kspvl" podStartSLOduration=7.162651165 podStartE2EDuration="7.162651165s" podCreationTimestamp="2026-01-27 18:46:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:46:31.154892017 +0000 UTC m=+233.617434910" watchObservedRunningTime="2026-01-27 18:46:31.162651165 +0000 UTC m=+233.625194048" Jan 27 18:46:31 crc kubenswrapper[4853]: I0127 18:46:31.175672 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-6pflg" podStartSLOduration=4.922677109 podStartE2EDuration="1m25.17565254s" podCreationTimestamp="2026-01-27 18:45:06 +0000 UTC" firstStartedPulling="2026-01-27 18:45:08.247743991 +0000 UTC m=+150.710286874" lastFinishedPulling="2026-01-27 18:46:28.500719422 +0000 UTC m=+230.963262305" observedRunningTime="2026-01-27 18:46:31.174366934 +0000 UTC m=+233.636909837" watchObservedRunningTime="2026-01-27 18:46:31.17565254 +0000 UTC m=+233.638195423" Jan 27 18:46:31 crc kubenswrapper[4853]: I0127 18:46:31.212658 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-655d4d6c6d-tn75v" podStartSLOduration=7.21263933 podStartE2EDuration="7.21263933s" podCreationTimestamp="2026-01-27 18:46:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:46:31.191907307 +0000 UTC m=+233.654450190" watchObservedRunningTime="2026-01-27 18:46:31.21263933 +0000 UTC m=+233.675182213" Jan 27 18:46:31 crc kubenswrapper[4853]: I0127 18:46:31.650229 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-752jj"] Jan 27 18:46:31 crc kubenswrapper[4853]: I0127 18:46:31.650699 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-752jj" podUID="5b61ecec-2b42-40ef-b2c5-d719cc45ab64" containerName="registry-server" containerID="cri-o://97e225e321180d2d3d03e15916750160844e868919e814d54c73cf4ea43e6089" gracePeriod=2 Jan 27 18:46:32 crc kubenswrapper[4853]: I0127 18:46:32.595489 4853 kubelet.go:2431] "SyncLoop REMOVE" 
source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 27 18:46:32 crc kubenswrapper[4853]: I0127 18:46:32.595767 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" containerID="cri-o://7cfb91267c8977bea938d9584cf8047ae6f6b4bff6a9d74c623f8903cd3416ba" gracePeriod=15 Jan 27 18:46:32 crc kubenswrapper[4853]: I0127 18:46:32.595887 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://58d32f27b39964fe7195a0e25264b28a01c6c29753912edac25d07eef4f37cc7" gracePeriod=15 Jan 27 18:46:32 crc kubenswrapper[4853]: I0127 18:46:32.595930 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" containerID="cri-o://a941071ee01b24389cae64dfc6cd851cde6b352121235ebf9e9bec77f8bcfb69" gracePeriod=15 Jan 27 18:46:32 crc kubenswrapper[4853]: I0127 18:46:32.595912 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" containerID="cri-o://ae13abce33d48960f367ee4160e730c0f88cd877bd0d615cecac63d2a35b8cc5" gracePeriod=15 Jan 27 18:46:32 crc kubenswrapper[4853]: I0127 18:46:32.596079 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://0058055ea5c0242bdb042c45a00203d344383e845de280c81bc7695416563666" gracePeriod=15 Jan 27 18:46:32 crc kubenswrapper[4853]: I0127 18:46:32.597585 4853 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 27 18:46:32 crc kubenswrapper[4853]: E0127 18:46:32.598076 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Jan 27 18:46:32 crc kubenswrapper[4853]: I0127 18:46:32.598172 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Jan 27 18:46:32 crc kubenswrapper[4853]: E0127 18:46:32.598284 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 27 18:46:32 crc kubenswrapper[4853]: I0127 18:46:32.598377 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 27 18:46:32 crc kubenswrapper[4853]: E0127 18:46:32.598472 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 27 18:46:32 crc kubenswrapper[4853]: I0127 18:46:32.598558 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 27 18:46:32 crc kubenswrapper[4853]: E0127 18:46:32.598650 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 27 18:46:32 crc 
kubenswrapper[4853]: I0127 18:46:32.598729 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 27 18:46:32 crc kubenswrapper[4853]: E0127 18:46:32.598821 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 27 18:46:32 crc kubenswrapper[4853]: I0127 18:46:32.598905 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 27 18:46:32 crc kubenswrapper[4853]: E0127 18:46:32.598986 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 27 18:46:32 crc kubenswrapper[4853]: I0127 18:46:32.599061 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 27 18:46:32 crc kubenswrapper[4853]: E0127 18:46:32.599157 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 27 18:46:32 crc kubenswrapper[4853]: I0127 18:46:32.599249 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 27 18:46:32 crc kubenswrapper[4853]: E0127 18:46:32.599357 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 27 18:46:32 crc kubenswrapper[4853]: I0127 18:46:32.599438 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 27 18:46:32 crc kubenswrapper[4853]: I0127 18:46:32.599646 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 27 18:46:32 crc kubenswrapper[4853]: I0127 18:46:32.599756 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 27 18:46:32 crc kubenswrapper[4853]: I0127 18:46:32.599848 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 27 18:46:32 crc kubenswrapper[4853]: I0127 18:46:32.599938 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 27 18:46:32 crc kubenswrapper[4853]: I0127 18:46:32.600029 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 27 18:46:32 crc kubenswrapper[4853]: I0127 18:46:32.600112 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 27 18:46:32 crc kubenswrapper[4853]: I0127 18:46:32.600483 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 27 18:46:32 crc kubenswrapper[4853]: I0127 18:46:32.604443 4853 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 27 
18:46:32 crc kubenswrapper[4853]: I0127 18:46:32.605462 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 18:46:32 crc kubenswrapper[4853]: I0127 18:46:32.613068 4853 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="f4b27818a5e8e43d0dc095d08835c792" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" Jan 27 18:46:32 crc kubenswrapper[4853]: I0127 18:46:32.675547 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 27 18:46:32 crc kubenswrapper[4853]: I0127 18:46:32.718157 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 18:46:32 crc kubenswrapper[4853]: I0127 18:46:32.718218 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 18:46:32 crc kubenswrapper[4853]: I0127 18:46:32.718274 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 18:46:32 crc kubenswrapper[4853]: I0127 18:46:32.718340 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 18:46:32 crc kubenswrapper[4853]: I0127 18:46:32.718375 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 18:46:32 crc kubenswrapper[4853]: I0127 18:46:32.718394 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 18:46:32 crc kubenswrapper[4853]: I0127 18:46:32.718422 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 18:46:32 crc kubenswrapper[4853]: 
I0127 18:46:32.718444 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 18:46:32 crc kubenswrapper[4853]: I0127 18:46:32.820097 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 18:46:32 crc kubenswrapper[4853]: I0127 18:46:32.820198 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 18:46:32 crc kubenswrapper[4853]: I0127 18:46:32.820304 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 18:46:32 crc kubenswrapper[4853]: I0127 18:46:32.820393 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 18:46:32 crc kubenswrapper[4853]: I0127 18:46:32.820423 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 18:46:32 crc kubenswrapper[4853]: I0127 18:46:32.820335 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 18:46:32 crc kubenswrapper[4853]: I0127 18:46:32.820502 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 18:46:32 crc kubenswrapper[4853]: I0127 18:46:32.820638 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 18:46:32 crc kubenswrapper[4853]: I0127 18:46:32.820842 4853 
Jan 27 18:46:32 crc kubenswrapper[4853]: I0127 18:46:32.820909 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Jan 27 18:46:32 crc kubenswrapper[4853]: I0127 18:46:32.820934 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Jan 27 18:46:32 crc kubenswrapper[4853]: I0127 18:46:32.821057 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Jan 27 18:46:32 crc kubenswrapper[4853]: I0127 18:46:32.821084 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Jan 27 18:46:32 crc kubenswrapper[4853]: I0127 18:46:32.821107 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 27 18:46:32 crc kubenswrapper[4853]: I0127 18:46:32.821149 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 27 18:46:32 crc kubenswrapper[4853]: I0127 18:46:32.821172 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Jan 27 18:46:32 crc kubenswrapper[4853]: I0127 18:46:32.962504 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Jan 27 18:46:33 crc kubenswrapper[4853]: E0127 18:46:33.000937 4853 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.174:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.188eaae268a29e9b openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-27 18:46:32.998002331 +0000 UTC m=+235.460545214,LastTimestamp:2026-01-27 18:46:32.998002331 +0000 UTC m=+235.460545214,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Jan 27 18:46:33 crc kubenswrapper[4853]: I0127 18:46:33.163976 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log"
Jan 27 18:46:33 crc kubenswrapper[4853]: I0127 18:46:33.165359 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log"
Jan 27 18:46:33 crc kubenswrapper[4853]: I0127 18:46:33.166172 4853 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="a941071ee01b24389cae64dfc6cd851cde6b352121235ebf9e9bec77f8bcfb69" exitCode=2
Jan 27 18:46:33 crc kubenswrapper[4853]: I0127 18:46:33.167514 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"ba106909d0134acb52b3dd018b7e4eb85567f8590fdb7ae894e9a37c0021735e"}
Jan 27 18:46:34 crc kubenswrapper[4853]: I0127 18:46:34.175523 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log"
Jan 27 18:46:34 crc kubenswrapper[4853]: I0127 18:46:34.177564 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log"
Jan 27 18:46:34 crc kubenswrapper[4853]: I0127 18:46:34.178603 4853 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="ae13abce33d48960f367ee4160e730c0f88cd877bd0d615cecac63d2a35b8cc5" exitCode=0
Jan 27 18:46:34 crc kubenswrapper[4853]: I0127 18:46:34.178633 4853 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="0058055ea5c0242bdb042c45a00203d344383e845de280c81bc7695416563666" exitCode=0
Jan 27 18:46:34 crc kubenswrapper[4853]: I0127 18:46:34.178646 4853 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="58d32f27b39964fe7195a0e25264b28a01c6c29753912edac25d07eef4f37cc7" exitCode=0
Jan 27 18:46:34 crc kubenswrapper[4853]: I0127 18:46:34.178713 4853 scope.go:117] "RemoveContainer" containerID="213668bb8a0478fe7fba2a64a45dd4aec0dfd6794a29977633b4e0fd1bcbe352"
Jan 27 18:46:34 crc kubenswrapper[4853]: I0127 18:46:34.181630 4853 generic.go:334] "Generic (PLEG): container finished" podID="5b61ecec-2b42-40ef-b2c5-d719cc45ab64" containerID="97e225e321180d2d3d03e15916750160844e868919e814d54c73cf4ea43e6089" exitCode=0
Jan 27 18:46:34 crc kubenswrapper[4853]: I0127 18:46:34.181690 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-752jj" event={"ID":"5b61ecec-2b42-40ef-b2c5-d719cc45ab64","Type":"ContainerDied","Data":"97e225e321180d2d3d03e15916750160844e868919e814d54c73cf4ea43e6089"}
Jan 27 18:46:34 crc kubenswrapper[4853]: I0127 18:46:34.183649 4853 generic.go:334] "Generic (PLEG): container finished" podID="cf6208ed-203e-4fa0-8575-f593041cbc69" containerID="60496495b629bcbbcf0f7f9087ca93e690ee49760ce9f3e3e5ff32d951d36f9f" exitCode=0
Jan 27 18:46:34 crc kubenswrapper[4853]: I0127 18:46:34.183684 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"cf6208ed-203e-4fa0-8575-f593041cbc69","Type":"ContainerDied","Data":"60496495b629bcbbcf0f7f9087ca93e690ee49760ce9f3e3e5ff32d951d36f9f"}
Jan 27 18:46:34 crc kubenswrapper[4853]: I0127 18:46:34.184322 4853 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.174:6443: connect: connection refused"
Jan 27 18:46:34 crc kubenswrapper[4853]: I0127 18:46:34.184556 4853 status_manager.go:851] "Failed to get status for pod" podUID="cf6208ed-203e-4fa0-8575-f593041cbc69" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.174:6443: connect: connection refused"
Jan 27 18:46:34 crc kubenswrapper[4853]: I0127 18:46:34.184992 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"4376aeaef4c363ed8a23306ad440621ec02a830e95fe8804058485be601916c4"}
Jan 27 18:46:35 crc kubenswrapper[4853]: I0127 18:46:35.192398 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log"
Jan 27 18:46:35 crc kubenswrapper[4853]: I0127 18:46:35.194500 4853 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.174:6443: connect: connection refused"
Jan 27 18:46:35 crc kubenswrapper[4853]: I0127 18:46:35.194811 4853 status_manager.go:851] "Failed to get status for pod" podUID="cf6208ed-203e-4fa0-8575-f593041cbc69" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.174:6443: connect: connection refused"
Jan 27 18:46:35 crc kubenswrapper[4853]: I0127 18:46:35.538204 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-752jj"
Jan 27 18:46:35 crc kubenswrapper[4853]: I0127 18:46:35.539075 4853 status_manager.go:851] "Failed to get status for pod" podUID="5b61ecec-2b42-40ef-b2c5-d719cc45ab64" pod="openshift-marketplace/redhat-operators-752jj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-752jj\": dial tcp 38.102.83.174:6443: connect: connection refused"
Jan 27 18:46:35 crc kubenswrapper[4853]: I0127 18:46:35.539393 4853 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.174:6443: connect: connection refused"
Jan 27 18:46:35 crc kubenswrapper[4853]: I0127 18:46:35.539730 4853 status_manager.go:851] "Failed to get status for pod" podUID="cf6208ed-203e-4fa0-8575-f593041cbc69" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.174:6443: connect: connection refused"
Jan 27 18:46:35 crc kubenswrapper[4853]: I0127 18:46:35.543796 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc"
Jan 27 18:46:35 crc kubenswrapper[4853]: I0127 18:46:35.544188 4853 status_manager.go:851] "Failed to get status for pod" podUID="5b61ecec-2b42-40ef-b2c5-d719cc45ab64" pod="openshift-marketplace/redhat-operators-752jj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-752jj\": dial tcp 38.102.83.174:6443: connect: connection refused"
Jan 27 18:46:35 crc kubenswrapper[4853]: I0127 18:46:35.544505 4853 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.174:6443: connect: connection refused"
Jan 27 18:46:35 crc kubenswrapper[4853]: I0127 18:46:35.544921 4853 status_manager.go:851] "Failed to get status for pod" podUID="cf6208ed-203e-4fa0-8575-f593041cbc69" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.174:6443: connect: connection refused"
Jan 27 18:46:35 crc kubenswrapper[4853]: I0127 18:46:35.657375 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/cf6208ed-203e-4fa0-8575-f593041cbc69-kubelet-dir\") pod \"cf6208ed-203e-4fa0-8575-f593041cbc69\" (UID: \"cf6208ed-203e-4fa0-8575-f593041cbc69\") "
Jan 27 18:46:35 crc kubenswrapper[4853]: I0127 18:46:35.657543 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sptgm\" (UniqueName: \"kubernetes.io/projected/5b61ecec-2b42-40ef-b2c5-d719cc45ab64-kube-api-access-sptgm\") pod \"5b61ecec-2b42-40ef-b2c5-d719cc45ab64\" (UID: \"5b61ecec-2b42-40ef-b2c5-d719cc45ab64\") "
Jan 27
18:46:35 crc kubenswrapper[4853]: I0127 18:46:35.657586 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cf6208ed-203e-4fa0-8575-f593041cbc69-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "cf6208ed-203e-4fa0-8575-f593041cbc69" (UID: "cf6208ed-203e-4fa0-8575-f593041cbc69"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 18:46:35 crc kubenswrapper[4853]: I0127 18:46:35.658679 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/cf6208ed-203e-4fa0-8575-f593041cbc69-var-lock\") pod \"cf6208ed-203e-4fa0-8575-f593041cbc69\" (UID: \"cf6208ed-203e-4fa0-8575-f593041cbc69\") " Jan 27 18:46:35 crc kubenswrapper[4853]: I0127 18:46:35.658734 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5b61ecec-2b42-40ef-b2c5-d719cc45ab64-catalog-content\") pod \"5b61ecec-2b42-40ef-b2c5-d719cc45ab64\" (UID: \"5b61ecec-2b42-40ef-b2c5-d719cc45ab64\") " Jan 27 18:46:35 crc kubenswrapper[4853]: I0127 18:46:35.658755 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5b61ecec-2b42-40ef-b2c5-d719cc45ab64-utilities\") pod \"5b61ecec-2b42-40ef-b2c5-d719cc45ab64\" (UID: \"5b61ecec-2b42-40ef-b2c5-d719cc45ab64\") " Jan 27 18:46:35 crc kubenswrapper[4853]: I0127 18:46:35.658791 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/cf6208ed-203e-4fa0-8575-f593041cbc69-kube-api-access\") pod \"cf6208ed-203e-4fa0-8575-f593041cbc69\" (UID: \"cf6208ed-203e-4fa0-8575-f593041cbc69\") " Jan 27 18:46:35 crc kubenswrapper[4853]: I0127 18:46:35.658792 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cf6208ed-203e-4fa0-8575-f593041cbc69-var-lock" (OuterVolumeSpecName: "var-lock") pod "cf6208ed-203e-4fa0-8575-f593041cbc69" (UID: "cf6208ed-203e-4fa0-8575-f593041cbc69"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 18:46:35 crc kubenswrapper[4853]: I0127 18:46:35.659041 4853 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/cf6208ed-203e-4fa0-8575-f593041cbc69-var-lock\") on node \"crc\" DevicePath \"\"" Jan 27 18:46:35 crc kubenswrapper[4853]: I0127 18:46:35.659076 4853 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/cf6208ed-203e-4fa0-8575-f593041cbc69-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 27 18:46:35 crc kubenswrapper[4853]: I0127 18:46:35.659715 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5b61ecec-2b42-40ef-b2c5-d719cc45ab64-utilities" (OuterVolumeSpecName: "utilities") pod "5b61ecec-2b42-40ef-b2c5-d719cc45ab64" (UID: "5b61ecec-2b42-40ef-b2c5-d719cc45ab64"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 18:46:35 crc kubenswrapper[4853]: I0127 18:46:35.664073 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cf6208ed-203e-4fa0-8575-f593041cbc69-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "cf6208ed-203e-4fa0-8575-f593041cbc69" (UID: "cf6208ed-203e-4fa0-8575-f593041cbc69"). 
InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:46:35 crc kubenswrapper[4853]: I0127 18:46:35.664174 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b61ecec-2b42-40ef-b2c5-d719cc45ab64-kube-api-access-sptgm" (OuterVolumeSpecName: "kube-api-access-sptgm") pod "5b61ecec-2b42-40ef-b2c5-d719cc45ab64" (UID: "5b61ecec-2b42-40ef-b2c5-d719cc45ab64"). InnerVolumeSpecName "kube-api-access-sptgm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:46:35 crc kubenswrapper[4853]: I0127 18:46:35.759961 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sptgm\" (UniqueName: \"kubernetes.io/projected/5b61ecec-2b42-40ef-b2c5-d719cc45ab64-kube-api-access-sptgm\") on node \"crc\" DevicePath \"\"" Jan 27 18:46:35 crc kubenswrapper[4853]: I0127 18:46:35.760065 4853 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5b61ecec-2b42-40ef-b2c5-d719cc45ab64-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 18:46:35 crc kubenswrapper[4853]: I0127 18:46:35.760080 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/cf6208ed-203e-4fa0-8575-f593041cbc69-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 27 18:46:36 crc kubenswrapper[4853]: I0127 18:46:36.138331 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5b61ecec-2b42-40ef-b2c5-d719cc45ab64-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5b61ecec-2b42-40ef-b2c5-d719cc45ab64" (UID: "5b61ecec-2b42-40ef-b2c5-d719cc45ab64"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 18:46:36 crc kubenswrapper[4853]: I0127 18:46:36.164552 4853 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5b61ecec-2b42-40ef-b2c5-d719cc45ab64-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 18:46:36 crc kubenswrapper[4853]: I0127 18:46:36.201499 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 27 18:46:36 crc kubenswrapper[4853]: I0127 18:46:36.202327 4853 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="7cfb91267c8977bea938d9584cf8047ae6f6b4bff6a9d74c623f8903cd3416ba" exitCode=0 Jan 27 18:46:36 crc kubenswrapper[4853]: I0127 18:46:36.205921 4853 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-752jj" Jan 27 18:46:36 crc kubenswrapper[4853]: I0127 18:46:36.206054 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-752jj" event={"ID":"5b61ecec-2b42-40ef-b2c5-d719cc45ab64","Type":"ContainerDied","Data":"437d194752a4964ccbdcb6d8bb8773e8069569b226f3cd707cfd786ad0519a43"} Jan 27 18:46:36 crc kubenswrapper[4853]: I0127 18:46:36.206106 4853 scope.go:117] "RemoveContainer" containerID="97e225e321180d2d3d03e15916750160844e868919e814d54c73cf4ea43e6089" Jan 27 18:46:36 crc kubenswrapper[4853]: I0127 18:46:36.207079 4853 status_manager.go:851] "Failed to get status for pod" podUID="5b61ecec-2b42-40ef-b2c5-d719cc45ab64" pod="openshift-marketplace/redhat-operators-752jj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-752jj\": dial tcp 38.102.83.174:6443: connect: connection refused" Jan 27 18:46:36 crc kubenswrapper[4853]: I0127 18:46:36.207351 4853 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.174:6443: connect: connection refused" Jan 27 18:46:36 crc kubenswrapper[4853]: I0127 18:46:36.207682 4853 status_manager.go:851] "Failed to get status for pod" podUID="cf6208ed-203e-4fa0-8575-f593041cbc69" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.174:6443: connect: connection refused" Jan 27 18:46:36 crc kubenswrapper[4853]: I0127 18:46:36.207774 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"cf6208ed-203e-4fa0-8575-f593041cbc69","Type":"ContainerDied","Data":"eafa2f2f311b5eec04f21483ccac0fc6f07385388df15b896e775d379a24f5fc"} Jan 27 18:46:36 crc kubenswrapper[4853]: I0127 18:46:36.207803 4853 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="eafa2f2f311b5eec04f21483ccac0fc6f07385388df15b896e775d379a24f5fc" Jan 27 18:46:36 crc kubenswrapper[4853]: I0127 18:46:36.207814 4853 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 27 18:46:36 crc kubenswrapper[4853]: I0127 18:46:36.217343 4853 status_manager.go:851] "Failed to get status for pod" podUID="5b61ecec-2b42-40ef-b2c5-d719cc45ab64" pod="openshift-marketplace/redhat-operators-752jj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-752jj\": dial tcp 38.102.83.174:6443: connect: connection refused" Jan 27 18:46:36 crc kubenswrapper[4853]: I0127 18:46:36.217852 4853 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.174:6443: connect: connection refused" Jan 27 18:46:36 crc kubenswrapper[4853]: I0127 18:46:36.218481 4853 status_manager.go:851] "Failed to get status for pod" podUID="cf6208ed-203e-4fa0-8575-f593041cbc69" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.174:6443: connect: connection refused" Jan 27 18:46:36 crc kubenswrapper[4853]: I0127 18:46:36.220525 4853 status_manager.go:851] "Failed to get status for pod" podUID="5b61ecec-2b42-40ef-b2c5-d719cc45ab64" pod="openshift-marketplace/redhat-operators-752jj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-752jj\": dial tcp 38.102.83.174:6443: connect: connection refused" Jan 27 18:46:36 crc kubenswrapper[4853]: I0127 18:46:36.220691 4853 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.174:6443: connect: connection refused" Jan 27 18:46:36 crc kubenswrapper[4853]: I0127 18:46:36.221063 4853 status_manager.go:851] "Failed to get status for pod" podUID="cf6208ed-203e-4fa0-8575-f593041cbc69" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.174:6443: connect: connection refused" Jan 27 18:46:36 crc kubenswrapper[4853]: I0127 18:46:36.231185 4853 scope.go:117] "RemoveContainer" containerID="ef6077856b245f84d303957eab4c30030d5426f1fb6326120cedb8704518ab67" Jan 27 18:46:36 crc kubenswrapper[4853]: I0127 18:46:36.283238 4853 scope.go:117] "RemoveContainer" containerID="cd84f4eb7564ebb8ecd831a8d9000fe2d48b988c50d2b8918bcb9b7205bcfea6" Jan 27 18:46:36 crc kubenswrapper[4853]: I0127 18:46:36.710055 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-6pflg" Jan 27 18:46:36 crc kubenswrapper[4853]: I0127 18:46:36.710134 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-6pflg" Jan 27 18:46:36 crc kubenswrapper[4853]: I0127 18:46:36.748838 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-6pflg" Jan 27 18:46:36 crc kubenswrapper[4853]: I0127 18:46:36.749392 4853 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" 
pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.174:6443: connect: connection refused" Jan 27 18:46:36 crc kubenswrapper[4853]: I0127 18:46:36.749855 4853 status_manager.go:851] "Failed to get status for pod" podUID="cf6208ed-203e-4fa0-8575-f593041cbc69" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.174:6443: connect: connection refused" Jan 27 18:46:36 crc kubenswrapper[4853]: I0127 18:46:36.750329 4853 status_manager.go:851] "Failed to get status for pod" podUID="a442dc5b-e830-490b-8ad1-6a6606fea52b" pod="openshift-marketplace/certified-operators-6pflg" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-6pflg\": dial tcp 38.102.83.174:6443: connect: connection refused" Jan 27 18:46:36 crc kubenswrapper[4853]: I0127 18:46:36.750588 4853 status_manager.go:851] "Failed to get status for pod" podUID="5b61ecec-2b42-40ef-b2c5-d719cc45ab64" pod="openshift-marketplace/redhat-operators-752jj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-752jj\": dial tcp 38.102.83.174:6443: connect: connection refused" Jan 27 18:46:37 crc kubenswrapper[4853]: E0127 18:46:37.173084 4853 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.174:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.188eaae268a29e9b openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-27 18:46:32.998002331 +0000 UTC m=+235.460545214,LastTimestamp:2026-01-27 18:46:32.998002331 +0000 UTC m=+235.460545214,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 27 18:46:37 crc kubenswrapper[4853]: I0127 18:46:37.252229 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-6pflg" Jan 27 18:46:37 crc kubenswrapper[4853]: I0127 18:46:37.253065 4853 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.174:6443: connect: connection refused" Jan 27 18:46:37 crc kubenswrapper[4853]: I0127 18:46:37.253720 4853 status_manager.go:851] "Failed to get status for pod" podUID="cf6208ed-203e-4fa0-8575-f593041cbc69" pod="openshift-kube-apiserver/installer-9-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.174:6443: connect: connection refused" Jan 27 18:46:37 crc kubenswrapper[4853]: I0127 18:46:37.253980 4853 status_manager.go:851] "Failed to get status for pod" podUID="a442dc5b-e830-490b-8ad1-6a6606fea52b" pod="openshift-marketplace/certified-operators-6pflg" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-6pflg\": dial tcp 38.102.83.174:6443: connect: connection refused" Jan 27 18:46:37 crc kubenswrapper[4853]: I0127 18:46:37.254340 4853 status_manager.go:851] "Failed to get status for pod" podUID="5b61ecec-2b42-40ef-b2c5-d719cc45ab64" pod="openshift-marketplace/redhat-operators-752jj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-752jj\": dial tcp 38.102.83.174:6443: connect: connection refused" Jan 27 18:46:37 crc kubenswrapper[4853]: I0127 18:46:37.836317 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 27 18:46:37 crc kubenswrapper[4853]: I0127 18:46:37.837269 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 18:46:37 crc kubenswrapper[4853]: I0127 18:46:37.837766 4853 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.174:6443: connect: connection refused" Jan 27 18:46:37 crc kubenswrapper[4853]: I0127 18:46:37.838112 4853 status_manager.go:851] "Failed to get status for pod" podUID="cf6208ed-203e-4fa0-8575-f593041cbc69" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.174:6443: connect: connection refused" Jan 27 18:46:37 crc kubenswrapper[4853]: I0127 18:46:37.838438 4853 status_manager.go:851] "Failed to get status for pod" podUID="a442dc5b-e830-490b-8ad1-6a6606fea52b" pod="openshift-marketplace/certified-operators-6pflg" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-6pflg\": dial tcp 38.102.83.174:6443: connect: connection refused" Jan 27 18:46:37 crc kubenswrapper[4853]: I0127 18:46:37.838725 4853 status_manager.go:851] "Failed to get status for pod" podUID="5b61ecec-2b42-40ef-b2c5-d719cc45ab64" pod="openshift-marketplace/redhat-operators-752jj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-752jj\": dial tcp 38.102.83.174:6443: connect: connection refused" Jan 27 18:46:37 crc kubenswrapper[4853]: I0127 18:46:37.839080 4853 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.174:6443: connect: connection refused" Jan 27 18:46:37 crc kubenswrapper[4853]: I0127 18:46:37.996673 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: 
\"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 27 18:46:37 crc kubenswrapper[4853]: I0127 18:46:37.996730 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 27 18:46:37 crc kubenswrapper[4853]: I0127 18:46:37.996760 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 27 18:46:37 crc kubenswrapper[4853]: I0127 18:46:37.996904 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 18:46:37 crc kubenswrapper[4853]: I0127 18:46:37.996965 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 18:46:37 crc kubenswrapper[4853]: I0127 18:46:37.996904 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "cert-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 18:46:37 crc kubenswrapper[4853]: I0127 18:46:37.997683 4853 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") on node \"crc\" DevicePath \"\"" Jan 27 18:46:37 crc kubenswrapper[4853]: I0127 18:46:37.997720 4853 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 27 18:46:37 crc kubenswrapper[4853]: I0127 18:46:37.997736 4853 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") on node \"crc\" DevicePath \"\"" Jan 27 18:46:38 crc kubenswrapper[4853]: I0127 18:46:38.115292 4853 status_manager.go:851] "Failed to get status for pod" podUID="a442dc5b-e830-490b-8ad1-6a6606fea52b" pod="openshift-marketplace/certified-operators-6pflg" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-6pflg\": dial tcp 38.102.83.174:6443: connect: connection refused" Jan 27 18:46:38 crc kubenswrapper[4853]: I0127 18:46:38.115703 4853 status_manager.go:851] "Failed to get status for pod" podUID="5b61ecec-2b42-40ef-b2c5-d719cc45ab64" pod="openshift-marketplace/redhat-operators-752jj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-752jj\": dial tcp 38.102.83.174:6443: connect: connection refused" Jan 27 18:46:38 crc kubenswrapper[4853]: E0127 18:46:38.115873 4853 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.174:6443: connect: connection refused" Jan 27 18:46:38 crc kubenswrapper[4853]: I0127 18:46:38.115967 4853 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.174:6443: connect: connection refused" Jan 27 18:46:38 crc kubenswrapper[4853]: E0127 18:46:38.116093 4853 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.174:6443: connect: connection refused" Jan 27 18:46:38 crc kubenswrapper[4853]: I0127 18:46:38.116176 4853 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.174:6443: connect: connection refused" Jan 27 18:46:38 crc kubenswrapper[4853]: E0127 18:46:38.116307 4853 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.174:6443: connect: connection refused" Jan 27 18:46:38 crc kubenswrapper[4853]: I0127 18:46:38.116374 4853 status_manager.go:851] "Failed to get status for pod" podUID="cf6208ed-203e-4fa0-8575-f593041cbc69" pod="openshift-kube-apiserver/installer-9-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.174:6443: connect: connection refused" Jan 27 18:46:38 crc kubenswrapper[4853]: E0127 18:46:38.116520 4853 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.174:6443: connect: connection refused" Jan 27 18:46:38 crc kubenswrapper[4853]: E0127 18:46:38.116684 4853 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.174:6443: connect: connection refused" Jan 27 18:46:38 crc kubenswrapper[4853]: I0127 18:46:38.116711 4853 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Jan 27 18:46:38 crc kubenswrapper[4853]: E0127 18:46:38.116886 4853 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.174:6443: connect: connection refused" interval="200ms" Jan 27 18:46:38 crc kubenswrapper[4853]: I0127 18:46:38.121666 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4b27818a5e8e43d0dc095d08835c792" path="/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/volumes" Jan 27 18:46:38 crc kubenswrapper[4853]: I0127 18:46:38.227190 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 27 18:46:38 crc kubenswrapper[4853]: I0127 18:46:38.228556 4853 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 18:46:38 crc kubenswrapper[4853]: I0127 18:46:38.228834 4853 scope.go:117] "RemoveContainer" containerID="ae13abce33d48960f367ee4160e730c0f88cd877bd0d615cecac63d2a35b8cc5" Jan 27 18:46:38 crc kubenswrapper[4853]: I0127 18:46:38.229303 4853 status_manager.go:851] "Failed to get status for pod" podUID="a442dc5b-e830-490b-8ad1-6a6606fea52b" pod="openshift-marketplace/certified-operators-6pflg" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-6pflg\": dial tcp 38.102.83.174:6443: connect: connection refused" Jan 27 18:46:38 crc kubenswrapper[4853]: I0127 18:46:38.229726 4853 status_manager.go:851] "Failed to get status for pod" podUID="5b61ecec-2b42-40ef-b2c5-d719cc45ab64" pod="openshift-marketplace/redhat-operators-752jj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-752jj\": dial tcp 38.102.83.174:6443: connect: connection refused" Jan 27 18:46:38 crc kubenswrapper[4853]: I0127 18:46:38.230276 4853 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.174:6443: connect: connection refused" Jan 27 18:46:38 crc kubenswrapper[4853]: I0127 18:46:38.230507 4853 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.174:6443: connect: connection refused" Jan 27 18:46:38 crc kubenswrapper[4853]: I0127 18:46:38.230701 4853 status_manager.go:851] "Failed to get status for pod" podUID="cf6208ed-203e-4fa0-8575-f593041cbc69" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.174:6443: connect: connection refused" Jan 27 18:46:38 crc kubenswrapper[4853]: I0127 18:46:38.231784 4853 status_manager.go:851] "Failed to get status for pod" podUID="5b61ecec-2b42-40ef-b2c5-d719cc45ab64" pod="openshift-marketplace/redhat-operators-752jj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-752jj\": dial tcp 38.102.83.174:6443: connect: connection refused" Jan 27 18:46:38 crc kubenswrapper[4853]: I0127 18:46:38.232005 4853 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.174:6443: connect: connection refused" Jan 27 18:46:38 crc kubenswrapper[4853]: I0127 18:46:38.232218 4853 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.174:6443: connect: connection refused" Jan 27 18:46:38 crc kubenswrapper[4853]: I0127 18:46:38.232420 4853 status_manager.go:851] "Failed to get status for pod" 
podUID="cf6208ed-203e-4fa0-8575-f593041cbc69" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.174:6443: connect: connection refused" Jan 27 18:46:38 crc kubenswrapper[4853]: I0127 18:46:38.232746 4853 status_manager.go:851] "Failed to get status for pod" podUID="a442dc5b-e830-490b-8ad1-6a6606fea52b" pod="openshift-marketplace/certified-operators-6pflg" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-6pflg\": dial tcp 38.102.83.174:6443: connect: connection refused" Jan 27 18:46:38 crc kubenswrapper[4853]: E0127 18:46:38.317889 4853 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.174:6443: connect: connection refused" interval="400ms" Jan 27 18:46:38 crc kubenswrapper[4853]: I0127 18:46:38.610954 4853 scope.go:117] "RemoveContainer" containerID="0058055ea5c0242bdb042c45a00203d344383e845de280c81bc7695416563666" Jan 27 18:46:38 crc kubenswrapper[4853]: E0127 18:46:38.719155 4853 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.174:6443: connect: connection refused" interval="800ms" Jan 27 18:46:38 crc kubenswrapper[4853]: I0127 18:46:38.728835 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-655d4d6c6d-tn75v" Jan 27 18:46:38 crc kubenswrapper[4853]: I0127 18:46:38.733727 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-655d4d6c6d-tn75v" Jan 27 18:46:38 crc kubenswrapper[4853]: I0127 18:46:38.734043 4853 status_manager.go:851] "Failed to get status for pod" podUID="5b61ecec-2b42-40ef-b2c5-d719cc45ab64" pod="openshift-marketplace/redhat-operators-752jj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-752jj\": dial tcp 38.102.83.174:6443: connect: connection refused" Jan 27 18:46:38 crc kubenswrapper[4853]: I0127 18:46:38.734280 4853 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.174:6443: connect: connection refused" Jan 27 18:46:38 crc kubenswrapper[4853]: I0127 18:46:38.734501 4853 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.174:6443: connect: connection refused" Jan 27 18:46:38 crc kubenswrapper[4853]: I0127 18:46:38.734737 4853 status_manager.go:851] "Failed to get status for pod" podUID="cf6208ed-203e-4fa0-8575-f593041cbc69" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.174:6443: connect: connection refused" Jan 27 18:46:38 crc 
kubenswrapper[4853]: I0127 18:46:38.734935 4853 status_manager.go:851] "Failed to get status for pod" podUID="a442dc5b-e830-490b-8ad1-6a6606fea52b" pod="openshift-marketplace/certified-operators-6pflg" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-6pflg\": dial tcp 38.102.83.174:6443: connect: connection refused" Jan 27 18:46:38 crc kubenswrapper[4853]: I0127 18:46:38.735163 4853 status_manager.go:851] "Failed to get status for pod" podUID="682557db-b9bc-4004-a882-d2120d0ad94d" pod="openshift-route-controller-manager/route-controller-manager-655d4d6c6d-tn75v" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-655d4d6c6d-tn75v\": dial tcp 38.102.83.174:6443: connect: connection refused" Jan 27 18:46:39 crc kubenswrapper[4853]: I0127 18:46:39.237589 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 27 18:46:39 crc kubenswrapper[4853]: I0127 18:46:39.411514 4853 scope.go:117] "RemoveContainer" containerID="58d32f27b39964fe7195a0e25264b28a01c6c29753912edac25d07eef4f37cc7" Jan 27 18:46:39 crc kubenswrapper[4853]: I0127 18:46:39.428175 4853 scope.go:117] "RemoveContainer" containerID="a941071ee01b24389cae64dfc6cd851cde6b352121235ebf9e9bec77f8bcfb69" Jan 27 18:46:39 crc kubenswrapper[4853]: I0127 18:46:39.463463 4853 scope.go:117] "RemoveContainer" containerID="7cfb91267c8977bea938d9584cf8047ae6f6b4bff6a9d74c623f8903cd3416ba" Jan 27 18:46:39 crc kubenswrapper[4853]: I0127 18:46:39.478847 4853 scope.go:117] "RemoveContainer" containerID="00d05692e328aacfc251e3b583e1da6d1f4f677e93d07cec2aaa7aea5c3038cc" Jan 27 18:46:39 crc kubenswrapper[4853]: E0127 18:46:39.519859 4853 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.174:6443: connect: connection refused" interval="1.6s" Jan 27 18:46:40 crc kubenswrapper[4853]: I0127 18:46:40.247343 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vwxfb" event={"ID":"aa41f430-60c0-4d83-96bc-ac2a6aa2dde1","Type":"ContainerStarted","Data":"34dcd3e2524055eb7aaa2be37ab710bdfaf2c776d187d2aa036b644febf2be85"} Jan 27 18:46:40 crc kubenswrapper[4853]: I0127 18:46:40.248160 4853 status_manager.go:851] "Failed to get status for pod" podUID="a442dc5b-e830-490b-8ad1-6a6606fea52b" pod="openshift-marketplace/certified-operators-6pflg" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-6pflg\": dial tcp 38.102.83.174:6443: connect: connection refused" Jan 27 18:46:40 crc kubenswrapper[4853]: I0127 18:46:40.248418 4853 status_manager.go:851] "Failed to get status for pod" podUID="682557db-b9bc-4004-a882-d2120d0ad94d" pod="openshift-route-controller-manager/route-controller-manager-655d4d6c6d-tn75v" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-655d4d6c6d-tn75v\": dial tcp 38.102.83.174:6443: connect: connection refused" Jan 27 18:46:40 crc kubenswrapper[4853]: I0127 18:46:40.248706 4853 status_manager.go:851] "Failed to get status for pod" podUID="aa41f430-60c0-4d83-96bc-ac2a6aa2dde1" pod="openshift-marketplace/redhat-marketplace-vwxfb" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-vwxfb\": dial tcp 38.102.83.174:6443: connect: connection refused" Jan 27 18:46:40 crc kubenswrapper[4853]: I0127 18:46:40.249077 4853 status_manager.go:851] "Failed to get status for pod" podUID="5b61ecec-2b42-40ef-b2c5-d719cc45ab64" pod="openshift-marketplace/redhat-operators-752jj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-752jj\": dial tcp 38.102.83.174:6443: connect: connection refused" Jan 27 18:46:40 crc kubenswrapper[4853]: I0127 18:46:40.249468 4853 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.174:6443: connect: connection refused" Jan 27 18:46:40 crc kubenswrapper[4853]: I0127 18:46:40.249717 4853 status_manager.go:851] "Failed to get status for pod" podUID="cf6208ed-203e-4fa0-8575-f593041cbc69" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.174:6443: connect: connection refused" Jan 27 18:46:41 crc kubenswrapper[4853]: E0127 18:46:41.121714 4853 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.174:6443: connect: connection refused" interval="3.2s" Jan 27 18:46:44 crc kubenswrapper[4853]: E0127 18:46:44.174841 4853 desired_state_of_world_populator.go:312] "Error processing volume" err="error processing PVC openshift-image-registry/crc-image-registry-storage: failed to fetch PVC from API server: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/persistentvolumeclaims/crc-image-registry-storage\": dial tcp 38.102.83.174:6443: connect: connection refused" pod="openshift-image-registry/image-registry-697d97f7c8-npp4j" volumeName="registry-storage" Jan 27 18:46:44 crc kubenswrapper[4853]: E0127 18:46:44.322708 4853 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.174:6443: connect: connection refused" interval="6.4s" Jan 27 18:46:45 crc kubenswrapper[4853]: I0127 18:46:45.111889 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 18:46:45 crc kubenswrapper[4853]: I0127 18:46:45.112745 4853 status_manager.go:851] "Failed to get status for pod" podUID="aa41f430-60c0-4d83-96bc-ac2a6aa2dde1" pod="openshift-marketplace/redhat-marketplace-vwxfb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-vwxfb\": dial tcp 38.102.83.174:6443: connect: connection refused" Jan 27 18:46:45 crc kubenswrapper[4853]: I0127 18:46:45.113041 4853 status_manager.go:851] "Failed to get status for pod" podUID="5b61ecec-2b42-40ef-b2c5-d719cc45ab64" pod="openshift-marketplace/redhat-operators-752jj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-752jj\": dial tcp 38.102.83.174:6443: connect: connection refused" Jan 27 18:46:45 crc kubenswrapper[4853]: I0127 18:46:45.113389 4853 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.174:6443: connect: connection refused" Jan 27 18:46:45 crc kubenswrapper[4853]: I0127 18:46:45.113664 4853 status_manager.go:851] "Failed to get status for pod" podUID="cf6208ed-203e-4fa0-8575-f593041cbc69" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.174:6443: connect: connection refused" Jan 27 18:46:45 crc kubenswrapper[4853]: I0127 18:46:45.113879 4853 status_manager.go:851] "Failed to get status for pod" podUID="a442dc5b-e830-490b-8ad1-6a6606fea52b" pod="openshift-marketplace/certified-operators-6pflg" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-6pflg\": dial tcp 38.102.83.174:6443: connect: connection refused" Jan 27 18:46:45 crc kubenswrapper[4853]: I0127 18:46:45.114262 4853 status_manager.go:851] "Failed to get status for pod" podUID="682557db-b9bc-4004-a882-d2120d0ad94d" pod="openshift-route-controller-manager/route-controller-manager-655d4d6c6d-tn75v" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-655d4d6c6d-tn75v\": dial tcp 38.102.83.174:6443: connect: connection refused" Jan 27 18:46:45 crc kubenswrapper[4853]: I0127 18:46:45.130049 4853 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="eccf4b23-863a-490c-ba35-1b03d360e200" Jan 27 18:46:45 crc kubenswrapper[4853]: I0127 18:46:45.130089 4853 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="eccf4b23-863a-490c-ba35-1b03d360e200" Jan 27 18:46:45 crc kubenswrapper[4853]: E0127 18:46:45.130543 4853 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.174:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 18:46:45 crc kubenswrapper[4853]: I0127 18:46:45.130968 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 18:46:45 crc kubenswrapper[4853]: W0127 18:46:45.157086 4853 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod71bb4a3aecc4ba5b26c4b7318770ce13.slice/crio-6cb8b041e67fb4b3d99425f8acaf4a089499422f66ce6f0be5ef0326f2efef3e WatchSource:0}: Error finding container 6cb8b041e67fb4b3d99425f8acaf4a089499422f66ce6f0be5ef0326f2efef3e: Status 404 returned error can't find the container with id 6cb8b041e67fb4b3d99425f8acaf4a089499422f66ce6f0be5ef0326f2efef3e Jan 27 18:46:45 crc kubenswrapper[4853]: I0127 18:46:45.270843 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"6cb8b041e67fb4b3d99425f8acaf4a089499422f66ce6f0be5ef0326f2efef3e"} Jan 27 18:46:46 crc kubenswrapper[4853]: I0127 18:46:46.277414 4853 generic.go:334] "Generic (PLEG): container finished" podID="71bb4a3aecc4ba5b26c4b7318770ce13" containerID="094bb4dde1c760cdf382d63b3e89c7a29dab244ca5772ea2ab1ba65851f107b4" exitCode=0 Jan 27 18:46:46 crc kubenswrapper[4853]: I0127 18:46:46.277717 4853 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="eccf4b23-863a-490c-ba35-1b03d360e200" Jan 27 18:46:46 crc kubenswrapper[4853]: I0127 18:46:46.277462 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerDied","Data":"094bb4dde1c760cdf382d63b3e89c7a29dab244ca5772ea2ab1ba65851f107b4"} Jan 27 18:46:46 crc kubenswrapper[4853]: I0127 18:46:46.277746 4853 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="eccf4b23-863a-490c-ba35-1b03d360e200" Jan 27 18:46:46 crc kubenswrapper[4853]: I0127 18:46:46.278166 4853 status_manager.go:851] "Failed to get status for pod" podUID="aa41f430-60c0-4d83-96bc-ac2a6aa2dde1" pod="openshift-marketplace/redhat-marketplace-vwxfb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-vwxfb\": dial tcp 38.102.83.174:6443: connect: connection refused" Jan 27 18:46:46 crc kubenswrapper[4853]: E0127 18:46:46.278166 4853 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.174:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 18:46:46 crc kubenswrapper[4853]: I0127 18:46:46.278506 4853 status_manager.go:851] "Failed to get status for pod" podUID="5b61ecec-2b42-40ef-b2c5-d719cc45ab64" pod="openshift-marketplace/redhat-operators-752jj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-752jj\": dial tcp 38.102.83.174:6443: connect: connection refused" Jan 27 18:46:46 crc kubenswrapper[4853]: I0127 18:46:46.278763 4853 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.174:6443: connect: connection refused" Jan 27 18:46:46 crc kubenswrapper[4853]: I0127 18:46:46.279019 4853 status_manager.go:851] 
"Failed to get status for pod" podUID="cf6208ed-203e-4fa0-8575-f593041cbc69" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.174:6443: connect: connection refused" Jan 27 18:46:46 crc kubenswrapper[4853]: I0127 18:46:46.279474 4853 status_manager.go:851] "Failed to get status for pod" podUID="a442dc5b-e830-490b-8ad1-6a6606fea52b" pod="openshift-marketplace/certified-operators-6pflg" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-6pflg\": dial tcp 38.102.83.174:6443: connect: connection refused" Jan 27 18:46:46 crc kubenswrapper[4853]: I0127 18:46:46.279759 4853 status_manager.go:851] "Failed to get status for pod" podUID="682557db-b9bc-4004-a882-d2120d0ad94d" pod="openshift-route-controller-manager/route-controller-manager-655d4d6c6d-tn75v" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-655d4d6c6d-tn75v\": dial tcp 38.102.83.174:6443: connect: connection refused" Jan 27 18:46:46 crc kubenswrapper[4853]: I0127 18:46:46.281586 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Jan 27 18:46:46 crc kubenswrapper[4853]: I0127 18:46:46.281639 4853 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="0ae32d32392bded0188e0b83c05a33e57dc10be293ddee84b8b12416d0e64fc6" exitCode=1 Jan 27 18:46:46 crc kubenswrapper[4853]: I0127 18:46:46.281670 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"0ae32d32392bded0188e0b83c05a33e57dc10be293ddee84b8b12416d0e64fc6"} Jan 27 18:46:46 crc kubenswrapper[4853]: I0127 18:46:46.282098 4853 scope.go:117] "RemoveContainer" containerID="0ae32d32392bded0188e0b83c05a33e57dc10be293ddee84b8b12416d0e64fc6" Jan 27 18:46:46 crc kubenswrapper[4853]: I0127 18:46:46.282830 4853 status_manager.go:851] "Failed to get status for pod" podUID="aa41f430-60c0-4d83-96bc-ac2a6aa2dde1" pod="openshift-marketplace/redhat-marketplace-vwxfb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-vwxfb\": dial tcp 38.102.83.174:6443: connect: connection refused" Jan 27 18:46:46 crc kubenswrapper[4853]: I0127 18:46:46.283215 4853 status_manager.go:851] "Failed to get status for pod" podUID="5b61ecec-2b42-40ef-b2c5-d719cc45ab64" pod="openshift-marketplace/redhat-operators-752jj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-752jj\": dial tcp 38.102.83.174:6443: connect: connection refused" Jan 27 18:46:46 crc kubenswrapper[4853]: I0127 18:46:46.283545 4853 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.174:6443: connect: connection refused" Jan 27 18:46:46 crc kubenswrapper[4853]: I0127 18:46:46.283784 4853 status_manager.go:851] "Failed to get status for pod" podUID="cf6208ed-203e-4fa0-8575-f593041cbc69" 
pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.174:6443: connect: connection refused" Jan 27 18:46:46 crc kubenswrapper[4853]: I0127 18:46:46.284037 4853 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.174:6443: connect: connection refused" Jan 27 18:46:46 crc kubenswrapper[4853]: I0127 18:46:46.284367 4853 status_manager.go:851] "Failed to get status for pod" podUID="a442dc5b-e830-490b-8ad1-6a6606fea52b" pod="openshift-marketplace/certified-operators-6pflg" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-6pflg\": dial tcp 38.102.83.174:6443: connect: connection refused" Jan 27 18:46:46 crc kubenswrapper[4853]: I0127 18:46:46.284622 4853 status_manager.go:851] "Failed to get status for pod" podUID="682557db-b9bc-4004-a882-d2120d0ad94d" pod="openshift-route-controller-manager/route-controller-manager-655d4d6c6d-tn75v" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-655d4d6c6d-tn75v\": dial tcp 38.102.83.174:6443: connect: connection refused" Jan 27 18:46:46 crc kubenswrapper[4853]: I0127 18:46:46.906402 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 27 18:46:47 crc kubenswrapper[4853]: I0127 18:46:47.289748 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Jan 27 18:46:47 crc kubenswrapper[4853]: I0127 18:46:47.289839 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"c809351984a9872cdceb711ef78316007374946cb0939c8832d188a131e6c5d6"} Jan 27 18:46:47 crc kubenswrapper[4853]: I0127 18:46:47.292720 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"b3e113b833199e32f21ddc9493ef5255ff5b806a7c21794debd38406ab743fca"} Jan 27 18:46:47 crc kubenswrapper[4853]: I0127 18:46:47.292751 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"97286091c27727b2ec399e18e6e4e072c41335d907235d36d5b258a3a919f0f6"} Jan 27 18:46:47 crc kubenswrapper[4853]: I0127 18:46:47.292761 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"adb2b4717f1f14e9bb783856b69aff88ec3a31378f247fd94f6f706257f1af66"} Jan 27 18:46:47 crc kubenswrapper[4853]: I0127 18:46:47.292769 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"0a280074c4faf78db8a54e1112b0d3bb861579d33089e10f1a98c22f2efa1ff7"} Jan 
27 18:46:48 crc kubenswrapper[4853]: I0127 18:46:48.134421 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 27 18:46:48 crc kubenswrapper[4853]: I0127 18:46:48.302137 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"7c7503ec83086f4ddb8188cd99460f717c50afa642ce834e302e65071ccd2cff"} Jan 27 18:46:48 crc kubenswrapper[4853]: I0127 18:46:48.302329 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 18:46:48 crc kubenswrapper[4853]: I0127 18:46:48.302637 4853 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="eccf4b23-863a-490c-ba35-1b03d360e200" Jan 27 18:46:48 crc kubenswrapper[4853]: I0127 18:46:48.302665 4853 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="eccf4b23-863a-490c-ba35-1b03d360e200" Jan 27 18:46:48 crc kubenswrapper[4853]: I0127 18:46:48.776831 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-vwxfb" Jan 27 18:46:48 crc kubenswrapper[4853]: I0127 18:46:48.776884 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-vwxfb" Jan 27 18:46:48 crc kubenswrapper[4853]: I0127 18:46:48.830700 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-vwxfb" Jan 27 18:46:49 crc kubenswrapper[4853]: I0127 18:46:49.427534 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-vwxfb" Jan 27 18:46:50 crc kubenswrapper[4853]: I0127 18:46:50.131429 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 18:46:50 crc kubenswrapper[4853]: I0127 18:46:50.131488 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 18:46:50 crc kubenswrapper[4853]: I0127 18:46:50.141314 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 18:46:53 crc kubenswrapper[4853]: I0127 18:46:53.310756 4853 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 18:46:53 crc kubenswrapper[4853]: I0127 18:46:53.411644 4853 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="eccf4b23-863a-490c-ba35-1b03d360e200" Jan 27 18:46:53 crc kubenswrapper[4853]: I0127 18:46:53.411681 4853 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="eccf4b23-863a-490c-ba35-1b03d360e200" Jan 27 18:46:53 crc kubenswrapper[4853]: I0127 18:46:53.415510 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 18:46:53 crc kubenswrapper[4853]: I0127 18:46:53.418754 4853 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="a6fda2b7-7108-4830-96cd-85fbae0ed413" Jan 27 18:46:53 crc kubenswrapper[4853]: I0127 
18:46:53.943926 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-ltskb" podUID="84cfc910-d75a-4c74-9f5f-6ec4dbb5c708" containerName="oauth-openshift" containerID="cri-o://cb49e535b49e4bb6860e8f77e6583b3d87db0a49fd11490c4672ac7cc87f926a" gracePeriod=15 Jan 27 18:46:54 crc kubenswrapper[4853]: I0127 18:46:54.409174 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-ltskb" Jan 27 18:46:54 crc kubenswrapper[4853]: I0127 18:46:54.418771 4853 generic.go:334] "Generic (PLEG): container finished" podID="84cfc910-d75a-4c74-9f5f-6ec4dbb5c708" containerID="cb49e535b49e4bb6860e8f77e6583b3d87db0a49fd11490c4672ac7cc87f926a" exitCode=0 Jan 27 18:46:54 crc kubenswrapper[4853]: I0127 18:46:54.418833 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-ltskb" Jan 27 18:46:54 crc kubenswrapper[4853]: I0127 18:46:54.419090 4853 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="eccf4b23-863a-490c-ba35-1b03d360e200" Jan 27 18:46:54 crc kubenswrapper[4853]: I0127 18:46:54.419112 4853 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="eccf4b23-863a-490c-ba35-1b03d360e200" Jan 27 18:46:54 crc kubenswrapper[4853]: I0127 18:46:54.418830 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-ltskb" event={"ID":"84cfc910-d75a-4c74-9f5f-6ec4dbb5c708","Type":"ContainerDied","Data":"cb49e535b49e4bb6860e8f77e6583b3d87db0a49fd11490c4672ac7cc87f926a"} Jan 27 18:46:54 crc kubenswrapper[4853]: I0127 18:46:54.419180 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-ltskb" event={"ID":"84cfc910-d75a-4c74-9f5f-6ec4dbb5c708","Type":"ContainerDied","Data":"a86dc6a11d163f749fe7d3ea19e8f851274ffbfeaba1cf11dfd5a741ad1c934d"} Jan 27 18:46:54 crc kubenswrapper[4853]: I0127 18:46:54.419200 4853 scope.go:117] "RemoveContainer" containerID="cb49e535b49e4bb6860e8f77e6583b3d87db0a49fd11490c4672ac7cc87f926a" Jan 27 18:46:54 crc kubenswrapper[4853]: I0127 18:46:54.441582 4853 scope.go:117] "RemoveContainer" containerID="cb49e535b49e4bb6860e8f77e6583b3d87db0a49fd11490c4672ac7cc87f926a" Jan 27 18:46:54 crc kubenswrapper[4853]: E0127 18:46:54.442199 4853 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cb49e535b49e4bb6860e8f77e6583b3d87db0a49fd11490c4672ac7cc87f926a\": container with ID starting with cb49e535b49e4bb6860e8f77e6583b3d87db0a49fd11490c4672ac7cc87f926a not found: ID does not exist" containerID="cb49e535b49e4bb6860e8f77e6583b3d87db0a49fd11490c4672ac7cc87f926a" Jan 27 18:46:54 crc kubenswrapper[4853]: I0127 18:46:54.442251 4853 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cb49e535b49e4bb6860e8f77e6583b3d87db0a49fd11490c4672ac7cc87f926a"} err="failed to get container status \"cb49e535b49e4bb6860e8f77e6583b3d87db0a49fd11490c4672ac7cc87f926a\": rpc error: code = NotFound desc = could not find container \"cb49e535b49e4bb6860e8f77e6583b3d87db0a49fd11490c4672ac7cc87f926a\": container with ID starting with cb49e535b49e4bb6860e8f77e6583b3d87db0a49fd11490c4672ac7cc87f926a not found: ID does not exist" Jan 27 18:46:54 crc kubenswrapper[4853]: I0127 
18:46:54.452110 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/84cfc910-d75a-4c74-9f5f-6ec4dbb5c708-v4-0-config-system-ocp-branding-template\") pod \"84cfc910-d75a-4c74-9f5f-6ec4dbb5c708\" (UID: \"84cfc910-d75a-4c74-9f5f-6ec4dbb5c708\") " Jan 27 18:46:54 crc kubenswrapper[4853]: I0127 18:46:54.452203 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/84cfc910-d75a-4c74-9f5f-6ec4dbb5c708-v4-0-config-system-trusted-ca-bundle\") pod \"84cfc910-d75a-4c74-9f5f-6ec4dbb5c708\" (UID: \"84cfc910-d75a-4c74-9f5f-6ec4dbb5c708\") " Jan 27 18:46:54 crc kubenswrapper[4853]: I0127 18:46:54.453513 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/84cfc910-d75a-4c74-9f5f-6ec4dbb5c708-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "84cfc910-d75a-4c74-9f5f-6ec4dbb5c708" (UID: "84cfc910-d75a-4c74-9f5f-6ec4dbb5c708"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:46:54 crc kubenswrapper[4853]: I0127 18:46:54.458187 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/84cfc910-d75a-4c74-9f5f-6ec4dbb5c708-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "84cfc910-d75a-4c74-9f5f-6ec4dbb5c708" (UID: "84cfc910-d75a-4c74-9f5f-6ec4dbb5c708"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:46:54 crc kubenswrapper[4853]: I0127 18:46:54.553526 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/84cfc910-d75a-4c74-9f5f-6ec4dbb5c708-v4-0-config-system-cliconfig\") pod \"84cfc910-d75a-4c74-9f5f-6ec4dbb5c708\" (UID: \"84cfc910-d75a-4c74-9f5f-6ec4dbb5c708\") " Jan 27 18:46:54 crc kubenswrapper[4853]: I0127 18:46:54.553600 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/84cfc910-d75a-4c74-9f5f-6ec4dbb5c708-v4-0-config-user-template-login\") pod \"84cfc910-d75a-4c74-9f5f-6ec4dbb5c708\" (UID: \"84cfc910-d75a-4c74-9f5f-6ec4dbb5c708\") " Jan 27 18:46:54 crc kubenswrapper[4853]: I0127 18:46:54.553641 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/84cfc910-d75a-4c74-9f5f-6ec4dbb5c708-v4-0-config-user-template-error\") pod \"84cfc910-d75a-4c74-9f5f-6ec4dbb5c708\" (UID: \"84cfc910-d75a-4c74-9f5f-6ec4dbb5c708\") " Jan 27 18:46:54 crc kubenswrapper[4853]: I0127 18:46:54.553719 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/84cfc910-d75a-4c74-9f5f-6ec4dbb5c708-v4-0-config-user-template-provider-selection\") pod \"84cfc910-d75a-4c74-9f5f-6ec4dbb5c708\" (UID: \"84cfc910-d75a-4c74-9f5f-6ec4dbb5c708\") " Jan 27 18:46:54 crc kubenswrapper[4853]: I0127 18:46:54.553747 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: 
\"kubernetes.io/secret/84cfc910-d75a-4c74-9f5f-6ec4dbb5c708-v4-0-config-user-idp-0-file-data\") pod \"84cfc910-d75a-4c74-9f5f-6ec4dbb5c708\" (UID: \"84cfc910-d75a-4c74-9f5f-6ec4dbb5c708\") " Jan 27 18:46:54 crc kubenswrapper[4853]: I0127 18:46:54.553772 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d5rk\" (UniqueName: \"kubernetes.io/projected/84cfc910-d75a-4c74-9f5f-6ec4dbb5c708-kube-api-access-4d5rk\") pod \"84cfc910-d75a-4c74-9f5f-6ec4dbb5c708\" (UID: \"84cfc910-d75a-4c74-9f5f-6ec4dbb5c708\") " Jan 27 18:46:54 crc kubenswrapper[4853]: I0127 18:46:54.553811 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/84cfc910-d75a-4c74-9f5f-6ec4dbb5c708-v4-0-config-system-service-ca\") pod \"84cfc910-d75a-4c74-9f5f-6ec4dbb5c708\" (UID: \"84cfc910-d75a-4c74-9f5f-6ec4dbb5c708\") " Jan 27 18:46:54 crc kubenswrapper[4853]: I0127 18:46:54.553832 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/84cfc910-d75a-4c74-9f5f-6ec4dbb5c708-audit-policies\") pod \"84cfc910-d75a-4c74-9f5f-6ec4dbb5c708\" (UID: \"84cfc910-d75a-4c74-9f5f-6ec4dbb5c708\") " Jan 27 18:46:54 crc kubenswrapper[4853]: I0127 18:46:54.553876 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/84cfc910-d75a-4c74-9f5f-6ec4dbb5c708-v4-0-config-system-session\") pod \"84cfc910-d75a-4c74-9f5f-6ec4dbb5c708\" (UID: \"84cfc910-d75a-4c74-9f5f-6ec4dbb5c708\") " Jan 27 18:46:54 crc kubenswrapper[4853]: I0127 18:46:54.553906 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/84cfc910-d75a-4c74-9f5f-6ec4dbb5c708-v4-0-config-system-router-certs\") pod \"84cfc910-d75a-4c74-9f5f-6ec4dbb5c708\" (UID: \"84cfc910-d75a-4c74-9f5f-6ec4dbb5c708\") " Jan 27 18:46:54 crc kubenswrapper[4853]: I0127 18:46:54.553929 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/84cfc910-d75a-4c74-9f5f-6ec4dbb5c708-audit-dir\") pod \"84cfc910-d75a-4c74-9f5f-6ec4dbb5c708\" (UID: \"84cfc910-d75a-4c74-9f5f-6ec4dbb5c708\") " Jan 27 18:46:54 crc kubenswrapper[4853]: I0127 18:46:54.553970 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/84cfc910-d75a-4c74-9f5f-6ec4dbb5c708-v4-0-config-system-serving-cert\") pod \"84cfc910-d75a-4c74-9f5f-6ec4dbb5c708\" (UID: \"84cfc910-d75a-4c74-9f5f-6ec4dbb5c708\") " Jan 27 18:46:54 crc kubenswrapper[4853]: I0127 18:46:54.554478 4853 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/84cfc910-d75a-4c74-9f5f-6ec4dbb5c708-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Jan 27 18:46:54 crc kubenswrapper[4853]: I0127 18:46:54.554500 4853 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/84cfc910-d75a-4c74-9f5f-6ec4dbb5c708-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 18:46:54 crc kubenswrapper[4853]: I0127 18:46:54.554726 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/host-path/84cfc910-d75a-4c74-9f5f-6ec4dbb5c708-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "84cfc910-d75a-4c74-9f5f-6ec4dbb5c708" (UID: "84cfc910-d75a-4c74-9f5f-6ec4dbb5c708"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 18:46:54 crc kubenswrapper[4853]: I0127 18:46:54.555000 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/84cfc910-d75a-4c74-9f5f-6ec4dbb5c708-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "84cfc910-d75a-4c74-9f5f-6ec4dbb5c708" (UID: "84cfc910-d75a-4c74-9f5f-6ec4dbb5c708"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:46:54 crc kubenswrapper[4853]: I0127 18:46:54.555084 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/84cfc910-d75a-4c74-9f5f-6ec4dbb5c708-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "84cfc910-d75a-4c74-9f5f-6ec4dbb5c708" (UID: "84cfc910-d75a-4c74-9f5f-6ec4dbb5c708"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:46:54 crc kubenswrapper[4853]: I0127 18:46:54.555661 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/84cfc910-d75a-4c74-9f5f-6ec4dbb5c708-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "84cfc910-d75a-4c74-9f5f-6ec4dbb5c708" (UID: "84cfc910-d75a-4c74-9f5f-6ec4dbb5c708"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:46:54 crc kubenswrapper[4853]: I0127 18:46:54.558888 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/84cfc910-d75a-4c74-9f5f-6ec4dbb5c708-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "84cfc910-d75a-4c74-9f5f-6ec4dbb5c708" (UID: "84cfc910-d75a-4c74-9f5f-6ec4dbb5c708"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:46:54 crc kubenswrapper[4853]: I0127 18:46:54.568326 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/84cfc910-d75a-4c74-9f5f-6ec4dbb5c708-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "84cfc910-d75a-4c74-9f5f-6ec4dbb5c708" (UID: "84cfc910-d75a-4c74-9f5f-6ec4dbb5c708"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:46:54 crc kubenswrapper[4853]: I0127 18:46:54.568489 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/84cfc910-d75a-4c74-9f5f-6ec4dbb5c708-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "84cfc910-d75a-4c74-9f5f-6ec4dbb5c708" (UID: "84cfc910-d75a-4c74-9f5f-6ec4dbb5c708"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:46:54 crc kubenswrapper[4853]: I0127 18:46:54.568501 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/84cfc910-d75a-4c74-9f5f-6ec4dbb5c708-kube-api-access-4d5rk" (OuterVolumeSpecName: "kube-api-access-4d5rk") pod "84cfc910-d75a-4c74-9f5f-6ec4dbb5c708" (UID: "84cfc910-d75a-4c74-9f5f-6ec4dbb5c708"). InnerVolumeSpecName "kube-api-access-4d5rk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:46:54 crc kubenswrapper[4853]: I0127 18:46:54.569026 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/84cfc910-d75a-4c74-9f5f-6ec4dbb5c708-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "84cfc910-d75a-4c74-9f5f-6ec4dbb5c708" (UID: "84cfc910-d75a-4c74-9f5f-6ec4dbb5c708"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:46:54 crc kubenswrapper[4853]: I0127 18:46:54.570380 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/84cfc910-d75a-4c74-9f5f-6ec4dbb5c708-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "84cfc910-d75a-4c74-9f5f-6ec4dbb5c708" (UID: "84cfc910-d75a-4c74-9f5f-6ec4dbb5c708"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:46:54 crc kubenswrapper[4853]: I0127 18:46:54.570688 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/84cfc910-d75a-4c74-9f5f-6ec4dbb5c708-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "84cfc910-d75a-4c74-9f5f-6ec4dbb5c708" (UID: "84cfc910-d75a-4c74-9f5f-6ec4dbb5c708"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:46:54 crc kubenswrapper[4853]: I0127 18:46:54.571038 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/84cfc910-d75a-4c74-9f5f-6ec4dbb5c708-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "84cfc910-d75a-4c74-9f5f-6ec4dbb5c708" (UID: "84cfc910-d75a-4c74-9f5f-6ec4dbb5c708"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:46:54 crc kubenswrapper[4853]: I0127 18:46:54.655288 4853 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/84cfc910-d75a-4c74-9f5f-6ec4dbb5c708-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Jan 27 18:46:54 crc kubenswrapper[4853]: I0127 18:46:54.655509 4853 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/84cfc910-d75a-4c74-9f5f-6ec4dbb5c708-audit-dir\") on node \"crc\" DevicePath \"\"" Jan 27 18:46:54 crc kubenswrapper[4853]: I0127 18:46:54.655525 4853 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/84cfc910-d75a-4c74-9f5f-6ec4dbb5c708-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Jan 27 18:46:54 crc kubenswrapper[4853]: I0127 18:46:54.655537 4853 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/84cfc910-d75a-4c74-9f5f-6ec4dbb5c708-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 18:46:54 crc kubenswrapper[4853]: I0127 18:46:54.655552 4853 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/84cfc910-d75a-4c74-9f5f-6ec4dbb5c708-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Jan 27 18:46:54 crc kubenswrapper[4853]: I0127 18:46:54.655569 4853 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/84cfc910-d75a-4c74-9f5f-6ec4dbb5c708-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Jan 27 18:46:54 crc kubenswrapper[4853]: I0127 18:46:54.655581 4853 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/84cfc910-d75a-4c74-9f5f-6ec4dbb5c708-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Jan 27 18:46:54 crc kubenswrapper[4853]: I0127 18:46:54.655595 4853 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/84cfc910-d75a-4c74-9f5f-6ec4dbb5c708-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Jan 27 18:46:54 crc kubenswrapper[4853]: I0127 18:46:54.655608 4853 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/84cfc910-d75a-4c74-9f5f-6ec4dbb5c708-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Jan 27 18:46:54 crc kubenswrapper[4853]: I0127 18:46:54.655620 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d5rk\" (UniqueName: \"kubernetes.io/projected/84cfc910-d75a-4c74-9f5f-6ec4dbb5c708-kube-api-access-4d5rk\") on node \"crc\" DevicePath \"\"" Jan 27 18:46:54 crc kubenswrapper[4853]: I0127 18:46:54.655634 4853 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/84cfc910-d75a-4c74-9f5f-6ec4dbb5c708-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Jan 27 18:46:54 crc kubenswrapper[4853]: I0127 18:46:54.655645 4853 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/84cfc910-d75a-4c74-9f5f-6ec4dbb5c708-audit-policies\") on node \"crc\" 
DevicePath \"\"" Jan 27 18:46:56 crc kubenswrapper[4853]: I0127 18:46:56.906200 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 27 18:46:56 crc kubenswrapper[4853]: I0127 18:46:56.910387 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 27 18:46:57 crc kubenswrapper[4853]: I0127 18:46:57.450893 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 27 18:46:58 crc kubenswrapper[4853]: I0127 18:46:58.128493 4853 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="a6fda2b7-7108-4830-96cd-85fbae0ed413" Jan 27 18:47:02 crc kubenswrapper[4853]: I0127 18:47:02.568509 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Jan 27 18:47:02 crc kubenswrapper[4853]: I0127 18:47:02.756304 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Jan 27 18:47:02 crc kubenswrapper[4853]: I0127 18:47:02.810585 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Jan 27 18:47:03 crc kubenswrapper[4853]: I0127 18:47:03.710371 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Jan 27 18:47:04 crc kubenswrapper[4853]: I0127 18:47:04.330706 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Jan 27 18:47:04 crc kubenswrapper[4853]: I0127 18:47:04.722920 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Jan 27 18:47:04 crc kubenswrapper[4853]: I0127 18:47:04.921616 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 27 18:47:05 crc kubenswrapper[4853]: I0127 18:47:05.238051 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Jan 27 18:47:05 crc kubenswrapper[4853]: I0127 18:47:05.349354 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Jan 27 18:47:05 crc kubenswrapper[4853]: I0127 18:47:05.365453 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Jan 27 18:47:05 crc kubenswrapper[4853]: I0127 18:47:05.594756 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Jan 27 18:47:05 crc kubenswrapper[4853]: I0127 18:47:05.661836 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Jan 27 18:47:05 crc kubenswrapper[4853]: I0127 18:47:05.769387 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Jan 27 18:47:05 crc kubenswrapper[4853]: I0127 18:47:05.799472 4853 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Jan 27 18:47:05 crc kubenswrapper[4853]: I0127 18:47:05.925730 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Jan 27 18:47:06 crc kubenswrapper[4853]: I0127 18:47:06.005833 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Jan 27 18:47:06 crc kubenswrapper[4853]: I0127 18:47:06.179703 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Jan 27 18:47:06 crc kubenswrapper[4853]: I0127 18:47:06.206411 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Jan 27 18:47:06 crc kubenswrapper[4853]: I0127 18:47:06.208468 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Jan 27 18:47:06 crc kubenswrapper[4853]: I0127 18:47:06.294917 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Jan 27 18:47:06 crc kubenswrapper[4853]: I0127 18:47:06.326746 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Jan 27 18:47:06 crc kubenswrapper[4853]: I0127 18:47:06.354816 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Jan 27 18:47:06 crc kubenswrapper[4853]: I0127 18:47:06.399092 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Jan 27 18:47:06 crc kubenswrapper[4853]: I0127 18:47:06.512847 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Jan 27 18:47:06 crc kubenswrapper[4853]: I0127 18:47:06.544941 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Jan 27 18:47:06 crc kubenswrapper[4853]: I0127 18:47:06.583339 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Jan 27 18:47:06 crc kubenswrapper[4853]: I0127 18:47:06.767696 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Jan 27 18:47:06 crc kubenswrapper[4853]: I0127 18:47:06.774243 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Jan 27 18:47:06 crc kubenswrapper[4853]: I0127 18:47:06.935790 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Jan 27 18:47:07 crc kubenswrapper[4853]: I0127 18:47:07.021249 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 27 18:47:07 crc kubenswrapper[4853]: I0127 18:47:07.330232 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Jan 27 18:47:07 crc kubenswrapper[4853]: I0127 18:47:07.371209 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Jan 27 18:47:07 crc kubenswrapper[4853]: I0127 
18:47:07.449872 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Jan 27 18:47:07 crc kubenswrapper[4853]: I0127 18:47:07.459367 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Jan 27 18:47:07 crc kubenswrapper[4853]: I0127 18:47:07.566940 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Jan 27 18:47:07 crc kubenswrapper[4853]: I0127 18:47:07.686528 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Jan 27 18:47:07 crc kubenswrapper[4853]: I0127 18:47:07.782345 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Jan 27 18:47:07 crc kubenswrapper[4853]: I0127 18:47:07.874958 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Jan 27 18:47:07 crc kubenswrapper[4853]: I0127 18:47:07.935687 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Jan 27 18:47:07 crc kubenswrapper[4853]: I0127 18:47:07.996444 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Jan 27 18:47:08 crc kubenswrapper[4853]: I0127 18:47:08.063532 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Jan 27 18:47:08 crc kubenswrapper[4853]: I0127 18:47:08.103583 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Jan 27 18:47:08 crc kubenswrapper[4853]: I0127 18:47:08.136912 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Jan 27 18:47:08 crc kubenswrapper[4853]: I0127 18:47:08.183706 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Jan 27 18:47:08 crc kubenswrapper[4853]: I0127 18:47:08.225447 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Jan 27 18:47:08 crc kubenswrapper[4853]: I0127 18:47:08.253212 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Jan 27 18:47:08 crc kubenswrapper[4853]: I0127 18:47:08.289070 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Jan 27 18:47:08 crc kubenswrapper[4853]: I0127 18:47:08.305289 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Jan 27 18:47:08 crc kubenswrapper[4853]: I0127 18:47:08.443010 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Jan 27 18:47:08 crc kubenswrapper[4853]: I0127 18:47:08.465908 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Jan 27 18:47:08 crc kubenswrapper[4853]: I0127 18:47:08.587709 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" 
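
The long run of reflector.go:368 "Caches populated" records through this window shows the kubelet's informers completing their initial re-list of the ConfigMaps and Secrets referenced by pods on the node, consistent with watches being re-established after the kube-apiserver at api-int.crc.testing:6443 recovered from the earlier "connection refused" period. When triaging a capture like this one, it can help to tally which namespaces dominate the re-list. The following is a minimal, hypothetical helper sketch, not part of kubelet or CRC tooling: it assumes the raw journal text (for example from journalctl -u kubelet) is piped on stdin, and the file name cachecount.go is made up for illustration.

// cachecount.go (hypothetical): tally "Caches populated" reflector records
// per namespace in a kubelet journal dump read from stdin.
//
// Records sourced from k8s.io/client-go/informers/factory.go (such as the
// *v1.RuntimeClass and *v1.CSIDriver lines) carry no object-"ns"/"name"
// suffix and are intentionally not matched.
package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
	"sort"
)

func main() {
	// Matches e.g.: Caches populated for *v1.Secret from object-"ns"/"name"
	re := regexp.MustCompile(`Caches populated for \*v1\.(\w+) from object-"([^"]+)"/"([^"]+)"`)

	counts := map[string]int{} // namespace -> number of populated caches
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // journal records can exceed the 64 KiB scanner default
	for sc.Scan() {
		if m := re.FindStringSubmatch(sc.Text()); m != nil {
			counts[m[2]]++ // m[1] is the resource kind, m[3] the object name
		}
	}
	if err := sc.Err(); err != nil {
		fmt.Fprintln(os.Stderr, "read:", err)
		os.Exit(1)
	}

	// Print noisiest namespaces first.
	type row struct {
		ns string
		n  int
	}
	rows := make([]row, 0, len(counts))
	for ns, n := range counts {
		rows = append(rows, row{ns, n})
	}
	sort.Slice(rows, func(i, j int) bool { return rows[i].n > rows[j].n })
	for _, r := range rows {
		fmt.Printf("%5d  %s\n", r.n, r.ns)
	}
}

Keying the tally on m[1] instead of m[2] gives the same breakdown by resource kind rather than namespace; in a window like this one, per-namespace kube-root-ca.crt and openshift-service-ca.crt ConfigMaps account for much of the volume, which is why nearly every namespace on the node appears in the list.
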
Jan 27 18:47:08 crc kubenswrapper[4853]: I0127 18:47:08.639345 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Jan 27 18:47:08 crc kubenswrapper[4853]: I0127 18:47:08.664573 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Jan 27 18:47:08 crc kubenswrapper[4853]: I0127 18:47:08.664932 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Jan 27 18:47:08 crc kubenswrapper[4853]: I0127 18:47:08.691915 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Jan 27 18:47:08 crc kubenswrapper[4853]: I0127 18:47:08.713936 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Jan 27 18:47:08 crc kubenswrapper[4853]: I0127 18:47:08.722857 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 27 18:47:08 crc kubenswrapper[4853]: I0127 18:47:08.759390 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 27 18:47:08 crc kubenswrapper[4853]: I0127 18:47:08.798464 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Jan 27 18:47:08 crc kubenswrapper[4853]: I0127 18:47:08.803769 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Jan 27 18:47:08 crc kubenswrapper[4853]: I0127 18:47:08.817042 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Jan 27 18:47:08 crc kubenswrapper[4853]: I0127 18:47:08.833201 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Jan 27 18:47:08 crc kubenswrapper[4853]: I0127 18:47:08.847538 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Jan 27 18:47:08 crc kubenswrapper[4853]: I0127 18:47:08.860286 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Jan 27 18:47:08 crc kubenswrapper[4853]: I0127 18:47:08.903450 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Jan 27 18:47:08 crc kubenswrapper[4853]: I0127 18:47:08.956992 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Jan 27 18:47:09 crc kubenswrapper[4853]: I0127 18:47:09.292067 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Jan 27 18:47:09 crc kubenswrapper[4853]: I0127 18:47:09.369486 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Jan 27 18:47:09 crc kubenswrapper[4853]: I0127 18:47:09.384974 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Jan 27 18:47:09 crc kubenswrapper[4853]: I0127 18:47:09.389759 4853 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Jan 27 18:47:09 crc kubenswrapper[4853]: I0127 18:47:09.429449 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Jan 27 18:47:09 crc kubenswrapper[4853]: I0127 18:47:09.431073 4853 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Jan 27 18:47:09 crc kubenswrapper[4853]: I0127 18:47:09.515174 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Jan 27 18:47:09 crc kubenswrapper[4853]: I0127 18:47:09.517024 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Jan 27 18:47:09 crc kubenswrapper[4853]: I0127 18:47:09.579338 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Jan 27 18:47:09 crc kubenswrapper[4853]: I0127 18:47:09.629233 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Jan 27 18:47:09 crc kubenswrapper[4853]: I0127 18:47:09.630431 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Jan 27 18:47:09 crc kubenswrapper[4853]: I0127 18:47:09.674255 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Jan 27 18:47:09 crc kubenswrapper[4853]: I0127 18:47:09.778426 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Jan 27 18:47:09 crc kubenswrapper[4853]: I0127 18:47:09.795539 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Jan 27 18:47:09 crc kubenswrapper[4853]: I0127 18:47:09.907053 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Jan 27 18:47:10 crc kubenswrapper[4853]: I0127 18:47:10.005304 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Jan 27 18:47:10 crc kubenswrapper[4853]: I0127 18:47:10.014946 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Jan 27 18:47:10 crc kubenswrapper[4853]: I0127 18:47:10.116566 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Jan 27 18:47:10 crc kubenswrapper[4853]: I0127 18:47:10.157814 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Jan 27 18:47:10 crc kubenswrapper[4853]: I0127 18:47:10.172575 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 27 18:47:10 crc kubenswrapper[4853]: I0127 18:47:10.197310 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Jan 27 18:47:10 crc kubenswrapper[4853]: I0127 18:47:10.208740 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Jan 27 18:47:10 crc kubenswrapper[4853]: I0127 18:47:10.234441 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Jan 27 18:47:10 crc kubenswrapper[4853]: I0127 
18:47:10.333573 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Jan 27 18:47:10 crc kubenswrapper[4853]: I0127 18:47:10.353590 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Jan 27 18:47:10 crc kubenswrapper[4853]: I0127 18:47:10.538404 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Jan 27 18:47:10 crc kubenswrapper[4853]: I0127 18:47:10.631468 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Jan 27 18:47:10 crc kubenswrapper[4853]: I0127 18:47:10.649394 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Jan 27 18:47:10 crc kubenswrapper[4853]: I0127 18:47:10.652720 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Jan 27 18:47:10 crc kubenswrapper[4853]: I0127 18:47:10.685517 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Jan 27 18:47:10 crc kubenswrapper[4853]: I0127 18:47:10.697989 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Jan 27 18:47:10 crc kubenswrapper[4853]: I0127 18:47:10.720662 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Jan 27 18:47:10 crc kubenswrapper[4853]: I0127 18:47:10.736583 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Jan 27 18:47:10 crc kubenswrapper[4853]: I0127 18:47:10.790544 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Jan 27 18:47:10 crc kubenswrapper[4853]: I0127 18:47:10.808257 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Jan 27 18:47:10 crc kubenswrapper[4853]: I0127 18:47:10.968909 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Jan 27 18:47:10 crc kubenswrapper[4853]: I0127 18:47:10.993025 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Jan 27 18:47:11 crc kubenswrapper[4853]: I0127 18:47:11.086614 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Jan 27 18:47:11 crc kubenswrapper[4853]: I0127 18:47:11.144889 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Jan 27 18:47:11 crc kubenswrapper[4853]: I0127 18:47:11.195690 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Jan 27 18:47:11 crc kubenswrapper[4853]: I0127 18:47:11.286409 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Jan 27 18:47:11 crc kubenswrapper[4853]: I0127 18:47:11.312993 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Jan 27 18:47:11 crc kubenswrapper[4853]: I0127 18:47:11.446878 4853 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Jan 27 18:47:11 crc kubenswrapper[4853]: I0127 18:47:11.448357 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Jan 27 18:47:11 crc kubenswrapper[4853]: I0127 18:47:11.459530 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Jan 27 18:47:11 crc kubenswrapper[4853]: I0127 18:47:11.541251 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Jan 27 18:47:11 crc kubenswrapper[4853]: I0127 18:47:11.582874 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Jan 27 18:47:11 crc kubenswrapper[4853]: I0127 18:47:11.598004 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Jan 27 18:47:11 crc kubenswrapper[4853]: I0127 18:47:11.604728 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Jan 27 18:47:11 crc kubenswrapper[4853]: I0127 18:47:11.803752 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Jan 27 18:47:11 crc kubenswrapper[4853]: I0127 18:47:11.875655 4853 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Jan 27 18:47:12 crc kubenswrapper[4853]: I0127 18:47:12.155280 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Jan 27 18:47:12 crc kubenswrapper[4853]: I0127 18:47:12.197138 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Jan 27 18:47:12 crc kubenswrapper[4853]: I0127 18:47:12.213395 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 27 18:47:12 crc kubenswrapper[4853]: I0127 18:47:12.238417 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Jan 27 18:47:12 crc kubenswrapper[4853]: I0127 18:47:12.399505 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Jan 27 18:47:12 crc kubenswrapper[4853]: I0127 18:47:12.512832 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Jan 27 18:47:12 crc kubenswrapper[4853]: I0127 18:47:12.623515 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Jan 27 18:47:12 crc kubenswrapper[4853]: I0127 18:47:12.783183 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Jan 27 18:47:12 crc kubenswrapper[4853]: I0127 18:47:12.792986 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Jan 27 18:47:12 crc kubenswrapper[4853]: I0127 18:47:12.800350 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 27 18:47:12 crc kubenswrapper[4853]: I0127 
18:47:12.861262 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Jan 27 18:47:12 crc kubenswrapper[4853]: I0127 18:47:12.878044 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Jan 27 18:47:12 crc kubenswrapper[4853]: I0127 18:47:12.901492 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Jan 27 18:47:13 crc kubenswrapper[4853]: I0127 18:47:13.022260 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Jan 27 18:47:13 crc kubenswrapper[4853]: I0127 18:47:13.053349 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Jan 27 18:47:13 crc kubenswrapper[4853]: I0127 18:47:13.115746 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Jan 27 18:47:13 crc kubenswrapper[4853]: I0127 18:47:13.130172 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Jan 27 18:47:13 crc kubenswrapper[4853]: I0127 18:47:13.157321 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Jan 27 18:47:13 crc kubenswrapper[4853]: I0127 18:47:13.162424 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Jan 27 18:47:13 crc kubenswrapper[4853]: I0127 18:47:13.233841 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Jan 27 18:47:13 crc kubenswrapper[4853]: I0127 18:47:13.236934 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Jan 27 18:47:13 crc kubenswrapper[4853]: I0127 18:47:13.301084 4853 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Jan 27 18:47:13 crc kubenswrapper[4853]: I0127 18:47:13.389588 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Jan 27 18:47:13 crc kubenswrapper[4853]: I0127 18:47:13.437331 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Jan 27 18:47:13 crc kubenswrapper[4853]: I0127 18:47:13.470931 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Jan 27 18:47:13 crc kubenswrapper[4853]: I0127 18:47:13.499807 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Jan 27 18:47:13 crc kubenswrapper[4853]: I0127 18:47:13.518379 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Jan 27 18:47:13 crc kubenswrapper[4853]: I0127 18:47:13.619328 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Jan 27 18:47:13 crc kubenswrapper[4853]: I0127 18:47:13.761632 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 27 18:47:13 crc kubenswrapper[4853]: I0127 18:47:13.854586 4853 reflector.go:368] Caches populated for *v1.ConfigMap 
from object-"openshift-apiserver"/"openshift-service-ca.crt" Jan 27 18:47:13 crc kubenswrapper[4853]: I0127 18:47:13.865669 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Jan 27 18:47:13 crc kubenswrapper[4853]: I0127 18:47:13.869063 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Jan 27 18:47:13 crc kubenswrapper[4853]: I0127 18:47:13.875613 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Jan 27 18:47:13 crc kubenswrapper[4853]: I0127 18:47:13.927965 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Jan 27 18:47:13 crc kubenswrapper[4853]: I0127 18:47:13.946534 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Jan 27 18:47:13 crc kubenswrapper[4853]: I0127 18:47:13.986340 4853 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Jan 27 18:47:14 crc kubenswrapper[4853]: I0127 18:47:14.010749 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Jan 27 18:47:14 crc kubenswrapper[4853]: I0127 18:47:14.029767 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Jan 27 18:47:14 crc kubenswrapper[4853]: I0127 18:47:14.031330 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Jan 27 18:47:14 crc kubenswrapper[4853]: I0127 18:47:14.121063 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Jan 27 18:47:14 crc kubenswrapper[4853]: I0127 18:47:14.153420 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Jan 27 18:47:14 crc kubenswrapper[4853]: I0127 18:47:14.168440 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Jan 27 18:47:14 crc kubenswrapper[4853]: I0127 18:47:14.226878 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Jan 27 18:47:14 crc kubenswrapper[4853]: I0127 18:47:14.276888 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Jan 27 18:47:14 crc kubenswrapper[4853]: I0127 18:47:14.303727 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Jan 27 18:47:14 crc kubenswrapper[4853]: I0127 18:47:14.340927 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Jan 27 18:47:14 crc kubenswrapper[4853]: I0127 18:47:14.352794 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Jan 27 18:47:14 crc kubenswrapper[4853]: I0127 18:47:14.389798 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 27 18:47:14 crc kubenswrapper[4853]: I0127 
18:47:14.429409 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Jan 27 18:47:14 crc kubenswrapper[4853]: I0127 18:47:14.452411 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Jan 27 18:47:14 crc kubenswrapper[4853]: I0127 18:47:14.498445 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Jan 27 18:47:14 crc kubenswrapper[4853]: I0127 18:47:14.566550 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Jan 27 18:47:14 crc kubenswrapper[4853]: I0127 18:47:14.622473 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Jan 27 18:47:14 crc kubenswrapper[4853]: I0127 18:47:14.628173 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Jan 27 18:47:14 crc kubenswrapper[4853]: I0127 18:47:14.682075 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Jan 27 18:47:14 crc kubenswrapper[4853]: I0127 18:47:14.730568 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Jan 27 18:47:14 crc kubenswrapper[4853]: I0127 18:47:14.752999 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Jan 27 18:47:14 crc kubenswrapper[4853]: I0127 18:47:14.812488 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Jan 27 18:47:14 crc kubenswrapper[4853]: I0127 18:47:14.931364 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Jan 27 18:47:14 crc kubenswrapper[4853]: I0127 18:47:14.944778 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Jan 27 18:47:15 crc kubenswrapper[4853]: I0127 18:47:15.012878 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Jan 27 18:47:15 crc kubenswrapper[4853]: I0127 18:47:15.026984 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Jan 27 18:47:15 crc kubenswrapper[4853]: I0127 18:47:15.082350 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Jan 27 18:47:15 crc kubenswrapper[4853]: I0127 18:47:15.118974 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Jan 27 18:47:15 crc kubenswrapper[4853]: I0127 18:47:15.154697 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Jan 27 18:47:15 crc kubenswrapper[4853]: I0127 18:47:15.161812 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Jan 27 18:47:15 crc kubenswrapper[4853]: I0127 18:47:15.392135 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Jan 27 18:47:15 
crc kubenswrapper[4853]: I0127 18:47:15.455193 4853 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Jan 27 18:47:15 crc kubenswrapper[4853]: I0127 18:47:15.457355 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-vwxfb" podStartSLOduration=38.464912488 podStartE2EDuration="2m7.457336257s" podCreationTimestamp="2026-01-27 18:45:08 +0000 UTC" firstStartedPulling="2026-01-27 18:45:10.419213233 +0000 UTC m=+152.881756116" lastFinishedPulling="2026-01-27 18:46:39.411637002 +0000 UTC m=+241.874179885" observedRunningTime="2026-01-27 18:46:53.00611588 +0000 UTC m=+255.468658773" watchObservedRunningTime="2026-01-27 18:47:15.457336257 +0000 UTC m=+277.919879140" Jan 27 18:47:15 crc kubenswrapper[4853]: I0127 18:47:15.458095 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podStartSLOduration=43.458084358 podStartE2EDuration="43.458084358s" podCreationTimestamp="2026-01-27 18:46:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:46:53.0441729 +0000 UTC m=+255.506715793" watchObservedRunningTime="2026-01-27 18:47:15.458084358 +0000 UTC m=+277.920627241" Jan 27 18:47:15 crc kubenswrapper[4853]: I0127 18:47:15.460062 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-752jj","openshift-authentication/oauth-openshift-558db77b4-ltskb","openshift-kube-apiserver/kube-apiserver-crc"] Jan 27 18:47:15 crc kubenswrapper[4853]: I0127 18:47:15.460116 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc","openshift-authentication/oauth-openshift-5b8f568f87-wc7zl"] Jan 27 18:47:15 crc kubenswrapper[4853]: E0127 18:47:15.460325 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5b61ecec-2b42-40ef-b2c5-d719cc45ab64" containerName="extract-utilities" Jan 27 18:47:15 crc kubenswrapper[4853]: I0127 18:47:15.460341 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="5b61ecec-2b42-40ef-b2c5-d719cc45ab64" containerName="extract-utilities" Jan 27 18:47:15 crc kubenswrapper[4853]: E0127 18:47:15.460353 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="84cfc910-d75a-4c74-9f5f-6ec4dbb5c708" containerName="oauth-openshift" Jan 27 18:47:15 crc kubenswrapper[4853]: I0127 18:47:15.460360 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="84cfc910-d75a-4c74-9f5f-6ec4dbb5c708" containerName="oauth-openshift" Jan 27 18:47:15 crc kubenswrapper[4853]: E0127 18:47:15.460373 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5b61ecec-2b42-40ef-b2c5-d719cc45ab64" containerName="registry-server" Jan 27 18:47:15 crc kubenswrapper[4853]: I0127 18:47:15.460379 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="5b61ecec-2b42-40ef-b2c5-d719cc45ab64" containerName="registry-server" Jan 27 18:47:15 crc kubenswrapper[4853]: E0127 18:47:15.460395 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cf6208ed-203e-4fa0-8575-f593041cbc69" containerName="installer" Jan 27 18:47:15 crc kubenswrapper[4853]: I0127 18:47:15.460401 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="cf6208ed-203e-4fa0-8575-f593041cbc69" containerName="installer" Jan 27 18:47:15 crc kubenswrapper[4853]: E0127 18:47:15.460408 4853 cpu_manager.go:410] "RemoveStaleState: 
removing container" podUID="5b61ecec-2b42-40ef-b2c5-d719cc45ab64" containerName="extract-content" Jan 27 18:47:15 crc kubenswrapper[4853]: I0127 18:47:15.460415 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="5b61ecec-2b42-40ef-b2c5-d719cc45ab64" containerName="extract-content" Jan 27 18:47:15 crc kubenswrapper[4853]: I0127 18:47:15.460464 4853 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="eccf4b23-863a-490c-ba35-1b03d360e200" Jan 27 18:47:15 crc kubenswrapper[4853]: I0127 18:47:15.460488 4853 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="eccf4b23-863a-490c-ba35-1b03d360e200" Jan 27 18:47:15 crc kubenswrapper[4853]: I0127 18:47:15.460524 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="cf6208ed-203e-4fa0-8575-f593041cbc69" containerName="installer" Jan 27 18:47:15 crc kubenswrapper[4853]: I0127 18:47:15.460539 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="84cfc910-d75a-4c74-9f5f-6ec4dbb5c708" containerName="oauth-openshift" Jan 27 18:47:15 crc kubenswrapper[4853]: I0127 18:47:15.460551 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="5b61ecec-2b42-40ef-b2c5-d719cc45ab64" containerName="registry-server" Jan 27 18:47:15 crc kubenswrapper[4853]: I0127 18:47:15.461083 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-5b8f568f87-wc7zl" Jan 27 18:47:15 crc kubenswrapper[4853]: I0127 18:47:15.464892 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Jan 27 18:47:15 crc kubenswrapper[4853]: I0127 18:47:15.465325 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Jan 27 18:47:15 crc kubenswrapper[4853]: I0127 18:47:15.464960 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Jan 27 18:47:15 crc kubenswrapper[4853]: I0127 18:47:15.464962 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Jan 27 18:47:15 crc kubenswrapper[4853]: I0127 18:47:15.465036 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Jan 27 18:47:15 crc kubenswrapper[4853]: I0127 18:47:15.465194 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Jan 27 18:47:15 crc kubenswrapper[4853]: I0127 18:47:15.465197 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Jan 27 18:47:15 crc kubenswrapper[4853]: I0127 18:47:15.465242 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Jan 27 18:47:15 crc kubenswrapper[4853]: I0127 18:47:15.465243 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Jan 27 18:47:15 crc kubenswrapper[4853]: I0127 18:47:15.465961 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Jan 27 18:47:15 crc kubenswrapper[4853]: I0127 18:47:15.466335 4853 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-authentication"/"v4-0-config-system-router-certs" Jan 27 18:47:15 crc kubenswrapper[4853]: I0127 18:47:15.467080 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Jan 27 18:47:15 crc kubenswrapper[4853]: I0127 18:47:15.467934 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 18:47:15 crc kubenswrapper[4853]: I0127 18:47:15.472081 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Jan 27 18:47:15 crc kubenswrapper[4853]: I0127 18:47:15.474395 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Jan 27 18:47:15 crc kubenswrapper[4853]: I0127 18:47:15.480109 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Jan 27 18:47:15 crc kubenswrapper[4853]: I0127 18:47:15.510065 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=22.510044548 podStartE2EDuration="22.510044548s" podCreationTimestamp="2026-01-27 18:46:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:47:15.487988948 +0000 UTC m=+277.950531831" watchObservedRunningTime="2026-01-27 18:47:15.510044548 +0000 UTC m=+277.972587431" Jan 27 18:47:15 crc kubenswrapper[4853]: I0127 18:47:15.519954 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Jan 27 18:47:15 crc kubenswrapper[4853]: I0127 18:47:15.618368 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Jan 27 18:47:15 crc kubenswrapper[4853]: I0127 18:47:15.625666 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/531f346b-eb38-4240-9f9d-08e9492e3652-v4-0-config-system-session\") pod \"oauth-openshift-5b8f568f87-wc7zl\" (UID: \"531f346b-eb38-4240-9f9d-08e9492e3652\") " pod="openshift-authentication/oauth-openshift-5b8f568f87-wc7zl" Jan 27 18:47:15 crc kubenswrapper[4853]: I0127 18:47:15.625744 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/531f346b-eb38-4240-9f9d-08e9492e3652-v4-0-config-system-cliconfig\") pod \"oauth-openshift-5b8f568f87-wc7zl\" (UID: \"531f346b-eb38-4240-9f9d-08e9492e3652\") " pod="openshift-authentication/oauth-openshift-5b8f568f87-wc7zl" Jan 27 18:47:15 crc kubenswrapper[4853]: I0127 18:47:15.625768 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/531f346b-eb38-4240-9f9d-08e9492e3652-v4-0-config-user-template-login\") pod \"oauth-openshift-5b8f568f87-wc7zl\" (UID: \"531f346b-eb38-4240-9f9d-08e9492e3652\") " pod="openshift-authentication/oauth-openshift-5b8f568f87-wc7zl" Jan 27 18:47:15 crc kubenswrapper[4853]: I0127 18:47:15.625830 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: 
\"kubernetes.io/secret/531f346b-eb38-4240-9f9d-08e9492e3652-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-5b8f568f87-wc7zl\" (UID: \"531f346b-eb38-4240-9f9d-08e9492e3652\") " pod="openshift-authentication/oauth-openshift-5b8f568f87-wc7zl" Jan 27 18:47:15 crc kubenswrapper[4853]: I0127 18:47:15.625852 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/531f346b-eb38-4240-9f9d-08e9492e3652-v4-0-config-system-router-certs\") pod \"oauth-openshift-5b8f568f87-wc7zl\" (UID: \"531f346b-eb38-4240-9f9d-08e9492e3652\") " pod="openshift-authentication/oauth-openshift-5b8f568f87-wc7zl" Jan 27 18:47:15 crc kubenswrapper[4853]: I0127 18:47:15.626089 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/531f346b-eb38-4240-9f9d-08e9492e3652-audit-policies\") pod \"oauth-openshift-5b8f568f87-wc7zl\" (UID: \"531f346b-eb38-4240-9f9d-08e9492e3652\") " pod="openshift-authentication/oauth-openshift-5b8f568f87-wc7zl" Jan 27 18:47:15 crc kubenswrapper[4853]: I0127 18:47:15.626302 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/531f346b-eb38-4240-9f9d-08e9492e3652-v4-0-config-system-serving-cert\") pod \"oauth-openshift-5b8f568f87-wc7zl\" (UID: \"531f346b-eb38-4240-9f9d-08e9492e3652\") " pod="openshift-authentication/oauth-openshift-5b8f568f87-wc7zl" Jan 27 18:47:15 crc kubenswrapper[4853]: I0127 18:47:15.626366 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/531f346b-eb38-4240-9f9d-08e9492e3652-audit-dir\") pod \"oauth-openshift-5b8f568f87-wc7zl\" (UID: \"531f346b-eb38-4240-9f9d-08e9492e3652\") " pod="openshift-authentication/oauth-openshift-5b8f568f87-wc7zl" Jan 27 18:47:15 crc kubenswrapper[4853]: I0127 18:47:15.626407 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/531f346b-eb38-4240-9f9d-08e9492e3652-v4-0-config-system-service-ca\") pod \"oauth-openshift-5b8f568f87-wc7zl\" (UID: \"531f346b-eb38-4240-9f9d-08e9492e3652\") " pod="openshift-authentication/oauth-openshift-5b8f568f87-wc7zl" Jan 27 18:47:15 crc kubenswrapper[4853]: I0127 18:47:15.626467 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/531f346b-eb38-4240-9f9d-08e9492e3652-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-5b8f568f87-wc7zl\" (UID: \"531f346b-eb38-4240-9f9d-08e9492e3652\") " pod="openshift-authentication/oauth-openshift-5b8f568f87-wc7zl" Jan 27 18:47:15 crc kubenswrapper[4853]: I0127 18:47:15.626585 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/531f346b-eb38-4240-9f9d-08e9492e3652-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-5b8f568f87-wc7zl\" (UID: \"531f346b-eb38-4240-9f9d-08e9492e3652\") " pod="openshift-authentication/oauth-openshift-5b8f568f87-wc7zl" Jan 27 18:47:15 crc kubenswrapper[4853]: I0127 18:47:15.626729 4853 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/531f346b-eb38-4240-9f9d-08e9492e3652-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-5b8f568f87-wc7zl\" (UID: \"531f346b-eb38-4240-9f9d-08e9492e3652\") " pod="openshift-authentication/oauth-openshift-5b8f568f87-wc7zl" Jan 27 18:47:15 crc kubenswrapper[4853]: I0127 18:47:15.626794 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hhc7r\" (UniqueName: \"kubernetes.io/projected/531f346b-eb38-4240-9f9d-08e9492e3652-kube-api-access-hhc7r\") pod \"oauth-openshift-5b8f568f87-wc7zl\" (UID: \"531f346b-eb38-4240-9f9d-08e9492e3652\") " pod="openshift-authentication/oauth-openshift-5b8f568f87-wc7zl" Jan 27 18:47:15 crc kubenswrapper[4853]: I0127 18:47:15.626840 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/531f346b-eb38-4240-9f9d-08e9492e3652-v4-0-config-user-template-error\") pod \"oauth-openshift-5b8f568f87-wc7zl\" (UID: \"531f346b-eb38-4240-9f9d-08e9492e3652\") " pod="openshift-authentication/oauth-openshift-5b8f568f87-wc7zl" Jan 27 18:47:15 crc kubenswrapper[4853]: I0127 18:47:15.655323 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Jan 27 18:47:15 crc kubenswrapper[4853]: I0127 18:47:15.688242 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Jan 27 18:47:15 crc kubenswrapper[4853]: I0127 18:47:15.722117 4853 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 27 18:47:15 crc kubenswrapper[4853]: I0127 18:47:15.722411 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" containerID="cri-o://4376aeaef4c363ed8a23306ad440621ec02a830e95fe8804058485be601916c4" gracePeriod=5 Jan 27 18:47:15 crc kubenswrapper[4853]: I0127 18:47:15.728186 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/531f346b-eb38-4240-9f9d-08e9492e3652-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-5b8f568f87-wc7zl\" (UID: \"531f346b-eb38-4240-9f9d-08e9492e3652\") " pod="openshift-authentication/oauth-openshift-5b8f568f87-wc7zl" Jan 27 18:47:15 crc kubenswrapper[4853]: I0127 18:47:15.728246 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/531f346b-eb38-4240-9f9d-08e9492e3652-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-5b8f568f87-wc7zl\" (UID: \"531f346b-eb38-4240-9f9d-08e9492e3652\") " pod="openshift-authentication/oauth-openshift-5b8f568f87-wc7zl" Jan 27 18:47:15 crc kubenswrapper[4853]: I0127 18:47:15.728273 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hhc7r\" (UniqueName: \"kubernetes.io/projected/531f346b-eb38-4240-9f9d-08e9492e3652-kube-api-access-hhc7r\") pod \"oauth-openshift-5b8f568f87-wc7zl\" (UID: \"531f346b-eb38-4240-9f9d-08e9492e3652\") " 
pod="openshift-authentication/oauth-openshift-5b8f568f87-wc7zl" Jan 27 18:47:15 crc kubenswrapper[4853]: I0127 18:47:15.728304 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/531f346b-eb38-4240-9f9d-08e9492e3652-v4-0-config-user-template-error\") pod \"oauth-openshift-5b8f568f87-wc7zl\" (UID: \"531f346b-eb38-4240-9f9d-08e9492e3652\") " pod="openshift-authentication/oauth-openshift-5b8f568f87-wc7zl" Jan 27 18:47:15 crc kubenswrapper[4853]: I0127 18:47:15.728361 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/531f346b-eb38-4240-9f9d-08e9492e3652-v4-0-config-system-session\") pod \"oauth-openshift-5b8f568f87-wc7zl\" (UID: \"531f346b-eb38-4240-9f9d-08e9492e3652\") " pod="openshift-authentication/oauth-openshift-5b8f568f87-wc7zl" Jan 27 18:47:15 crc kubenswrapper[4853]: I0127 18:47:15.728599 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/531f346b-eb38-4240-9f9d-08e9492e3652-v4-0-config-system-cliconfig\") pod \"oauth-openshift-5b8f568f87-wc7zl\" (UID: \"531f346b-eb38-4240-9f9d-08e9492e3652\") " pod="openshift-authentication/oauth-openshift-5b8f568f87-wc7zl" Jan 27 18:47:15 crc kubenswrapper[4853]: I0127 18:47:15.728628 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/531f346b-eb38-4240-9f9d-08e9492e3652-v4-0-config-user-template-login\") pod \"oauth-openshift-5b8f568f87-wc7zl\" (UID: \"531f346b-eb38-4240-9f9d-08e9492e3652\") " pod="openshift-authentication/oauth-openshift-5b8f568f87-wc7zl" Jan 27 18:47:15 crc kubenswrapper[4853]: I0127 18:47:15.728668 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/531f346b-eb38-4240-9f9d-08e9492e3652-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-5b8f568f87-wc7zl\" (UID: \"531f346b-eb38-4240-9f9d-08e9492e3652\") " pod="openshift-authentication/oauth-openshift-5b8f568f87-wc7zl" Jan 27 18:47:15 crc kubenswrapper[4853]: I0127 18:47:15.728711 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/531f346b-eb38-4240-9f9d-08e9492e3652-v4-0-config-system-router-certs\") pod \"oauth-openshift-5b8f568f87-wc7zl\" (UID: \"531f346b-eb38-4240-9f9d-08e9492e3652\") " pod="openshift-authentication/oauth-openshift-5b8f568f87-wc7zl" Jan 27 18:47:15 crc kubenswrapper[4853]: I0127 18:47:15.728749 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/531f346b-eb38-4240-9f9d-08e9492e3652-audit-policies\") pod \"oauth-openshift-5b8f568f87-wc7zl\" (UID: \"531f346b-eb38-4240-9f9d-08e9492e3652\") " pod="openshift-authentication/oauth-openshift-5b8f568f87-wc7zl" Jan 27 18:47:15 crc kubenswrapper[4853]: I0127 18:47:15.729333 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/531f346b-eb38-4240-9f9d-08e9492e3652-v4-0-config-system-serving-cert\") pod \"oauth-openshift-5b8f568f87-wc7zl\" (UID: \"531f346b-eb38-4240-9f9d-08e9492e3652\") " 
pod="openshift-authentication/oauth-openshift-5b8f568f87-wc7zl" Jan 27 18:47:15 crc kubenswrapper[4853]: I0127 18:47:15.729443 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/531f346b-eb38-4240-9f9d-08e9492e3652-v4-0-config-system-service-ca\") pod \"oauth-openshift-5b8f568f87-wc7zl\" (UID: \"531f346b-eb38-4240-9f9d-08e9492e3652\") " pod="openshift-authentication/oauth-openshift-5b8f568f87-wc7zl" Jan 27 18:47:15 crc kubenswrapper[4853]: I0127 18:47:15.729473 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/531f346b-eb38-4240-9f9d-08e9492e3652-audit-dir\") pod \"oauth-openshift-5b8f568f87-wc7zl\" (UID: \"531f346b-eb38-4240-9f9d-08e9492e3652\") " pod="openshift-authentication/oauth-openshift-5b8f568f87-wc7zl" Jan 27 18:47:15 crc kubenswrapper[4853]: I0127 18:47:15.729505 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/531f346b-eb38-4240-9f9d-08e9492e3652-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-5b8f568f87-wc7zl\" (UID: \"531f346b-eb38-4240-9f9d-08e9492e3652\") " pod="openshift-authentication/oauth-openshift-5b8f568f87-wc7zl" Jan 27 18:47:15 crc kubenswrapper[4853]: I0127 18:47:15.730642 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/531f346b-eb38-4240-9f9d-08e9492e3652-v4-0-config-system-service-ca\") pod \"oauth-openshift-5b8f568f87-wc7zl\" (UID: \"531f346b-eb38-4240-9f9d-08e9492e3652\") " pod="openshift-authentication/oauth-openshift-5b8f568f87-wc7zl" Jan 27 18:47:15 crc kubenswrapper[4853]: I0127 18:47:15.730763 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/531f346b-eb38-4240-9f9d-08e9492e3652-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-5b8f568f87-wc7zl\" (UID: \"531f346b-eb38-4240-9f9d-08e9492e3652\") " pod="openshift-authentication/oauth-openshift-5b8f568f87-wc7zl" Jan 27 18:47:15 crc kubenswrapper[4853]: I0127 18:47:15.730826 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/531f346b-eb38-4240-9f9d-08e9492e3652-audit-dir\") pod \"oauth-openshift-5b8f568f87-wc7zl\" (UID: \"531f346b-eb38-4240-9f9d-08e9492e3652\") " pod="openshift-authentication/oauth-openshift-5b8f568f87-wc7zl" Jan 27 18:47:15 crc kubenswrapper[4853]: I0127 18:47:15.730839 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/531f346b-eb38-4240-9f9d-08e9492e3652-v4-0-config-system-cliconfig\") pod \"oauth-openshift-5b8f568f87-wc7zl\" (UID: \"531f346b-eb38-4240-9f9d-08e9492e3652\") " pod="openshift-authentication/oauth-openshift-5b8f568f87-wc7zl" Jan 27 18:47:15 crc kubenswrapper[4853]: I0127 18:47:15.731334 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/531f346b-eb38-4240-9f9d-08e9492e3652-audit-policies\") pod \"oauth-openshift-5b8f568f87-wc7zl\" (UID: \"531f346b-eb38-4240-9f9d-08e9492e3652\") " pod="openshift-authentication/oauth-openshift-5b8f568f87-wc7zl" Jan 27 18:47:15 crc kubenswrapper[4853]: I0127 18:47:15.734564 4853 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/531f346b-eb38-4240-9f9d-08e9492e3652-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-5b8f568f87-wc7zl\" (UID: \"531f346b-eb38-4240-9f9d-08e9492e3652\") " pod="openshift-authentication/oauth-openshift-5b8f568f87-wc7zl" Jan 27 18:47:15 crc kubenswrapper[4853]: I0127 18:47:15.734841 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/531f346b-eb38-4240-9f9d-08e9492e3652-v4-0-config-user-template-login\") pod \"oauth-openshift-5b8f568f87-wc7zl\" (UID: \"531f346b-eb38-4240-9f9d-08e9492e3652\") " pod="openshift-authentication/oauth-openshift-5b8f568f87-wc7zl" Jan 27 18:47:15 crc kubenswrapper[4853]: I0127 18:47:15.737516 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/531f346b-eb38-4240-9f9d-08e9492e3652-v4-0-config-user-template-error\") pod \"oauth-openshift-5b8f568f87-wc7zl\" (UID: \"531f346b-eb38-4240-9f9d-08e9492e3652\") " pod="openshift-authentication/oauth-openshift-5b8f568f87-wc7zl" Jan 27 18:47:15 crc kubenswrapper[4853]: I0127 18:47:15.737735 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/531f346b-eb38-4240-9f9d-08e9492e3652-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-5b8f568f87-wc7zl\" (UID: \"531f346b-eb38-4240-9f9d-08e9492e3652\") " pod="openshift-authentication/oauth-openshift-5b8f568f87-wc7zl" Jan 27 18:47:15 crc kubenswrapper[4853]: I0127 18:47:15.738201 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/531f346b-eb38-4240-9f9d-08e9492e3652-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-5b8f568f87-wc7zl\" (UID: \"531f346b-eb38-4240-9f9d-08e9492e3652\") " pod="openshift-authentication/oauth-openshift-5b8f568f87-wc7zl" Jan 27 18:47:15 crc kubenswrapper[4853]: I0127 18:47:15.738828 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/531f346b-eb38-4240-9f9d-08e9492e3652-v4-0-config-system-session\") pod \"oauth-openshift-5b8f568f87-wc7zl\" (UID: \"531f346b-eb38-4240-9f9d-08e9492e3652\") " pod="openshift-authentication/oauth-openshift-5b8f568f87-wc7zl" Jan 27 18:47:15 crc kubenswrapper[4853]: I0127 18:47:15.739174 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/531f346b-eb38-4240-9f9d-08e9492e3652-v4-0-config-system-router-certs\") pod \"oauth-openshift-5b8f568f87-wc7zl\" (UID: \"531f346b-eb38-4240-9f9d-08e9492e3652\") " pod="openshift-authentication/oauth-openshift-5b8f568f87-wc7zl" Jan 27 18:47:15 crc kubenswrapper[4853]: I0127 18:47:15.741605 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/531f346b-eb38-4240-9f9d-08e9492e3652-v4-0-config-system-serving-cert\") pod \"oauth-openshift-5b8f568f87-wc7zl\" (UID: \"531f346b-eb38-4240-9f9d-08e9492e3652\") " pod="openshift-authentication/oauth-openshift-5b8f568f87-wc7zl" Jan 27 18:47:15 crc kubenswrapper[4853]: I0127 18:47:15.750646 4853 reflector.go:368] Caches populated for 
*v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Jan 27 18:47:15 crc kubenswrapper[4853]: I0127 18:47:15.783603 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hhc7r\" (UniqueName: \"kubernetes.io/projected/531f346b-eb38-4240-9f9d-08e9492e3652-kube-api-access-hhc7r\") pod \"oauth-openshift-5b8f568f87-wc7zl\" (UID: \"531f346b-eb38-4240-9f9d-08e9492e3652\") " pod="openshift-authentication/oauth-openshift-5b8f568f87-wc7zl" Jan 27 18:47:15 crc kubenswrapper[4853]: I0127 18:47:15.784053 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-5b8f568f87-wc7zl" Jan 27 18:47:15 crc kubenswrapper[4853]: I0127 18:47:15.828003 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Jan 27 18:47:15 crc kubenswrapper[4853]: I0127 18:47:15.914188 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Jan 27 18:47:15 crc kubenswrapper[4853]: I0127 18:47:15.988636 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Jan 27 18:47:16 crc kubenswrapper[4853]: I0127 18:47:16.119658 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b61ecec-2b42-40ef-b2c5-d719cc45ab64" path="/var/lib/kubelet/pods/5b61ecec-2b42-40ef-b2c5-d719cc45ab64/volumes" Jan 27 18:47:16 crc kubenswrapper[4853]: I0127 18:47:16.120452 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="84cfc910-d75a-4c74-9f5f-6ec4dbb5c708" path="/var/lib/kubelet/pods/84cfc910-d75a-4c74-9f5f-6ec4dbb5c708/volumes" Jan 27 18:47:16 crc kubenswrapper[4853]: I0127 18:47:16.191150 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-5b8f568f87-wc7zl"] Jan 27 18:47:16 crc kubenswrapper[4853]: I0127 18:47:16.224020 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 27 18:47:16 crc kubenswrapper[4853]: I0127 18:47:16.254784 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Jan 27 18:47:16 crc kubenswrapper[4853]: I0127 18:47:16.504976 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Jan 27 18:47:16 crc kubenswrapper[4853]: I0127 18:47:16.554833 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-5b8f568f87-wc7zl" event={"ID":"531f346b-eb38-4240-9f9d-08e9492e3652","Type":"ContainerStarted","Data":"07ea38ba3f6b4a82f8565d964da4630944ce2099c9cf56efffac7ab2f0ee6d20"} Jan 27 18:47:16 crc kubenswrapper[4853]: I0127 18:47:16.559191 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-5b8f568f87-wc7zl" event={"ID":"531f346b-eb38-4240-9f9d-08e9492e3652","Type":"ContainerStarted","Data":"02420d8abe688c6bd8445c9d2e65bdbd8681fae044c46fb559a8e6d6488a335a"} Jan 27 18:47:16 crc kubenswrapper[4853]: I0127 18:47:16.559609 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-5b8f568f87-wc7zl" Jan 27 18:47:16 crc kubenswrapper[4853]: I0127 18:47:16.586206 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-5b8f568f87-wc7zl" 
podStartSLOduration=48.586187419 podStartE2EDuration="48.586187419s" podCreationTimestamp="2026-01-27 18:46:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:47:16.584516902 +0000 UTC m=+279.047059785" watchObservedRunningTime="2026-01-27 18:47:16.586187419 +0000 UTC m=+279.048730302" Jan 27 18:47:16 crc kubenswrapper[4853]: I0127 18:47:16.639084 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Jan 27 18:47:16 crc kubenswrapper[4853]: I0127 18:47:16.640874 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Jan 27 18:47:16 crc kubenswrapper[4853]: I0127 18:47:16.677448 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Jan 27 18:47:16 crc kubenswrapper[4853]: I0127 18:47:16.746170 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Jan 27 18:47:16 crc kubenswrapper[4853]: I0127 18:47:16.800838 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Jan 27 18:47:16 crc kubenswrapper[4853]: I0127 18:47:16.851365 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Jan 27 18:47:16 crc kubenswrapper[4853]: I0127 18:47:16.919476 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Jan 27 18:47:16 crc kubenswrapper[4853]: I0127 18:47:16.975480 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Jan 27 18:47:17 crc kubenswrapper[4853]: I0127 18:47:17.015862 4853 patch_prober.go:28] interesting pod/oauth-openshift-5b8f568f87-wc7zl container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.62:6443/healthz\": read tcp 10.217.0.2:33108->10.217.0.62:6443: read: connection reset by peer" start-of-body= Jan 27 18:47:17 crc kubenswrapper[4853]: I0127 18:47:17.016214 4853 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-5b8f568f87-wc7zl" podUID="531f346b-eb38-4240-9f9d-08e9492e3652" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.62:6443/healthz\": read tcp 10.217.0.2:33108->10.217.0.62:6443: read: connection reset by peer" Jan 27 18:47:17 crc kubenswrapper[4853]: I0127 18:47:17.184779 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Jan 27 18:47:17 crc kubenswrapper[4853]: I0127 18:47:17.298329 4853 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Jan 27 18:47:17 crc kubenswrapper[4853]: I0127 18:47:17.307656 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Jan 27 18:47:17 crc kubenswrapper[4853]: I0127 18:47:17.337040 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Jan 27 18:47:17 crc kubenswrapper[4853]: I0127 18:47:17.378156 4853 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Jan 27 18:47:17 crc kubenswrapper[4853]: I0127 18:47:17.509013 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Jan 27 18:47:17 crc kubenswrapper[4853]: I0127 18:47:17.518744 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Jan 27 18:47:17 crc kubenswrapper[4853]: I0127 18:47:17.560581 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-authentication_oauth-openshift-5b8f568f87-wc7zl_531f346b-eb38-4240-9f9d-08e9492e3652/oauth-openshift/0.log" Jan 27 18:47:17 crc kubenswrapper[4853]: I0127 18:47:17.560843 4853 generic.go:334] "Generic (PLEG): container finished" podID="531f346b-eb38-4240-9f9d-08e9492e3652" containerID="07ea38ba3f6b4a82f8565d964da4630944ce2099c9cf56efffac7ab2f0ee6d20" exitCode=255 Jan 27 18:47:17 crc kubenswrapper[4853]: I0127 18:47:17.560969 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-5b8f568f87-wc7zl" event={"ID":"531f346b-eb38-4240-9f9d-08e9492e3652","Type":"ContainerDied","Data":"07ea38ba3f6b4a82f8565d964da4630944ce2099c9cf56efffac7ab2f0ee6d20"} Jan 27 18:47:17 crc kubenswrapper[4853]: I0127 18:47:17.561276 4853 scope.go:117] "RemoveContainer" containerID="07ea38ba3f6b4a82f8565d964da4630944ce2099c9cf56efffac7ab2f0ee6d20" Jan 27 18:47:17 crc kubenswrapper[4853]: I0127 18:47:17.625940 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Jan 27 18:47:17 crc kubenswrapper[4853]: I0127 18:47:17.650275 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 27 18:47:17 crc kubenswrapper[4853]: I0127 18:47:17.784594 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Jan 27 18:47:17 crc kubenswrapper[4853]: I0127 18:47:17.845017 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Jan 27 18:47:17 crc kubenswrapper[4853]: I0127 18:47:17.902451 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Jan 27 18:47:18 crc kubenswrapper[4853]: I0127 18:47:18.041502 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Jan 27 18:47:18 crc kubenswrapper[4853]: I0127 18:47:18.129186 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Jan 27 18:47:18 crc kubenswrapper[4853]: I0127 18:47:18.187484 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Jan 27 18:47:18 crc kubenswrapper[4853]: I0127 18:47:18.192784 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 27 18:47:18 crc kubenswrapper[4853]: I0127 18:47:18.257575 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Jan 27 18:47:18 crc kubenswrapper[4853]: I0127 18:47:18.273438 4853 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Jan 27 18:47:18 crc kubenswrapper[4853]: I0127 18:47:18.378778 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 27 18:47:18 crc kubenswrapper[4853]: I0127 18:47:18.397350 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Jan 27 18:47:18 crc kubenswrapper[4853]: I0127 18:47:18.482853 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Jan 27 18:47:18 crc kubenswrapper[4853]: I0127 18:47:18.570040 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-authentication_oauth-openshift-5b8f568f87-wc7zl_531f346b-eb38-4240-9f9d-08e9492e3652/oauth-openshift/0.log" Jan 27 18:47:18 crc kubenswrapper[4853]: I0127 18:47:18.572500 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-5b8f568f87-wc7zl" event={"ID":"531f346b-eb38-4240-9f9d-08e9492e3652","Type":"ContainerStarted","Data":"1c5948e547d0b72610f3f4a7941acc32e2f5915627f77547dcfed9456fa9a731"} Jan 27 18:47:18 crc kubenswrapper[4853]: I0127 18:47:18.576726 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-5b8f568f87-wc7zl" Jan 27 18:47:18 crc kubenswrapper[4853]: I0127 18:47:18.577953 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Jan 27 18:47:18 crc kubenswrapper[4853]: I0127 18:47:18.581790 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-5b8f568f87-wc7zl" Jan 27 18:47:18 crc kubenswrapper[4853]: I0127 18:47:18.747994 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Jan 27 18:47:18 crc kubenswrapper[4853]: I0127 18:47:18.825663 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 27 18:47:19 crc kubenswrapper[4853]: I0127 18:47:19.390857 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Jan 27 18:47:19 crc kubenswrapper[4853]: I0127 18:47:19.711443 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 27 18:47:19 crc kubenswrapper[4853]: I0127 18:47:19.932767 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Jan 27 18:47:20 crc kubenswrapper[4853]: I0127 18:47:20.367303 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Jan 27 18:47:20 crc kubenswrapper[4853]: I0127 18:47:20.814667 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Jan 27 18:47:21 crc kubenswrapper[4853]: I0127 18:47:21.088939 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Jan 27 18:47:21 crc kubenswrapper[4853]: I0127 18:47:21.134244 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Jan 27 18:47:21 crc kubenswrapper[4853]: I0127 18:47:21.179100 4853 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-operator-lifecycle-manager"/"pprof-cert" Jan 27 18:47:21 crc kubenswrapper[4853]: I0127 18:47:21.315772 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Jan 27 18:47:21 crc kubenswrapper[4853]: I0127 18:47:21.315847 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 18:47:21 crc kubenswrapper[4853]: I0127 18:47:21.484888 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Jan 27 18:47:21 crc kubenswrapper[4853]: I0127 18:47:21.536608 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 27 18:47:21 crc kubenswrapper[4853]: I0127 18:47:21.536693 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 27 18:47:21 crc kubenswrapper[4853]: I0127 18:47:21.536729 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 27 18:47:21 crc kubenswrapper[4853]: I0127 18:47:21.536861 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 27 18:47:21 crc kubenswrapper[4853]: I0127 18:47:21.536915 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 27 18:47:21 crc kubenswrapper[4853]: I0127 18:47:21.536858 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock" (OuterVolumeSpecName: "var-lock") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 18:47:21 crc kubenswrapper[4853]: I0127 18:47:21.536887 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests" (OuterVolumeSpecName: "manifests") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "manifests". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 18:47:21 crc kubenswrapper[4853]: I0127 18:47:21.536909 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). 
InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 18:47:21 crc kubenswrapper[4853]: I0127 18:47:21.537105 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log" (OuterVolumeSpecName: "var-log") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 18:47:21 crc kubenswrapper[4853]: I0127 18:47:21.537216 4853 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 27 18:47:21 crc kubenswrapper[4853]: I0127 18:47:21.537232 4853 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") on node \"crc\" DevicePath \"\"" Jan 27 18:47:21 crc kubenswrapper[4853]: I0127 18:47:21.537246 4853 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") on node \"crc\" DevicePath \"\"" Jan 27 18:47:21 crc kubenswrapper[4853]: I0127 18:47:21.537261 4853 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") on node \"crc\" DevicePath \"\"" Jan 27 18:47:21 crc kubenswrapper[4853]: I0127 18:47:21.550935 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "pod-resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 18:47:21 crc kubenswrapper[4853]: I0127 18:47:21.594515 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Jan 27 18:47:21 crc kubenswrapper[4853]: I0127 18:47:21.594571 4853 generic.go:334] "Generic (PLEG): container finished" podID="f85e55b1a89d02b0cb034b1ea31ed45a" containerID="4376aeaef4c363ed8a23306ad440621ec02a830e95fe8804058485be601916c4" exitCode=137 Jan 27 18:47:21 crc kubenswrapper[4853]: I0127 18:47:21.594627 4853 scope.go:117] "RemoveContainer" containerID="4376aeaef4c363ed8a23306ad440621ec02a830e95fe8804058485be601916c4" Jan 27 18:47:21 crc kubenswrapper[4853]: I0127 18:47:21.594663 4853 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 18:47:21 crc kubenswrapper[4853]: I0127 18:47:21.614787 4853 scope.go:117] "RemoveContainer" containerID="4376aeaef4c363ed8a23306ad440621ec02a830e95fe8804058485be601916c4" Jan 27 18:47:21 crc kubenswrapper[4853]: E0127 18:47:21.615352 4853 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4376aeaef4c363ed8a23306ad440621ec02a830e95fe8804058485be601916c4\": container with ID starting with 4376aeaef4c363ed8a23306ad440621ec02a830e95fe8804058485be601916c4 not found: ID does not exist" containerID="4376aeaef4c363ed8a23306ad440621ec02a830e95fe8804058485be601916c4" Jan 27 18:47:21 crc kubenswrapper[4853]: I0127 18:47:21.615408 4853 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4376aeaef4c363ed8a23306ad440621ec02a830e95fe8804058485be601916c4"} err="failed to get container status \"4376aeaef4c363ed8a23306ad440621ec02a830e95fe8804058485be601916c4\": rpc error: code = NotFound desc = could not find container \"4376aeaef4c363ed8a23306ad440621ec02a830e95fe8804058485be601916c4\": container with ID starting with 4376aeaef4c363ed8a23306ad440621ec02a830e95fe8804058485be601916c4 not found: ID does not exist" Jan 27 18:47:21 crc kubenswrapper[4853]: I0127 18:47:21.638582 4853 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 27 18:47:22 crc kubenswrapper[4853]: I0127 18:47:22.119386 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" path="/var/lib/kubelet/pods/f85e55b1a89d02b0cb034b1ea31ed45a/volumes" Jan 27 18:47:22 crc kubenswrapper[4853]: I0127 18:47:22.119692 4853 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="" Jan 27 18:47:22 crc kubenswrapper[4853]: I0127 18:47:22.130591 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 27 18:47:22 crc kubenswrapper[4853]: I0127 18:47:22.130633 4853 kubelet.go:2649] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="24ac0186-65e9-479b-9e8a-edd0b000df55" Jan 27 18:47:22 crc kubenswrapper[4853]: I0127 18:47:22.135201 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 27 18:47:22 crc kubenswrapper[4853]: I0127 18:47:22.135265 4853 kubelet.go:2673] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="24ac0186-65e9-479b-9e8a-edd0b000df55" Jan 27 18:47:24 crc kubenswrapper[4853]: I0127 18:47:24.364907 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-859bc87fd4-kspvl"] Jan 27 18:47:24 crc kubenswrapper[4853]: I0127 18:47:24.366020 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-859bc87fd4-kspvl" podUID="a567feea-5f51-4a2b-910e-37850da7cdfe" containerName="controller-manager" containerID="cri-o://2e0db576ada91682860112af8219949523f222fcb0abac8cb42c0386a28e24bb" gracePeriod=30 Jan 27 18:47:24 crc kubenswrapper[4853]: I0127 18:47:24.467199 
4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-655d4d6c6d-tn75v"] Jan 27 18:47:24 crc kubenswrapper[4853]: I0127 18:47:24.467466 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-655d4d6c6d-tn75v" podUID="682557db-b9bc-4004-a882-d2120d0ad94d" containerName="route-controller-manager" containerID="cri-o://6988a3d7761a06d1ce9f05c22299e8ecd76e734392523ee4e17991180802ba8a" gracePeriod=30 Jan 27 18:47:24 crc kubenswrapper[4853]: I0127 18:47:24.614933 4853 generic.go:334] "Generic (PLEG): container finished" podID="682557db-b9bc-4004-a882-d2120d0ad94d" containerID="6988a3d7761a06d1ce9f05c22299e8ecd76e734392523ee4e17991180802ba8a" exitCode=0 Jan 27 18:47:24 crc kubenswrapper[4853]: I0127 18:47:24.615371 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-655d4d6c6d-tn75v" event={"ID":"682557db-b9bc-4004-a882-d2120d0ad94d","Type":"ContainerDied","Data":"6988a3d7761a06d1ce9f05c22299e8ecd76e734392523ee4e17991180802ba8a"} Jan 27 18:47:24 crc kubenswrapper[4853]: I0127 18:47:24.620442 4853 generic.go:334] "Generic (PLEG): container finished" podID="a567feea-5f51-4a2b-910e-37850da7cdfe" containerID="2e0db576ada91682860112af8219949523f222fcb0abac8cb42c0386a28e24bb" exitCode=0 Jan 27 18:47:24 crc kubenswrapper[4853]: I0127 18:47:24.620466 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-859bc87fd4-kspvl" event={"ID":"a567feea-5f51-4a2b-910e-37850da7cdfe","Type":"ContainerDied","Data":"2e0db576ada91682860112af8219949523f222fcb0abac8cb42c0386a28e24bb"} Jan 27 18:47:24 crc kubenswrapper[4853]: I0127 18:47:24.757954 4853 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-859bc87fd4-kspvl" Jan 27 18:47:24 crc kubenswrapper[4853]: I0127 18:47:24.779758 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n5bms\" (UniqueName: \"kubernetes.io/projected/a567feea-5f51-4a2b-910e-37850da7cdfe-kube-api-access-n5bms\") pod \"a567feea-5f51-4a2b-910e-37850da7cdfe\" (UID: \"a567feea-5f51-4a2b-910e-37850da7cdfe\") " Jan 27 18:47:24 crc kubenswrapper[4853]: I0127 18:47:24.779884 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a567feea-5f51-4a2b-910e-37850da7cdfe-config\") pod \"a567feea-5f51-4a2b-910e-37850da7cdfe\" (UID: \"a567feea-5f51-4a2b-910e-37850da7cdfe\") " Jan 27 18:47:24 crc kubenswrapper[4853]: I0127 18:47:24.779910 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a567feea-5f51-4a2b-910e-37850da7cdfe-client-ca\") pod \"a567feea-5f51-4a2b-910e-37850da7cdfe\" (UID: \"a567feea-5f51-4a2b-910e-37850da7cdfe\") " Jan 27 18:47:24 crc kubenswrapper[4853]: I0127 18:47:24.779948 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a567feea-5f51-4a2b-910e-37850da7cdfe-proxy-ca-bundles\") pod \"a567feea-5f51-4a2b-910e-37850da7cdfe\" (UID: \"a567feea-5f51-4a2b-910e-37850da7cdfe\") " Jan 27 18:47:24 crc kubenswrapper[4853]: I0127 18:47:24.780007 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a567feea-5f51-4a2b-910e-37850da7cdfe-serving-cert\") pod \"a567feea-5f51-4a2b-910e-37850da7cdfe\" (UID: \"a567feea-5f51-4a2b-910e-37850da7cdfe\") " Jan 27 18:47:24 crc kubenswrapper[4853]: I0127 18:47:24.780895 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a567feea-5f51-4a2b-910e-37850da7cdfe-client-ca" (OuterVolumeSpecName: "client-ca") pod "a567feea-5f51-4a2b-910e-37850da7cdfe" (UID: "a567feea-5f51-4a2b-910e-37850da7cdfe"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:47:24 crc kubenswrapper[4853]: I0127 18:47:24.780983 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a567feea-5f51-4a2b-910e-37850da7cdfe-config" (OuterVolumeSpecName: "config") pod "a567feea-5f51-4a2b-910e-37850da7cdfe" (UID: "a567feea-5f51-4a2b-910e-37850da7cdfe"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:47:24 crc kubenswrapper[4853]: I0127 18:47:24.782937 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a567feea-5f51-4a2b-910e-37850da7cdfe-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "a567feea-5f51-4a2b-910e-37850da7cdfe" (UID: "a567feea-5f51-4a2b-910e-37850da7cdfe"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:47:24 crc kubenswrapper[4853]: I0127 18:47:24.791141 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a567feea-5f51-4a2b-910e-37850da7cdfe-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "a567feea-5f51-4a2b-910e-37850da7cdfe" (UID: "a567feea-5f51-4a2b-910e-37850da7cdfe"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:47:24 crc kubenswrapper[4853]: I0127 18:47:24.791297 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a567feea-5f51-4a2b-910e-37850da7cdfe-kube-api-access-n5bms" (OuterVolumeSpecName: "kube-api-access-n5bms") pod "a567feea-5f51-4a2b-910e-37850da7cdfe" (UID: "a567feea-5f51-4a2b-910e-37850da7cdfe"). InnerVolumeSpecName "kube-api-access-n5bms". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:47:24 crc kubenswrapper[4853]: I0127 18:47:24.813223 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-655d4d6c6d-tn75v" Jan 27 18:47:24 crc kubenswrapper[4853]: I0127 18:47:24.881722 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/682557db-b9bc-4004-a882-d2120d0ad94d-client-ca\") pod \"682557db-b9bc-4004-a882-d2120d0ad94d\" (UID: \"682557db-b9bc-4004-a882-d2120d0ad94d\") " Jan 27 18:47:24 crc kubenswrapper[4853]: I0127 18:47:24.881813 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/682557db-b9bc-4004-a882-d2120d0ad94d-config\") pod \"682557db-b9bc-4004-a882-d2120d0ad94d\" (UID: \"682557db-b9bc-4004-a882-d2120d0ad94d\") " Jan 27 18:47:24 crc kubenswrapper[4853]: I0127 18:47:24.881902 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/682557db-b9bc-4004-a882-d2120d0ad94d-serving-cert\") pod \"682557db-b9bc-4004-a882-d2120d0ad94d\" (UID: \"682557db-b9bc-4004-a882-d2120d0ad94d\") " Jan 27 18:47:24 crc kubenswrapper[4853]: I0127 18:47:24.881979 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kqfzz\" (UniqueName: \"kubernetes.io/projected/682557db-b9bc-4004-a882-d2120d0ad94d-kube-api-access-kqfzz\") pod \"682557db-b9bc-4004-a882-d2120d0ad94d\" (UID: \"682557db-b9bc-4004-a882-d2120d0ad94d\") " Jan 27 18:47:24 crc kubenswrapper[4853]: I0127 18:47:24.882286 4853 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a567feea-5f51-4a2b-910e-37850da7cdfe-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 18:47:24 crc kubenswrapper[4853]: I0127 18:47:24.882307 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n5bms\" (UniqueName: \"kubernetes.io/projected/a567feea-5f51-4a2b-910e-37850da7cdfe-kube-api-access-n5bms\") on node \"crc\" DevicePath \"\"" Jan 27 18:47:24 crc kubenswrapper[4853]: I0127 18:47:24.882325 4853 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a567feea-5f51-4a2b-910e-37850da7cdfe-config\") on node \"crc\" DevicePath \"\"" Jan 27 18:47:24 crc kubenswrapper[4853]: I0127 18:47:24.882339 4853 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a567feea-5f51-4a2b-910e-37850da7cdfe-client-ca\") on node \"crc\" DevicePath \"\"" Jan 27 18:47:24 crc kubenswrapper[4853]: I0127 18:47:24.882349 4853 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a567feea-5f51-4a2b-910e-37850da7cdfe-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 27 18:47:24 crc kubenswrapper[4853]: I0127 18:47:24.882650 4853 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/682557db-b9bc-4004-a882-d2120d0ad94d-client-ca" (OuterVolumeSpecName: "client-ca") pod "682557db-b9bc-4004-a882-d2120d0ad94d" (UID: "682557db-b9bc-4004-a882-d2120d0ad94d"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:47:24 crc kubenswrapper[4853]: I0127 18:47:24.883345 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/682557db-b9bc-4004-a882-d2120d0ad94d-config" (OuterVolumeSpecName: "config") pod "682557db-b9bc-4004-a882-d2120d0ad94d" (UID: "682557db-b9bc-4004-a882-d2120d0ad94d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:47:24 crc kubenswrapper[4853]: I0127 18:47:24.884844 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/682557db-b9bc-4004-a882-d2120d0ad94d-kube-api-access-kqfzz" (OuterVolumeSpecName: "kube-api-access-kqfzz") pod "682557db-b9bc-4004-a882-d2120d0ad94d" (UID: "682557db-b9bc-4004-a882-d2120d0ad94d"). InnerVolumeSpecName "kube-api-access-kqfzz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:47:24 crc kubenswrapper[4853]: I0127 18:47:24.884948 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/682557db-b9bc-4004-a882-d2120d0ad94d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "682557db-b9bc-4004-a882-d2120d0ad94d" (UID: "682557db-b9bc-4004-a882-d2120d0ad94d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:47:24 crc kubenswrapper[4853]: I0127 18:47:24.983434 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kqfzz\" (UniqueName: \"kubernetes.io/projected/682557db-b9bc-4004-a882-d2120d0ad94d-kube-api-access-kqfzz\") on node \"crc\" DevicePath \"\"" Jan 27 18:47:24 crc kubenswrapper[4853]: I0127 18:47:24.983477 4853 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/682557db-b9bc-4004-a882-d2120d0ad94d-client-ca\") on node \"crc\" DevicePath \"\"" Jan 27 18:47:24 crc kubenswrapper[4853]: I0127 18:47:24.983490 4853 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/682557db-b9bc-4004-a882-d2120d0ad94d-config\") on node \"crc\" DevicePath \"\"" Jan 27 18:47:24 crc kubenswrapper[4853]: I0127 18:47:24.983502 4853 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/682557db-b9bc-4004-a882-d2120d0ad94d-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 18:47:25 crc kubenswrapper[4853]: I0127 18:47:25.450191 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-777d4874d5-68kpl"] Jan 27 18:47:25 crc kubenswrapper[4853]: E0127 18:47:25.450876 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Jan 27 18:47:25 crc kubenswrapper[4853]: I0127 18:47:25.450892 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Jan 27 18:47:25 crc kubenswrapper[4853]: E0127 18:47:25.450908 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="682557db-b9bc-4004-a882-d2120d0ad94d" containerName="route-controller-manager" Jan 27 18:47:25 crc kubenswrapper[4853]: 
I0127 18:47:25.450938 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="682557db-b9bc-4004-a882-d2120d0ad94d" containerName="route-controller-manager" Jan 27 18:47:25 crc kubenswrapper[4853]: E0127 18:47:25.450963 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a567feea-5f51-4a2b-910e-37850da7cdfe" containerName="controller-manager" Jan 27 18:47:25 crc kubenswrapper[4853]: I0127 18:47:25.450976 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="a567feea-5f51-4a2b-910e-37850da7cdfe" containerName="controller-manager" Jan 27 18:47:25 crc kubenswrapper[4853]: I0127 18:47:25.451101 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Jan 27 18:47:25 crc kubenswrapper[4853]: I0127 18:47:25.451111 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="682557db-b9bc-4004-a882-d2120d0ad94d" containerName="route-controller-manager" Jan 27 18:47:25 crc kubenswrapper[4853]: I0127 18:47:25.451135 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="a567feea-5f51-4a2b-910e-37850da7cdfe" containerName="controller-manager" Jan 27 18:47:25 crc kubenswrapper[4853]: I0127 18:47:25.451510 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-777d4874d5-68kpl" Jan 27 18:47:25 crc kubenswrapper[4853]: I0127 18:47:25.457885 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-777d4874d5-68kpl"] Jan 27 18:47:25 crc kubenswrapper[4853]: I0127 18:47:25.490014 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/abfa646e-a193-473a-8aaf-402692afd1e9-client-ca\") pod \"controller-manager-777d4874d5-68kpl\" (UID: \"abfa646e-a193-473a-8aaf-402692afd1e9\") " pod="openshift-controller-manager/controller-manager-777d4874d5-68kpl" Jan 27 18:47:25 crc kubenswrapper[4853]: I0127 18:47:25.490066 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/abfa646e-a193-473a-8aaf-402692afd1e9-serving-cert\") pod \"controller-manager-777d4874d5-68kpl\" (UID: \"abfa646e-a193-473a-8aaf-402692afd1e9\") " pod="openshift-controller-manager/controller-manager-777d4874d5-68kpl" Jan 27 18:47:25 crc kubenswrapper[4853]: I0127 18:47:25.490084 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sx9gw\" (UniqueName: \"kubernetes.io/projected/abfa646e-a193-473a-8aaf-402692afd1e9-kube-api-access-sx9gw\") pod \"controller-manager-777d4874d5-68kpl\" (UID: \"abfa646e-a193-473a-8aaf-402692afd1e9\") " pod="openshift-controller-manager/controller-manager-777d4874d5-68kpl" Jan 27 18:47:25 crc kubenswrapper[4853]: I0127 18:47:25.490155 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/abfa646e-a193-473a-8aaf-402692afd1e9-config\") pod \"controller-manager-777d4874d5-68kpl\" (UID: \"abfa646e-a193-473a-8aaf-402692afd1e9\") " pod="openshift-controller-manager/controller-manager-777d4874d5-68kpl" Jan 27 18:47:25 crc kubenswrapper[4853]: I0127 18:47:25.490197 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: 
\"kubernetes.io/configmap/abfa646e-a193-473a-8aaf-402692afd1e9-proxy-ca-bundles\") pod \"controller-manager-777d4874d5-68kpl\" (UID: \"abfa646e-a193-473a-8aaf-402692afd1e9\") " pod="openshift-controller-manager/controller-manager-777d4874d5-68kpl" Jan 27 18:47:25 crc kubenswrapper[4853]: I0127 18:47:25.592197 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/abfa646e-a193-473a-8aaf-402692afd1e9-client-ca\") pod \"controller-manager-777d4874d5-68kpl\" (UID: \"abfa646e-a193-473a-8aaf-402692afd1e9\") " pod="openshift-controller-manager/controller-manager-777d4874d5-68kpl" Jan 27 18:47:25 crc kubenswrapper[4853]: I0127 18:47:25.592261 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/abfa646e-a193-473a-8aaf-402692afd1e9-serving-cert\") pod \"controller-manager-777d4874d5-68kpl\" (UID: \"abfa646e-a193-473a-8aaf-402692afd1e9\") " pod="openshift-controller-manager/controller-manager-777d4874d5-68kpl" Jan 27 18:47:25 crc kubenswrapper[4853]: I0127 18:47:25.592286 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sx9gw\" (UniqueName: \"kubernetes.io/projected/abfa646e-a193-473a-8aaf-402692afd1e9-kube-api-access-sx9gw\") pod \"controller-manager-777d4874d5-68kpl\" (UID: \"abfa646e-a193-473a-8aaf-402692afd1e9\") " pod="openshift-controller-manager/controller-manager-777d4874d5-68kpl" Jan 27 18:47:25 crc kubenswrapper[4853]: I0127 18:47:25.592307 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/abfa646e-a193-473a-8aaf-402692afd1e9-config\") pod \"controller-manager-777d4874d5-68kpl\" (UID: \"abfa646e-a193-473a-8aaf-402692afd1e9\") " pod="openshift-controller-manager/controller-manager-777d4874d5-68kpl" Jan 27 18:47:25 crc kubenswrapper[4853]: I0127 18:47:25.592322 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/abfa646e-a193-473a-8aaf-402692afd1e9-proxy-ca-bundles\") pod \"controller-manager-777d4874d5-68kpl\" (UID: \"abfa646e-a193-473a-8aaf-402692afd1e9\") " pod="openshift-controller-manager/controller-manager-777d4874d5-68kpl" Jan 27 18:47:25 crc kubenswrapper[4853]: I0127 18:47:25.593492 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/abfa646e-a193-473a-8aaf-402692afd1e9-client-ca\") pod \"controller-manager-777d4874d5-68kpl\" (UID: \"abfa646e-a193-473a-8aaf-402692afd1e9\") " pod="openshift-controller-manager/controller-manager-777d4874d5-68kpl" Jan 27 18:47:25 crc kubenswrapper[4853]: I0127 18:47:25.593941 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/abfa646e-a193-473a-8aaf-402692afd1e9-config\") pod \"controller-manager-777d4874d5-68kpl\" (UID: \"abfa646e-a193-473a-8aaf-402692afd1e9\") " pod="openshift-controller-manager/controller-manager-777d4874d5-68kpl" Jan 27 18:47:25 crc kubenswrapper[4853]: I0127 18:47:25.593482 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/abfa646e-a193-473a-8aaf-402692afd1e9-proxy-ca-bundles\") pod \"controller-manager-777d4874d5-68kpl\" (UID: \"abfa646e-a193-473a-8aaf-402692afd1e9\") " 
pod="openshift-controller-manager/controller-manager-777d4874d5-68kpl" Jan 27 18:47:25 crc kubenswrapper[4853]: I0127 18:47:25.596484 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/abfa646e-a193-473a-8aaf-402692afd1e9-serving-cert\") pod \"controller-manager-777d4874d5-68kpl\" (UID: \"abfa646e-a193-473a-8aaf-402692afd1e9\") " pod="openshift-controller-manager/controller-manager-777d4874d5-68kpl" Jan 27 18:47:25 crc kubenswrapper[4853]: I0127 18:47:25.609604 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sx9gw\" (UniqueName: \"kubernetes.io/projected/abfa646e-a193-473a-8aaf-402692afd1e9-kube-api-access-sx9gw\") pod \"controller-manager-777d4874d5-68kpl\" (UID: \"abfa646e-a193-473a-8aaf-402692afd1e9\") " pod="openshift-controller-manager/controller-manager-777d4874d5-68kpl" Jan 27 18:47:25 crc kubenswrapper[4853]: I0127 18:47:25.628038 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-655d4d6c6d-tn75v" event={"ID":"682557db-b9bc-4004-a882-d2120d0ad94d","Type":"ContainerDied","Data":"3ffc27c9b1dfb9fc5a639d651578d92cb86217098dfc928ce8b7f2422379eba2"} Jan 27 18:47:25 crc kubenswrapper[4853]: I0127 18:47:25.628113 4853 scope.go:117] "RemoveContainer" containerID="6988a3d7761a06d1ce9f05c22299e8ecd76e734392523ee4e17991180802ba8a" Jan 27 18:47:25 crc kubenswrapper[4853]: I0127 18:47:25.628724 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-655d4d6c6d-tn75v" Jan 27 18:47:25 crc kubenswrapper[4853]: I0127 18:47:25.630313 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-859bc87fd4-kspvl" event={"ID":"a567feea-5f51-4a2b-910e-37850da7cdfe","Type":"ContainerDied","Data":"d03d51c208e9dc4fc5e655a8100fc7863407251a8cb81bc628fa4eda416bb825"} Jan 27 18:47:25 crc kubenswrapper[4853]: I0127 18:47:25.630450 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-859bc87fd4-kspvl" Jan 27 18:47:25 crc kubenswrapper[4853]: I0127 18:47:25.654589 4853 scope.go:117] "RemoveContainer" containerID="2e0db576ada91682860112af8219949523f222fcb0abac8cb42c0386a28e24bb" Jan 27 18:47:25 crc kubenswrapper[4853]: I0127 18:47:25.667613 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-655d4d6c6d-tn75v"] Jan 27 18:47:25 crc kubenswrapper[4853]: I0127 18:47:25.674007 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-655d4d6c6d-tn75v"] Jan 27 18:47:25 crc kubenswrapper[4853]: I0127 18:47:25.678350 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-859bc87fd4-kspvl"] Jan 27 18:47:25 crc kubenswrapper[4853]: I0127 18:47:25.682248 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-859bc87fd4-kspvl"] Jan 27 18:47:25 crc kubenswrapper[4853]: I0127 18:47:25.767741 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-777d4874d5-68kpl" Jan 27 18:47:26 crc kubenswrapper[4853]: I0127 18:47:26.122989 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="682557db-b9bc-4004-a882-d2120d0ad94d" path="/var/lib/kubelet/pods/682557db-b9bc-4004-a882-d2120d0ad94d/volumes" Jan 27 18:47:26 crc kubenswrapper[4853]: I0127 18:47:26.124136 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a567feea-5f51-4a2b-910e-37850da7cdfe" path="/var/lib/kubelet/pods/a567feea-5f51-4a2b-910e-37850da7cdfe/volumes" Jan 27 18:47:26 crc kubenswrapper[4853]: I0127 18:47:26.163740 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-777d4874d5-68kpl"] Jan 27 18:47:26 crc kubenswrapper[4853]: I0127 18:47:26.453142 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-857458fcf9-blgrt"] Jan 27 18:47:26 crc kubenswrapper[4853]: I0127 18:47:26.454776 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-857458fcf9-blgrt" Jan 27 18:47:26 crc kubenswrapper[4853]: I0127 18:47:26.458290 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 27 18:47:26 crc kubenswrapper[4853]: I0127 18:47:26.458561 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 27 18:47:26 crc kubenswrapper[4853]: I0127 18:47:26.458711 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 27 18:47:26 crc kubenswrapper[4853]: I0127 18:47:26.458864 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 27 18:47:26 crc kubenswrapper[4853]: I0127 18:47:26.458978 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 27 18:47:26 crc kubenswrapper[4853]: I0127 18:47:26.464653 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 27 18:47:26 crc kubenswrapper[4853]: I0127 18:47:26.466242 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-857458fcf9-blgrt"] Jan 27 18:47:26 crc kubenswrapper[4853]: I0127 18:47:26.503995 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zzzj2\" (UniqueName: \"kubernetes.io/projected/f942c044-c796-4b61-9546-a7e49727d38f-kube-api-access-zzzj2\") pod \"route-controller-manager-857458fcf9-blgrt\" (UID: \"f942c044-c796-4b61-9546-a7e49727d38f\") " pod="openshift-route-controller-manager/route-controller-manager-857458fcf9-blgrt" Jan 27 18:47:26 crc kubenswrapper[4853]: I0127 18:47:26.504056 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f942c044-c796-4b61-9546-a7e49727d38f-config\") pod \"route-controller-manager-857458fcf9-blgrt\" (UID: \"f942c044-c796-4b61-9546-a7e49727d38f\") " pod="openshift-route-controller-manager/route-controller-manager-857458fcf9-blgrt" Jan 27 18:47:26 crc kubenswrapper[4853]: I0127 18:47:26.504109 4853 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f942c044-c796-4b61-9546-a7e49727d38f-serving-cert\") pod \"route-controller-manager-857458fcf9-blgrt\" (UID: \"f942c044-c796-4b61-9546-a7e49727d38f\") " pod="openshift-route-controller-manager/route-controller-manager-857458fcf9-blgrt" Jan 27 18:47:26 crc kubenswrapper[4853]: I0127 18:47:26.504156 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f942c044-c796-4b61-9546-a7e49727d38f-client-ca\") pod \"route-controller-manager-857458fcf9-blgrt\" (UID: \"f942c044-c796-4b61-9546-a7e49727d38f\") " pod="openshift-route-controller-manager/route-controller-manager-857458fcf9-blgrt" Jan 27 18:47:26 crc kubenswrapper[4853]: I0127 18:47:26.605975 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zzzj2\" (UniqueName: \"kubernetes.io/projected/f942c044-c796-4b61-9546-a7e49727d38f-kube-api-access-zzzj2\") pod \"route-controller-manager-857458fcf9-blgrt\" (UID: \"f942c044-c796-4b61-9546-a7e49727d38f\") " pod="openshift-route-controller-manager/route-controller-manager-857458fcf9-blgrt" Jan 27 18:47:26 crc kubenswrapper[4853]: I0127 18:47:26.606022 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f942c044-c796-4b61-9546-a7e49727d38f-config\") pod \"route-controller-manager-857458fcf9-blgrt\" (UID: \"f942c044-c796-4b61-9546-a7e49727d38f\") " pod="openshift-route-controller-manager/route-controller-manager-857458fcf9-blgrt" Jan 27 18:47:26 crc kubenswrapper[4853]: I0127 18:47:26.606063 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f942c044-c796-4b61-9546-a7e49727d38f-serving-cert\") pod \"route-controller-manager-857458fcf9-blgrt\" (UID: \"f942c044-c796-4b61-9546-a7e49727d38f\") " pod="openshift-route-controller-manager/route-controller-manager-857458fcf9-blgrt" Jan 27 18:47:26 crc kubenswrapper[4853]: I0127 18:47:26.606085 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f942c044-c796-4b61-9546-a7e49727d38f-client-ca\") pod \"route-controller-manager-857458fcf9-blgrt\" (UID: \"f942c044-c796-4b61-9546-a7e49727d38f\") " pod="openshift-route-controller-manager/route-controller-manager-857458fcf9-blgrt" Jan 27 18:47:26 crc kubenswrapper[4853]: I0127 18:47:26.606960 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f942c044-c796-4b61-9546-a7e49727d38f-client-ca\") pod \"route-controller-manager-857458fcf9-blgrt\" (UID: \"f942c044-c796-4b61-9546-a7e49727d38f\") " pod="openshift-route-controller-manager/route-controller-manager-857458fcf9-blgrt" Jan 27 18:47:26 crc kubenswrapper[4853]: I0127 18:47:26.607759 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f942c044-c796-4b61-9546-a7e49727d38f-config\") pod \"route-controller-manager-857458fcf9-blgrt\" (UID: \"f942c044-c796-4b61-9546-a7e49727d38f\") " pod="openshift-route-controller-manager/route-controller-manager-857458fcf9-blgrt" Jan 27 18:47:26 crc kubenswrapper[4853]: I0127 18:47:26.612405 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"serving-cert\" (UniqueName: \"kubernetes.io/secret/f942c044-c796-4b61-9546-a7e49727d38f-serving-cert\") pod \"route-controller-manager-857458fcf9-blgrt\" (UID: \"f942c044-c796-4b61-9546-a7e49727d38f\") " pod="openshift-route-controller-manager/route-controller-manager-857458fcf9-blgrt" Jan 27 18:47:26 crc kubenswrapper[4853]: I0127 18:47:26.627884 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zzzj2\" (UniqueName: \"kubernetes.io/projected/f942c044-c796-4b61-9546-a7e49727d38f-kube-api-access-zzzj2\") pod \"route-controller-manager-857458fcf9-blgrt\" (UID: \"f942c044-c796-4b61-9546-a7e49727d38f\") " pod="openshift-route-controller-manager/route-controller-manager-857458fcf9-blgrt" Jan 27 18:47:26 crc kubenswrapper[4853]: I0127 18:47:26.635464 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-777d4874d5-68kpl" event={"ID":"abfa646e-a193-473a-8aaf-402692afd1e9","Type":"ContainerStarted","Data":"5465c1d479095259cd77aee263b197cc20cf776c090b179617f545f5793772e0"} Jan 27 18:47:26 crc kubenswrapper[4853]: I0127 18:47:26.635511 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-777d4874d5-68kpl" event={"ID":"abfa646e-a193-473a-8aaf-402692afd1e9","Type":"ContainerStarted","Data":"a8e9339b07eaa00dae6d5112543eb0046184e532ce308dbcaafd7835ce0f8aad"} Jan 27 18:47:26 crc kubenswrapper[4853]: I0127 18:47:26.636587 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-777d4874d5-68kpl" Jan 27 18:47:26 crc kubenswrapper[4853]: I0127 18:47:26.640411 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-777d4874d5-68kpl" Jan 27 18:47:26 crc kubenswrapper[4853]: I0127 18:47:26.657497 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-777d4874d5-68kpl" podStartSLOduration=2.6574766309999998 podStartE2EDuration="2.657476631s" podCreationTimestamp="2026-01-27 18:47:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:47:26.649478746 +0000 UTC m=+289.112021629" watchObservedRunningTime="2026-01-27 18:47:26.657476631 +0000 UTC m=+289.120019504" Jan 27 18:47:26 crc kubenswrapper[4853]: I0127 18:47:26.771056 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-857458fcf9-blgrt" Jan 27 18:47:26 crc kubenswrapper[4853]: I0127 18:47:26.986548 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-6pflg"] Jan 27 18:47:26 crc kubenswrapper[4853]: I0127 18:47:26.987116 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-6pflg" podUID="a442dc5b-e830-490b-8ad1-6a6606fea52b" containerName="registry-server" containerID="cri-o://c6628e44eabb7208ef73800dbc80762643db9112b7ef9236a2f9d25865b7af20" gracePeriod=30 Jan 27 18:47:26 crc kubenswrapper[4853]: I0127 18:47:26.990949 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-p74jj"] Jan 27 18:47:26 crc kubenswrapper[4853]: I0127 18:47:26.991255 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-p74jj" podUID="81272aef-67fa-4c09-bf30-56fdfec7dd7b" containerName="registry-server" containerID="cri-o://b1c23ac34259cf3dcd69218e17baa02d64f2b8f68bc7b19ce1cea5151d1d9f23" gracePeriod=30 Jan 27 18:47:27 crc kubenswrapper[4853]: I0127 18:47:26.999671 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-ck67k"] Jan 27 18:47:27 crc kubenswrapper[4853]: I0127 18:47:26.999918 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-ck67k" podUID="bdf3b7ad-3545-4192-941a-862154002694" containerName="registry-server" containerID="cri-o://d3a74321129291ab923303682d43d24ad99ae4b013da74c710ace8aeb5c80209" gracePeriod=30 Jan 27 18:47:27 crc kubenswrapper[4853]: I0127 18:47:27.028378 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-bxk4b"] Jan 27 18:47:27 crc kubenswrapper[4853]: I0127 18:47:27.028667 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-bxk4b" podUID="86af8168-4922-4d5d-adee-38d4d88d55ca" containerName="marketplace-operator" containerID="cri-o://3295b1a762b2d1b48fb2503245be19cad6111ea21dd6d3ca23f068622688bff8" gracePeriod=30 Jan 27 18:47:27 crc kubenswrapper[4853]: I0127 18:47:27.033905 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-857458fcf9-blgrt"] Jan 27 18:47:27 crc kubenswrapper[4853]: I0127 18:47:27.039198 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-chjrw"] Jan 27 18:47:27 crc kubenswrapper[4853]: I0127 18:47:27.039466 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-chjrw" podUID="d6cf1fd9-633e-45c8-b007-051a740ff435" containerName="registry-server" containerID="cri-o://571e9508d2976939513b3ffa1941f5e2601ba967981e9f77072abae9a5820c33" gracePeriod=30 Jan 27 18:47:27 crc kubenswrapper[4853]: I0127 18:47:27.044379 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-vwxfb"] Jan 27 18:47:27 crc kubenswrapper[4853]: I0127 18:47:27.044653 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-vwxfb" podUID="aa41f430-60c0-4d83-96bc-ac2a6aa2dde1" containerName="registry-server" 
containerID="cri-o://34dcd3e2524055eb7aaa2be37ab710bdfaf2c776d187d2aa036b644febf2be85" gracePeriod=30 Jan 27 18:47:27 crc kubenswrapper[4853]: W0127 18:47:27.051357 4853 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf942c044_c796_4b61_9546_a7e49727d38f.slice/crio-1ef4eb46908dc3ec99579d1104e1861c2556b1a9fe1fe56173f106280be1c9b8 WatchSource:0}: Error finding container 1ef4eb46908dc3ec99579d1104e1861c2556b1a9fe1fe56173f106280be1c9b8: Status 404 returned error can't find the container with id 1ef4eb46908dc3ec99579d1104e1861c2556b1a9fe1fe56173f106280be1c9b8 Jan 27 18:47:27 crc kubenswrapper[4853]: I0127 18:47:27.068785 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-vz77p"] Jan 27 18:47:27 crc kubenswrapper[4853]: I0127 18:47:27.074187 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-vz77p" podUID="8fc489cf-508e-445d-ba19-4aeea8afee8c" containerName="registry-server" containerID="cri-o://4349c7e838986001ccb370b67833b0fb8fc25ee79a84bc9cb42f509044c70b20" gracePeriod=30 Jan 27 18:47:27 crc kubenswrapper[4853]: I0127 18:47:27.088885 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-x2rhc"] Jan 27 18:47:27 crc kubenswrapper[4853]: I0127 18:47:27.089668 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-x2rhc" Jan 27 18:47:27 crc kubenswrapper[4853]: I0127 18:47:27.096904 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-x2rhc"] Jan 27 18:47:27 crc kubenswrapper[4853]: I0127 18:47:27.117877 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/beef4152-90a1-4027-8971-dd9dbdd93fb3-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-x2rhc\" (UID: \"beef4152-90a1-4027-8971-dd9dbdd93fb3\") " pod="openshift-marketplace/marketplace-operator-79b997595-x2rhc" Jan 27 18:47:27 crc kubenswrapper[4853]: I0127 18:47:27.117977 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/beef4152-90a1-4027-8971-dd9dbdd93fb3-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-x2rhc\" (UID: \"beef4152-90a1-4027-8971-dd9dbdd93fb3\") " pod="openshift-marketplace/marketplace-operator-79b997595-x2rhc" Jan 27 18:47:27 crc kubenswrapper[4853]: I0127 18:47:27.117996 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qggl4\" (UniqueName: \"kubernetes.io/projected/beef4152-90a1-4027-8971-dd9dbdd93fb3-kube-api-access-qggl4\") pod \"marketplace-operator-79b997595-x2rhc\" (UID: \"beef4152-90a1-4027-8971-dd9dbdd93fb3\") " pod="openshift-marketplace/marketplace-operator-79b997595-x2rhc" Jan 27 18:47:27 crc kubenswrapper[4853]: I0127 18:47:27.219455 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/beef4152-90a1-4027-8971-dd9dbdd93fb3-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-x2rhc\" (UID: \"beef4152-90a1-4027-8971-dd9dbdd93fb3\") " pod="openshift-marketplace/marketplace-operator-79b997595-x2rhc" 
Jan 27 18:47:27 crc kubenswrapper[4853]: I0127 18:47:27.219514 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qggl4\" (UniqueName: \"kubernetes.io/projected/beef4152-90a1-4027-8971-dd9dbdd93fb3-kube-api-access-qggl4\") pod \"marketplace-operator-79b997595-x2rhc\" (UID: \"beef4152-90a1-4027-8971-dd9dbdd93fb3\") " pod="openshift-marketplace/marketplace-operator-79b997595-x2rhc" Jan 27 18:47:27 crc kubenswrapper[4853]: I0127 18:47:27.219598 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/beef4152-90a1-4027-8971-dd9dbdd93fb3-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-x2rhc\" (UID: \"beef4152-90a1-4027-8971-dd9dbdd93fb3\") " pod="openshift-marketplace/marketplace-operator-79b997595-x2rhc" Jan 27 18:47:27 crc kubenswrapper[4853]: I0127 18:47:27.221880 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/beef4152-90a1-4027-8971-dd9dbdd93fb3-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-x2rhc\" (UID: \"beef4152-90a1-4027-8971-dd9dbdd93fb3\") " pod="openshift-marketplace/marketplace-operator-79b997595-x2rhc" Jan 27 18:47:27 crc kubenswrapper[4853]: I0127 18:47:27.229742 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/beef4152-90a1-4027-8971-dd9dbdd93fb3-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-x2rhc\" (UID: \"beef4152-90a1-4027-8971-dd9dbdd93fb3\") " pod="openshift-marketplace/marketplace-operator-79b997595-x2rhc" Jan 27 18:47:27 crc kubenswrapper[4853]: I0127 18:47:27.238720 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qggl4\" (UniqueName: \"kubernetes.io/projected/beef4152-90a1-4027-8971-dd9dbdd93fb3-kube-api-access-qggl4\") pod \"marketplace-operator-79b997595-x2rhc\" (UID: \"beef4152-90a1-4027-8971-dd9dbdd93fb3\") " pod="openshift-marketplace/marketplace-operator-79b997595-x2rhc" Jan 27 18:47:27 crc kubenswrapper[4853]: I0127 18:47:27.645780 4853 generic.go:334] "Generic (PLEG): container finished" podID="a442dc5b-e830-490b-8ad1-6a6606fea52b" containerID="c6628e44eabb7208ef73800dbc80762643db9112b7ef9236a2f9d25865b7af20" exitCode=0 Jan 27 18:47:27 crc kubenswrapper[4853]: I0127 18:47:27.645849 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6pflg" event={"ID":"a442dc5b-e830-490b-8ad1-6a6606fea52b","Type":"ContainerDied","Data":"c6628e44eabb7208ef73800dbc80762643db9112b7ef9236a2f9d25865b7af20"} Jan 27 18:47:27 crc kubenswrapper[4853]: I0127 18:47:27.646165 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6pflg" event={"ID":"a442dc5b-e830-490b-8ad1-6a6606fea52b","Type":"ContainerDied","Data":"c15c999011ede48e043ec36e6f37d0206b69f723c7c2857166bab5a232406b96"} Jan 27 18:47:27 crc kubenswrapper[4853]: I0127 18:47:27.646177 4853 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c15c999011ede48e043ec36e6f37d0206b69f723c7c2857166bab5a232406b96" Jan 27 18:47:27 crc kubenswrapper[4853]: I0127 18:47:27.648729 4853 generic.go:334] "Generic (PLEG): container finished" podID="d6cf1fd9-633e-45c8-b007-051a740ff435" containerID="571e9508d2976939513b3ffa1941f5e2601ba967981e9f77072abae9a5820c33" exitCode=0 Jan 27 18:47:27 crc 
kubenswrapper[4853]: I0127 18:47:27.648785 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-chjrw" event={"ID":"d6cf1fd9-633e-45c8-b007-051a740ff435","Type":"ContainerDied","Data":"571e9508d2976939513b3ffa1941f5e2601ba967981e9f77072abae9a5820c33"} Jan 27 18:47:27 crc kubenswrapper[4853]: I0127 18:47:27.648804 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-chjrw" event={"ID":"d6cf1fd9-633e-45c8-b007-051a740ff435","Type":"ContainerDied","Data":"5745beb1602d8b2dc1452626fbe6201ccff605c2aae47216fd5876e8cea8ad3e"} Jan 27 18:47:27 crc kubenswrapper[4853]: I0127 18:47:27.648818 4853 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5745beb1602d8b2dc1452626fbe6201ccff605c2aae47216fd5876e8cea8ad3e" Jan 27 18:47:27 crc kubenswrapper[4853]: I0127 18:47:27.650927 4853 generic.go:334] "Generic (PLEG): container finished" podID="bdf3b7ad-3545-4192-941a-862154002694" containerID="d3a74321129291ab923303682d43d24ad99ae4b013da74c710ace8aeb5c80209" exitCode=0 Jan 27 18:47:27 crc kubenswrapper[4853]: I0127 18:47:27.650981 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ck67k" event={"ID":"bdf3b7ad-3545-4192-941a-862154002694","Type":"ContainerDied","Data":"d3a74321129291ab923303682d43d24ad99ae4b013da74c710ace8aeb5c80209"} Jan 27 18:47:27 crc kubenswrapper[4853]: I0127 18:47:27.650998 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ck67k" event={"ID":"bdf3b7ad-3545-4192-941a-862154002694","Type":"ContainerDied","Data":"8ecbe4e887e908e5eb23d03cb5955712fd6c1d752d2a0dab0142762d21752172"} Jan 27 18:47:27 crc kubenswrapper[4853]: I0127 18:47:27.651008 4853 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8ecbe4e887e908e5eb23d03cb5955712fd6c1d752d2a0dab0142762d21752172" Jan 27 18:47:27 crc kubenswrapper[4853]: I0127 18:47:27.652649 4853 generic.go:334] "Generic (PLEG): container finished" podID="81272aef-67fa-4c09-bf30-56fdfec7dd7b" containerID="b1c23ac34259cf3dcd69218e17baa02d64f2b8f68bc7b19ce1cea5151d1d9f23" exitCode=0 Jan 27 18:47:27 crc kubenswrapper[4853]: I0127 18:47:27.652692 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-p74jj" event={"ID":"81272aef-67fa-4c09-bf30-56fdfec7dd7b","Type":"ContainerDied","Data":"b1c23ac34259cf3dcd69218e17baa02d64f2b8f68bc7b19ce1cea5151d1d9f23"} Jan 27 18:47:27 crc kubenswrapper[4853]: I0127 18:47:27.652716 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-p74jj" event={"ID":"81272aef-67fa-4c09-bf30-56fdfec7dd7b","Type":"ContainerDied","Data":"433803f6544dd3c6fddfdb38079d3dae8f39d1bfdaba06144ef98b89b6c10cc6"} Jan 27 18:47:27 crc kubenswrapper[4853]: I0127 18:47:27.652726 4853 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="433803f6544dd3c6fddfdb38079d3dae8f39d1bfdaba06144ef98b89b6c10cc6" Jan 27 18:47:27 crc kubenswrapper[4853]: I0127 18:47:27.653682 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-857458fcf9-blgrt" event={"ID":"f942c044-c796-4b61-9546-a7e49727d38f","Type":"ContainerStarted","Data":"1ef4eb46908dc3ec99579d1104e1861c2556b1a9fe1fe56173f106280be1c9b8"} Jan 27 18:47:27 crc kubenswrapper[4853]: I0127 18:47:27.656004 4853 generic.go:334] "Generic (PLEG): container finished" 
podID="aa41f430-60c0-4d83-96bc-ac2a6aa2dde1" containerID="34dcd3e2524055eb7aaa2be37ab710bdfaf2c776d187d2aa036b644febf2be85" exitCode=0 Jan 27 18:47:27 crc kubenswrapper[4853]: I0127 18:47:27.656057 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vwxfb" event={"ID":"aa41f430-60c0-4d83-96bc-ac2a6aa2dde1","Type":"ContainerDied","Data":"34dcd3e2524055eb7aaa2be37ab710bdfaf2c776d187d2aa036b644febf2be85"} Jan 27 18:47:27 crc kubenswrapper[4853]: I0127 18:47:27.656077 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vwxfb" event={"ID":"aa41f430-60c0-4d83-96bc-ac2a6aa2dde1","Type":"ContainerDied","Data":"45516a61e3397a28ca9d6b7c6d6f6e6f1e033e92f25e63c922b3c800e620cafc"} Jan 27 18:47:27 crc kubenswrapper[4853]: I0127 18:47:27.656087 4853 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="45516a61e3397a28ca9d6b7c6d6f6e6f1e033e92f25e63c922b3c800e620cafc" Jan 27 18:47:27 crc kubenswrapper[4853]: I0127 18:47:27.659986 4853 generic.go:334] "Generic (PLEG): container finished" podID="86af8168-4922-4d5d-adee-38d4d88d55ca" containerID="3295b1a762b2d1b48fb2503245be19cad6111ea21dd6d3ca23f068622688bff8" exitCode=0 Jan 27 18:47:27 crc kubenswrapper[4853]: I0127 18:47:27.660068 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-bxk4b" event={"ID":"86af8168-4922-4d5d-adee-38d4d88d55ca","Type":"ContainerDied","Data":"3295b1a762b2d1b48fb2503245be19cad6111ea21dd6d3ca23f068622688bff8"} Jan 27 18:47:27 crc kubenswrapper[4853]: I0127 18:47:27.660098 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-bxk4b" event={"ID":"86af8168-4922-4d5d-adee-38d4d88d55ca","Type":"ContainerDied","Data":"98cb5a80a2f9c010b9faaffc231bb8e25683f3f8342523a860b7c4ec5520fc9a"} Jan 27 18:47:27 crc kubenswrapper[4853]: I0127 18:47:27.660112 4853 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="98cb5a80a2f9c010b9faaffc231bb8e25683f3f8342523a860b7c4ec5520fc9a" Jan 27 18:47:27 crc kubenswrapper[4853]: I0127 18:47:27.662581 4853 generic.go:334] "Generic (PLEG): container finished" podID="8fc489cf-508e-445d-ba19-4aeea8afee8c" containerID="4349c7e838986001ccb370b67833b0fb8fc25ee79a84bc9cb42f509044c70b20" exitCode=0 Jan 27 18:47:27 crc kubenswrapper[4853]: I0127 18:47:27.663228 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vz77p" event={"ID":"8fc489cf-508e-445d-ba19-4aeea8afee8c","Type":"ContainerDied","Data":"4349c7e838986001ccb370b67833b0fb8fc25ee79a84bc9cb42f509044c70b20"} Jan 27 18:47:27 crc kubenswrapper[4853]: I0127 18:47:27.663294 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vz77p" event={"ID":"8fc489cf-508e-445d-ba19-4aeea8afee8c","Type":"ContainerDied","Data":"529863dfe0ab72840a1f449dbd5a7b42da75cb124e984c88641a5ac80bf13024"} Jan 27 18:47:27 crc kubenswrapper[4853]: I0127 18:47:27.663324 4853 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="529863dfe0ab72840a1f449dbd5a7b42da75cb124e984c88641a5ac80bf13024" Jan 27 18:47:27 crc kubenswrapper[4853]: I0127 18:47:27.663771 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-x2rhc" Jan 27 18:47:27 crc kubenswrapper[4853]: I0127 18:47:27.665455 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-bxk4b" Jan 27 18:47:27 crc kubenswrapper[4853]: I0127 18:47:27.711406 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-vz77p" Jan 27 18:47:27 crc kubenswrapper[4853]: I0127 18:47:27.716711 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-6pflg" Jan 27 18:47:27 crc kubenswrapper[4853]: I0127 18:47:27.723866 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-p74jj" Jan 27 18:47:27 crc kubenswrapper[4853]: I0127 18:47:27.726234 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/86af8168-4922-4d5d-adee-38d4d88d55ca-marketplace-trusted-ca\") pod \"86af8168-4922-4d5d-adee-38d4d88d55ca\" (UID: \"86af8168-4922-4d5d-adee-38d4d88d55ca\") " Jan 27 18:47:27 crc kubenswrapper[4853]: I0127 18:47:27.726301 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x5jlv\" (UniqueName: \"kubernetes.io/projected/86af8168-4922-4d5d-adee-38d4d88d55ca-kube-api-access-x5jlv\") pod \"86af8168-4922-4d5d-adee-38d4d88d55ca\" (UID: \"86af8168-4922-4d5d-adee-38d4d88d55ca\") " Jan 27 18:47:27 crc kubenswrapper[4853]: I0127 18:47:27.726432 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/86af8168-4922-4d5d-adee-38d4d88d55ca-marketplace-operator-metrics\") pod \"86af8168-4922-4d5d-adee-38d4d88d55ca\" (UID: \"86af8168-4922-4d5d-adee-38d4d88d55ca\") " Jan 27 18:47:27 crc kubenswrapper[4853]: I0127 18:47:27.727344 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/86af8168-4922-4d5d-adee-38d4d88d55ca-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "86af8168-4922-4d5d-adee-38d4d88d55ca" (UID: "86af8168-4922-4d5d-adee-38d4d88d55ca"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:47:27 crc kubenswrapper[4853]: I0127 18:47:27.729493 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vwxfb" Jan 27 18:47:27 crc kubenswrapper[4853]: I0127 18:47:27.738472 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/86af8168-4922-4d5d-adee-38d4d88d55ca-kube-api-access-x5jlv" (OuterVolumeSpecName: "kube-api-access-x5jlv") pod "86af8168-4922-4d5d-adee-38d4d88d55ca" (UID: "86af8168-4922-4d5d-adee-38d4d88d55ca"). InnerVolumeSpecName "kube-api-access-x5jlv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:47:27 crc kubenswrapper[4853]: I0127 18:47:27.738488 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/86af8168-4922-4d5d-adee-38d4d88d55ca-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "86af8168-4922-4d5d-adee-38d4d88d55ca" (UID: "86af8168-4922-4d5d-adee-38d4d88d55ca"). 
InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:47:27 crc kubenswrapper[4853]: I0127 18:47:27.743799 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-ck67k" Jan 27 18:47:27 crc kubenswrapper[4853]: I0127 18:47:27.765358 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-chjrw" Jan 27 18:47:27 crc kubenswrapper[4853]: I0127 18:47:27.834833 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8fc489cf-508e-445d-ba19-4aeea8afee8c-catalog-content\") pod \"8fc489cf-508e-445d-ba19-4aeea8afee8c\" (UID: \"8fc489cf-508e-445d-ba19-4aeea8afee8c\") " Jan 27 18:47:27 crc kubenswrapper[4853]: I0127 18:47:27.835215 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6r8w\" (UniqueName: \"kubernetes.io/projected/aa41f430-60c0-4d83-96bc-ac2a6aa2dde1-kube-api-access-d6r8w\") pod \"aa41f430-60c0-4d83-96bc-ac2a6aa2dde1\" (UID: \"aa41f430-60c0-4d83-96bc-ac2a6aa2dde1\") " Jan 27 18:47:27 crc kubenswrapper[4853]: I0127 18:47:27.835267 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aa41f430-60c0-4d83-96bc-ac2a6aa2dde1-catalog-content\") pod \"aa41f430-60c0-4d83-96bc-ac2a6aa2dde1\" (UID: \"aa41f430-60c0-4d83-96bc-ac2a6aa2dde1\") " Jan 27 18:47:27 crc kubenswrapper[4853]: I0127 18:47:27.835304 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bpgmb\" (UniqueName: \"kubernetes.io/projected/bdf3b7ad-3545-4192-941a-862154002694-kube-api-access-bpgmb\") pod \"bdf3b7ad-3545-4192-941a-862154002694\" (UID: \"bdf3b7ad-3545-4192-941a-862154002694\") " Jan 27 18:47:27 crc kubenswrapper[4853]: I0127 18:47:27.835333 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d6cf1fd9-633e-45c8-b007-051a740ff435-catalog-content\") pod \"d6cf1fd9-633e-45c8-b007-051a740ff435\" (UID: \"d6cf1fd9-633e-45c8-b007-051a740ff435\") " Jan 27 18:47:27 crc kubenswrapper[4853]: I0127 18:47:27.835359 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aa41f430-60c0-4d83-96bc-ac2a6aa2dde1-utilities\") pod \"aa41f430-60c0-4d83-96bc-ac2a6aa2dde1\" (UID: \"aa41f430-60c0-4d83-96bc-ac2a6aa2dde1\") " Jan 27 18:47:27 crc kubenswrapper[4853]: I0127 18:47:27.835393 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a442dc5b-e830-490b-8ad1-6a6606fea52b-utilities\") pod \"a442dc5b-e830-490b-8ad1-6a6606fea52b\" (UID: \"a442dc5b-e830-490b-8ad1-6a6606fea52b\") " Jan 27 18:47:27 crc kubenswrapper[4853]: I0127 18:47:27.835437 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bdf3b7ad-3545-4192-941a-862154002694-catalog-content\") pod \"bdf3b7ad-3545-4192-941a-862154002694\" (UID: \"bdf3b7ad-3545-4192-941a-862154002694\") " Jan 27 18:47:27 crc kubenswrapper[4853]: I0127 18:47:27.835495 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/a442dc5b-e830-490b-8ad1-6a6606fea52b-catalog-content\") pod \"a442dc5b-e830-490b-8ad1-6a6606fea52b\" (UID: \"a442dc5b-e830-490b-8ad1-6a6606fea52b\") " Jan 27 18:47:27 crc kubenswrapper[4853]: I0127 18:47:27.835528 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bdf3b7ad-3545-4192-941a-862154002694-utilities\") pod \"bdf3b7ad-3545-4192-941a-862154002694\" (UID: \"bdf3b7ad-3545-4192-941a-862154002694\") " Jan 27 18:47:27 crc kubenswrapper[4853]: I0127 18:47:27.835653 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/81272aef-67fa-4c09-bf30-56fdfec7dd7b-utilities\") pod \"81272aef-67fa-4c09-bf30-56fdfec7dd7b\" (UID: \"81272aef-67fa-4c09-bf30-56fdfec7dd7b\") " Jan 27 18:47:27 crc kubenswrapper[4853]: I0127 18:47:27.835758 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/81272aef-67fa-4c09-bf30-56fdfec7dd7b-catalog-content\") pod \"81272aef-67fa-4c09-bf30-56fdfec7dd7b\" (UID: \"81272aef-67fa-4c09-bf30-56fdfec7dd7b\") " Jan 27 18:47:27 crc kubenswrapper[4853]: I0127 18:47:27.835790 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8fc489cf-508e-445d-ba19-4aeea8afee8c-utilities\") pod \"8fc489cf-508e-445d-ba19-4aeea8afee8c\" (UID: \"8fc489cf-508e-445d-ba19-4aeea8afee8c\") " Jan 27 18:47:27 crc kubenswrapper[4853]: I0127 18:47:27.835991 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d6cf1fd9-633e-45c8-b007-051a740ff435-utilities\") pod \"d6cf1fd9-633e-45c8-b007-051a740ff435\" (UID: \"d6cf1fd9-633e-45c8-b007-051a740ff435\") " Jan 27 18:47:27 crc kubenswrapper[4853]: I0127 18:47:27.836216 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2fwnw\" (UniqueName: \"kubernetes.io/projected/81272aef-67fa-4c09-bf30-56fdfec7dd7b-kube-api-access-2fwnw\") pod \"81272aef-67fa-4c09-bf30-56fdfec7dd7b\" (UID: \"81272aef-67fa-4c09-bf30-56fdfec7dd7b\") " Jan 27 18:47:27 crc kubenswrapper[4853]: I0127 18:47:27.836252 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-95z5d\" (UniqueName: \"kubernetes.io/projected/a442dc5b-e830-490b-8ad1-6a6606fea52b-kube-api-access-95z5d\") pod \"a442dc5b-e830-490b-8ad1-6a6606fea52b\" (UID: \"a442dc5b-e830-490b-8ad1-6a6606fea52b\") " Jan 27 18:47:27 crc kubenswrapper[4853]: I0127 18:47:27.836472 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lfx28\" (UniqueName: \"kubernetes.io/projected/d6cf1fd9-633e-45c8-b007-051a740ff435-kube-api-access-lfx28\") pod \"d6cf1fd9-633e-45c8-b007-051a740ff435\" (UID: \"d6cf1fd9-633e-45c8-b007-051a740ff435\") " Jan 27 18:47:27 crc kubenswrapper[4853]: I0127 18:47:27.836579 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h9694\" (UniqueName: \"kubernetes.io/projected/8fc489cf-508e-445d-ba19-4aeea8afee8c-kube-api-access-h9694\") pod \"8fc489cf-508e-445d-ba19-4aeea8afee8c\" (UID: \"8fc489cf-508e-445d-ba19-4aeea8afee8c\") " Jan 27 18:47:27 crc kubenswrapper[4853]: I0127 18:47:27.837334 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/a442dc5b-e830-490b-8ad1-6a6606fea52b-utilities" (OuterVolumeSpecName: "utilities") pod "a442dc5b-e830-490b-8ad1-6a6606fea52b" (UID: "a442dc5b-e830-490b-8ad1-6a6606fea52b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 18:47:27 crc kubenswrapper[4853]: I0127 18:47:27.840636 4853 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a442dc5b-e830-490b-8ad1-6a6606fea52b-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 18:47:27 crc kubenswrapper[4853]: I0127 18:47:27.840670 4853 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/86af8168-4922-4d5d-adee-38d4d88d55ca-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Jan 27 18:47:27 crc kubenswrapper[4853]: I0127 18:47:27.840682 4853 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/86af8168-4922-4d5d-adee-38d4d88d55ca-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 27 18:47:27 crc kubenswrapper[4853]: I0127 18:47:27.840692 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x5jlv\" (UniqueName: \"kubernetes.io/projected/86af8168-4922-4d5d-adee-38d4d88d55ca-kube-api-access-x5jlv\") on node \"crc\" DevicePath \"\"" Jan 27 18:47:27 crc kubenswrapper[4853]: I0127 18:47:27.841325 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aa41f430-60c0-4d83-96bc-ac2a6aa2dde1-kube-api-access-d6r8w" (OuterVolumeSpecName: "kube-api-access-d6r8w") pod "aa41f430-60c0-4d83-96bc-ac2a6aa2dde1" (UID: "aa41f430-60c0-4d83-96bc-ac2a6aa2dde1"). InnerVolumeSpecName "kube-api-access-d6r8w". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:47:27 crc kubenswrapper[4853]: I0127 18:47:27.841965 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bdf3b7ad-3545-4192-941a-862154002694-kube-api-access-bpgmb" (OuterVolumeSpecName: "kube-api-access-bpgmb") pod "bdf3b7ad-3545-4192-941a-862154002694" (UID: "bdf3b7ad-3545-4192-941a-862154002694"). InnerVolumeSpecName "kube-api-access-bpgmb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:47:27 crc kubenswrapper[4853]: I0127 18:47:27.845487 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/81272aef-67fa-4c09-bf30-56fdfec7dd7b-kube-api-access-2fwnw" (OuterVolumeSpecName: "kube-api-access-2fwnw") pod "81272aef-67fa-4c09-bf30-56fdfec7dd7b" (UID: "81272aef-67fa-4c09-bf30-56fdfec7dd7b"). InnerVolumeSpecName "kube-api-access-2fwnw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:47:27 crc kubenswrapper[4853]: I0127 18:47:27.846048 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a442dc5b-e830-490b-8ad1-6a6606fea52b-kube-api-access-95z5d" (OuterVolumeSpecName: "kube-api-access-95z5d") pod "a442dc5b-e830-490b-8ad1-6a6606fea52b" (UID: "a442dc5b-e830-490b-8ad1-6a6606fea52b"). InnerVolumeSpecName "kube-api-access-95z5d". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:47:27 crc kubenswrapper[4853]: I0127 18:47:27.846355 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8fc489cf-508e-445d-ba19-4aeea8afee8c-utilities" (OuterVolumeSpecName: "utilities") pod "8fc489cf-508e-445d-ba19-4aeea8afee8c" (UID: "8fc489cf-508e-445d-ba19-4aeea8afee8c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 18:47:27 crc kubenswrapper[4853]: I0127 18:47:27.846705 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d6cf1fd9-633e-45c8-b007-051a740ff435-utilities" (OuterVolumeSpecName: "utilities") pod "d6cf1fd9-633e-45c8-b007-051a740ff435" (UID: "d6cf1fd9-633e-45c8-b007-051a740ff435"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 18:47:27 crc kubenswrapper[4853]: I0127 18:47:27.846748 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/aa41f430-60c0-4d83-96bc-ac2a6aa2dde1-utilities" (OuterVolumeSpecName: "utilities") pod "aa41f430-60c0-4d83-96bc-ac2a6aa2dde1" (UID: "aa41f430-60c0-4d83-96bc-ac2a6aa2dde1"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 18:47:27 crc kubenswrapper[4853]: I0127 18:47:27.847364 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bdf3b7ad-3545-4192-941a-862154002694-utilities" (OuterVolumeSpecName: "utilities") pod "bdf3b7ad-3545-4192-941a-862154002694" (UID: "bdf3b7ad-3545-4192-941a-862154002694"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 18:47:27 crc kubenswrapper[4853]: I0127 18:47:27.848326 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/81272aef-67fa-4c09-bf30-56fdfec7dd7b-utilities" (OuterVolumeSpecName: "utilities") pod "81272aef-67fa-4c09-bf30-56fdfec7dd7b" (UID: "81272aef-67fa-4c09-bf30-56fdfec7dd7b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 18:47:27 crc kubenswrapper[4853]: I0127 18:47:27.853924 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8fc489cf-508e-445d-ba19-4aeea8afee8c-kube-api-access-h9694" (OuterVolumeSpecName: "kube-api-access-h9694") pod "8fc489cf-508e-445d-ba19-4aeea8afee8c" (UID: "8fc489cf-508e-445d-ba19-4aeea8afee8c"). InnerVolumeSpecName "kube-api-access-h9694". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:47:27 crc kubenswrapper[4853]: I0127 18:47:27.882633 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d6cf1fd9-633e-45c8-b007-051a740ff435-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d6cf1fd9-633e-45c8-b007-051a740ff435" (UID: "d6cf1fd9-633e-45c8-b007-051a740ff435"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 18:47:27 crc kubenswrapper[4853]: I0127 18:47:27.883946 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d6cf1fd9-633e-45c8-b007-051a740ff435-kube-api-access-lfx28" (OuterVolumeSpecName: "kube-api-access-lfx28") pod "d6cf1fd9-633e-45c8-b007-051a740ff435" (UID: "d6cf1fd9-633e-45c8-b007-051a740ff435"). InnerVolumeSpecName "kube-api-access-lfx28". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:47:27 crc kubenswrapper[4853]: I0127 18:47:27.909081 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/aa41f430-60c0-4d83-96bc-ac2a6aa2dde1-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "aa41f430-60c0-4d83-96bc-ac2a6aa2dde1" (UID: "aa41f430-60c0-4d83-96bc-ac2a6aa2dde1"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 18:47:27 crc kubenswrapper[4853]: I0127 18:47:27.927194 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a442dc5b-e830-490b-8ad1-6a6606fea52b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a442dc5b-e830-490b-8ad1-6a6606fea52b" (UID: "a442dc5b-e830-490b-8ad1-6a6606fea52b"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 18:47:27 crc kubenswrapper[4853]: I0127 18:47:27.935543 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/81272aef-67fa-4c09-bf30-56fdfec7dd7b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "81272aef-67fa-4c09-bf30-56fdfec7dd7b" (UID: "81272aef-67fa-4c09-bf30-56fdfec7dd7b"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 18:47:27 crc kubenswrapper[4853]: I0127 18:47:27.940950 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-x2rhc"] Jan 27 18:47:27 crc kubenswrapper[4853]: I0127 18:47:27.941951 4853 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/81272aef-67fa-4c09-bf30-56fdfec7dd7b-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 18:47:27 crc kubenswrapper[4853]: I0127 18:47:27.941994 4853 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/81272aef-67fa-4c09-bf30-56fdfec7dd7b-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 18:47:27 crc kubenswrapper[4853]: I0127 18:47:27.942009 4853 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8fc489cf-508e-445d-ba19-4aeea8afee8c-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 18:47:27 crc kubenswrapper[4853]: I0127 18:47:27.942022 4853 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d6cf1fd9-633e-45c8-b007-051a740ff435-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 18:47:27 crc kubenswrapper[4853]: I0127 18:47:27.942057 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2fwnw\" (UniqueName: \"kubernetes.io/projected/81272aef-67fa-4c09-bf30-56fdfec7dd7b-kube-api-access-2fwnw\") on node \"crc\" DevicePath \"\"" Jan 27 18:47:27 crc kubenswrapper[4853]: I0127 18:47:27.942103 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-95z5d\" (UniqueName: \"kubernetes.io/projected/a442dc5b-e830-490b-8ad1-6a6606fea52b-kube-api-access-95z5d\") on node \"crc\" DevicePath \"\"" Jan 27 18:47:27 crc kubenswrapper[4853]: I0127 18:47:27.942131 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lfx28\" (UniqueName: \"kubernetes.io/projected/d6cf1fd9-633e-45c8-b007-051a740ff435-kube-api-access-lfx28\") on node \"crc\" DevicePath \"\"" Jan 27 18:47:27 crc kubenswrapper[4853]: I0127 18:47:27.942144 4853 
reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h9694\" (UniqueName: \"kubernetes.io/projected/8fc489cf-508e-445d-ba19-4aeea8afee8c-kube-api-access-h9694\") on node \"crc\" DevicePath \"\"" Jan 27 18:47:27 crc kubenswrapper[4853]: I0127 18:47:27.942156 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6r8w\" (UniqueName: \"kubernetes.io/projected/aa41f430-60c0-4d83-96bc-ac2a6aa2dde1-kube-api-access-d6r8w\") on node \"crc\" DevicePath \"\"" Jan 27 18:47:27 crc kubenswrapper[4853]: I0127 18:47:27.942167 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bpgmb\" (UniqueName: \"kubernetes.io/projected/bdf3b7ad-3545-4192-941a-862154002694-kube-api-access-bpgmb\") on node \"crc\" DevicePath \"\"" Jan 27 18:47:27 crc kubenswrapper[4853]: I0127 18:47:27.942178 4853 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d6cf1fd9-633e-45c8-b007-051a740ff435-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 18:47:27 crc kubenswrapper[4853]: I0127 18:47:27.942189 4853 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aa41f430-60c0-4d83-96bc-ac2a6aa2dde1-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 18:47:27 crc kubenswrapper[4853]: I0127 18:47:27.942199 4853 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aa41f430-60c0-4d83-96bc-ac2a6aa2dde1-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 18:47:27 crc kubenswrapper[4853]: I0127 18:47:27.942210 4853 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a442dc5b-e830-490b-8ad1-6a6606fea52b-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 18:47:27 crc kubenswrapper[4853]: I0127 18:47:27.942222 4853 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bdf3b7ad-3545-4192-941a-862154002694-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 18:47:27 crc kubenswrapper[4853]: I0127 18:47:27.950660 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bdf3b7ad-3545-4192-941a-862154002694-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "bdf3b7ad-3545-4192-941a-862154002694" (UID: "bdf3b7ad-3545-4192-941a-862154002694"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 18:47:28 crc kubenswrapper[4853]: I0127 18:47:28.014290 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8fc489cf-508e-445d-ba19-4aeea8afee8c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8fc489cf-508e-445d-ba19-4aeea8afee8c" (UID: "8fc489cf-508e-445d-ba19-4aeea8afee8c"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 18:47:28 crc kubenswrapper[4853]: I0127 18:47:28.043672 4853 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8fc489cf-508e-445d-ba19-4aeea8afee8c-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 18:47:28 crc kubenswrapper[4853]: I0127 18:47:28.043720 4853 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bdf3b7ad-3545-4192-941a-862154002694-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 18:47:28 crc kubenswrapper[4853]: I0127 18:47:28.669524 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-857458fcf9-blgrt" event={"ID":"f942c044-c796-4b61-9546-a7e49727d38f","Type":"ContainerStarted","Data":"26312225e109ba1c9ab344e157881ebc17437a5da2050e92008685ca4bed94c3"} Jan 27 18:47:28 crc kubenswrapper[4853]: I0127 18:47:28.669937 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-857458fcf9-blgrt" Jan 27 18:47:28 crc kubenswrapper[4853]: I0127 18:47:28.672495 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-x2rhc" event={"ID":"beef4152-90a1-4027-8971-dd9dbdd93fb3","Type":"ContainerStarted","Data":"460f382b23f193d94a0f10ba057e6c56ef78cac8a008f30fc2661056c3fc2550"} Jan 27 18:47:28 crc kubenswrapper[4853]: I0127 18:47:28.672531 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-x2rhc" event={"ID":"beef4152-90a1-4027-8971-dd9dbdd93fb3","Type":"ContainerStarted","Data":"8593105083f1f400f03dde71d25b2f3f16da4d4cde0ba43e14e501f319330cfb"} Jan 27 18:47:28 crc kubenswrapper[4853]: I0127 18:47:28.672691 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-bxk4b" Jan 27 18:47:28 crc kubenswrapper[4853]: I0127 18:47:28.672761 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-chjrw" Jan 27 18:47:28 crc kubenswrapper[4853]: I0127 18:47:28.672748 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vwxfb" Jan 27 18:47:28 crc kubenswrapper[4853]: I0127 18:47:28.672811 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-ck67k" Jan 27 18:47:28 crc kubenswrapper[4853]: I0127 18:47:28.672844 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-p74jj" Jan 27 18:47:28 crc kubenswrapper[4853]: I0127 18:47:28.672903 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-6pflg" Jan 27 18:47:28 crc kubenswrapper[4853]: I0127 18:47:28.672933 4853 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-vz77p" Jan 27 18:47:28 crc kubenswrapper[4853]: I0127 18:47:28.678171 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-857458fcf9-blgrt" Jan 27 18:47:28 crc kubenswrapper[4853]: I0127 18:47:28.693248 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-857458fcf9-blgrt" podStartSLOduration=4.693227018 podStartE2EDuration="4.693227018s" podCreationTimestamp="2026-01-27 18:47:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:47:28.6883367 +0000 UTC m=+291.150879583" watchObservedRunningTime="2026-01-27 18:47:28.693227018 +0000 UTC m=+291.155769901" Jan 27 18:47:28 crc kubenswrapper[4853]: I0127 18:47:28.759652 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-x2rhc" podStartSLOduration=1.7596326740000001 podStartE2EDuration="1.759632674s" podCreationTimestamp="2026-01-27 18:47:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:47:28.747835232 +0000 UTC m=+291.210378125" watchObservedRunningTime="2026-01-27 18:47:28.759632674 +0000 UTC m=+291.222175577" Jan 27 18:47:28 crc kubenswrapper[4853]: I0127 18:47:28.765409 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-bxk4b"] Jan 27 18:47:28 crc kubenswrapper[4853]: I0127 18:47:28.769564 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-bxk4b"] Jan 27 18:47:28 crc kubenswrapper[4853]: I0127 18:47:28.774035 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-6pflg"] Jan 27 18:47:28 crc kubenswrapper[4853]: I0127 18:47:28.781977 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-6pflg"] Jan 27 18:47:28 crc kubenswrapper[4853]: I0127 18:47:28.791033 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-vz77p"] Jan 27 18:47:28 crc kubenswrapper[4853]: I0127 18:47:28.797774 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-vz77p"] Jan 27 18:47:28 crc kubenswrapper[4853]: I0127 18:47:28.801357 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-chjrw"] Jan 27 18:47:28 crc kubenswrapper[4853]: I0127 18:47:28.804451 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-chjrw"] Jan 27 18:47:28 crc kubenswrapper[4853]: I0127 18:47:28.810253 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-p74jj"] Jan 27 18:47:28 crc kubenswrapper[4853]: I0127 18:47:28.812945 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-p74jj"] Jan 27 18:47:28 crc kubenswrapper[4853]: I0127 18:47:28.818785 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-ck67k"] Jan 27 18:47:28 crc kubenswrapper[4853]: I0127 18:47:28.822375 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openshift-marketplace/community-operators-ck67k"] Jan 27 18:47:28 crc kubenswrapper[4853]: I0127 18:47:28.828740 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-vwxfb"] Jan 27 18:47:28 crc kubenswrapper[4853]: I0127 18:47:28.832352 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-vwxfb"] Jan 27 18:47:29 crc kubenswrapper[4853]: I0127 18:47:29.678105 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-x2rhc" Jan 27 18:47:29 crc kubenswrapper[4853]: I0127 18:47:29.680574 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-x2rhc" Jan 27 18:47:30 crc kubenswrapper[4853]: I0127 18:47:30.118765 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="81272aef-67fa-4c09-bf30-56fdfec7dd7b" path="/var/lib/kubelet/pods/81272aef-67fa-4c09-bf30-56fdfec7dd7b/volumes" Jan 27 18:47:30 crc kubenswrapper[4853]: I0127 18:47:30.119561 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="86af8168-4922-4d5d-adee-38d4d88d55ca" path="/var/lib/kubelet/pods/86af8168-4922-4d5d-adee-38d4d88d55ca/volumes" Jan 27 18:47:30 crc kubenswrapper[4853]: I0127 18:47:30.120374 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8fc489cf-508e-445d-ba19-4aeea8afee8c" path="/var/lib/kubelet/pods/8fc489cf-508e-445d-ba19-4aeea8afee8c/volumes" Jan 27 18:47:30 crc kubenswrapper[4853]: I0127 18:47:30.121594 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a442dc5b-e830-490b-8ad1-6a6606fea52b" path="/var/lib/kubelet/pods/a442dc5b-e830-490b-8ad1-6a6606fea52b/volumes" Jan 27 18:47:30 crc kubenswrapper[4853]: I0127 18:47:30.122260 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aa41f430-60c0-4d83-96bc-ac2a6aa2dde1" path="/var/lib/kubelet/pods/aa41f430-60c0-4d83-96bc-ac2a6aa2dde1/volumes" Jan 27 18:47:30 crc kubenswrapper[4853]: I0127 18:47:30.123565 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bdf3b7ad-3545-4192-941a-862154002694" path="/var/lib/kubelet/pods/bdf3b7ad-3545-4192-941a-862154002694/volumes" Jan 27 18:47:30 crc kubenswrapper[4853]: I0127 18:47:30.124285 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d6cf1fd9-633e-45c8-b007-051a740ff435" path="/var/lib/kubelet/pods/d6cf1fd9-633e-45c8-b007-051a740ff435/volumes" Jan 27 18:47:37 crc kubenswrapper[4853]: I0127 18:47:37.941078 4853 cert_rotation.go:91] certificate rotation detected, shutting down client connections to start using new credentials Jan 27 18:47:44 crc kubenswrapper[4853]: I0127 18:47:44.417733 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-857458fcf9-blgrt"] Jan 27 18:47:44 crc kubenswrapper[4853]: I0127 18:47:44.418929 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-857458fcf9-blgrt" podUID="f942c044-c796-4b61-9546-a7e49727d38f" containerName="route-controller-manager" containerID="cri-o://26312225e109ba1c9ab344e157881ebc17437a5da2050e92008685ca4bed94c3" gracePeriod=30 Jan 27 18:47:44 crc kubenswrapper[4853]: I0127 18:47:44.424753 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-777d4874d5-68kpl"] Jan 27 18:47:44 
crc kubenswrapper[4853]: I0127 18:47:44.425252 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-777d4874d5-68kpl" podUID="abfa646e-a193-473a-8aaf-402692afd1e9" containerName="controller-manager" containerID="cri-o://5465c1d479095259cd77aee263b197cc20cf776c090b179617f545f5793772e0" gracePeriod=30 Jan 27 18:47:44 crc kubenswrapper[4853]: I0127 18:47:44.770651 4853 generic.go:334] "Generic (PLEG): container finished" podID="abfa646e-a193-473a-8aaf-402692afd1e9" containerID="5465c1d479095259cd77aee263b197cc20cf776c090b179617f545f5793772e0" exitCode=0 Jan 27 18:47:44 crc kubenswrapper[4853]: I0127 18:47:44.770772 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-777d4874d5-68kpl" event={"ID":"abfa646e-a193-473a-8aaf-402692afd1e9","Type":"ContainerDied","Data":"5465c1d479095259cd77aee263b197cc20cf776c090b179617f545f5793772e0"} Jan 27 18:47:44 crc kubenswrapper[4853]: I0127 18:47:44.773689 4853 generic.go:334] "Generic (PLEG): container finished" podID="f942c044-c796-4b61-9546-a7e49727d38f" containerID="26312225e109ba1c9ab344e157881ebc17437a5da2050e92008685ca4bed94c3" exitCode=0 Jan 27 18:47:44 crc kubenswrapper[4853]: I0127 18:47:44.773785 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-857458fcf9-blgrt" event={"ID":"f942c044-c796-4b61-9546-a7e49727d38f","Type":"ContainerDied","Data":"26312225e109ba1c9ab344e157881ebc17437a5da2050e92008685ca4bed94c3"} Jan 27 18:47:44 crc kubenswrapper[4853]: I0127 18:47:44.990568 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-857458fcf9-blgrt" Jan 27 18:47:45 crc kubenswrapper[4853]: I0127 18:47:45.081055 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f942c044-c796-4b61-9546-a7e49727d38f-serving-cert\") pod \"f942c044-c796-4b61-9546-a7e49727d38f\" (UID: \"f942c044-c796-4b61-9546-a7e49727d38f\") " Jan 27 18:47:45 crc kubenswrapper[4853]: I0127 18:47:45.081228 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f942c044-c796-4b61-9546-a7e49727d38f-client-ca\") pod \"f942c044-c796-4b61-9546-a7e49727d38f\" (UID: \"f942c044-c796-4b61-9546-a7e49727d38f\") " Jan 27 18:47:45 crc kubenswrapper[4853]: I0127 18:47:45.081309 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f942c044-c796-4b61-9546-a7e49727d38f-config\") pod \"f942c044-c796-4b61-9546-a7e49727d38f\" (UID: \"f942c044-c796-4b61-9546-a7e49727d38f\") " Jan 27 18:47:45 crc kubenswrapper[4853]: I0127 18:47:45.081405 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zzzj2\" (UniqueName: \"kubernetes.io/projected/f942c044-c796-4b61-9546-a7e49727d38f-kube-api-access-zzzj2\") pod \"f942c044-c796-4b61-9546-a7e49727d38f\" (UID: \"f942c044-c796-4b61-9546-a7e49727d38f\") " Jan 27 18:47:45 crc kubenswrapper[4853]: I0127 18:47:45.082053 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f942c044-c796-4b61-9546-a7e49727d38f-config" (OuterVolumeSpecName: "config") pod "f942c044-c796-4b61-9546-a7e49727d38f" (UID: "f942c044-c796-4b61-9546-a7e49727d38f"). 
InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:47:45 crc kubenswrapper[4853]: I0127 18:47:45.082172 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f942c044-c796-4b61-9546-a7e49727d38f-client-ca" (OuterVolumeSpecName: "client-ca") pod "f942c044-c796-4b61-9546-a7e49727d38f" (UID: "f942c044-c796-4b61-9546-a7e49727d38f"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:47:45 crc kubenswrapper[4853]: I0127 18:47:45.082964 4853 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f942c044-c796-4b61-9546-a7e49727d38f-config\") on node \"crc\" DevicePath \"\"" Jan 27 18:47:45 crc kubenswrapper[4853]: I0127 18:47:45.082995 4853 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f942c044-c796-4b61-9546-a7e49727d38f-client-ca\") on node \"crc\" DevicePath \"\"" Jan 27 18:47:45 crc kubenswrapper[4853]: I0127 18:47:45.087048 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f942c044-c796-4b61-9546-a7e49727d38f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "f942c044-c796-4b61-9546-a7e49727d38f" (UID: "f942c044-c796-4b61-9546-a7e49727d38f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:47:45 crc kubenswrapper[4853]: I0127 18:47:45.087399 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f942c044-c796-4b61-9546-a7e49727d38f-kube-api-access-zzzj2" (OuterVolumeSpecName: "kube-api-access-zzzj2") pod "f942c044-c796-4b61-9546-a7e49727d38f" (UID: "f942c044-c796-4b61-9546-a7e49727d38f"). InnerVolumeSpecName "kube-api-access-zzzj2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:47:45 crc kubenswrapper[4853]: I0127 18:47:45.125471 4853 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-777d4874d5-68kpl" Jan 27 18:47:45 crc kubenswrapper[4853]: I0127 18:47:45.184049 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/abfa646e-a193-473a-8aaf-402692afd1e9-serving-cert\") pod \"abfa646e-a193-473a-8aaf-402692afd1e9\" (UID: \"abfa646e-a193-473a-8aaf-402692afd1e9\") " Jan 27 18:47:45 crc kubenswrapper[4853]: I0127 18:47:45.184221 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/abfa646e-a193-473a-8aaf-402692afd1e9-config\") pod \"abfa646e-a193-473a-8aaf-402692afd1e9\" (UID: \"abfa646e-a193-473a-8aaf-402692afd1e9\") " Jan 27 18:47:45 crc kubenswrapper[4853]: I0127 18:47:45.184312 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sx9gw\" (UniqueName: \"kubernetes.io/projected/abfa646e-a193-473a-8aaf-402692afd1e9-kube-api-access-sx9gw\") pod \"abfa646e-a193-473a-8aaf-402692afd1e9\" (UID: \"abfa646e-a193-473a-8aaf-402692afd1e9\") " Jan 27 18:47:45 crc kubenswrapper[4853]: I0127 18:47:45.184335 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/abfa646e-a193-473a-8aaf-402692afd1e9-proxy-ca-bundles\") pod \"abfa646e-a193-473a-8aaf-402692afd1e9\" (UID: \"abfa646e-a193-473a-8aaf-402692afd1e9\") " Jan 27 18:47:45 crc kubenswrapper[4853]: I0127 18:47:45.184355 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/abfa646e-a193-473a-8aaf-402692afd1e9-client-ca\") pod \"abfa646e-a193-473a-8aaf-402692afd1e9\" (UID: \"abfa646e-a193-473a-8aaf-402692afd1e9\") " Jan 27 18:47:45 crc kubenswrapper[4853]: I0127 18:47:45.184872 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zzzj2\" (UniqueName: \"kubernetes.io/projected/f942c044-c796-4b61-9546-a7e49727d38f-kube-api-access-zzzj2\") on node \"crc\" DevicePath \"\"" Jan 27 18:47:45 crc kubenswrapper[4853]: I0127 18:47:45.184893 4853 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f942c044-c796-4b61-9546-a7e49727d38f-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 18:47:45 crc kubenswrapper[4853]: I0127 18:47:45.185199 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/abfa646e-a193-473a-8aaf-402692afd1e9-client-ca" (OuterVolumeSpecName: "client-ca") pod "abfa646e-a193-473a-8aaf-402692afd1e9" (UID: "abfa646e-a193-473a-8aaf-402692afd1e9"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:47:45 crc kubenswrapper[4853]: I0127 18:47:45.185213 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/abfa646e-a193-473a-8aaf-402692afd1e9-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "abfa646e-a193-473a-8aaf-402692afd1e9" (UID: "abfa646e-a193-473a-8aaf-402692afd1e9"). InnerVolumeSpecName "proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:47:45 crc kubenswrapper[4853]: I0127 18:47:45.185343 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/abfa646e-a193-473a-8aaf-402692afd1e9-config" (OuterVolumeSpecName: "config") pod "abfa646e-a193-473a-8aaf-402692afd1e9" (UID: "abfa646e-a193-473a-8aaf-402692afd1e9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:47:45 crc kubenswrapper[4853]: I0127 18:47:45.187613 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/abfa646e-a193-473a-8aaf-402692afd1e9-kube-api-access-sx9gw" (OuterVolumeSpecName: "kube-api-access-sx9gw") pod "abfa646e-a193-473a-8aaf-402692afd1e9" (UID: "abfa646e-a193-473a-8aaf-402692afd1e9"). InnerVolumeSpecName "kube-api-access-sx9gw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:47:45 crc kubenswrapper[4853]: I0127 18:47:45.188005 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/abfa646e-a193-473a-8aaf-402692afd1e9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "abfa646e-a193-473a-8aaf-402692afd1e9" (UID: "abfa646e-a193-473a-8aaf-402692afd1e9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:47:45 crc kubenswrapper[4853]: I0127 18:47:45.286574 4853 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/abfa646e-a193-473a-8aaf-402692afd1e9-config\") on node \"crc\" DevicePath \"\"" Jan 27 18:47:45 crc kubenswrapper[4853]: I0127 18:47:45.286653 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sx9gw\" (UniqueName: \"kubernetes.io/projected/abfa646e-a193-473a-8aaf-402692afd1e9-kube-api-access-sx9gw\") on node \"crc\" DevicePath \"\"" Jan 27 18:47:45 crc kubenswrapper[4853]: I0127 18:47:45.286676 4853 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/abfa646e-a193-473a-8aaf-402692afd1e9-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 27 18:47:45 crc kubenswrapper[4853]: I0127 18:47:45.286691 4853 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/abfa646e-a193-473a-8aaf-402692afd1e9-client-ca\") on node \"crc\" DevicePath \"\"" Jan 27 18:47:45 crc kubenswrapper[4853]: I0127 18:47:45.286707 4853 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/abfa646e-a193-473a-8aaf-402692afd1e9-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 18:47:45 crc kubenswrapper[4853]: I0127 18:47:45.463481 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-58bf4dd977-zq87r"] Jan 27 18:47:45 crc kubenswrapper[4853]: E0127 18:47:45.463829 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8fc489cf-508e-445d-ba19-4aeea8afee8c" containerName="registry-server" Jan 27 18:47:45 crc kubenswrapper[4853]: I0127 18:47:45.463848 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="8fc489cf-508e-445d-ba19-4aeea8afee8c" containerName="registry-server" Jan 27 18:47:45 crc kubenswrapper[4853]: E0127 18:47:45.463863 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d6cf1fd9-633e-45c8-b007-051a740ff435" containerName="registry-server" Jan 27 18:47:45 crc kubenswrapper[4853]: I0127 18:47:45.463872 4853 
state_mem.go:107] "Deleted CPUSet assignment" podUID="d6cf1fd9-633e-45c8-b007-051a740ff435" containerName="registry-server" Jan 27 18:47:45 crc kubenswrapper[4853]: E0127 18:47:45.463882 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="81272aef-67fa-4c09-bf30-56fdfec7dd7b" containerName="extract-utilities" Jan 27 18:47:45 crc kubenswrapper[4853]: I0127 18:47:45.463892 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="81272aef-67fa-4c09-bf30-56fdfec7dd7b" containerName="extract-utilities" Jan 27 18:47:45 crc kubenswrapper[4853]: E0127 18:47:45.463901 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aa41f430-60c0-4d83-96bc-ac2a6aa2dde1" containerName="extract-content" Jan 27 18:47:45 crc kubenswrapper[4853]: I0127 18:47:45.463909 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="aa41f430-60c0-4d83-96bc-ac2a6aa2dde1" containerName="extract-content" Jan 27 18:47:45 crc kubenswrapper[4853]: E0127 18:47:45.463919 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8fc489cf-508e-445d-ba19-4aeea8afee8c" containerName="extract-utilities" Jan 27 18:47:45 crc kubenswrapper[4853]: I0127 18:47:45.463926 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="8fc489cf-508e-445d-ba19-4aeea8afee8c" containerName="extract-utilities" Jan 27 18:47:45 crc kubenswrapper[4853]: E0127 18:47:45.463939 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aa41f430-60c0-4d83-96bc-ac2a6aa2dde1" containerName="registry-server" Jan 27 18:47:45 crc kubenswrapper[4853]: I0127 18:47:45.463967 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="aa41f430-60c0-4d83-96bc-ac2a6aa2dde1" containerName="registry-server" Jan 27 18:47:45 crc kubenswrapper[4853]: E0127 18:47:45.463978 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a442dc5b-e830-490b-8ad1-6a6606fea52b" containerName="extract-utilities" Jan 27 18:47:45 crc kubenswrapper[4853]: I0127 18:47:45.463986 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="a442dc5b-e830-490b-8ad1-6a6606fea52b" containerName="extract-utilities" Jan 27 18:47:45 crc kubenswrapper[4853]: E0127 18:47:45.463996 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a442dc5b-e830-490b-8ad1-6a6606fea52b" containerName="registry-server" Jan 27 18:47:45 crc kubenswrapper[4853]: I0127 18:47:45.464003 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="a442dc5b-e830-490b-8ad1-6a6606fea52b" containerName="registry-server" Jan 27 18:47:45 crc kubenswrapper[4853]: E0127 18:47:45.464012 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f942c044-c796-4b61-9546-a7e49727d38f" containerName="route-controller-manager" Jan 27 18:47:45 crc kubenswrapper[4853]: I0127 18:47:45.464020 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="f942c044-c796-4b61-9546-a7e49727d38f" containerName="route-controller-manager" Jan 27 18:47:45 crc kubenswrapper[4853]: E0127 18:47:45.464030 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bdf3b7ad-3545-4192-941a-862154002694" containerName="registry-server" Jan 27 18:47:45 crc kubenswrapper[4853]: I0127 18:47:45.464039 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="bdf3b7ad-3545-4192-941a-862154002694" containerName="registry-server" Jan 27 18:47:45 crc kubenswrapper[4853]: E0127 18:47:45.464049 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="81272aef-67fa-4c09-bf30-56fdfec7dd7b" containerName="extract-content" Jan 27 18:47:45 crc 
kubenswrapper[4853]: I0127 18:47:45.464056 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="81272aef-67fa-4c09-bf30-56fdfec7dd7b" containerName="extract-content" Jan 27 18:47:45 crc kubenswrapper[4853]: E0127 18:47:45.464066 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bdf3b7ad-3545-4192-941a-862154002694" containerName="extract-content" Jan 27 18:47:45 crc kubenswrapper[4853]: I0127 18:47:45.464073 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="bdf3b7ad-3545-4192-941a-862154002694" containerName="extract-content" Jan 27 18:47:45 crc kubenswrapper[4853]: E0127 18:47:45.464084 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d6cf1fd9-633e-45c8-b007-051a740ff435" containerName="extract-content" Jan 27 18:47:45 crc kubenswrapper[4853]: I0127 18:47:45.464092 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="d6cf1fd9-633e-45c8-b007-051a740ff435" containerName="extract-content" Jan 27 18:47:45 crc kubenswrapper[4853]: E0127 18:47:45.464100 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d6cf1fd9-633e-45c8-b007-051a740ff435" containerName="extract-utilities" Jan 27 18:47:45 crc kubenswrapper[4853]: I0127 18:47:45.464108 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="d6cf1fd9-633e-45c8-b007-051a740ff435" containerName="extract-utilities" Jan 27 18:47:45 crc kubenswrapper[4853]: E0127 18:47:45.464142 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aa41f430-60c0-4d83-96bc-ac2a6aa2dde1" containerName="extract-utilities" Jan 27 18:47:45 crc kubenswrapper[4853]: I0127 18:47:45.464150 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="aa41f430-60c0-4d83-96bc-ac2a6aa2dde1" containerName="extract-utilities" Jan 27 18:47:45 crc kubenswrapper[4853]: E0127 18:47:45.464161 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a442dc5b-e830-490b-8ad1-6a6606fea52b" containerName="extract-content" Jan 27 18:47:45 crc kubenswrapper[4853]: I0127 18:47:45.464169 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="a442dc5b-e830-490b-8ad1-6a6606fea52b" containerName="extract-content" Jan 27 18:47:45 crc kubenswrapper[4853]: E0127 18:47:45.464180 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8fc489cf-508e-445d-ba19-4aeea8afee8c" containerName="extract-content" Jan 27 18:47:45 crc kubenswrapper[4853]: I0127 18:47:45.464188 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="8fc489cf-508e-445d-ba19-4aeea8afee8c" containerName="extract-content" Jan 27 18:47:45 crc kubenswrapper[4853]: E0127 18:47:45.464201 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="abfa646e-a193-473a-8aaf-402692afd1e9" containerName="controller-manager" Jan 27 18:47:45 crc kubenswrapper[4853]: I0127 18:47:45.464210 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="abfa646e-a193-473a-8aaf-402692afd1e9" containerName="controller-manager" Jan 27 18:47:45 crc kubenswrapper[4853]: E0127 18:47:45.464220 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bdf3b7ad-3545-4192-941a-862154002694" containerName="extract-utilities" Jan 27 18:47:45 crc kubenswrapper[4853]: I0127 18:47:45.464230 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="bdf3b7ad-3545-4192-941a-862154002694" containerName="extract-utilities" Jan 27 18:47:45 crc kubenswrapper[4853]: E0127 18:47:45.464242 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="81272aef-67fa-4c09-bf30-56fdfec7dd7b" 
containerName="registry-server" Jan 27 18:47:45 crc kubenswrapper[4853]: I0127 18:47:45.464250 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="81272aef-67fa-4c09-bf30-56fdfec7dd7b" containerName="registry-server" Jan 27 18:47:45 crc kubenswrapper[4853]: E0127 18:47:45.464259 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="86af8168-4922-4d5d-adee-38d4d88d55ca" containerName="marketplace-operator" Jan 27 18:47:45 crc kubenswrapper[4853]: I0127 18:47:45.464267 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="86af8168-4922-4d5d-adee-38d4d88d55ca" containerName="marketplace-operator" Jan 27 18:47:45 crc kubenswrapper[4853]: I0127 18:47:45.464375 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="f942c044-c796-4b61-9546-a7e49727d38f" containerName="route-controller-manager" Jan 27 18:47:45 crc kubenswrapper[4853]: I0127 18:47:45.464389 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="aa41f430-60c0-4d83-96bc-ac2a6aa2dde1" containerName="registry-server" Jan 27 18:47:45 crc kubenswrapper[4853]: I0127 18:47:45.464399 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="86af8168-4922-4d5d-adee-38d4d88d55ca" containerName="marketplace-operator" Jan 27 18:47:45 crc kubenswrapper[4853]: I0127 18:47:45.464408 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="abfa646e-a193-473a-8aaf-402692afd1e9" containerName="controller-manager" Jan 27 18:47:45 crc kubenswrapper[4853]: I0127 18:47:45.464417 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="a442dc5b-e830-490b-8ad1-6a6606fea52b" containerName="registry-server" Jan 27 18:47:45 crc kubenswrapper[4853]: I0127 18:47:45.464427 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="d6cf1fd9-633e-45c8-b007-051a740ff435" containerName="registry-server" Jan 27 18:47:45 crc kubenswrapper[4853]: I0127 18:47:45.464437 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="81272aef-67fa-4c09-bf30-56fdfec7dd7b" containerName="registry-server" Jan 27 18:47:45 crc kubenswrapper[4853]: I0127 18:47:45.464446 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="bdf3b7ad-3545-4192-941a-862154002694" containerName="registry-server" Jan 27 18:47:45 crc kubenswrapper[4853]: I0127 18:47:45.464456 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="8fc489cf-508e-445d-ba19-4aeea8afee8c" containerName="registry-server" Jan 27 18:47:45 crc kubenswrapper[4853]: I0127 18:47:45.465040 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-58bf4dd977-zq87r" Jan 27 18:47:45 crc kubenswrapper[4853]: I0127 18:47:45.469701 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7cbb74dfd9-h75kv"] Jan 27 18:47:45 crc kubenswrapper[4853]: I0127 18:47:45.470546 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7cbb74dfd9-h75kv" Jan 27 18:47:45 crc kubenswrapper[4853]: I0127 18:47:45.473093 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-58bf4dd977-zq87r"] Jan 27 18:47:45 crc kubenswrapper[4853]: I0127 18:47:45.480244 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7cbb74dfd9-h75kv"] Jan 27 18:47:45 crc kubenswrapper[4853]: I0127 18:47:45.491625 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f3424dd8-3475-40d4-b494-0d67eb7a4023-client-ca\") pod \"controller-manager-58bf4dd977-zq87r\" (UID: \"f3424dd8-3475-40d4-b494-0d67eb7a4023\") " pod="openshift-controller-manager/controller-manager-58bf4dd977-zq87r" Jan 27 18:47:45 crc kubenswrapper[4853]: I0127 18:47:45.492038 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gkjj8\" (UniqueName: \"kubernetes.io/projected/e7a8b2f2-2cb3-450f-98e4-78ceaf39e146-kube-api-access-gkjj8\") pod \"route-controller-manager-7cbb74dfd9-h75kv\" (UID: \"e7a8b2f2-2cb3-450f-98e4-78ceaf39e146\") " pod="openshift-route-controller-manager/route-controller-manager-7cbb74dfd9-h75kv" Jan 27 18:47:45 crc kubenswrapper[4853]: I0127 18:47:45.492074 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7a8b2f2-2cb3-450f-98e4-78ceaf39e146-config\") pod \"route-controller-manager-7cbb74dfd9-h75kv\" (UID: \"e7a8b2f2-2cb3-450f-98e4-78ceaf39e146\") " pod="openshift-route-controller-manager/route-controller-manager-7cbb74dfd9-h75kv" Jan 27 18:47:45 crc kubenswrapper[4853]: I0127 18:47:45.492104 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f3424dd8-3475-40d4-b494-0d67eb7a4023-proxy-ca-bundles\") pod \"controller-manager-58bf4dd977-zq87r\" (UID: \"f3424dd8-3475-40d4-b494-0d67eb7a4023\") " pod="openshift-controller-manager/controller-manager-58bf4dd977-zq87r" Jan 27 18:47:45 crc kubenswrapper[4853]: I0127 18:47:45.492163 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f3424dd8-3475-40d4-b494-0d67eb7a4023-config\") pod \"controller-manager-58bf4dd977-zq87r\" (UID: \"f3424dd8-3475-40d4-b494-0d67eb7a4023\") " pod="openshift-controller-manager/controller-manager-58bf4dd977-zq87r" Jan 27 18:47:45 crc kubenswrapper[4853]: I0127 18:47:45.492197 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7a8b2f2-2cb3-450f-98e4-78ceaf39e146-serving-cert\") pod \"route-controller-manager-7cbb74dfd9-h75kv\" (UID: \"e7a8b2f2-2cb3-450f-98e4-78ceaf39e146\") " pod="openshift-route-controller-manager/route-controller-manager-7cbb74dfd9-h75kv" Jan 27 18:47:45 crc kubenswrapper[4853]: I0127 18:47:45.492218 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e7a8b2f2-2cb3-450f-98e4-78ceaf39e146-client-ca\") pod \"route-controller-manager-7cbb74dfd9-h75kv\" (UID: \"e7a8b2f2-2cb3-450f-98e4-78ceaf39e146\") " 
pod="openshift-route-controller-manager/route-controller-manager-7cbb74dfd9-h75kv" Jan 27 18:47:45 crc kubenswrapper[4853]: I0127 18:47:45.492242 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sl9ln\" (UniqueName: \"kubernetes.io/projected/f3424dd8-3475-40d4-b494-0d67eb7a4023-kube-api-access-sl9ln\") pod \"controller-manager-58bf4dd977-zq87r\" (UID: \"f3424dd8-3475-40d4-b494-0d67eb7a4023\") " pod="openshift-controller-manager/controller-manager-58bf4dd977-zq87r" Jan 27 18:47:45 crc kubenswrapper[4853]: I0127 18:47:45.492267 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f3424dd8-3475-40d4-b494-0d67eb7a4023-serving-cert\") pod \"controller-manager-58bf4dd977-zq87r\" (UID: \"f3424dd8-3475-40d4-b494-0d67eb7a4023\") " pod="openshift-controller-manager/controller-manager-58bf4dd977-zq87r" Jan 27 18:47:45 crc kubenswrapper[4853]: I0127 18:47:45.593916 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gkjj8\" (UniqueName: \"kubernetes.io/projected/e7a8b2f2-2cb3-450f-98e4-78ceaf39e146-kube-api-access-gkjj8\") pod \"route-controller-manager-7cbb74dfd9-h75kv\" (UID: \"e7a8b2f2-2cb3-450f-98e4-78ceaf39e146\") " pod="openshift-route-controller-manager/route-controller-manager-7cbb74dfd9-h75kv" Jan 27 18:47:45 crc kubenswrapper[4853]: I0127 18:47:45.594221 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7a8b2f2-2cb3-450f-98e4-78ceaf39e146-config\") pod \"route-controller-manager-7cbb74dfd9-h75kv\" (UID: \"e7a8b2f2-2cb3-450f-98e4-78ceaf39e146\") " pod="openshift-route-controller-manager/route-controller-manager-7cbb74dfd9-h75kv" Jan 27 18:47:45 crc kubenswrapper[4853]: I0127 18:47:45.594281 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f3424dd8-3475-40d4-b494-0d67eb7a4023-proxy-ca-bundles\") pod \"controller-manager-58bf4dd977-zq87r\" (UID: \"f3424dd8-3475-40d4-b494-0d67eb7a4023\") " pod="openshift-controller-manager/controller-manager-58bf4dd977-zq87r" Jan 27 18:47:45 crc kubenswrapper[4853]: I0127 18:47:45.594333 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f3424dd8-3475-40d4-b494-0d67eb7a4023-config\") pod \"controller-manager-58bf4dd977-zq87r\" (UID: \"f3424dd8-3475-40d4-b494-0d67eb7a4023\") " pod="openshift-controller-manager/controller-manager-58bf4dd977-zq87r" Jan 27 18:47:45 crc kubenswrapper[4853]: I0127 18:47:45.594368 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e7a8b2f2-2cb3-450f-98e4-78ceaf39e146-client-ca\") pod \"route-controller-manager-7cbb74dfd9-h75kv\" (UID: \"e7a8b2f2-2cb3-450f-98e4-78ceaf39e146\") " pod="openshift-route-controller-manager/route-controller-manager-7cbb74dfd9-h75kv" Jan 27 18:47:45 crc kubenswrapper[4853]: I0127 18:47:45.594391 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7a8b2f2-2cb3-450f-98e4-78ceaf39e146-serving-cert\") pod \"route-controller-manager-7cbb74dfd9-h75kv\" (UID: \"e7a8b2f2-2cb3-450f-98e4-78ceaf39e146\") " pod="openshift-route-controller-manager/route-controller-manager-7cbb74dfd9-h75kv" Jan 27 
18:47:45 crc kubenswrapper[4853]: I0127 18:47:45.594418 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sl9ln\" (UniqueName: \"kubernetes.io/projected/f3424dd8-3475-40d4-b494-0d67eb7a4023-kube-api-access-sl9ln\") pod \"controller-manager-58bf4dd977-zq87r\" (UID: \"f3424dd8-3475-40d4-b494-0d67eb7a4023\") " pod="openshift-controller-manager/controller-manager-58bf4dd977-zq87r" Jan 27 18:47:45 crc kubenswrapper[4853]: I0127 18:47:45.594451 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f3424dd8-3475-40d4-b494-0d67eb7a4023-serving-cert\") pod \"controller-manager-58bf4dd977-zq87r\" (UID: \"f3424dd8-3475-40d4-b494-0d67eb7a4023\") " pod="openshift-controller-manager/controller-manager-58bf4dd977-zq87r" Jan 27 18:47:45 crc kubenswrapper[4853]: I0127 18:47:45.594478 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f3424dd8-3475-40d4-b494-0d67eb7a4023-client-ca\") pod \"controller-manager-58bf4dd977-zq87r\" (UID: \"f3424dd8-3475-40d4-b494-0d67eb7a4023\") " pod="openshift-controller-manager/controller-manager-58bf4dd977-zq87r" Jan 27 18:47:45 crc kubenswrapper[4853]: I0127 18:47:45.595856 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f3424dd8-3475-40d4-b494-0d67eb7a4023-client-ca\") pod \"controller-manager-58bf4dd977-zq87r\" (UID: \"f3424dd8-3475-40d4-b494-0d67eb7a4023\") " pod="openshift-controller-manager/controller-manager-58bf4dd977-zq87r" Jan 27 18:47:45 crc kubenswrapper[4853]: I0127 18:47:45.596399 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e7a8b2f2-2cb3-450f-98e4-78ceaf39e146-client-ca\") pod \"route-controller-manager-7cbb74dfd9-h75kv\" (UID: \"e7a8b2f2-2cb3-450f-98e4-78ceaf39e146\") " pod="openshift-route-controller-manager/route-controller-manager-7cbb74dfd9-h75kv" Jan 27 18:47:45 crc kubenswrapper[4853]: I0127 18:47:45.596938 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7a8b2f2-2cb3-450f-98e4-78ceaf39e146-config\") pod \"route-controller-manager-7cbb74dfd9-h75kv\" (UID: \"e7a8b2f2-2cb3-450f-98e4-78ceaf39e146\") " pod="openshift-route-controller-manager/route-controller-manager-7cbb74dfd9-h75kv" Jan 27 18:47:45 crc kubenswrapper[4853]: I0127 18:47:45.597534 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f3424dd8-3475-40d4-b494-0d67eb7a4023-config\") pod \"controller-manager-58bf4dd977-zq87r\" (UID: \"f3424dd8-3475-40d4-b494-0d67eb7a4023\") " pod="openshift-controller-manager/controller-manager-58bf4dd977-zq87r" Jan 27 18:47:45 crc kubenswrapper[4853]: I0127 18:47:45.597890 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f3424dd8-3475-40d4-b494-0d67eb7a4023-proxy-ca-bundles\") pod \"controller-manager-58bf4dd977-zq87r\" (UID: \"f3424dd8-3475-40d4-b494-0d67eb7a4023\") " pod="openshift-controller-manager/controller-manager-58bf4dd977-zq87r" Jan 27 18:47:45 crc kubenswrapper[4853]: I0127 18:47:45.601396 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/e7a8b2f2-2cb3-450f-98e4-78ceaf39e146-serving-cert\") pod \"route-controller-manager-7cbb74dfd9-h75kv\" (UID: \"e7a8b2f2-2cb3-450f-98e4-78ceaf39e146\") " pod="openshift-route-controller-manager/route-controller-manager-7cbb74dfd9-h75kv" Jan 27 18:47:45 crc kubenswrapper[4853]: I0127 18:47:45.603620 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f3424dd8-3475-40d4-b494-0d67eb7a4023-serving-cert\") pod \"controller-manager-58bf4dd977-zq87r\" (UID: \"f3424dd8-3475-40d4-b494-0d67eb7a4023\") " pod="openshift-controller-manager/controller-manager-58bf4dd977-zq87r" Jan 27 18:47:45 crc kubenswrapper[4853]: I0127 18:47:45.614996 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gkjj8\" (UniqueName: \"kubernetes.io/projected/e7a8b2f2-2cb3-450f-98e4-78ceaf39e146-kube-api-access-gkjj8\") pod \"route-controller-manager-7cbb74dfd9-h75kv\" (UID: \"e7a8b2f2-2cb3-450f-98e4-78ceaf39e146\") " pod="openshift-route-controller-manager/route-controller-manager-7cbb74dfd9-h75kv" Jan 27 18:47:45 crc kubenswrapper[4853]: I0127 18:47:45.615500 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sl9ln\" (UniqueName: \"kubernetes.io/projected/f3424dd8-3475-40d4-b494-0d67eb7a4023-kube-api-access-sl9ln\") pod \"controller-manager-58bf4dd977-zq87r\" (UID: \"f3424dd8-3475-40d4-b494-0d67eb7a4023\") " pod="openshift-controller-manager/controller-manager-58bf4dd977-zq87r" Jan 27 18:47:45 crc kubenswrapper[4853]: I0127 18:47:45.781656 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-857458fcf9-blgrt" event={"ID":"f942c044-c796-4b61-9546-a7e49727d38f","Type":"ContainerDied","Data":"1ef4eb46908dc3ec99579d1104e1861c2556b1a9fe1fe56173f106280be1c9b8"} Jan 27 18:47:45 crc kubenswrapper[4853]: I0127 18:47:45.781719 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-857458fcf9-blgrt" Jan 27 18:47:45 crc kubenswrapper[4853]: I0127 18:47:45.781744 4853 scope.go:117] "RemoveContainer" containerID="26312225e109ba1c9ab344e157881ebc17437a5da2050e92008685ca4bed94c3" Jan 27 18:47:45 crc kubenswrapper[4853]: I0127 18:47:45.783690 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-777d4874d5-68kpl" event={"ID":"abfa646e-a193-473a-8aaf-402692afd1e9","Type":"ContainerDied","Data":"a8e9339b07eaa00dae6d5112543eb0046184e532ce308dbcaafd7835ce0f8aad"} Jan 27 18:47:45 crc kubenswrapper[4853]: I0127 18:47:45.783754 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-777d4874d5-68kpl" Jan 27 18:47:45 crc kubenswrapper[4853]: I0127 18:47:45.788617 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-58bf4dd977-zq87r" Jan 27 18:47:45 crc kubenswrapper[4853]: I0127 18:47:45.806013 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7cbb74dfd9-h75kv" Jan 27 18:47:45 crc kubenswrapper[4853]: I0127 18:47:45.821344 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-857458fcf9-blgrt"] Jan 27 18:47:45 crc kubenswrapper[4853]: I0127 18:47:45.826221 4853 scope.go:117] "RemoveContainer" containerID="5465c1d479095259cd77aee263b197cc20cf776c090b179617f545f5793772e0" Jan 27 18:47:45 crc kubenswrapper[4853]: I0127 18:47:45.829985 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-857458fcf9-blgrt"] Jan 27 18:47:45 crc kubenswrapper[4853]: I0127 18:47:45.838186 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-777d4874d5-68kpl"] Jan 27 18:47:45 crc kubenswrapper[4853]: I0127 18:47:45.847987 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-777d4874d5-68kpl"] Jan 27 18:47:46 crc kubenswrapper[4853]: I0127 18:47:46.101809 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7cbb74dfd9-h75kv"] Jan 27 18:47:46 crc kubenswrapper[4853]: I0127 18:47:46.119869 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="abfa646e-a193-473a-8aaf-402692afd1e9" path="/var/lib/kubelet/pods/abfa646e-a193-473a-8aaf-402692afd1e9/volumes" Jan 27 18:47:46 crc kubenswrapper[4853]: I0127 18:47:46.120648 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f942c044-c796-4b61-9546-a7e49727d38f" path="/var/lib/kubelet/pods/f942c044-c796-4b61-9546-a7e49727d38f/volumes" Jan 27 18:47:46 crc kubenswrapper[4853]: I0127 18:47:46.390496 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-58bf4dd977-zq87r"] Jan 27 18:47:46 crc kubenswrapper[4853]: W0127 18:47:46.391705 4853 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf3424dd8_3475_40d4_b494_0d67eb7a4023.slice/crio-a788241f636e5f908495a921a5957e9729e7e700047ce3dac12db9c4e66fa14d WatchSource:0}: Error finding container a788241f636e5f908495a921a5957e9729e7e700047ce3dac12db9c4e66fa14d: Status 404 returned error can't find the container with id a788241f636e5f908495a921a5957e9729e7e700047ce3dac12db9c4e66fa14d Jan 27 18:47:46 crc kubenswrapper[4853]: I0127 18:47:46.791466 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-58bf4dd977-zq87r" event={"ID":"f3424dd8-3475-40d4-b494-0d67eb7a4023","Type":"ContainerStarted","Data":"612e89db5fac8845eeb09dba6787ba2c3ec50b32ef149c1bf6cef131bd54693a"} Jan 27 18:47:46 crc kubenswrapper[4853]: I0127 18:47:46.791528 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-58bf4dd977-zq87r" event={"ID":"f3424dd8-3475-40d4-b494-0d67eb7a4023","Type":"ContainerStarted","Data":"a788241f636e5f908495a921a5957e9729e7e700047ce3dac12db9c4e66fa14d"} Jan 27 18:47:46 crc kubenswrapper[4853]: I0127 18:47:46.791743 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-58bf4dd977-zq87r" Jan 27 18:47:46 crc kubenswrapper[4853]: I0127 18:47:46.793289 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-route-controller-manager/route-controller-manager-7cbb74dfd9-h75kv" event={"ID":"e7a8b2f2-2cb3-450f-98e4-78ceaf39e146","Type":"ContainerStarted","Data":"5d65f6a959fce123b8c1263dfaf6c4e23d8b39d23547f6a24d4f72c5ae677332"} Jan 27 18:47:46 crc kubenswrapper[4853]: I0127 18:47:46.793358 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7cbb74dfd9-h75kv" event={"ID":"e7a8b2f2-2cb3-450f-98e4-78ceaf39e146","Type":"ContainerStarted","Data":"1c0e71d78e4852cd8d150ee607da693fc1d5b768c921cbf761f062b3dc6506de"} Jan 27 18:47:46 crc kubenswrapper[4853]: I0127 18:47:46.793693 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-7cbb74dfd9-h75kv" Jan 27 18:47:46 crc kubenswrapper[4853]: I0127 18:47:46.799808 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-58bf4dd977-zq87r" Jan 27 18:47:46 crc kubenswrapper[4853]: I0127 18:47:46.821031 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-58bf4dd977-zq87r" podStartSLOduration=2.82100231 podStartE2EDuration="2.82100231s" podCreationTimestamp="2026-01-27 18:47:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:47:46.818459318 +0000 UTC m=+309.281002201" watchObservedRunningTime="2026-01-27 18:47:46.82100231 +0000 UTC m=+309.283545193" Jan 27 18:47:46 crc kubenswrapper[4853]: I0127 18:47:46.866914 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-7cbb74dfd9-h75kv" podStartSLOduration=2.866886579 podStartE2EDuration="2.866886579s" podCreationTimestamp="2026-01-27 18:47:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:47:46.839722456 +0000 UTC m=+309.302265339" watchObservedRunningTime="2026-01-27 18:47:46.866886579 +0000 UTC m=+309.329429462" Jan 27 18:47:46 crc kubenswrapper[4853]: I0127 18:47:46.910687 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-7cbb74dfd9-h75kv" Jan 27 18:48:02 crc kubenswrapper[4853]: I0127 18:48:02.424400 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-l9rvb"] Jan 27 18:48:02 crc kubenswrapper[4853]: I0127 18:48:02.427043 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-l9rvb" Jan 27 18:48:02 crc kubenswrapper[4853]: I0127 18:48:02.429401 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Jan 27 18:48:02 crc kubenswrapper[4853]: I0127 18:48:02.438170 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-l9rvb"] Jan 27 18:48:02 crc kubenswrapper[4853]: I0127 18:48:02.520338 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lbmth\" (UniqueName: \"kubernetes.io/projected/4ccbf17f-6d23-4e6e-85e3-73c1275e767b-kube-api-access-lbmth\") pod \"certified-operators-l9rvb\" (UID: \"4ccbf17f-6d23-4e6e-85e3-73c1275e767b\") " pod="openshift-marketplace/certified-operators-l9rvb" Jan 27 18:48:02 crc kubenswrapper[4853]: I0127 18:48:02.520420 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4ccbf17f-6d23-4e6e-85e3-73c1275e767b-catalog-content\") pod \"certified-operators-l9rvb\" (UID: \"4ccbf17f-6d23-4e6e-85e3-73c1275e767b\") " pod="openshift-marketplace/certified-operators-l9rvb" Jan 27 18:48:02 crc kubenswrapper[4853]: I0127 18:48:02.520485 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4ccbf17f-6d23-4e6e-85e3-73c1275e767b-utilities\") pod \"certified-operators-l9rvb\" (UID: \"4ccbf17f-6d23-4e6e-85e3-73c1275e767b\") " pod="openshift-marketplace/certified-operators-l9rvb" Jan 27 18:48:02 crc kubenswrapper[4853]: I0127 18:48:02.622200 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4ccbf17f-6d23-4e6e-85e3-73c1275e767b-catalog-content\") pod \"certified-operators-l9rvb\" (UID: \"4ccbf17f-6d23-4e6e-85e3-73c1275e767b\") " pod="openshift-marketplace/certified-operators-l9rvb" Jan 27 18:48:02 crc kubenswrapper[4853]: I0127 18:48:02.622267 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4ccbf17f-6d23-4e6e-85e3-73c1275e767b-utilities\") pod \"certified-operators-l9rvb\" (UID: \"4ccbf17f-6d23-4e6e-85e3-73c1275e767b\") " pod="openshift-marketplace/certified-operators-l9rvb" Jan 27 18:48:02 crc kubenswrapper[4853]: I0127 18:48:02.622345 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lbmth\" (UniqueName: \"kubernetes.io/projected/4ccbf17f-6d23-4e6e-85e3-73c1275e767b-kube-api-access-lbmth\") pod \"certified-operators-l9rvb\" (UID: \"4ccbf17f-6d23-4e6e-85e3-73c1275e767b\") " pod="openshift-marketplace/certified-operators-l9rvb" Jan 27 18:48:02 crc kubenswrapper[4853]: I0127 18:48:02.622833 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4ccbf17f-6d23-4e6e-85e3-73c1275e767b-utilities\") pod \"certified-operators-l9rvb\" (UID: \"4ccbf17f-6d23-4e6e-85e3-73c1275e767b\") " pod="openshift-marketplace/certified-operators-l9rvb" Jan 27 18:48:02 crc kubenswrapper[4853]: I0127 18:48:02.622833 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4ccbf17f-6d23-4e6e-85e3-73c1275e767b-catalog-content\") pod \"certified-operators-l9rvb\" (UID: 
\"4ccbf17f-6d23-4e6e-85e3-73c1275e767b\") " pod="openshift-marketplace/certified-operators-l9rvb" Jan 27 18:48:02 crc kubenswrapper[4853]: I0127 18:48:02.641932 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lbmth\" (UniqueName: \"kubernetes.io/projected/4ccbf17f-6d23-4e6e-85e3-73c1275e767b-kube-api-access-lbmth\") pod \"certified-operators-l9rvb\" (UID: \"4ccbf17f-6d23-4e6e-85e3-73c1275e767b\") " pod="openshift-marketplace/certified-operators-l9rvb" Jan 27 18:48:02 crc kubenswrapper[4853]: I0127 18:48:02.797445 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-l9rvb" Jan 27 18:48:03 crc kubenswrapper[4853]: I0127 18:48:03.019762 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-rndjv"] Jan 27 18:48:03 crc kubenswrapper[4853]: I0127 18:48:03.021311 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-rndjv" Jan 27 18:48:03 crc kubenswrapper[4853]: I0127 18:48:03.023747 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Jan 27 18:48:03 crc kubenswrapper[4853]: I0127 18:48:03.030158 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-rndjv"] Jan 27 18:48:03 crc kubenswrapper[4853]: I0127 18:48:03.128581 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/376dda10-dbbe-4b02-ba77-def58ad1db42-catalog-content\") pod \"community-operators-rndjv\" (UID: \"376dda10-dbbe-4b02-ba77-def58ad1db42\") " pod="openshift-marketplace/community-operators-rndjv" Jan 27 18:48:03 crc kubenswrapper[4853]: I0127 18:48:03.130166 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/376dda10-dbbe-4b02-ba77-def58ad1db42-utilities\") pod \"community-operators-rndjv\" (UID: \"376dda10-dbbe-4b02-ba77-def58ad1db42\") " pod="openshift-marketplace/community-operators-rndjv" Jan 27 18:48:03 crc kubenswrapper[4853]: I0127 18:48:03.130611 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pmq88\" (UniqueName: \"kubernetes.io/projected/376dda10-dbbe-4b02-ba77-def58ad1db42-kube-api-access-pmq88\") pod \"community-operators-rndjv\" (UID: \"376dda10-dbbe-4b02-ba77-def58ad1db42\") " pod="openshift-marketplace/community-operators-rndjv" Jan 27 18:48:03 crc kubenswrapper[4853]: I0127 18:48:03.202246 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-l9rvb"] Jan 27 18:48:03 crc kubenswrapper[4853]: I0127 18:48:03.232347 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/376dda10-dbbe-4b02-ba77-def58ad1db42-utilities\") pod \"community-operators-rndjv\" (UID: \"376dda10-dbbe-4b02-ba77-def58ad1db42\") " pod="openshift-marketplace/community-operators-rndjv" Jan 27 18:48:03 crc kubenswrapper[4853]: I0127 18:48:03.232421 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pmq88\" (UniqueName: \"kubernetes.io/projected/376dda10-dbbe-4b02-ba77-def58ad1db42-kube-api-access-pmq88\") pod \"community-operators-rndjv\" (UID: 
\"376dda10-dbbe-4b02-ba77-def58ad1db42\") " pod="openshift-marketplace/community-operators-rndjv" Jan 27 18:48:03 crc kubenswrapper[4853]: I0127 18:48:03.232495 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/376dda10-dbbe-4b02-ba77-def58ad1db42-catalog-content\") pod \"community-operators-rndjv\" (UID: \"376dda10-dbbe-4b02-ba77-def58ad1db42\") " pod="openshift-marketplace/community-operators-rndjv" Jan 27 18:48:03 crc kubenswrapper[4853]: I0127 18:48:03.233026 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/376dda10-dbbe-4b02-ba77-def58ad1db42-utilities\") pod \"community-operators-rndjv\" (UID: \"376dda10-dbbe-4b02-ba77-def58ad1db42\") " pod="openshift-marketplace/community-operators-rndjv" Jan 27 18:48:03 crc kubenswrapper[4853]: I0127 18:48:03.233057 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/376dda10-dbbe-4b02-ba77-def58ad1db42-catalog-content\") pod \"community-operators-rndjv\" (UID: \"376dda10-dbbe-4b02-ba77-def58ad1db42\") " pod="openshift-marketplace/community-operators-rndjv" Jan 27 18:48:03 crc kubenswrapper[4853]: I0127 18:48:03.250101 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pmq88\" (UniqueName: \"kubernetes.io/projected/376dda10-dbbe-4b02-ba77-def58ad1db42-kube-api-access-pmq88\") pod \"community-operators-rndjv\" (UID: \"376dda10-dbbe-4b02-ba77-def58ad1db42\") " pod="openshift-marketplace/community-operators-rndjv" Jan 27 18:48:03 crc kubenswrapper[4853]: I0127 18:48:03.342002 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-rndjv" Jan 27 18:48:03 crc kubenswrapper[4853]: I0127 18:48:03.777062 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-rndjv"] Jan 27 18:48:03 crc kubenswrapper[4853]: W0127 18:48:03.783860 4853 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod376dda10_dbbe_4b02_ba77_def58ad1db42.slice/crio-470a3c49d66792856e2fb6eb169ee169fab197c9915546dd96d960b98792ff97 WatchSource:0}: Error finding container 470a3c49d66792856e2fb6eb169ee169fab197c9915546dd96d960b98792ff97: Status 404 returned error can't find the container with id 470a3c49d66792856e2fb6eb169ee169fab197c9915546dd96d960b98792ff97 Jan 27 18:48:03 crc kubenswrapper[4853]: I0127 18:48:03.899140 4853 generic.go:334] "Generic (PLEG): container finished" podID="4ccbf17f-6d23-4e6e-85e3-73c1275e767b" containerID="1974ed273cd61395641bf28e9482477a9fcb424af7f1cd1d3ffe8b84bebfcf59" exitCode=0 Jan 27 18:48:03 crc kubenswrapper[4853]: I0127 18:48:03.899219 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-l9rvb" event={"ID":"4ccbf17f-6d23-4e6e-85e3-73c1275e767b","Type":"ContainerDied","Data":"1974ed273cd61395641bf28e9482477a9fcb424af7f1cd1d3ffe8b84bebfcf59"} Jan 27 18:48:03 crc kubenswrapper[4853]: I0127 18:48:03.899256 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-l9rvb" event={"ID":"4ccbf17f-6d23-4e6e-85e3-73c1275e767b","Type":"ContainerStarted","Data":"1fd93a8e70506dd71fccfe8aee2a2aad7fca2a8014378d2167cebe585456aa66"} Jan 27 18:48:03 crc kubenswrapper[4853]: I0127 18:48:03.904359 4853 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rndjv" event={"ID":"376dda10-dbbe-4b02-ba77-def58ad1db42","Type":"ContainerStarted","Data":"470a3c49d66792856e2fb6eb169ee169fab197c9915546dd96d960b98792ff97"} Jan 27 18:48:04 crc kubenswrapper[4853]: I0127 18:48:04.814987 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-tftml"] Jan 27 18:48:04 crc kubenswrapper[4853]: I0127 18:48:04.816525 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-tftml" Jan 27 18:48:04 crc kubenswrapper[4853]: I0127 18:48:04.818415 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 27 18:48:04 crc kubenswrapper[4853]: I0127 18:48:04.825068 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-tftml"] Jan 27 18:48:04 crc kubenswrapper[4853]: I0127 18:48:04.867612 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4535e463-44ac-45f4-befb-6e68eae6e688-utilities\") pod \"redhat-operators-tftml\" (UID: \"4535e463-44ac-45f4-befb-6e68eae6e688\") " pod="openshift-marketplace/redhat-operators-tftml" Jan 27 18:48:04 crc kubenswrapper[4853]: I0127 18:48:04.867642 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4535e463-44ac-45f4-befb-6e68eae6e688-catalog-content\") pod \"redhat-operators-tftml\" (UID: \"4535e463-44ac-45f4-befb-6e68eae6e688\") " pod="openshift-marketplace/redhat-operators-tftml" Jan 27 18:48:04 crc kubenswrapper[4853]: I0127 18:48:04.867683 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wsqr8\" (UniqueName: \"kubernetes.io/projected/4535e463-44ac-45f4-befb-6e68eae6e688-kube-api-access-wsqr8\") pod \"redhat-operators-tftml\" (UID: \"4535e463-44ac-45f4-befb-6e68eae6e688\") " pod="openshift-marketplace/redhat-operators-tftml" Jan 27 18:48:04 crc kubenswrapper[4853]: I0127 18:48:04.913030 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rndjv" event={"ID":"376dda10-dbbe-4b02-ba77-def58ad1db42","Type":"ContainerDied","Data":"33cd5783d8aff82fd2a36162cb3dc6ad1156642d6b95c6d9df755fa1122b92d5"} Jan 27 18:48:04 crc kubenswrapper[4853]: I0127 18:48:04.912799 4853 generic.go:334] "Generic (PLEG): container finished" podID="376dda10-dbbe-4b02-ba77-def58ad1db42" containerID="33cd5783d8aff82fd2a36162cb3dc6ad1156642d6b95c6d9df755fa1122b92d5" exitCode=0 Jan 27 18:48:04 crc kubenswrapper[4853]: I0127 18:48:04.969004 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wsqr8\" (UniqueName: \"kubernetes.io/projected/4535e463-44ac-45f4-befb-6e68eae6e688-kube-api-access-wsqr8\") pod \"redhat-operators-tftml\" (UID: \"4535e463-44ac-45f4-befb-6e68eae6e688\") " pod="openshift-marketplace/redhat-operators-tftml" Jan 27 18:48:04 crc kubenswrapper[4853]: I0127 18:48:04.969109 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4535e463-44ac-45f4-befb-6e68eae6e688-utilities\") pod \"redhat-operators-tftml\" (UID: \"4535e463-44ac-45f4-befb-6e68eae6e688\") " pod="openshift-marketplace/redhat-operators-tftml" Jan 27 
18:48:04 crc kubenswrapper[4853]: I0127 18:48:04.969141 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4535e463-44ac-45f4-befb-6e68eae6e688-catalog-content\") pod \"redhat-operators-tftml\" (UID: \"4535e463-44ac-45f4-befb-6e68eae6e688\") " pod="openshift-marketplace/redhat-operators-tftml" Jan 27 18:48:04 crc kubenswrapper[4853]: I0127 18:48:04.970028 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4535e463-44ac-45f4-befb-6e68eae6e688-catalog-content\") pod \"redhat-operators-tftml\" (UID: \"4535e463-44ac-45f4-befb-6e68eae6e688\") " pod="openshift-marketplace/redhat-operators-tftml" Jan 27 18:48:04 crc kubenswrapper[4853]: I0127 18:48:04.970075 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4535e463-44ac-45f4-befb-6e68eae6e688-utilities\") pod \"redhat-operators-tftml\" (UID: \"4535e463-44ac-45f4-befb-6e68eae6e688\") " pod="openshift-marketplace/redhat-operators-tftml" Jan 27 18:48:04 crc kubenswrapper[4853]: I0127 18:48:04.996795 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wsqr8\" (UniqueName: \"kubernetes.io/projected/4535e463-44ac-45f4-befb-6e68eae6e688-kube-api-access-wsqr8\") pod \"redhat-operators-tftml\" (UID: \"4535e463-44ac-45f4-befb-6e68eae6e688\") " pod="openshift-marketplace/redhat-operators-tftml" Jan 27 18:48:05 crc kubenswrapper[4853]: I0127 18:48:05.154681 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-tftml" Jan 27 18:48:05 crc kubenswrapper[4853]: I0127 18:48:05.422195 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-6kwgl"] Jan 27 18:48:05 crc kubenswrapper[4853]: I0127 18:48:05.425241 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6kwgl" Jan 27 18:48:05 crc kubenswrapper[4853]: I0127 18:48:05.427235 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 27 18:48:05 crc kubenswrapper[4853]: I0127 18:48:05.432470 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-6kwgl"] Jan 27 18:48:05 crc kubenswrapper[4853]: I0127 18:48:05.475155 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/958cd7a3-4aba-4ee4-a63a-dc75ef76970f-catalog-content\") pod \"redhat-marketplace-6kwgl\" (UID: \"958cd7a3-4aba-4ee4-a63a-dc75ef76970f\") " pod="openshift-marketplace/redhat-marketplace-6kwgl" Jan 27 18:48:05 crc kubenswrapper[4853]: I0127 18:48:05.475214 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/958cd7a3-4aba-4ee4-a63a-dc75ef76970f-utilities\") pod \"redhat-marketplace-6kwgl\" (UID: \"958cd7a3-4aba-4ee4-a63a-dc75ef76970f\") " pod="openshift-marketplace/redhat-marketplace-6kwgl" Jan 27 18:48:05 crc kubenswrapper[4853]: I0127 18:48:05.475291 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t629z\" (UniqueName: \"kubernetes.io/projected/958cd7a3-4aba-4ee4-a63a-dc75ef76970f-kube-api-access-t629z\") pod \"redhat-marketplace-6kwgl\" (UID: \"958cd7a3-4aba-4ee4-a63a-dc75ef76970f\") " pod="openshift-marketplace/redhat-marketplace-6kwgl" Jan 27 18:48:05 crc kubenswrapper[4853]: I0127 18:48:05.548329 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-tftml"] Jan 27 18:48:05 crc kubenswrapper[4853]: I0127 18:48:05.575763 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t629z\" (UniqueName: \"kubernetes.io/projected/958cd7a3-4aba-4ee4-a63a-dc75ef76970f-kube-api-access-t629z\") pod \"redhat-marketplace-6kwgl\" (UID: \"958cd7a3-4aba-4ee4-a63a-dc75ef76970f\") " pod="openshift-marketplace/redhat-marketplace-6kwgl" Jan 27 18:48:05 crc kubenswrapper[4853]: I0127 18:48:05.575853 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/958cd7a3-4aba-4ee4-a63a-dc75ef76970f-catalog-content\") pod \"redhat-marketplace-6kwgl\" (UID: \"958cd7a3-4aba-4ee4-a63a-dc75ef76970f\") " pod="openshift-marketplace/redhat-marketplace-6kwgl" Jan 27 18:48:05 crc kubenswrapper[4853]: I0127 18:48:05.575885 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/958cd7a3-4aba-4ee4-a63a-dc75ef76970f-utilities\") pod \"redhat-marketplace-6kwgl\" (UID: \"958cd7a3-4aba-4ee4-a63a-dc75ef76970f\") " pod="openshift-marketplace/redhat-marketplace-6kwgl" Jan 27 18:48:05 crc kubenswrapper[4853]: I0127 18:48:05.576374 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/958cd7a3-4aba-4ee4-a63a-dc75ef76970f-utilities\") pod \"redhat-marketplace-6kwgl\" (UID: \"958cd7a3-4aba-4ee4-a63a-dc75ef76970f\") " pod="openshift-marketplace/redhat-marketplace-6kwgl" Jan 27 18:48:05 crc kubenswrapper[4853]: I0127 18:48:05.576742 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/958cd7a3-4aba-4ee4-a63a-dc75ef76970f-catalog-content\") pod \"redhat-marketplace-6kwgl\" (UID: \"958cd7a3-4aba-4ee4-a63a-dc75ef76970f\") " pod="openshift-marketplace/redhat-marketplace-6kwgl" Jan 27 18:48:05 crc kubenswrapper[4853]: I0127 18:48:05.596761 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t629z\" (UniqueName: \"kubernetes.io/projected/958cd7a3-4aba-4ee4-a63a-dc75ef76970f-kube-api-access-t629z\") pod \"redhat-marketplace-6kwgl\" (UID: \"958cd7a3-4aba-4ee4-a63a-dc75ef76970f\") " pod="openshift-marketplace/redhat-marketplace-6kwgl" Jan 27 18:48:05 crc kubenswrapper[4853]: I0127 18:48:05.743006 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6kwgl" Jan 27 18:48:05 crc kubenswrapper[4853]: I0127 18:48:05.920509 4853 generic.go:334] "Generic (PLEG): container finished" podID="4535e463-44ac-45f4-befb-6e68eae6e688" containerID="4d274394fa58353df69ad392af05cf72daae68a8f0895496fa11edc5bbfd0813" exitCode=0 Jan 27 18:48:05 crc kubenswrapper[4853]: I0127 18:48:05.920614 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tftml" event={"ID":"4535e463-44ac-45f4-befb-6e68eae6e688","Type":"ContainerDied","Data":"4d274394fa58353df69ad392af05cf72daae68a8f0895496fa11edc5bbfd0813"} Jan 27 18:48:05 crc kubenswrapper[4853]: I0127 18:48:05.920658 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tftml" event={"ID":"4535e463-44ac-45f4-befb-6e68eae6e688","Type":"ContainerStarted","Data":"b9913c9964012652919869c9ee9f372b4baa30a1c7773e168fc3f46dfcf86964"} Jan 27 18:48:05 crc kubenswrapper[4853]: I0127 18:48:05.926233 4853 generic.go:334] "Generic (PLEG): container finished" podID="4ccbf17f-6d23-4e6e-85e3-73c1275e767b" containerID="b7b67ee5303281437176633a7b3b43a30b37cff9d289b392a05807e0ff1b2c6e" exitCode=0 Jan 27 18:48:05 crc kubenswrapper[4853]: I0127 18:48:05.926289 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-l9rvb" event={"ID":"4ccbf17f-6d23-4e6e-85e3-73c1275e767b","Type":"ContainerDied","Data":"b7b67ee5303281437176633a7b3b43a30b37cff9d289b392a05807e0ff1b2c6e"} Jan 27 18:48:05 crc kubenswrapper[4853]: I0127 18:48:05.984830 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-6kwgl"] Jan 27 18:48:05 crc kubenswrapper[4853]: W0127 18:48:05.989335 4853 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod958cd7a3_4aba_4ee4_a63a_dc75ef76970f.slice/crio-769c94e75b3c5a2913123363ea3a90216a2cddbbc55a8c6220d0350dbed28e78 WatchSource:0}: Error finding container 769c94e75b3c5a2913123363ea3a90216a2cddbbc55a8c6220d0350dbed28e78: Status 404 returned error can't find the container with id 769c94e75b3c5a2913123363ea3a90216a2cddbbc55a8c6220d0350dbed28e78 Jan 27 18:48:06 crc kubenswrapper[4853]: I0127 18:48:06.934642 4853 generic.go:334] "Generic (PLEG): container finished" podID="958cd7a3-4aba-4ee4-a63a-dc75ef76970f" containerID="275ae3a7d34906f34aaa833d893d773215ebfcecfa69564368e40366b9ee54d3" exitCode=0 Jan 27 18:48:06 crc kubenswrapper[4853]: I0127 18:48:06.934771 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6kwgl" 
event={"ID":"958cd7a3-4aba-4ee4-a63a-dc75ef76970f","Type":"ContainerDied","Data":"275ae3a7d34906f34aaa833d893d773215ebfcecfa69564368e40366b9ee54d3"} Jan 27 18:48:06 crc kubenswrapper[4853]: I0127 18:48:06.935208 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6kwgl" event={"ID":"958cd7a3-4aba-4ee4-a63a-dc75ef76970f","Type":"ContainerStarted","Data":"769c94e75b3c5a2913123363ea3a90216a2cddbbc55a8c6220d0350dbed28e78"} Jan 27 18:48:06 crc kubenswrapper[4853]: I0127 18:48:06.937325 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tftml" event={"ID":"4535e463-44ac-45f4-befb-6e68eae6e688","Type":"ContainerStarted","Data":"275eb0d12909f8ff585f11ef2f5bf31422f2b5af44c01067a798f550a94363f9"} Jan 27 18:48:06 crc kubenswrapper[4853]: I0127 18:48:06.939389 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-l9rvb" event={"ID":"4ccbf17f-6d23-4e6e-85e3-73c1275e767b","Type":"ContainerStarted","Data":"440b2e52ad0a4d7aa176ec5b1ff5c88dbd4404eca59330ae32b6047adcb7d5c7"} Jan 27 18:48:06 crc kubenswrapper[4853]: I0127 18:48:06.944195 4853 generic.go:334] "Generic (PLEG): container finished" podID="376dda10-dbbe-4b02-ba77-def58ad1db42" containerID="dd71f5fe7d12ebda1034002fcba6860758b1e0f53ce5d352c87dcf7bb8cb2de0" exitCode=0 Jan 27 18:48:06 crc kubenswrapper[4853]: I0127 18:48:06.944244 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rndjv" event={"ID":"376dda10-dbbe-4b02-ba77-def58ad1db42","Type":"ContainerDied","Data":"dd71f5fe7d12ebda1034002fcba6860758b1e0f53ce5d352c87dcf7bb8cb2de0"} Jan 27 18:48:06 crc kubenswrapper[4853]: I0127 18:48:06.975961 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-l9rvb" podStartSLOduration=2.3188105820000002 podStartE2EDuration="4.97593813s" podCreationTimestamp="2026-01-27 18:48:02 +0000 UTC" firstStartedPulling="2026-01-27 18:48:03.900648023 +0000 UTC m=+326.363190906" lastFinishedPulling="2026-01-27 18:48:06.557775571 +0000 UTC m=+329.020318454" observedRunningTime="2026-01-27 18:48:06.974447187 +0000 UTC m=+329.436990070" watchObservedRunningTime="2026-01-27 18:48:06.97593813 +0000 UTC m=+329.438481013" Jan 27 18:48:07 crc kubenswrapper[4853]: I0127 18:48:07.952700 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rndjv" event={"ID":"376dda10-dbbe-4b02-ba77-def58ad1db42","Type":"ContainerStarted","Data":"6f5f1de35ba36dcfbc393ad843072d7f10c0b5eed06d43f05d14fb0452085cd8"} Jan 27 18:48:07 crc kubenswrapper[4853]: I0127 18:48:07.954666 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6kwgl" event={"ID":"958cd7a3-4aba-4ee4-a63a-dc75ef76970f","Type":"ContainerStarted","Data":"473cbb299500884e824d60dce27233cef4df157d2ca4e70b2289f7b6a88b4099"} Jan 27 18:48:07 crc kubenswrapper[4853]: I0127 18:48:07.956864 4853 generic.go:334] "Generic (PLEG): container finished" podID="4535e463-44ac-45f4-befb-6e68eae6e688" containerID="275eb0d12909f8ff585f11ef2f5bf31422f2b5af44c01067a798f550a94363f9" exitCode=0 Jan 27 18:48:07 crc kubenswrapper[4853]: I0127 18:48:07.956940 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tftml" event={"ID":"4535e463-44ac-45f4-befb-6e68eae6e688","Type":"ContainerDied","Data":"275eb0d12909f8ff585f11ef2f5bf31422f2b5af44c01067a798f550a94363f9"} Jan 
27 18:48:07 crc kubenswrapper[4853]: I0127 18:48:07.972098 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-rndjv" podStartSLOduration=2.457738067 podStartE2EDuration="4.972074865s" podCreationTimestamp="2026-01-27 18:48:03 +0000 UTC" firstStartedPulling="2026-01-27 18:48:04.917394899 +0000 UTC m=+327.379937782" lastFinishedPulling="2026-01-27 18:48:07.431731697 +0000 UTC m=+329.894274580" observedRunningTime="2026-01-27 18:48:07.970324425 +0000 UTC m=+330.432867308" watchObservedRunningTime="2026-01-27 18:48:07.972074865 +0000 UTC m=+330.434617768" Jan 27 18:48:08 crc kubenswrapper[4853]: I0127 18:48:08.963754 4853 generic.go:334] "Generic (PLEG): container finished" podID="958cd7a3-4aba-4ee4-a63a-dc75ef76970f" containerID="473cbb299500884e824d60dce27233cef4df157d2ca4e70b2289f7b6a88b4099" exitCode=0 Jan 27 18:48:08 crc kubenswrapper[4853]: I0127 18:48:08.963825 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6kwgl" event={"ID":"958cd7a3-4aba-4ee4-a63a-dc75ef76970f","Type":"ContainerDied","Data":"473cbb299500884e824d60dce27233cef4df157d2ca4e70b2289f7b6a88b4099"} Jan 27 18:48:08 crc kubenswrapper[4853]: I0127 18:48:08.965877 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tftml" event={"ID":"4535e463-44ac-45f4-befb-6e68eae6e688","Type":"ContainerStarted","Data":"d583ef9ed73baab8d39b580335ad7b554fcfece2ca4513c8a189334010ae2217"} Jan 27 18:48:09 crc kubenswrapper[4853]: I0127 18:48:09.006434 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-tftml" podStartSLOduration=2.354177919 podStartE2EDuration="5.006417516s" podCreationTimestamp="2026-01-27 18:48:04 +0000 UTC" firstStartedPulling="2026-01-27 18:48:05.921991948 +0000 UTC m=+328.384534831" lastFinishedPulling="2026-01-27 18:48:08.574231525 +0000 UTC m=+331.036774428" observedRunningTime="2026-01-27 18:48:09.005583692 +0000 UTC m=+331.468126575" watchObservedRunningTime="2026-01-27 18:48:09.006417516 +0000 UTC m=+331.468960399" Jan 27 18:48:10 crc kubenswrapper[4853]: I0127 18:48:10.979338 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6kwgl" event={"ID":"958cd7a3-4aba-4ee4-a63a-dc75ef76970f","Type":"ContainerStarted","Data":"486e2b7ef89d90fb8ba6d57948fa71dfce13db464c53eaaffb1a060322b0afa8"} Jan 27 18:48:12 crc kubenswrapper[4853]: I0127 18:48:12.798493 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-l9rvb" Jan 27 18:48:12 crc kubenswrapper[4853]: I0127 18:48:12.798563 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-l9rvb" Jan 27 18:48:12 crc kubenswrapper[4853]: I0127 18:48:12.854937 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-l9rvb" Jan 27 18:48:12 crc kubenswrapper[4853]: I0127 18:48:12.875870 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-6kwgl" podStartSLOduration=4.389075118 podStartE2EDuration="7.875847743s" podCreationTimestamp="2026-01-27 18:48:05 +0000 UTC" firstStartedPulling="2026-01-27 18:48:06.940820764 +0000 UTC m=+329.403363657" lastFinishedPulling="2026-01-27 18:48:10.427593399 +0000 UTC m=+332.890136282" observedRunningTime="2026-01-27 
18:48:11.018768774 +0000 UTC m=+333.481311647" watchObservedRunningTime="2026-01-27 18:48:12.875847743 +0000 UTC m=+335.338390626" Jan 27 18:48:13 crc kubenswrapper[4853]: I0127 18:48:13.040503 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-l9rvb" Jan 27 18:48:13 crc kubenswrapper[4853]: I0127 18:48:13.342975 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-rndjv" Jan 27 18:48:13 crc kubenswrapper[4853]: I0127 18:48:13.343620 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-rndjv" Jan 27 18:48:13 crc kubenswrapper[4853]: I0127 18:48:13.379257 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-rndjv" Jan 27 18:48:14 crc kubenswrapper[4853]: I0127 18:48:14.036201 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-rndjv" Jan 27 18:48:15 crc kubenswrapper[4853]: I0127 18:48:15.155029 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-tftml" Jan 27 18:48:15 crc kubenswrapper[4853]: I0127 18:48:15.155085 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-tftml" Jan 27 18:48:15 crc kubenswrapper[4853]: I0127 18:48:15.216764 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-tftml" Jan 27 18:48:15 crc kubenswrapper[4853]: I0127 18:48:15.743900 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-6kwgl" Jan 27 18:48:15 crc kubenswrapper[4853]: I0127 18:48:15.743956 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-6kwgl" Jan 27 18:48:15 crc kubenswrapper[4853]: I0127 18:48:15.779552 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-6kwgl" Jan 27 18:48:16 crc kubenswrapper[4853]: I0127 18:48:16.052135 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-6kwgl" Jan 27 18:48:16 crc kubenswrapper[4853]: I0127 18:48:16.053413 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-tftml" Jan 27 18:48:32 crc kubenswrapper[4853]: I0127 18:48:32.909378 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-rp448"] Jan 27 18:48:32 crc kubenswrapper[4853]: I0127 18:48:32.910618 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-rp448" Jan 27 18:48:32 crc kubenswrapper[4853]: I0127 18:48:32.919389 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-rp448"] Jan 27 18:48:33 crc kubenswrapper[4853]: I0127 18:48:33.042468 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/bb8eadc8-9579-4c35-94b2-43312aa43ae8-registry-tls\") pod \"image-registry-66df7c8f76-rp448\" (UID: \"bb8eadc8-9579-4c35-94b2-43312aa43ae8\") " pod="openshift-image-registry/image-registry-66df7c8f76-rp448" Jan 27 18:48:33 crc kubenswrapper[4853]: I0127 18:48:33.042626 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-rp448\" (UID: \"bb8eadc8-9579-4c35-94b2-43312aa43ae8\") " pod="openshift-image-registry/image-registry-66df7c8f76-rp448" Jan 27 18:48:33 crc kubenswrapper[4853]: I0127 18:48:33.042681 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/bb8eadc8-9579-4c35-94b2-43312aa43ae8-ca-trust-extracted\") pod \"image-registry-66df7c8f76-rp448\" (UID: \"bb8eadc8-9579-4c35-94b2-43312aa43ae8\") " pod="openshift-image-registry/image-registry-66df7c8f76-rp448" Jan 27 18:48:33 crc kubenswrapper[4853]: I0127 18:48:33.042704 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/bb8eadc8-9579-4c35-94b2-43312aa43ae8-registry-certificates\") pod \"image-registry-66df7c8f76-rp448\" (UID: \"bb8eadc8-9579-4c35-94b2-43312aa43ae8\") " pod="openshift-image-registry/image-registry-66df7c8f76-rp448" Jan 27 18:48:33 crc kubenswrapper[4853]: I0127 18:48:33.042730 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-szqjm\" (UniqueName: \"kubernetes.io/projected/bb8eadc8-9579-4c35-94b2-43312aa43ae8-kube-api-access-szqjm\") pod \"image-registry-66df7c8f76-rp448\" (UID: \"bb8eadc8-9579-4c35-94b2-43312aa43ae8\") " pod="openshift-image-registry/image-registry-66df7c8f76-rp448" Jan 27 18:48:33 crc kubenswrapper[4853]: I0127 18:48:33.042764 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bb8eadc8-9579-4c35-94b2-43312aa43ae8-trusted-ca\") pod \"image-registry-66df7c8f76-rp448\" (UID: \"bb8eadc8-9579-4c35-94b2-43312aa43ae8\") " pod="openshift-image-registry/image-registry-66df7c8f76-rp448" Jan 27 18:48:33 crc kubenswrapper[4853]: I0127 18:48:33.042784 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bb8eadc8-9579-4c35-94b2-43312aa43ae8-bound-sa-token\") pod \"image-registry-66df7c8f76-rp448\" (UID: \"bb8eadc8-9579-4c35-94b2-43312aa43ae8\") " pod="openshift-image-registry/image-registry-66df7c8f76-rp448" Jan 27 18:48:33 crc kubenswrapper[4853]: I0127 18:48:33.042865 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: 
\"kubernetes.io/secret/bb8eadc8-9579-4c35-94b2-43312aa43ae8-installation-pull-secrets\") pod \"image-registry-66df7c8f76-rp448\" (UID: \"bb8eadc8-9579-4c35-94b2-43312aa43ae8\") " pod="openshift-image-registry/image-registry-66df7c8f76-rp448" Jan 27 18:48:33 crc kubenswrapper[4853]: I0127 18:48:33.063978 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-rp448\" (UID: \"bb8eadc8-9579-4c35-94b2-43312aa43ae8\") " pod="openshift-image-registry/image-registry-66df7c8f76-rp448" Jan 27 18:48:33 crc kubenswrapper[4853]: I0127 18:48:33.144329 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/bb8eadc8-9579-4c35-94b2-43312aa43ae8-registry-tls\") pod \"image-registry-66df7c8f76-rp448\" (UID: \"bb8eadc8-9579-4c35-94b2-43312aa43ae8\") " pod="openshift-image-registry/image-registry-66df7c8f76-rp448" Jan 27 18:48:33 crc kubenswrapper[4853]: I0127 18:48:33.144415 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/bb8eadc8-9579-4c35-94b2-43312aa43ae8-ca-trust-extracted\") pod \"image-registry-66df7c8f76-rp448\" (UID: \"bb8eadc8-9579-4c35-94b2-43312aa43ae8\") " pod="openshift-image-registry/image-registry-66df7c8f76-rp448" Jan 27 18:48:33 crc kubenswrapper[4853]: I0127 18:48:33.144433 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/bb8eadc8-9579-4c35-94b2-43312aa43ae8-registry-certificates\") pod \"image-registry-66df7c8f76-rp448\" (UID: \"bb8eadc8-9579-4c35-94b2-43312aa43ae8\") " pod="openshift-image-registry/image-registry-66df7c8f76-rp448" Jan 27 18:48:33 crc kubenswrapper[4853]: I0127 18:48:33.144456 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-szqjm\" (UniqueName: \"kubernetes.io/projected/bb8eadc8-9579-4c35-94b2-43312aa43ae8-kube-api-access-szqjm\") pod \"image-registry-66df7c8f76-rp448\" (UID: \"bb8eadc8-9579-4c35-94b2-43312aa43ae8\") " pod="openshift-image-registry/image-registry-66df7c8f76-rp448" Jan 27 18:48:33 crc kubenswrapper[4853]: I0127 18:48:33.144475 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bb8eadc8-9579-4c35-94b2-43312aa43ae8-trusted-ca\") pod \"image-registry-66df7c8f76-rp448\" (UID: \"bb8eadc8-9579-4c35-94b2-43312aa43ae8\") " pod="openshift-image-registry/image-registry-66df7c8f76-rp448" Jan 27 18:48:33 crc kubenswrapper[4853]: I0127 18:48:33.144489 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bb8eadc8-9579-4c35-94b2-43312aa43ae8-bound-sa-token\") pod \"image-registry-66df7c8f76-rp448\" (UID: \"bb8eadc8-9579-4c35-94b2-43312aa43ae8\") " pod="openshift-image-registry/image-registry-66df7c8f76-rp448" Jan 27 18:48:33 crc kubenswrapper[4853]: I0127 18:48:33.144505 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/bb8eadc8-9579-4c35-94b2-43312aa43ae8-installation-pull-secrets\") pod \"image-registry-66df7c8f76-rp448\" (UID: \"bb8eadc8-9579-4c35-94b2-43312aa43ae8\") " 
pod="openshift-image-registry/image-registry-66df7c8f76-rp448" Jan 27 18:48:33 crc kubenswrapper[4853]: I0127 18:48:33.146137 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/bb8eadc8-9579-4c35-94b2-43312aa43ae8-ca-trust-extracted\") pod \"image-registry-66df7c8f76-rp448\" (UID: \"bb8eadc8-9579-4c35-94b2-43312aa43ae8\") " pod="openshift-image-registry/image-registry-66df7c8f76-rp448" Jan 27 18:48:33 crc kubenswrapper[4853]: I0127 18:48:33.146503 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/bb8eadc8-9579-4c35-94b2-43312aa43ae8-registry-certificates\") pod \"image-registry-66df7c8f76-rp448\" (UID: \"bb8eadc8-9579-4c35-94b2-43312aa43ae8\") " pod="openshift-image-registry/image-registry-66df7c8f76-rp448" Jan 27 18:48:33 crc kubenswrapper[4853]: I0127 18:48:33.147107 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bb8eadc8-9579-4c35-94b2-43312aa43ae8-trusted-ca\") pod \"image-registry-66df7c8f76-rp448\" (UID: \"bb8eadc8-9579-4c35-94b2-43312aa43ae8\") " pod="openshift-image-registry/image-registry-66df7c8f76-rp448" Jan 27 18:48:33 crc kubenswrapper[4853]: I0127 18:48:33.150041 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/bb8eadc8-9579-4c35-94b2-43312aa43ae8-installation-pull-secrets\") pod \"image-registry-66df7c8f76-rp448\" (UID: \"bb8eadc8-9579-4c35-94b2-43312aa43ae8\") " pod="openshift-image-registry/image-registry-66df7c8f76-rp448" Jan 27 18:48:33 crc kubenswrapper[4853]: I0127 18:48:33.150086 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/bb8eadc8-9579-4c35-94b2-43312aa43ae8-registry-tls\") pod \"image-registry-66df7c8f76-rp448\" (UID: \"bb8eadc8-9579-4c35-94b2-43312aa43ae8\") " pod="openshift-image-registry/image-registry-66df7c8f76-rp448" Jan 27 18:48:33 crc kubenswrapper[4853]: I0127 18:48:33.162136 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bb8eadc8-9579-4c35-94b2-43312aa43ae8-bound-sa-token\") pod \"image-registry-66df7c8f76-rp448\" (UID: \"bb8eadc8-9579-4c35-94b2-43312aa43ae8\") " pod="openshift-image-registry/image-registry-66df7c8f76-rp448" Jan 27 18:48:33 crc kubenswrapper[4853]: I0127 18:48:33.162348 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-szqjm\" (UniqueName: \"kubernetes.io/projected/bb8eadc8-9579-4c35-94b2-43312aa43ae8-kube-api-access-szqjm\") pod \"image-registry-66df7c8f76-rp448\" (UID: \"bb8eadc8-9579-4c35-94b2-43312aa43ae8\") " pod="openshift-image-registry/image-registry-66df7c8f76-rp448" Jan 27 18:48:33 crc kubenswrapper[4853]: I0127 18:48:33.224966 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-rp448" Jan 27 18:48:33 crc kubenswrapper[4853]: I0127 18:48:33.635747 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-rp448"] Jan 27 18:48:34 crc kubenswrapper[4853]: I0127 18:48:34.120037 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-rp448" Jan 27 18:48:34 crc kubenswrapper[4853]: I0127 18:48:34.120085 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-rp448" event={"ID":"bb8eadc8-9579-4c35-94b2-43312aa43ae8","Type":"ContainerStarted","Data":"ac7489a18cbc4609647072b17780495848c4bb7cbb2e58dadba7a56247446b9f"} Jan 27 18:48:34 crc kubenswrapper[4853]: I0127 18:48:34.120107 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-rp448" event={"ID":"bb8eadc8-9579-4c35-94b2-43312aa43ae8","Type":"ContainerStarted","Data":"54c004cea7b7ba9f2e9d2c75b6a73818992ce6a24f37460d07ee9025c8e465e8"} Jan 27 18:48:34 crc kubenswrapper[4853]: I0127 18:48:34.141568 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66df7c8f76-rp448" podStartSLOduration=2.141551028 podStartE2EDuration="2.141551028s" podCreationTimestamp="2026-01-27 18:48:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:48:34.138147451 +0000 UTC m=+356.600690334" watchObservedRunningTime="2026-01-27 18:48:34.141551028 +0000 UTC m=+356.604093911" Jan 27 18:48:35 crc kubenswrapper[4853]: I0127 18:48:35.541005 4853 patch_prober.go:28] interesting pod/machine-config-daemon-6gqj2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 18:48:35 crc kubenswrapper[4853]: I0127 18:48:35.541317 4853 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6gqj2" podUID="b8a89b1e-bef8-4cb7-930c-480d3125778c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 18:48:44 crc kubenswrapper[4853]: I0127 18:48:44.829358 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-58bf4dd977-zq87r"] Jan 27 18:48:44 crc kubenswrapper[4853]: I0127 18:48:44.830551 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-58bf4dd977-zq87r" podUID="f3424dd8-3475-40d4-b494-0d67eb7a4023" containerName="controller-manager" containerID="cri-o://612e89db5fac8845eeb09dba6787ba2c3ec50b32ef149c1bf6cef131bd54693a" gracePeriod=30 Jan 27 18:48:44 crc kubenswrapper[4853]: I0127 18:48:44.896155 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7cbb74dfd9-h75kv"] Jan 27 18:48:44 crc kubenswrapper[4853]: I0127 18:48:44.897210 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-7cbb74dfd9-h75kv" podUID="e7a8b2f2-2cb3-450f-98e4-78ceaf39e146" 
containerName="route-controller-manager" containerID="cri-o://5d65f6a959fce123b8c1263dfaf6c4e23d8b39d23547f6a24d4f72c5ae677332" gracePeriod=30 Jan 27 18:48:45 crc kubenswrapper[4853]: I0127 18:48:45.192365 4853 generic.go:334] "Generic (PLEG): container finished" podID="f3424dd8-3475-40d4-b494-0d67eb7a4023" containerID="612e89db5fac8845eeb09dba6787ba2c3ec50b32ef149c1bf6cef131bd54693a" exitCode=0 Jan 27 18:48:45 crc kubenswrapper[4853]: I0127 18:48:45.192483 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-58bf4dd977-zq87r" event={"ID":"f3424dd8-3475-40d4-b494-0d67eb7a4023","Type":"ContainerDied","Data":"612e89db5fac8845eeb09dba6787ba2c3ec50b32ef149c1bf6cef131bd54693a"} Jan 27 18:48:45 crc kubenswrapper[4853]: I0127 18:48:45.195663 4853 generic.go:334] "Generic (PLEG): container finished" podID="e7a8b2f2-2cb3-450f-98e4-78ceaf39e146" containerID="5d65f6a959fce123b8c1263dfaf6c4e23d8b39d23547f6a24d4f72c5ae677332" exitCode=0 Jan 27 18:48:45 crc kubenswrapper[4853]: I0127 18:48:45.195839 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7cbb74dfd9-h75kv" event={"ID":"e7a8b2f2-2cb3-450f-98e4-78ceaf39e146","Type":"ContainerDied","Data":"5d65f6a959fce123b8c1263dfaf6c4e23d8b39d23547f6a24d4f72c5ae677332"} Jan 27 18:48:45 crc kubenswrapper[4853]: I0127 18:48:45.256152 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-58bf4dd977-zq87r" Jan 27 18:48:45 crc kubenswrapper[4853]: I0127 18:48:45.311698 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7cbb74dfd9-h75kv" Jan 27 18:48:45 crc kubenswrapper[4853]: I0127 18:48:45.415940 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sl9ln\" (UniqueName: \"kubernetes.io/projected/f3424dd8-3475-40d4-b494-0d67eb7a4023-kube-api-access-sl9ln\") pod \"f3424dd8-3475-40d4-b494-0d67eb7a4023\" (UID: \"f3424dd8-3475-40d4-b494-0d67eb7a4023\") " Jan 27 18:48:45 crc kubenswrapper[4853]: I0127 18:48:45.416007 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gkjj8\" (UniqueName: \"kubernetes.io/projected/e7a8b2f2-2cb3-450f-98e4-78ceaf39e146-kube-api-access-gkjj8\") pod \"e7a8b2f2-2cb3-450f-98e4-78ceaf39e146\" (UID: \"e7a8b2f2-2cb3-450f-98e4-78ceaf39e146\") " Jan 27 18:48:45 crc kubenswrapper[4853]: I0127 18:48:45.416031 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7a8b2f2-2cb3-450f-98e4-78ceaf39e146-serving-cert\") pod \"e7a8b2f2-2cb3-450f-98e4-78ceaf39e146\" (UID: \"e7a8b2f2-2cb3-450f-98e4-78ceaf39e146\") " Jan 27 18:48:45 crc kubenswrapper[4853]: I0127 18:48:45.416063 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7a8b2f2-2cb3-450f-98e4-78ceaf39e146-config\") pod \"e7a8b2f2-2cb3-450f-98e4-78ceaf39e146\" (UID: \"e7a8b2f2-2cb3-450f-98e4-78ceaf39e146\") " Jan 27 18:48:45 crc kubenswrapper[4853]: I0127 18:48:45.416091 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e7a8b2f2-2cb3-450f-98e4-78ceaf39e146-client-ca\") pod \"e7a8b2f2-2cb3-450f-98e4-78ceaf39e146\" (UID: \"e7a8b2f2-2cb3-450f-98e4-78ceaf39e146\") " 
Jan 27 18:48:45 crc kubenswrapper[4853]: I0127 18:48:45.416135 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f3424dd8-3475-40d4-b494-0d67eb7a4023-config\") pod \"f3424dd8-3475-40d4-b494-0d67eb7a4023\" (UID: \"f3424dd8-3475-40d4-b494-0d67eb7a4023\") " Jan 27 18:48:45 crc kubenswrapper[4853]: I0127 18:48:45.416154 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f3424dd8-3475-40d4-b494-0d67eb7a4023-serving-cert\") pod \"f3424dd8-3475-40d4-b494-0d67eb7a4023\" (UID: \"f3424dd8-3475-40d4-b494-0d67eb7a4023\") " Jan 27 18:48:45 crc kubenswrapper[4853]: I0127 18:48:45.416195 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f3424dd8-3475-40d4-b494-0d67eb7a4023-client-ca\") pod \"f3424dd8-3475-40d4-b494-0d67eb7a4023\" (UID: \"f3424dd8-3475-40d4-b494-0d67eb7a4023\") " Jan 27 18:48:45 crc kubenswrapper[4853]: I0127 18:48:45.416242 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f3424dd8-3475-40d4-b494-0d67eb7a4023-proxy-ca-bundles\") pod \"f3424dd8-3475-40d4-b494-0d67eb7a4023\" (UID: \"f3424dd8-3475-40d4-b494-0d67eb7a4023\") " Jan 27 18:48:45 crc kubenswrapper[4853]: I0127 18:48:45.417287 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f3424dd8-3475-40d4-b494-0d67eb7a4023-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "f3424dd8-3475-40d4-b494-0d67eb7a4023" (UID: "f3424dd8-3475-40d4-b494-0d67eb7a4023"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:48:45 crc kubenswrapper[4853]: I0127 18:48:45.417425 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f3424dd8-3475-40d4-b494-0d67eb7a4023-client-ca" (OuterVolumeSpecName: "client-ca") pod "f3424dd8-3475-40d4-b494-0d67eb7a4023" (UID: "f3424dd8-3475-40d4-b494-0d67eb7a4023"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:48:45 crc kubenswrapper[4853]: I0127 18:48:45.418058 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f3424dd8-3475-40d4-b494-0d67eb7a4023-config" (OuterVolumeSpecName: "config") pod "f3424dd8-3475-40d4-b494-0d67eb7a4023" (UID: "f3424dd8-3475-40d4-b494-0d67eb7a4023"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:48:45 crc kubenswrapper[4853]: I0127 18:48:45.418153 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7a8b2f2-2cb3-450f-98e4-78ceaf39e146-client-ca" (OuterVolumeSpecName: "client-ca") pod "e7a8b2f2-2cb3-450f-98e4-78ceaf39e146" (UID: "e7a8b2f2-2cb3-450f-98e4-78ceaf39e146"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:48:45 crc kubenswrapper[4853]: I0127 18:48:45.418321 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7a8b2f2-2cb3-450f-98e4-78ceaf39e146-config" (OuterVolumeSpecName: "config") pod "e7a8b2f2-2cb3-450f-98e4-78ceaf39e146" (UID: "e7a8b2f2-2cb3-450f-98e4-78ceaf39e146"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:48:45 crc kubenswrapper[4853]: I0127 18:48:45.423036 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f3424dd8-3475-40d4-b494-0d67eb7a4023-kube-api-access-sl9ln" (OuterVolumeSpecName: "kube-api-access-sl9ln") pod "f3424dd8-3475-40d4-b494-0d67eb7a4023" (UID: "f3424dd8-3475-40d4-b494-0d67eb7a4023"). InnerVolumeSpecName "kube-api-access-sl9ln". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:48:45 crc kubenswrapper[4853]: I0127 18:48:45.423102 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7a8b2f2-2cb3-450f-98e4-78ceaf39e146-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7a8b2f2-2cb3-450f-98e4-78ceaf39e146" (UID: "e7a8b2f2-2cb3-450f-98e4-78ceaf39e146"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:48:45 crc kubenswrapper[4853]: I0127 18:48:45.423141 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7a8b2f2-2cb3-450f-98e4-78ceaf39e146-kube-api-access-gkjj8" (OuterVolumeSpecName: "kube-api-access-gkjj8") pod "e7a8b2f2-2cb3-450f-98e4-78ceaf39e146" (UID: "e7a8b2f2-2cb3-450f-98e4-78ceaf39e146"). InnerVolumeSpecName "kube-api-access-gkjj8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:48:45 crc kubenswrapper[4853]: I0127 18:48:45.425516 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f3424dd8-3475-40d4-b494-0d67eb7a4023-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "f3424dd8-3475-40d4-b494-0d67eb7a4023" (UID: "f3424dd8-3475-40d4-b494-0d67eb7a4023"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:48:45 crc kubenswrapper[4853]: I0127 18:48:45.517425 4853 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f3424dd8-3475-40d4-b494-0d67eb7a4023-client-ca\") on node \"crc\" DevicePath \"\"" Jan 27 18:48:45 crc kubenswrapper[4853]: I0127 18:48:45.517464 4853 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f3424dd8-3475-40d4-b494-0d67eb7a4023-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 27 18:48:45 crc kubenswrapper[4853]: I0127 18:48:45.517474 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sl9ln\" (UniqueName: \"kubernetes.io/projected/f3424dd8-3475-40d4-b494-0d67eb7a4023-kube-api-access-sl9ln\") on node \"crc\" DevicePath \"\"" Jan 27 18:48:45 crc kubenswrapper[4853]: I0127 18:48:45.517483 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gkjj8\" (UniqueName: \"kubernetes.io/projected/e7a8b2f2-2cb3-450f-98e4-78ceaf39e146-kube-api-access-gkjj8\") on node \"crc\" DevicePath \"\"" Jan 27 18:48:45 crc kubenswrapper[4853]: I0127 18:48:45.517491 4853 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7a8b2f2-2cb3-450f-98e4-78ceaf39e146-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 18:48:45 crc kubenswrapper[4853]: I0127 18:48:45.517502 4853 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7a8b2f2-2cb3-450f-98e4-78ceaf39e146-config\") on node \"crc\" DevicePath \"\"" Jan 27 18:48:45 crc kubenswrapper[4853]: I0127 18:48:45.517509 4853 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e7a8b2f2-2cb3-450f-98e4-78ceaf39e146-client-ca\") on node \"crc\" DevicePath \"\"" Jan 27 18:48:45 crc kubenswrapper[4853]: I0127 18:48:45.517517 4853 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f3424dd8-3475-40d4-b494-0d67eb7a4023-config\") on node \"crc\" DevicePath \"\"" Jan 27 18:48:45 crc kubenswrapper[4853]: I0127 18:48:45.517526 4853 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f3424dd8-3475-40d4-b494-0d67eb7a4023-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 18:48:46 crc kubenswrapper[4853]: I0127 18:48:46.206383 4853 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-58bf4dd977-zq87r" Jan 27 18:48:46 crc kubenswrapper[4853]: I0127 18:48:46.206332 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-58bf4dd977-zq87r" event={"ID":"f3424dd8-3475-40d4-b494-0d67eb7a4023","Type":"ContainerDied","Data":"a788241f636e5f908495a921a5957e9729e7e700047ce3dac12db9c4e66fa14d"} Jan 27 18:48:46 crc kubenswrapper[4853]: I0127 18:48:46.207162 4853 scope.go:117] "RemoveContainer" containerID="612e89db5fac8845eeb09dba6787ba2c3ec50b32ef149c1bf6cef131bd54693a" Jan 27 18:48:46 crc kubenswrapper[4853]: I0127 18:48:46.209437 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7cbb74dfd9-h75kv" event={"ID":"e7a8b2f2-2cb3-450f-98e4-78ceaf39e146","Type":"ContainerDied","Data":"1c0e71d78e4852cd8d150ee607da693fc1d5b768c921cbf761f062b3dc6506de"} Jan 27 18:48:46 crc kubenswrapper[4853]: I0127 18:48:46.209518 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7cbb74dfd9-h75kv" Jan 27 18:48:46 crc kubenswrapper[4853]: I0127 18:48:46.231735 4853 scope.go:117] "RemoveContainer" containerID="5d65f6a959fce123b8c1263dfaf6c4e23d8b39d23547f6a24d4f72c5ae677332" Jan 27 18:48:46 crc kubenswrapper[4853]: I0127 18:48:46.237396 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-58bf4dd977-zq87r"] Jan 27 18:48:46 crc kubenswrapper[4853]: I0127 18:48:46.240496 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-58bf4dd977-zq87r"] Jan 27 18:48:46 crc kubenswrapper[4853]: I0127 18:48:46.253331 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7cbb74dfd9-h75kv"] Jan 27 18:48:46 crc kubenswrapper[4853]: I0127 18:48:46.257273 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7cbb74dfd9-h75kv"] Jan 27 18:48:46 crc kubenswrapper[4853]: I0127 18:48:46.526735 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-777d4874d5-d2j92"] Jan 27 18:48:46 crc kubenswrapper[4853]: E0127 18:48:46.527776 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e7a8b2f2-2cb3-450f-98e4-78ceaf39e146" containerName="route-controller-manager" Jan 27 18:48:46 crc kubenswrapper[4853]: I0127 18:48:46.527804 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="e7a8b2f2-2cb3-450f-98e4-78ceaf39e146" containerName="route-controller-manager" Jan 27 18:48:46 crc kubenswrapper[4853]: E0127 18:48:46.527822 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f3424dd8-3475-40d4-b494-0d67eb7a4023" containerName="controller-manager" Jan 27 18:48:46 crc kubenswrapper[4853]: I0127 18:48:46.527832 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="f3424dd8-3475-40d4-b494-0d67eb7a4023" containerName="controller-manager" Jan 27 18:48:46 crc kubenswrapper[4853]: I0127 18:48:46.528011 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="f3424dd8-3475-40d4-b494-0d67eb7a4023" containerName="controller-manager" Jan 27 18:48:46 crc kubenswrapper[4853]: I0127 18:48:46.528044 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="e7a8b2f2-2cb3-450f-98e4-78ceaf39e146" 
containerName="route-controller-manager" Jan 27 18:48:46 crc kubenswrapper[4853]: I0127 18:48:46.528796 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-777d4874d5-d2j92" Jan 27 18:48:46 crc kubenswrapper[4853]: I0127 18:48:46.530804 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-857458fcf9-lkdlj"] Jan 27 18:48:46 crc kubenswrapper[4853]: I0127 18:48:46.531232 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 27 18:48:46 crc kubenswrapper[4853]: I0127 18:48:46.531780 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-857458fcf9-lkdlj" Jan 27 18:48:46 crc kubenswrapper[4853]: I0127 18:48:46.534379 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 27 18:48:46 crc kubenswrapper[4853]: I0127 18:48:46.534533 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 27 18:48:46 crc kubenswrapper[4853]: I0127 18:48:46.534581 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 27 18:48:46 crc kubenswrapper[4853]: I0127 18:48:46.534577 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 27 18:48:46 crc kubenswrapper[4853]: I0127 18:48:46.535027 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 27 18:48:46 crc kubenswrapper[4853]: I0127 18:48:46.535219 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 27 18:48:46 crc kubenswrapper[4853]: I0127 18:48:46.535441 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 27 18:48:46 crc kubenswrapper[4853]: I0127 18:48:46.538245 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 27 18:48:46 crc kubenswrapper[4853]: I0127 18:48:46.538273 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 27 18:48:46 crc kubenswrapper[4853]: I0127 18:48:46.538612 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 27 18:48:46 crc kubenswrapper[4853]: I0127 18:48:46.538614 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 27 18:48:46 crc kubenswrapper[4853]: I0127 18:48:46.543349 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 27 18:48:46 crc kubenswrapper[4853]: I0127 18:48:46.548768 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-777d4874d5-d2j92"] Jan 27 18:48:46 crc kubenswrapper[4853]: I0127 18:48:46.552154 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-857458fcf9-lkdlj"] Jan 27 18:48:46 crc kubenswrapper[4853]: I0127 
18:48:46.633573 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b3bd0900-8efb-4678-9b05-395c21e192df-config\") pod \"route-controller-manager-857458fcf9-lkdlj\" (UID: \"b3bd0900-8efb-4678-9b05-395c21e192df\") " pod="openshift-route-controller-manager/route-controller-manager-857458fcf9-lkdlj" Jan 27 18:48:46 crc kubenswrapper[4853]: I0127 18:48:46.633865 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b3bd0900-8efb-4678-9b05-395c21e192df-serving-cert\") pod \"route-controller-manager-857458fcf9-lkdlj\" (UID: \"b3bd0900-8efb-4678-9b05-395c21e192df\") " pod="openshift-route-controller-manager/route-controller-manager-857458fcf9-lkdlj" Jan 27 18:48:46 crc kubenswrapper[4853]: I0127 18:48:46.633976 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/db5aee63-36d0-4721-bee1-b2d730b53387-proxy-ca-bundles\") pod \"controller-manager-777d4874d5-d2j92\" (UID: \"db5aee63-36d0-4721-bee1-b2d730b53387\") " pod="openshift-controller-manager/controller-manager-777d4874d5-d2j92" Jan 27 18:48:46 crc kubenswrapper[4853]: I0127 18:48:46.634044 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b3bd0900-8efb-4678-9b05-395c21e192df-client-ca\") pod \"route-controller-manager-857458fcf9-lkdlj\" (UID: \"b3bd0900-8efb-4678-9b05-395c21e192df\") " pod="openshift-route-controller-manager/route-controller-manager-857458fcf9-lkdlj" Jan 27 18:48:46 crc kubenswrapper[4853]: I0127 18:48:46.634140 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/db5aee63-36d0-4721-bee1-b2d730b53387-client-ca\") pod \"controller-manager-777d4874d5-d2j92\" (UID: \"db5aee63-36d0-4721-bee1-b2d730b53387\") " pod="openshift-controller-manager/controller-manager-777d4874d5-d2j92" Jan 27 18:48:46 crc kubenswrapper[4853]: I0127 18:48:46.634220 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/db5aee63-36d0-4721-bee1-b2d730b53387-serving-cert\") pod \"controller-manager-777d4874d5-d2j92\" (UID: \"db5aee63-36d0-4721-bee1-b2d730b53387\") " pod="openshift-controller-manager/controller-manager-777d4874d5-d2j92" Jan 27 18:48:46 crc kubenswrapper[4853]: I0127 18:48:46.634374 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zcl49\" (UniqueName: \"kubernetes.io/projected/b3bd0900-8efb-4678-9b05-395c21e192df-kube-api-access-zcl49\") pod \"route-controller-manager-857458fcf9-lkdlj\" (UID: \"b3bd0900-8efb-4678-9b05-395c21e192df\") " pod="openshift-route-controller-manager/route-controller-manager-857458fcf9-lkdlj" Jan 27 18:48:46 crc kubenswrapper[4853]: I0127 18:48:46.634454 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/db5aee63-36d0-4721-bee1-b2d730b53387-config\") pod \"controller-manager-777d4874d5-d2j92\" (UID: \"db5aee63-36d0-4721-bee1-b2d730b53387\") " pod="openshift-controller-manager/controller-manager-777d4874d5-d2j92" Jan 27 18:48:46 crc kubenswrapper[4853]: I0127 
18:48:46.634517 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xsdst\" (UniqueName: \"kubernetes.io/projected/db5aee63-36d0-4721-bee1-b2d730b53387-kube-api-access-xsdst\") pod \"controller-manager-777d4874d5-d2j92\" (UID: \"db5aee63-36d0-4721-bee1-b2d730b53387\") " pod="openshift-controller-manager/controller-manager-777d4874d5-d2j92" Jan 27 18:48:46 crc kubenswrapper[4853]: I0127 18:48:46.735974 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/db5aee63-36d0-4721-bee1-b2d730b53387-client-ca\") pod \"controller-manager-777d4874d5-d2j92\" (UID: \"db5aee63-36d0-4721-bee1-b2d730b53387\") " pod="openshift-controller-manager/controller-manager-777d4874d5-d2j92" Jan 27 18:48:46 crc kubenswrapper[4853]: I0127 18:48:46.736062 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/db5aee63-36d0-4721-bee1-b2d730b53387-serving-cert\") pod \"controller-manager-777d4874d5-d2j92\" (UID: \"db5aee63-36d0-4721-bee1-b2d730b53387\") " pod="openshift-controller-manager/controller-manager-777d4874d5-d2j92" Jan 27 18:48:46 crc kubenswrapper[4853]: I0127 18:48:46.736203 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zcl49\" (UniqueName: \"kubernetes.io/projected/b3bd0900-8efb-4678-9b05-395c21e192df-kube-api-access-zcl49\") pod \"route-controller-manager-857458fcf9-lkdlj\" (UID: \"b3bd0900-8efb-4678-9b05-395c21e192df\") " pod="openshift-route-controller-manager/route-controller-manager-857458fcf9-lkdlj" Jan 27 18:48:46 crc kubenswrapper[4853]: I0127 18:48:46.736238 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/db5aee63-36d0-4721-bee1-b2d730b53387-config\") pod \"controller-manager-777d4874d5-d2j92\" (UID: \"db5aee63-36d0-4721-bee1-b2d730b53387\") " pod="openshift-controller-manager/controller-manager-777d4874d5-d2j92" Jan 27 18:48:46 crc kubenswrapper[4853]: I0127 18:48:46.736266 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xsdst\" (UniqueName: \"kubernetes.io/projected/db5aee63-36d0-4721-bee1-b2d730b53387-kube-api-access-xsdst\") pod \"controller-manager-777d4874d5-d2j92\" (UID: \"db5aee63-36d0-4721-bee1-b2d730b53387\") " pod="openshift-controller-manager/controller-manager-777d4874d5-d2j92" Jan 27 18:48:46 crc kubenswrapper[4853]: I0127 18:48:46.736317 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b3bd0900-8efb-4678-9b05-395c21e192df-config\") pod \"route-controller-manager-857458fcf9-lkdlj\" (UID: \"b3bd0900-8efb-4678-9b05-395c21e192df\") " pod="openshift-route-controller-manager/route-controller-manager-857458fcf9-lkdlj" Jan 27 18:48:46 crc kubenswrapper[4853]: I0127 18:48:46.736341 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b3bd0900-8efb-4678-9b05-395c21e192df-serving-cert\") pod \"route-controller-manager-857458fcf9-lkdlj\" (UID: \"b3bd0900-8efb-4678-9b05-395c21e192df\") " pod="openshift-route-controller-manager/route-controller-manager-857458fcf9-lkdlj" Jan 27 18:48:46 crc kubenswrapper[4853]: I0127 18:48:46.736369 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" 
(UniqueName: \"kubernetes.io/configmap/db5aee63-36d0-4721-bee1-b2d730b53387-proxy-ca-bundles\") pod \"controller-manager-777d4874d5-d2j92\" (UID: \"db5aee63-36d0-4721-bee1-b2d730b53387\") " pod="openshift-controller-manager/controller-manager-777d4874d5-d2j92" Jan 27 18:48:46 crc kubenswrapper[4853]: I0127 18:48:46.736397 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b3bd0900-8efb-4678-9b05-395c21e192df-client-ca\") pod \"route-controller-manager-857458fcf9-lkdlj\" (UID: \"b3bd0900-8efb-4678-9b05-395c21e192df\") " pod="openshift-route-controller-manager/route-controller-manager-857458fcf9-lkdlj" Jan 27 18:48:46 crc kubenswrapper[4853]: I0127 18:48:46.737767 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b3bd0900-8efb-4678-9b05-395c21e192df-client-ca\") pod \"route-controller-manager-857458fcf9-lkdlj\" (UID: \"b3bd0900-8efb-4678-9b05-395c21e192df\") " pod="openshift-route-controller-manager/route-controller-manager-857458fcf9-lkdlj" Jan 27 18:48:46 crc kubenswrapper[4853]: I0127 18:48:46.737925 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/db5aee63-36d0-4721-bee1-b2d730b53387-proxy-ca-bundles\") pod \"controller-manager-777d4874d5-d2j92\" (UID: \"db5aee63-36d0-4721-bee1-b2d730b53387\") " pod="openshift-controller-manager/controller-manager-777d4874d5-d2j92" Jan 27 18:48:46 crc kubenswrapper[4853]: I0127 18:48:46.738032 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b3bd0900-8efb-4678-9b05-395c21e192df-config\") pod \"route-controller-manager-857458fcf9-lkdlj\" (UID: \"b3bd0900-8efb-4678-9b05-395c21e192df\") " pod="openshift-route-controller-manager/route-controller-manager-857458fcf9-lkdlj" Jan 27 18:48:46 crc kubenswrapper[4853]: I0127 18:48:46.738398 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/db5aee63-36d0-4721-bee1-b2d730b53387-client-ca\") pod \"controller-manager-777d4874d5-d2j92\" (UID: \"db5aee63-36d0-4721-bee1-b2d730b53387\") " pod="openshift-controller-manager/controller-manager-777d4874d5-d2j92" Jan 27 18:48:46 crc kubenswrapper[4853]: I0127 18:48:46.738558 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/db5aee63-36d0-4721-bee1-b2d730b53387-config\") pod \"controller-manager-777d4874d5-d2j92\" (UID: \"db5aee63-36d0-4721-bee1-b2d730b53387\") " pod="openshift-controller-manager/controller-manager-777d4874d5-d2j92" Jan 27 18:48:46 crc kubenswrapper[4853]: I0127 18:48:46.746886 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/db5aee63-36d0-4721-bee1-b2d730b53387-serving-cert\") pod \"controller-manager-777d4874d5-d2j92\" (UID: \"db5aee63-36d0-4721-bee1-b2d730b53387\") " pod="openshift-controller-manager/controller-manager-777d4874d5-d2j92" Jan 27 18:48:46 crc kubenswrapper[4853]: I0127 18:48:46.755878 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b3bd0900-8efb-4678-9b05-395c21e192df-serving-cert\") pod \"route-controller-manager-857458fcf9-lkdlj\" (UID: \"b3bd0900-8efb-4678-9b05-395c21e192df\") " 
pod="openshift-route-controller-manager/route-controller-manager-857458fcf9-lkdlj" Jan 27 18:48:46 crc kubenswrapper[4853]: I0127 18:48:46.759472 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zcl49\" (UniqueName: \"kubernetes.io/projected/b3bd0900-8efb-4678-9b05-395c21e192df-kube-api-access-zcl49\") pod \"route-controller-manager-857458fcf9-lkdlj\" (UID: \"b3bd0900-8efb-4678-9b05-395c21e192df\") " pod="openshift-route-controller-manager/route-controller-manager-857458fcf9-lkdlj" Jan 27 18:48:46 crc kubenswrapper[4853]: I0127 18:48:46.759607 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xsdst\" (UniqueName: \"kubernetes.io/projected/db5aee63-36d0-4721-bee1-b2d730b53387-kube-api-access-xsdst\") pod \"controller-manager-777d4874d5-d2j92\" (UID: \"db5aee63-36d0-4721-bee1-b2d730b53387\") " pod="openshift-controller-manager/controller-manager-777d4874d5-d2j92" Jan 27 18:48:46 crc kubenswrapper[4853]: I0127 18:48:46.851559 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-777d4874d5-d2j92" Jan 27 18:48:46 crc kubenswrapper[4853]: I0127 18:48:46.861594 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-857458fcf9-lkdlj" Jan 27 18:48:47 crc kubenswrapper[4853]: I0127 18:48:47.273405 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-857458fcf9-lkdlj"] Jan 27 18:48:47 crc kubenswrapper[4853]: W0127 18:48:47.283938 4853 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb3bd0900_8efb_4678_9b05_395c21e192df.slice/crio-b3692d9ed49add12e34cfdceb6d92650e5c47a2d9adfd4b7e7d5047259f8f9f4 WatchSource:0}: Error finding container b3692d9ed49add12e34cfdceb6d92650e5c47a2d9adfd4b7e7d5047259f8f9f4: Status 404 returned error can't find the container with id b3692d9ed49add12e34cfdceb6d92650e5c47a2d9adfd4b7e7d5047259f8f9f4 Jan 27 18:48:47 crc kubenswrapper[4853]: I0127 18:48:47.404735 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-777d4874d5-d2j92"] Jan 27 18:48:47 crc kubenswrapper[4853]: W0127 18:48:47.418443 4853 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddb5aee63_36d0_4721_bee1_b2d730b53387.slice/crio-ca0bd3edc984cb710b2613ad3af47261312b838623c3e8b2fa91f3fd8901b050 WatchSource:0}: Error finding container ca0bd3edc984cb710b2613ad3af47261312b838623c3e8b2fa91f3fd8901b050: Status 404 returned error can't find the container with id ca0bd3edc984cb710b2613ad3af47261312b838623c3e8b2fa91f3fd8901b050 Jan 27 18:48:48 crc kubenswrapper[4853]: I0127 18:48:48.119888 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7a8b2f2-2cb3-450f-98e4-78ceaf39e146" path="/var/lib/kubelet/pods/e7a8b2f2-2cb3-450f-98e4-78ceaf39e146/volumes" Jan 27 18:48:48 crc kubenswrapper[4853]: I0127 18:48:48.121196 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f3424dd8-3475-40d4-b494-0d67eb7a4023" path="/var/lib/kubelet/pods/f3424dd8-3475-40d4-b494-0d67eb7a4023/volumes" Jan 27 18:48:48 crc kubenswrapper[4853]: I0127 18:48:48.231070 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-route-controller-manager/route-controller-manager-857458fcf9-lkdlj" event={"ID":"b3bd0900-8efb-4678-9b05-395c21e192df","Type":"ContainerStarted","Data":"8835f7574989bf72b8aa962655ff960ee2f3c5964d1225b9edac41d3099f1826"} Jan 27 18:48:48 crc kubenswrapper[4853]: I0127 18:48:48.231175 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-857458fcf9-lkdlj" event={"ID":"b3bd0900-8efb-4678-9b05-395c21e192df","Type":"ContainerStarted","Data":"b3692d9ed49add12e34cfdceb6d92650e5c47a2d9adfd4b7e7d5047259f8f9f4"} Jan 27 18:48:48 crc kubenswrapper[4853]: I0127 18:48:48.231583 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-857458fcf9-lkdlj" Jan 27 18:48:48 crc kubenswrapper[4853]: I0127 18:48:48.233384 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-777d4874d5-d2j92" event={"ID":"db5aee63-36d0-4721-bee1-b2d730b53387","Type":"ContainerStarted","Data":"79dbe166b7555592158893e09a55a29bfdf1758caa29207989a1d4780596ac47"} Jan 27 18:48:48 crc kubenswrapper[4853]: I0127 18:48:48.233456 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-777d4874d5-d2j92" event={"ID":"db5aee63-36d0-4721-bee1-b2d730b53387","Type":"ContainerStarted","Data":"ca0bd3edc984cb710b2613ad3af47261312b838623c3e8b2fa91f3fd8901b050"} Jan 27 18:48:48 crc kubenswrapper[4853]: I0127 18:48:48.233637 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-777d4874d5-d2j92" Jan 27 18:48:48 crc kubenswrapper[4853]: I0127 18:48:48.238065 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-857458fcf9-lkdlj" Jan 27 18:48:48 crc kubenswrapper[4853]: I0127 18:48:48.240290 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-777d4874d5-d2j92" Jan 27 18:48:48 crc kubenswrapper[4853]: I0127 18:48:48.253930 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-857458fcf9-lkdlj" podStartSLOduration=4.253902272 podStartE2EDuration="4.253902272s" podCreationTimestamp="2026-01-27 18:48:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:48:48.253220413 +0000 UTC m=+370.715763296" watchObservedRunningTime="2026-01-27 18:48:48.253902272 +0000 UTC m=+370.716445165" Jan 27 18:48:48 crc kubenswrapper[4853]: I0127 18:48:48.277820 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-777d4874d5-d2j92" podStartSLOduration=4.277802977 podStartE2EDuration="4.277802977s" podCreationTimestamp="2026-01-27 18:48:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:48:48.275333706 +0000 UTC m=+370.737876589" watchObservedRunningTime="2026-01-27 18:48:48.277802977 +0000 UTC m=+370.740345850" Jan 27 18:48:53 crc kubenswrapper[4853]: I0127 18:48:53.234614 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-rp448" Jan 27 18:48:53 crc 
kubenswrapper[4853]: I0127 18:48:53.301599 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-npp4j"] Jan 27 18:49:05 crc kubenswrapper[4853]: I0127 18:49:05.541405 4853 patch_prober.go:28] interesting pod/machine-config-daemon-6gqj2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 18:49:05 crc kubenswrapper[4853]: I0127 18:49:05.542241 4853 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6gqj2" podUID="b8a89b1e-bef8-4cb7-930c-480d3125778c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 18:49:18 crc kubenswrapper[4853]: I0127 18:49:18.349168 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-npp4j" podUID="a7c9b9f7-1d12-4e77-a47f-8cb601836611" containerName="registry" containerID="cri-o://b03cfc440ce8b5d3556f49c2bf0afde636559945bdd8eb455d4744b9d20b86cc" gracePeriod=30 Jan 27 18:49:18 crc kubenswrapper[4853]: I0127 18:49:18.782724 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-npp4j" Jan 27 18:49:18 crc kubenswrapper[4853]: I0127 18:49:18.840957 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"a7c9b9f7-1d12-4e77-a47f-8cb601836611\" (UID: \"a7c9b9f7-1d12-4e77-a47f-8cb601836611\") " Jan 27 18:49:18 crc kubenswrapper[4853]: I0127 18:49:18.841002 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-scl6q\" (UniqueName: \"kubernetes.io/projected/a7c9b9f7-1d12-4e77-a47f-8cb601836611-kube-api-access-scl6q\") pod \"a7c9b9f7-1d12-4e77-a47f-8cb601836611\" (UID: \"a7c9b9f7-1d12-4e77-a47f-8cb601836611\") " Jan 27 18:49:18 crc kubenswrapper[4853]: I0127 18:49:18.841078 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a7c9b9f7-1d12-4e77-a47f-8cb601836611-trusted-ca\") pod \"a7c9b9f7-1d12-4e77-a47f-8cb601836611\" (UID: \"a7c9b9f7-1d12-4e77-a47f-8cb601836611\") " Jan 27 18:49:18 crc kubenswrapper[4853]: I0127 18:49:18.841101 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/a7c9b9f7-1d12-4e77-a47f-8cb601836611-ca-trust-extracted\") pod \"a7c9b9f7-1d12-4e77-a47f-8cb601836611\" (UID: \"a7c9b9f7-1d12-4e77-a47f-8cb601836611\") " Jan 27 18:49:18 crc kubenswrapper[4853]: I0127 18:49:18.841748 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a7c9b9f7-1d12-4e77-a47f-8cb601836611-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a7c9b9f7-1d12-4e77-a47f-8cb601836611" (UID: "a7c9b9f7-1d12-4e77-a47f-8cb601836611"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:49:18 crc kubenswrapper[4853]: I0127 18:49:18.841844 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/a7c9b9f7-1d12-4e77-a47f-8cb601836611-installation-pull-secrets\") pod \"a7c9b9f7-1d12-4e77-a47f-8cb601836611\" (UID: \"a7c9b9f7-1d12-4e77-a47f-8cb601836611\") " Jan 27 18:49:18 crc kubenswrapper[4853]: I0127 18:49:18.841916 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/a7c9b9f7-1d12-4e77-a47f-8cb601836611-registry-certificates\") pod \"a7c9b9f7-1d12-4e77-a47f-8cb601836611\" (UID: \"a7c9b9f7-1d12-4e77-a47f-8cb601836611\") " Jan 27 18:49:18 crc kubenswrapper[4853]: I0127 18:49:18.841945 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/a7c9b9f7-1d12-4e77-a47f-8cb601836611-registry-tls\") pod \"a7c9b9f7-1d12-4e77-a47f-8cb601836611\" (UID: \"a7c9b9f7-1d12-4e77-a47f-8cb601836611\") " Jan 27 18:49:18 crc kubenswrapper[4853]: I0127 18:49:18.841988 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a7c9b9f7-1d12-4e77-a47f-8cb601836611-bound-sa-token\") pod \"a7c9b9f7-1d12-4e77-a47f-8cb601836611\" (UID: \"a7c9b9f7-1d12-4e77-a47f-8cb601836611\") " Jan 27 18:49:18 crc kubenswrapper[4853]: I0127 18:49:18.842232 4853 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a7c9b9f7-1d12-4e77-a47f-8cb601836611-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 27 18:49:18 crc kubenswrapper[4853]: I0127 18:49:18.843541 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a7c9b9f7-1d12-4e77-a47f-8cb601836611-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "a7c9b9f7-1d12-4e77-a47f-8cb601836611" (UID: "a7c9b9f7-1d12-4e77-a47f-8cb601836611"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:49:18 crc kubenswrapper[4853]: I0127 18:49:18.847021 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a7c9b9f7-1d12-4e77-a47f-8cb601836611-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "a7c9b9f7-1d12-4e77-a47f-8cb601836611" (UID: "a7c9b9f7-1d12-4e77-a47f-8cb601836611"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:49:18 crc kubenswrapper[4853]: I0127 18:49:18.847176 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a7c9b9f7-1d12-4e77-a47f-8cb601836611-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "a7c9b9f7-1d12-4e77-a47f-8cb601836611" (UID: "a7c9b9f7-1d12-4e77-a47f-8cb601836611"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:49:18 crc kubenswrapper[4853]: I0127 18:49:18.847409 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a7c9b9f7-1d12-4e77-a47f-8cb601836611-kube-api-access-scl6q" (OuterVolumeSpecName: "kube-api-access-scl6q") pod "a7c9b9f7-1d12-4e77-a47f-8cb601836611" (UID: "a7c9b9f7-1d12-4e77-a47f-8cb601836611"). InnerVolumeSpecName "kube-api-access-scl6q". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:49:18 crc kubenswrapper[4853]: I0127 18:49:18.849227 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a7c9b9f7-1d12-4e77-a47f-8cb601836611-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a7c9b9f7-1d12-4e77-a47f-8cb601836611" (UID: "a7c9b9f7-1d12-4e77-a47f-8cb601836611"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:49:18 crc kubenswrapper[4853]: I0127 18:49:18.849811 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "a7c9b9f7-1d12-4e77-a47f-8cb601836611" (UID: "a7c9b9f7-1d12-4e77-a47f-8cb601836611"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 27 18:49:18 crc kubenswrapper[4853]: I0127 18:49:18.857380 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a7c9b9f7-1d12-4e77-a47f-8cb601836611-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "a7c9b9f7-1d12-4e77-a47f-8cb601836611" (UID: "a7c9b9f7-1d12-4e77-a47f-8cb601836611"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 18:49:18 crc kubenswrapper[4853]: I0127 18:49:18.943215 4853 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/a7c9b9f7-1d12-4e77-a47f-8cb601836611-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Jan 27 18:49:18 crc kubenswrapper[4853]: I0127 18:49:18.943273 4853 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/a7c9b9f7-1d12-4e77-a47f-8cb601836611-registry-certificates\") on node \"crc\" DevicePath \"\"" Jan 27 18:49:18 crc kubenswrapper[4853]: I0127 18:49:18.943283 4853 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/a7c9b9f7-1d12-4e77-a47f-8cb601836611-registry-tls\") on node \"crc\" DevicePath \"\"" Jan 27 18:49:18 crc kubenswrapper[4853]: I0127 18:49:18.943293 4853 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a7c9b9f7-1d12-4e77-a47f-8cb601836611-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 27 18:49:18 crc kubenswrapper[4853]: I0127 18:49:18.943302 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-scl6q\" (UniqueName: \"kubernetes.io/projected/a7c9b9f7-1d12-4e77-a47f-8cb601836611-kube-api-access-scl6q\") on node \"crc\" DevicePath \"\"" Jan 27 18:49:18 crc kubenswrapper[4853]: I0127 18:49:18.943312 4853 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/a7c9b9f7-1d12-4e77-a47f-8cb601836611-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Jan 27 18:49:18 crc kubenswrapper[4853]: I0127 18:49:18.974475 4853 generic.go:334] "Generic (PLEG): container finished" podID="a7c9b9f7-1d12-4e77-a47f-8cb601836611" containerID="b03cfc440ce8b5d3556f49c2bf0afde636559945bdd8eb455d4744b9d20b86cc" exitCode=0 Jan 27 18:49:18 crc kubenswrapper[4853]: I0127 18:49:18.974541 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-npp4j" 
event={"ID":"a7c9b9f7-1d12-4e77-a47f-8cb601836611","Type":"ContainerDied","Data":"b03cfc440ce8b5d3556f49c2bf0afde636559945bdd8eb455d4744b9d20b86cc"} Jan 27 18:49:18 crc kubenswrapper[4853]: I0127 18:49:18.974605 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-npp4j" event={"ID":"a7c9b9f7-1d12-4e77-a47f-8cb601836611","Type":"ContainerDied","Data":"c825e28713bdf55cd460511837a7f50426c0c510d1f5ad57b76ca682afa2fe81"} Jan 27 18:49:18 crc kubenswrapper[4853]: I0127 18:49:18.974625 4853 scope.go:117] "RemoveContainer" containerID="b03cfc440ce8b5d3556f49c2bf0afde636559945bdd8eb455d4744b9d20b86cc" Jan 27 18:49:18 crc kubenswrapper[4853]: I0127 18:49:18.974553 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-npp4j" Jan 27 18:49:18 crc kubenswrapper[4853]: I0127 18:49:18.994268 4853 scope.go:117] "RemoveContainer" containerID="b03cfc440ce8b5d3556f49c2bf0afde636559945bdd8eb455d4744b9d20b86cc" Jan 27 18:49:18 crc kubenswrapper[4853]: E0127 18:49:18.995265 4853 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b03cfc440ce8b5d3556f49c2bf0afde636559945bdd8eb455d4744b9d20b86cc\": container with ID starting with b03cfc440ce8b5d3556f49c2bf0afde636559945bdd8eb455d4744b9d20b86cc not found: ID does not exist" containerID="b03cfc440ce8b5d3556f49c2bf0afde636559945bdd8eb455d4744b9d20b86cc" Jan 27 18:49:18 crc kubenswrapper[4853]: I0127 18:49:18.995321 4853 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b03cfc440ce8b5d3556f49c2bf0afde636559945bdd8eb455d4744b9d20b86cc"} err="failed to get container status \"b03cfc440ce8b5d3556f49c2bf0afde636559945bdd8eb455d4744b9d20b86cc\": rpc error: code = NotFound desc = could not find container \"b03cfc440ce8b5d3556f49c2bf0afde636559945bdd8eb455d4744b9d20b86cc\": container with ID starting with b03cfc440ce8b5d3556f49c2bf0afde636559945bdd8eb455d4744b9d20b86cc not found: ID does not exist" Jan 27 18:49:19 crc kubenswrapper[4853]: I0127 18:49:19.003863 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-npp4j"] Jan 27 18:49:19 crc kubenswrapper[4853]: I0127 18:49:19.007430 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-npp4j"] Jan 27 18:49:20 crc kubenswrapper[4853]: I0127 18:49:20.119227 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a7c9b9f7-1d12-4e77-a47f-8cb601836611" path="/var/lib/kubelet/pods/a7c9b9f7-1d12-4e77-a47f-8cb601836611/volumes" Jan 27 18:49:35 crc kubenswrapper[4853]: I0127 18:49:35.541711 4853 patch_prober.go:28] interesting pod/machine-config-daemon-6gqj2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 18:49:35 crc kubenswrapper[4853]: I0127 18:49:35.542343 4853 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6gqj2" podUID="b8a89b1e-bef8-4cb7-930c-480d3125778c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 18:49:35 crc kubenswrapper[4853]: I0127 18:49:35.542399 4853 kubelet.go:2542] 
"SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-6gqj2" Jan 27 18:49:35 crc kubenswrapper[4853]: I0127 18:49:35.544153 4853 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"c86af0fa16b8eb47abbf6eaa7c300570ebb612ee72008ab7450dc2bae5e201f2"} pod="openshift-machine-config-operator/machine-config-daemon-6gqj2" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 27 18:49:35 crc kubenswrapper[4853]: I0127 18:49:35.544293 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-6gqj2" podUID="b8a89b1e-bef8-4cb7-930c-480d3125778c" containerName="machine-config-daemon" containerID="cri-o://c86af0fa16b8eb47abbf6eaa7c300570ebb612ee72008ab7450dc2bae5e201f2" gracePeriod=600 Jan 27 18:49:36 crc kubenswrapper[4853]: I0127 18:49:36.065853 4853 generic.go:334] "Generic (PLEG): container finished" podID="b8a89b1e-bef8-4cb7-930c-480d3125778c" containerID="c86af0fa16b8eb47abbf6eaa7c300570ebb612ee72008ab7450dc2bae5e201f2" exitCode=0 Jan 27 18:49:36 crc kubenswrapper[4853]: I0127 18:49:36.065921 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6gqj2" event={"ID":"b8a89b1e-bef8-4cb7-930c-480d3125778c","Type":"ContainerDied","Data":"c86af0fa16b8eb47abbf6eaa7c300570ebb612ee72008ab7450dc2bae5e201f2"} Jan 27 18:49:36 crc kubenswrapper[4853]: I0127 18:49:36.066230 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6gqj2" event={"ID":"b8a89b1e-bef8-4cb7-930c-480d3125778c","Type":"ContainerStarted","Data":"5b81ace7e2777535cb6c01efce0eddea8127ab44f1a4252fee29729bdae6ce3c"} Jan 27 18:49:36 crc kubenswrapper[4853]: I0127 18:49:36.066262 4853 scope.go:117] "RemoveContainer" containerID="36ec3ff8ba2e6f89b4c9a8177dc986ce198013a4b5392fc600a191a9d854241a" Jan 27 18:51:35 crc kubenswrapper[4853]: I0127 18:51:35.541860 4853 patch_prober.go:28] interesting pod/machine-config-daemon-6gqj2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 18:51:35 crc kubenswrapper[4853]: I0127 18:51:35.542544 4853 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6gqj2" podUID="b8a89b1e-bef8-4cb7-930c-480d3125778c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 18:51:50 crc kubenswrapper[4853]: I0127 18:51:50.450010 4853 scope.go:117] "RemoveContainer" containerID="3295b1a762b2d1b48fb2503245be19cad6111ea21dd6d3ca23f068622688bff8" Jan 27 18:51:50 crc kubenswrapper[4853]: I0127 18:51:50.468018 4853 scope.go:117] "RemoveContainer" containerID="bbe3e1278ee2ecb3603760ea6727963aa21d8230b843ae13fd40aed4bdd7e0b7" Jan 27 18:51:50 crc kubenswrapper[4853]: I0127 18:51:50.486333 4853 scope.go:117] "RemoveContainer" containerID="2c4bfc0e0328c1a63153e81b4c89617ba5656571e1f804b867aa7ce4ce2d0f59" Jan 27 18:51:50 crc kubenswrapper[4853]: I0127 18:51:50.500836 4853 scope.go:117] "RemoveContainer" containerID="066761b9a06cda8570b3019548ff33ae9591182ab7d99cd6a3cb280197168abe" Jan 27 18:51:50 crc 
kubenswrapper[4853]: I0127 18:51:50.520392 4853 scope.go:117] "RemoveContainer" containerID="5ce35b3994b69b3455867bba37ae487fecb7fda1c38151b594d22c64ea8109de" Jan 27 18:51:50 crc kubenswrapper[4853]: I0127 18:51:50.539423 4853 scope.go:117] "RemoveContainer" containerID="ba2560ce0a3001a0d45e9e989f35b0492ff706401996fd784bf1eddba3fac83f" Jan 27 18:51:50 crc kubenswrapper[4853]: I0127 18:51:50.564299 4853 scope.go:117] "RemoveContainer" containerID="1eb5cebd532853b23144982fc54bb8a1193498761d2378313ef7af66c94905d7" Jan 27 18:52:05 crc kubenswrapper[4853]: I0127 18:52:05.541035 4853 patch_prober.go:28] interesting pod/machine-config-daemon-6gqj2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 18:52:05 crc kubenswrapper[4853]: I0127 18:52:05.541567 4853 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6gqj2" podUID="b8a89b1e-bef8-4cb7-930c-480d3125778c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 18:52:35 crc kubenswrapper[4853]: I0127 18:52:35.541548 4853 patch_prober.go:28] interesting pod/machine-config-daemon-6gqj2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 18:52:35 crc kubenswrapper[4853]: I0127 18:52:35.542102 4853 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6gqj2" podUID="b8a89b1e-bef8-4cb7-930c-480d3125778c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 18:52:35 crc kubenswrapper[4853]: I0127 18:52:35.542165 4853 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-6gqj2" Jan 27 18:52:35 crc kubenswrapper[4853]: I0127 18:52:35.542660 4853 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"5b81ace7e2777535cb6c01efce0eddea8127ab44f1a4252fee29729bdae6ce3c"} pod="openshift-machine-config-operator/machine-config-daemon-6gqj2" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 27 18:52:35 crc kubenswrapper[4853]: I0127 18:52:35.542715 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-6gqj2" podUID="b8a89b1e-bef8-4cb7-930c-480d3125778c" containerName="machine-config-daemon" containerID="cri-o://5b81ace7e2777535cb6c01efce0eddea8127ab44f1a4252fee29729bdae6ce3c" gracePeriod=600 Jan 27 18:52:36 crc kubenswrapper[4853]: I0127 18:52:36.058969 4853 generic.go:334] "Generic (PLEG): container finished" podID="b8a89b1e-bef8-4cb7-930c-480d3125778c" containerID="5b81ace7e2777535cb6c01efce0eddea8127ab44f1a4252fee29729bdae6ce3c" exitCode=0 Jan 27 18:52:36 crc kubenswrapper[4853]: I0127 18:52:36.059052 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6gqj2" 
event={"ID":"b8a89b1e-bef8-4cb7-930c-480d3125778c","Type":"ContainerDied","Data":"5b81ace7e2777535cb6c01efce0eddea8127ab44f1a4252fee29729bdae6ce3c"} Jan 27 18:52:36 crc kubenswrapper[4853]: I0127 18:52:36.059374 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6gqj2" event={"ID":"b8a89b1e-bef8-4cb7-930c-480d3125778c","Type":"ContainerStarted","Data":"627fd940f35c1ba5723021e5e015bc2d268e6d0901ac54674b747706a8fc058b"} Jan 27 18:52:36 crc kubenswrapper[4853]: I0127 18:52:36.059404 4853 scope.go:117] "RemoveContainer" containerID="c86af0fa16b8eb47abbf6eaa7c300570ebb612ee72008ab7450dc2bae5e201f2" Jan 27 18:52:50 crc kubenswrapper[4853]: I0127 18:52:50.611693 4853 scope.go:117] "RemoveContainer" containerID="c6628e44eabb7208ef73800dbc80762643db9112b7ef9236a2f9d25865b7af20" Jan 27 18:52:50 crc kubenswrapper[4853]: I0127 18:52:50.636491 4853 scope.go:117] "RemoveContainer" containerID="571e9508d2976939513b3ffa1941f5e2601ba967981e9f77072abae9a5820c33" Jan 27 18:52:50 crc kubenswrapper[4853]: I0127 18:52:50.650584 4853 scope.go:117] "RemoveContainer" containerID="8c3aca04783c51d204b480d9f53b1a254a451cf3fe7c1f9b6edcc3fdf458d4d6" Jan 27 18:52:50 crc kubenswrapper[4853]: I0127 18:52:50.665421 4853 scope.go:117] "RemoveContainer" containerID="4349c7e838986001ccb370b67833b0fb8fc25ee79a84bc9cb42f509044c70b20" Jan 27 18:52:50 crc kubenswrapper[4853]: I0127 18:52:50.696543 4853 scope.go:117] "RemoveContainer" containerID="7cdacbe95301b2962805efbf942ff66b51d4c0cfed6f7932e82ab731bb09fc3d" Jan 27 18:52:50 crc kubenswrapper[4853]: I0127 18:52:50.718161 4853 scope.go:117] "RemoveContainer" containerID="34dcd3e2524055eb7aaa2be37ab710bdfaf2c776d187d2aa036b644febf2be85" Jan 27 18:52:50 crc kubenswrapper[4853]: I0127 18:52:50.739702 4853 scope.go:117] "RemoveContainer" containerID="297f43caef6ac9df030327cc94feecc229a79f511cd91d6dcf4013df9070632b" Jan 27 18:52:50 crc kubenswrapper[4853]: I0127 18:52:50.757782 4853 scope.go:117] "RemoveContainer" containerID="e14a2396fb1956ff243f59dacf2e664aea23bc918ac6fe8cd5284c1cc0384a85" Jan 27 18:52:50 crc kubenswrapper[4853]: I0127 18:52:50.782274 4853 scope.go:117] "RemoveContainer" containerID="b1c23ac34259cf3dcd69218e17baa02d64f2b8f68bc7b19ce1cea5151d1d9f23" Jan 27 18:52:50 crc kubenswrapper[4853]: I0127 18:52:50.806101 4853 scope.go:117] "RemoveContainer" containerID="efedee8e116b7f4b30a5bea959e50f5f6a01d3e619d48e9ba45c7b3cf8108006" Jan 27 18:52:50 crc kubenswrapper[4853]: I0127 18:52:50.827153 4853 scope.go:117] "RemoveContainer" containerID="d3a74321129291ab923303682d43d24ad99ae4b013da74c710ace8aeb5c80209" Jan 27 18:52:50 crc kubenswrapper[4853]: I0127 18:52:50.851728 4853 scope.go:117] "RemoveContainer" containerID="b17ab6daa8bf6d7c69d7898906157eb86f2be9c909d172401fedd76ffde2f25c" Jan 27 18:52:58 crc kubenswrapper[4853]: I0127 18:52:58.295375 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-85ml7"] Jan 27 18:52:58 crc kubenswrapper[4853]: E0127 18:52:58.296216 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a7c9b9f7-1d12-4e77-a47f-8cb601836611" containerName="registry" Jan 27 18:52:58 crc kubenswrapper[4853]: I0127 18:52:58.296233 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="a7c9b9f7-1d12-4e77-a47f-8cb601836611" containerName="registry" Jan 27 18:52:58 crc kubenswrapper[4853]: I0127 18:52:58.296359 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="a7c9b9f7-1d12-4e77-a47f-8cb601836611" 
containerName="registry" Jan 27 18:52:58 crc kubenswrapper[4853]: I0127 18:52:58.296752 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-85ml7" Jan 27 18:52:58 crc kubenswrapper[4853]: I0127 18:52:58.299604 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt" Jan 27 18:52:58 crc kubenswrapper[4853]: I0127 18:52:58.300708 4853 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-smkbr" Jan 27 18:52:58 crc kubenswrapper[4853]: I0127 18:52:58.305183 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-85ml7"] Jan 27 18:52:58 crc kubenswrapper[4853]: I0127 18:52:58.307194 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt" Jan 27 18:52:58 crc kubenswrapper[4853]: I0127 18:52:58.314992 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-858654f9db-ltznx"] Jan 27 18:52:58 crc kubenswrapper[4853]: I0127 18:52:58.315723 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858654f9db-ltznx" Jan 27 18:52:58 crc kubenswrapper[4853]: I0127 18:52:58.321159 4853 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-pll8f" Jan 27 18:52:58 crc kubenswrapper[4853]: I0127 18:52:58.332718 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-b7s9n"] Jan 27 18:52:58 crc kubenswrapper[4853]: I0127 18:52:58.333846 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-b7s9n" Jan 27 18:52:58 crc kubenswrapper[4853]: I0127 18:52:58.336408 4853 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-sl46w" Jan 27 18:52:58 crc kubenswrapper[4853]: I0127 18:52:58.355414 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-ltznx"] Jan 27 18:52:58 crc kubenswrapper[4853]: I0127 18:52:58.359234 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-b7s9n"] Jan 27 18:52:58 crc kubenswrapper[4853]: I0127 18:52:58.421666 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zldlh\" (UniqueName: \"kubernetes.io/projected/26226a5a-7c8e-4247-8441-43c981f5d894-kube-api-access-zldlh\") pod \"cert-manager-cainjector-cf98fcc89-85ml7\" (UID: \"26226a5a-7c8e-4247-8441-43c981f5d894\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-85ml7" Jan 27 18:52:58 crc kubenswrapper[4853]: I0127 18:52:58.421802 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d7b4k\" (UniqueName: \"kubernetes.io/projected/36ad0b1f-b18e-48b1-84f2-bfe1343b1257-kube-api-access-d7b4k\") pod \"cert-manager-webhook-687f57d79b-b7s9n\" (UID: \"36ad0b1f-b18e-48b1-84f2-bfe1343b1257\") " pod="cert-manager/cert-manager-webhook-687f57d79b-b7s9n" Jan 27 18:52:58 crc kubenswrapper[4853]: I0127 18:52:58.421917 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lbx62\" (UniqueName: \"kubernetes.io/projected/ecbb7636-0b7d-4212-99ce-b28e191b5dde-kube-api-access-lbx62\") pod 
\"cert-manager-858654f9db-ltznx\" (UID: \"ecbb7636-0b7d-4212-99ce-b28e191b5dde\") " pod="cert-manager/cert-manager-858654f9db-ltznx" Jan 27 18:52:58 crc kubenswrapper[4853]: I0127 18:52:58.522654 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d7b4k\" (UniqueName: \"kubernetes.io/projected/36ad0b1f-b18e-48b1-84f2-bfe1343b1257-kube-api-access-d7b4k\") pod \"cert-manager-webhook-687f57d79b-b7s9n\" (UID: \"36ad0b1f-b18e-48b1-84f2-bfe1343b1257\") " pod="cert-manager/cert-manager-webhook-687f57d79b-b7s9n" Jan 27 18:52:58 crc kubenswrapper[4853]: I0127 18:52:58.522987 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lbx62\" (UniqueName: \"kubernetes.io/projected/ecbb7636-0b7d-4212-99ce-b28e191b5dde-kube-api-access-lbx62\") pod \"cert-manager-858654f9db-ltznx\" (UID: \"ecbb7636-0b7d-4212-99ce-b28e191b5dde\") " pod="cert-manager/cert-manager-858654f9db-ltznx" Jan 27 18:52:58 crc kubenswrapper[4853]: I0127 18:52:58.523104 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zldlh\" (UniqueName: \"kubernetes.io/projected/26226a5a-7c8e-4247-8441-43c981f5d894-kube-api-access-zldlh\") pod \"cert-manager-cainjector-cf98fcc89-85ml7\" (UID: \"26226a5a-7c8e-4247-8441-43c981f5d894\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-85ml7" Jan 27 18:52:58 crc kubenswrapper[4853]: I0127 18:52:58.548580 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zldlh\" (UniqueName: \"kubernetes.io/projected/26226a5a-7c8e-4247-8441-43c981f5d894-kube-api-access-zldlh\") pod \"cert-manager-cainjector-cf98fcc89-85ml7\" (UID: \"26226a5a-7c8e-4247-8441-43c981f5d894\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-85ml7" Jan 27 18:52:58 crc kubenswrapper[4853]: I0127 18:52:58.548591 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d7b4k\" (UniqueName: \"kubernetes.io/projected/36ad0b1f-b18e-48b1-84f2-bfe1343b1257-kube-api-access-d7b4k\") pod \"cert-manager-webhook-687f57d79b-b7s9n\" (UID: \"36ad0b1f-b18e-48b1-84f2-bfe1343b1257\") " pod="cert-manager/cert-manager-webhook-687f57d79b-b7s9n" Jan 27 18:52:58 crc kubenswrapper[4853]: I0127 18:52:58.554046 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lbx62\" (UniqueName: \"kubernetes.io/projected/ecbb7636-0b7d-4212-99ce-b28e191b5dde-kube-api-access-lbx62\") pod \"cert-manager-858654f9db-ltznx\" (UID: \"ecbb7636-0b7d-4212-99ce-b28e191b5dde\") " pod="cert-manager/cert-manager-858654f9db-ltznx" Jan 27 18:52:58 crc kubenswrapper[4853]: I0127 18:52:58.624198 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-85ml7" Jan 27 18:52:58 crc kubenswrapper[4853]: I0127 18:52:58.638627 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858654f9db-ltznx" Jan 27 18:52:58 crc kubenswrapper[4853]: I0127 18:52:58.656422 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-b7s9n" Jan 27 18:52:59 crc kubenswrapper[4853]: I0127 18:52:59.068971 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-85ml7"] Jan 27 18:52:59 crc kubenswrapper[4853]: I0127 18:52:59.077926 4853 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 27 18:52:59 crc kubenswrapper[4853]: I0127 18:52:59.122865 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-ltznx"] Jan 27 18:52:59 crc kubenswrapper[4853]: I0127 18:52:59.130549 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-b7s9n"] Jan 27 18:52:59 crc kubenswrapper[4853]: W0127 18:52:59.137015 4853 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod36ad0b1f_b18e_48b1_84f2_bfe1343b1257.slice/crio-da2f8fed638d268a50a462e6482542a6fdcb249040b7410ae7616480b78a549d WatchSource:0}: Error finding container da2f8fed638d268a50a462e6482542a6fdcb249040b7410ae7616480b78a549d: Status 404 returned error can't find the container with id da2f8fed638d268a50a462e6482542a6fdcb249040b7410ae7616480b78a549d Jan 27 18:52:59 crc kubenswrapper[4853]: I0127 18:52:59.193545 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-85ml7" event={"ID":"26226a5a-7c8e-4247-8441-43c981f5d894","Type":"ContainerStarted","Data":"89535c02ffdb9498187c704ce835e767949be5a98966944ea23f0dcb683c9ce2"} Jan 27 18:52:59 crc kubenswrapper[4853]: I0127 18:52:59.194453 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-b7s9n" event={"ID":"36ad0b1f-b18e-48b1-84f2-bfe1343b1257","Type":"ContainerStarted","Data":"da2f8fed638d268a50a462e6482542a6fdcb249040b7410ae7616480b78a549d"} Jan 27 18:52:59 crc kubenswrapper[4853]: I0127 18:52:59.195444 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-ltznx" event={"ID":"ecbb7636-0b7d-4212-99ce-b28e191b5dde","Type":"ContainerStarted","Data":"d33f8c9118de1af1ca2651fbdc0cce07fe9abb7f4940396cb8753a7b5871bad8"} Jan 27 18:53:03 crc kubenswrapper[4853]: I0127 18:53:03.220436 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-b7s9n" event={"ID":"36ad0b1f-b18e-48b1-84f2-bfe1343b1257","Type":"ContainerStarted","Data":"8901f4b4a141b0b96582254e0553e770b2f0daaa571033f8357dd6f6c05fd1f3"} Jan 27 18:53:03 crc kubenswrapper[4853]: I0127 18:53:03.220762 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-687f57d79b-b7s9n" Jan 27 18:53:03 crc kubenswrapper[4853]: I0127 18:53:03.223725 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-ltznx" event={"ID":"ecbb7636-0b7d-4212-99ce-b28e191b5dde","Type":"ContainerStarted","Data":"085c1d5477cd339bf40417ccc29746247711adc24d83a1ef31cedd93e7a862f3"} Jan 27 18:53:03 crc kubenswrapper[4853]: I0127 18:53:03.241663 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-687f57d79b-b7s9n" podStartSLOduration=1.862120743 podStartE2EDuration="5.241646821s" podCreationTimestamp="2026-01-27 18:52:58 +0000 UTC" firstStartedPulling="2026-01-27 18:52:59.141686783 +0000 UTC m=+621.604229666" lastFinishedPulling="2026-01-27 
18:53:02.521212861 +0000 UTC m=+624.983755744" observedRunningTime="2026-01-27 18:53:03.240984122 +0000 UTC m=+625.703527005" watchObservedRunningTime="2026-01-27 18:53:03.241646821 +0000 UTC m=+625.704189694" Jan 27 18:53:03 crc kubenswrapper[4853]: I0127 18:53:03.259166 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-858654f9db-ltznx" podStartSLOduration=1.863479142 podStartE2EDuration="5.259145586s" podCreationTimestamp="2026-01-27 18:52:58 +0000 UTC" firstStartedPulling="2026-01-27 18:52:59.13119849 +0000 UTC m=+621.593741373" lastFinishedPulling="2026-01-27 18:53:02.526864914 +0000 UTC m=+624.989407817" observedRunningTime="2026-01-27 18:53:03.253928746 +0000 UTC m=+625.716471629" watchObservedRunningTime="2026-01-27 18:53:03.259145586 +0000 UTC m=+625.721688459" Jan 27 18:53:04 crc kubenswrapper[4853]: I0127 18:53:04.236621 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-85ml7" event={"ID":"26226a5a-7c8e-4247-8441-43c981f5d894","Type":"ContainerStarted","Data":"bdb0f33188f18aafa5af61836192e3e84cd0f79850e3c445b50953e5c10fbab5"} Jan 27 18:53:07 crc kubenswrapper[4853]: I0127 18:53:07.855508 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-cf98fcc89-85ml7" podStartSLOduration=5.45403742 podStartE2EDuration="9.855473859s" podCreationTimestamp="2026-01-27 18:52:58 +0000 UTC" firstStartedPulling="2026-01-27 18:52:59.077722237 +0000 UTC m=+621.540265120" lastFinishedPulling="2026-01-27 18:53:03.479158686 +0000 UTC m=+625.941701559" observedRunningTime="2026-01-27 18:53:04.263727907 +0000 UTC m=+626.726270790" watchObservedRunningTime="2026-01-27 18:53:07.855473859 +0000 UTC m=+630.318016772" Jan 27 18:53:07 crc kubenswrapper[4853]: I0127 18:53:07.863411 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-hdtbk"] Jan 27 18:53:07 crc kubenswrapper[4853]: I0127 18:53:07.864214 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-hdtbk" podUID="ebbc7598-422a-43ad-ae98-88e57ec80b9c" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://c8de5b4d8d6553f77b012954fddfcb337c9b25ba98d94ef27831b50f63672377" gracePeriod=30 Jan 27 18:53:07 crc kubenswrapper[4853]: I0127 18:53:07.864199 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-hdtbk" podUID="ebbc7598-422a-43ad-ae98-88e57ec80b9c" containerName="nbdb" containerID="cri-o://a7937ea08bd25bed35d9386a8c870c88ff3f58eeec1ba1a2c55bdfa260017f9b" gracePeriod=30 Jan 27 18:53:07 crc kubenswrapper[4853]: I0127 18:53:07.864373 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-hdtbk" podUID="ebbc7598-422a-43ad-ae98-88e57ec80b9c" containerName="sbdb" containerID="cri-o://4da02162adae947a3ab62fcbeba04da031f5189c42947da27ec21df5a480b4b5" gracePeriod=30 Jan 27 18:53:07 crc kubenswrapper[4853]: I0127 18:53:07.864414 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-hdtbk" podUID="ebbc7598-422a-43ad-ae98-88e57ec80b9c" containerName="kube-rbac-proxy-node" containerID="cri-o://4a5e0da6c76e9510cda57fa243b0a721d160745a63e88a9aa736807af73864d8" gracePeriod=30 Jan 27 18:53:07 crc kubenswrapper[4853]: I0127 18:53:07.864514 4853 kuberuntime_container.go:808] "Killing container 
with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-hdtbk" podUID="ebbc7598-422a-43ad-ae98-88e57ec80b9c" containerName="ovn-controller" containerID="cri-o://5a23ced79c532f6fcb0f4efcf743b934f7640deb3a7b1b879032416ee2c9b8d7" gracePeriod=30 Jan 27 18:53:07 crc kubenswrapper[4853]: I0127 18:53:07.864410 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-hdtbk" podUID="ebbc7598-422a-43ad-ae98-88e57ec80b9c" containerName="northd" containerID="cri-o://3f8095ca05481aa2d17d10ae848c2d052452f3bfa83b6ac23a75d0f59d84a604" gracePeriod=30 Jan 27 18:53:07 crc kubenswrapper[4853]: I0127 18:53:07.864467 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-hdtbk" podUID="ebbc7598-422a-43ad-ae98-88e57ec80b9c" containerName="ovn-acl-logging" containerID="cri-o://efa308f95f35833395528dbe46b9e3d8f25800c18126c75d3db793f9c7945d30" gracePeriod=30 Jan 27 18:53:07 crc kubenswrapper[4853]: I0127 18:53:07.905735 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-hdtbk" podUID="ebbc7598-422a-43ad-ae98-88e57ec80b9c" containerName="ovnkube-controller" containerID="cri-o://9953aae37a35dae2e23f03ff9b1849f9b1bdcf2f8d846e3acbdc93eff3d80a34" gracePeriod=30 Jan 27 18:53:08 crc kubenswrapper[4853]: I0127 18:53:08.261370 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-hdtbk_ebbc7598-422a-43ad-ae98-88e57ec80b9c/ovnkube-controller/3.log" Jan 27 18:53:08 crc kubenswrapper[4853]: I0127 18:53:08.263649 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-hdtbk_ebbc7598-422a-43ad-ae98-88e57ec80b9c/ovn-acl-logging/0.log" Jan 27 18:53:08 crc kubenswrapper[4853]: I0127 18:53:08.264160 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-hdtbk_ebbc7598-422a-43ad-ae98-88e57ec80b9c/ovn-controller/0.log" Jan 27 18:53:08 crc kubenswrapper[4853]: I0127 18:53:08.264521 4853 generic.go:334] "Generic (PLEG): container finished" podID="ebbc7598-422a-43ad-ae98-88e57ec80b9c" containerID="9953aae37a35dae2e23f03ff9b1849f9b1bdcf2f8d846e3acbdc93eff3d80a34" exitCode=0 Jan 27 18:53:08 crc kubenswrapper[4853]: I0127 18:53:08.264551 4853 generic.go:334] "Generic (PLEG): container finished" podID="ebbc7598-422a-43ad-ae98-88e57ec80b9c" containerID="4da02162adae947a3ab62fcbeba04da031f5189c42947da27ec21df5a480b4b5" exitCode=0 Jan 27 18:53:08 crc kubenswrapper[4853]: I0127 18:53:08.264561 4853 generic.go:334] "Generic (PLEG): container finished" podID="ebbc7598-422a-43ad-ae98-88e57ec80b9c" containerID="a7937ea08bd25bed35d9386a8c870c88ff3f58eeec1ba1a2c55bdfa260017f9b" exitCode=0 Jan 27 18:53:08 crc kubenswrapper[4853]: I0127 18:53:08.264571 4853 generic.go:334] "Generic (PLEG): container finished" podID="ebbc7598-422a-43ad-ae98-88e57ec80b9c" containerID="3f8095ca05481aa2d17d10ae848c2d052452f3bfa83b6ac23a75d0f59d84a604" exitCode=0 Jan 27 18:53:08 crc kubenswrapper[4853]: I0127 18:53:08.264579 4853 generic.go:334] "Generic (PLEG): container finished" podID="ebbc7598-422a-43ad-ae98-88e57ec80b9c" containerID="c8de5b4d8d6553f77b012954fddfcb337c9b25ba98d94ef27831b50f63672377" exitCode=0 Jan 27 18:53:08 crc kubenswrapper[4853]: I0127 18:53:08.264568 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hdtbk" 
event={"ID":"ebbc7598-422a-43ad-ae98-88e57ec80b9c","Type":"ContainerDied","Data":"9953aae37a35dae2e23f03ff9b1849f9b1bdcf2f8d846e3acbdc93eff3d80a34"} Jan 27 18:53:08 crc kubenswrapper[4853]: I0127 18:53:08.264587 4853 generic.go:334] "Generic (PLEG): container finished" podID="ebbc7598-422a-43ad-ae98-88e57ec80b9c" containerID="4a5e0da6c76e9510cda57fa243b0a721d160745a63e88a9aa736807af73864d8" exitCode=0 Jan 27 18:53:08 crc kubenswrapper[4853]: I0127 18:53:08.264632 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hdtbk" event={"ID":"ebbc7598-422a-43ad-ae98-88e57ec80b9c","Type":"ContainerDied","Data":"4da02162adae947a3ab62fcbeba04da031f5189c42947da27ec21df5a480b4b5"} Jan 27 18:53:08 crc kubenswrapper[4853]: I0127 18:53:08.264640 4853 generic.go:334] "Generic (PLEG): container finished" podID="ebbc7598-422a-43ad-ae98-88e57ec80b9c" containerID="efa308f95f35833395528dbe46b9e3d8f25800c18126c75d3db793f9c7945d30" exitCode=143 Jan 27 18:53:08 crc kubenswrapper[4853]: I0127 18:53:08.264647 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hdtbk" event={"ID":"ebbc7598-422a-43ad-ae98-88e57ec80b9c","Type":"ContainerDied","Data":"a7937ea08bd25bed35d9386a8c870c88ff3f58eeec1ba1a2c55bdfa260017f9b"} Jan 27 18:53:08 crc kubenswrapper[4853]: I0127 18:53:08.264653 4853 generic.go:334] "Generic (PLEG): container finished" podID="ebbc7598-422a-43ad-ae98-88e57ec80b9c" containerID="5a23ced79c532f6fcb0f4efcf743b934f7640deb3a7b1b879032416ee2c9b8d7" exitCode=143 Jan 27 18:53:08 crc kubenswrapper[4853]: I0127 18:53:08.264661 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hdtbk" event={"ID":"ebbc7598-422a-43ad-ae98-88e57ec80b9c","Type":"ContainerDied","Data":"3f8095ca05481aa2d17d10ae848c2d052452f3bfa83b6ac23a75d0f59d84a604"} Jan 27 18:53:08 crc kubenswrapper[4853]: I0127 18:53:08.264679 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hdtbk" event={"ID":"ebbc7598-422a-43ad-ae98-88e57ec80b9c","Type":"ContainerDied","Data":"c8de5b4d8d6553f77b012954fddfcb337c9b25ba98d94ef27831b50f63672377"} Jan 27 18:53:08 crc kubenswrapper[4853]: I0127 18:53:08.264699 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hdtbk" event={"ID":"ebbc7598-422a-43ad-ae98-88e57ec80b9c","Type":"ContainerDied","Data":"4a5e0da6c76e9510cda57fa243b0a721d160745a63e88a9aa736807af73864d8"} Jan 27 18:53:08 crc kubenswrapper[4853]: I0127 18:53:08.264718 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hdtbk" event={"ID":"ebbc7598-422a-43ad-ae98-88e57ec80b9c","Type":"ContainerDied","Data":"efa308f95f35833395528dbe46b9e3d8f25800c18126c75d3db793f9c7945d30"} Jan 27 18:53:08 crc kubenswrapper[4853]: I0127 18:53:08.264734 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hdtbk" event={"ID":"ebbc7598-422a-43ad-ae98-88e57ec80b9c","Type":"ContainerDied","Data":"5a23ced79c532f6fcb0f4efcf743b934f7640deb3a7b1b879032416ee2c9b8d7"} Jan 27 18:53:08 crc kubenswrapper[4853]: I0127 18:53:08.264749 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hdtbk" event={"ID":"ebbc7598-422a-43ad-ae98-88e57ec80b9c","Type":"ContainerDied","Data":"8d6f8413d913a60fd3c0220d73f447034f13dd1f2a140277cbf62700d8a164fd"} Jan 27 18:53:08 crc kubenswrapper[4853]: I0127 18:53:08.264765 4853 pod_container_deletor.go:80] 
"Container not found in pod's containers" containerID="8d6f8413d913a60fd3c0220d73f447034f13dd1f2a140277cbf62700d8a164fd" Jan 27 18:53:08 crc kubenswrapper[4853]: I0127 18:53:08.264786 4853 scope.go:117] "RemoveContainer" containerID="e01e1cff07c3ff9a1112970e7831ca9dc51725bbe6dd330246fa3346bd8bb1ad" Jan 27 18:53:08 crc kubenswrapper[4853]: I0127 18:53:08.266443 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-w4d5n_dd2c07de-2ac9-4074-9fb0-519cfaf37f69/kube-multus/2.log" Jan 27 18:53:08 crc kubenswrapper[4853]: I0127 18:53:08.266826 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-w4d5n_dd2c07de-2ac9-4074-9fb0-519cfaf37f69/kube-multus/1.log" Jan 27 18:53:08 crc kubenswrapper[4853]: I0127 18:53:08.266869 4853 generic.go:334] "Generic (PLEG): container finished" podID="dd2c07de-2ac9-4074-9fb0-519cfaf37f69" containerID="40245ed681744116d224fbfe72f4989b1d9a86abb7c0b6ccbeb606b2d243672c" exitCode=2 Jan 27 18:53:08 crc kubenswrapper[4853]: I0127 18:53:08.266899 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-w4d5n" event={"ID":"dd2c07de-2ac9-4074-9fb0-519cfaf37f69","Type":"ContainerDied","Data":"40245ed681744116d224fbfe72f4989b1d9a86abb7c0b6ccbeb606b2d243672c"} Jan 27 18:53:08 crc kubenswrapper[4853]: I0127 18:53:08.267422 4853 scope.go:117] "RemoveContainer" containerID="40245ed681744116d224fbfe72f4989b1d9a86abb7c0b6ccbeb606b2d243672c" Jan 27 18:53:08 crc kubenswrapper[4853]: E0127 18:53:08.267631 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-w4d5n_openshift-multus(dd2c07de-2ac9-4074-9fb0-519cfaf37f69)\"" pod="openshift-multus/multus-w4d5n" podUID="dd2c07de-2ac9-4074-9fb0-519cfaf37f69" Jan 27 18:53:08 crc kubenswrapper[4853]: I0127 18:53:08.278528 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-hdtbk_ebbc7598-422a-43ad-ae98-88e57ec80b9c/ovn-acl-logging/0.log" Jan 27 18:53:08 crc kubenswrapper[4853]: I0127 18:53:08.279130 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-hdtbk_ebbc7598-422a-43ad-ae98-88e57ec80b9c/ovn-controller/0.log" Jan 27 18:53:08 crc kubenswrapper[4853]: I0127 18:53:08.280162 4853 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-hdtbk" Jan 27 18:53:08 crc kubenswrapper[4853]: I0127 18:53:08.301335 4853 scope.go:117] "RemoveContainer" containerID="d7df211c586c12b9dbadf6a48722a3059e65f42e0c70cf73a6e197091983980c" Jan 27 18:53:08 crc kubenswrapper[4853]: I0127 18:53:08.344835 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-t57v8"] Jan 27 18:53:08 crc kubenswrapper[4853]: E0127 18:53:08.345060 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ebbc7598-422a-43ad-ae98-88e57ec80b9c" containerName="ovn-controller" Jan 27 18:53:08 crc kubenswrapper[4853]: I0127 18:53:08.345074 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="ebbc7598-422a-43ad-ae98-88e57ec80b9c" containerName="ovn-controller" Jan 27 18:53:08 crc kubenswrapper[4853]: E0127 18:53:08.345090 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ebbc7598-422a-43ad-ae98-88e57ec80b9c" containerName="ovnkube-controller" Jan 27 18:53:08 crc kubenswrapper[4853]: I0127 18:53:08.345102 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="ebbc7598-422a-43ad-ae98-88e57ec80b9c" containerName="ovnkube-controller" Jan 27 18:53:08 crc kubenswrapper[4853]: E0127 18:53:08.345139 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ebbc7598-422a-43ad-ae98-88e57ec80b9c" containerName="ovn-acl-logging" Jan 27 18:53:08 crc kubenswrapper[4853]: I0127 18:53:08.345146 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="ebbc7598-422a-43ad-ae98-88e57ec80b9c" containerName="ovn-acl-logging" Jan 27 18:53:08 crc kubenswrapper[4853]: E0127 18:53:08.345154 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ebbc7598-422a-43ad-ae98-88e57ec80b9c" containerName="northd" Jan 27 18:53:08 crc kubenswrapper[4853]: I0127 18:53:08.345161 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="ebbc7598-422a-43ad-ae98-88e57ec80b9c" containerName="northd" Jan 27 18:53:08 crc kubenswrapper[4853]: E0127 18:53:08.345171 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ebbc7598-422a-43ad-ae98-88e57ec80b9c" containerName="sbdb" Jan 27 18:53:08 crc kubenswrapper[4853]: I0127 18:53:08.345176 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="ebbc7598-422a-43ad-ae98-88e57ec80b9c" containerName="sbdb" Jan 27 18:53:08 crc kubenswrapper[4853]: E0127 18:53:08.345186 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ebbc7598-422a-43ad-ae98-88e57ec80b9c" containerName="kubecfg-setup" Jan 27 18:53:08 crc kubenswrapper[4853]: I0127 18:53:08.345192 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="ebbc7598-422a-43ad-ae98-88e57ec80b9c" containerName="kubecfg-setup" Jan 27 18:53:08 crc kubenswrapper[4853]: E0127 18:53:08.345200 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ebbc7598-422a-43ad-ae98-88e57ec80b9c" containerName="kube-rbac-proxy-ovn-metrics" Jan 27 18:53:08 crc kubenswrapper[4853]: I0127 18:53:08.345205 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="ebbc7598-422a-43ad-ae98-88e57ec80b9c" containerName="kube-rbac-proxy-ovn-metrics" Jan 27 18:53:08 crc kubenswrapper[4853]: E0127 18:53:08.345212 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ebbc7598-422a-43ad-ae98-88e57ec80b9c" containerName="ovnkube-controller" Jan 27 18:53:08 crc kubenswrapper[4853]: I0127 18:53:08.345218 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="ebbc7598-422a-43ad-ae98-88e57ec80b9c" 
containerName="ovnkube-controller" Jan 27 18:53:08 crc kubenswrapper[4853]: E0127 18:53:08.345224 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ebbc7598-422a-43ad-ae98-88e57ec80b9c" containerName="ovnkube-controller" Jan 27 18:53:08 crc kubenswrapper[4853]: I0127 18:53:08.345229 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="ebbc7598-422a-43ad-ae98-88e57ec80b9c" containerName="ovnkube-controller" Jan 27 18:53:08 crc kubenswrapper[4853]: E0127 18:53:08.345236 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ebbc7598-422a-43ad-ae98-88e57ec80b9c" containerName="kube-rbac-proxy-node" Jan 27 18:53:08 crc kubenswrapper[4853]: I0127 18:53:08.345241 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="ebbc7598-422a-43ad-ae98-88e57ec80b9c" containerName="kube-rbac-proxy-node" Jan 27 18:53:08 crc kubenswrapper[4853]: E0127 18:53:08.345252 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ebbc7598-422a-43ad-ae98-88e57ec80b9c" containerName="ovnkube-controller" Jan 27 18:53:08 crc kubenswrapper[4853]: I0127 18:53:08.345258 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="ebbc7598-422a-43ad-ae98-88e57ec80b9c" containerName="ovnkube-controller" Jan 27 18:53:08 crc kubenswrapper[4853]: E0127 18:53:08.345266 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ebbc7598-422a-43ad-ae98-88e57ec80b9c" containerName="nbdb" Jan 27 18:53:08 crc kubenswrapper[4853]: I0127 18:53:08.345271 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="ebbc7598-422a-43ad-ae98-88e57ec80b9c" containerName="nbdb" Jan 27 18:53:08 crc kubenswrapper[4853]: I0127 18:53:08.345358 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="ebbc7598-422a-43ad-ae98-88e57ec80b9c" containerName="ovnkube-controller" Jan 27 18:53:08 crc kubenswrapper[4853]: I0127 18:53:08.345366 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="ebbc7598-422a-43ad-ae98-88e57ec80b9c" containerName="kube-rbac-proxy-ovn-metrics" Jan 27 18:53:08 crc kubenswrapper[4853]: I0127 18:53:08.345375 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="ebbc7598-422a-43ad-ae98-88e57ec80b9c" containerName="ovn-controller" Jan 27 18:53:08 crc kubenswrapper[4853]: I0127 18:53:08.345382 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="ebbc7598-422a-43ad-ae98-88e57ec80b9c" containerName="kube-rbac-proxy-node" Jan 27 18:53:08 crc kubenswrapper[4853]: I0127 18:53:08.345391 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="ebbc7598-422a-43ad-ae98-88e57ec80b9c" containerName="sbdb" Jan 27 18:53:08 crc kubenswrapper[4853]: I0127 18:53:08.345399 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="ebbc7598-422a-43ad-ae98-88e57ec80b9c" containerName="nbdb" Jan 27 18:53:08 crc kubenswrapper[4853]: I0127 18:53:08.345407 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="ebbc7598-422a-43ad-ae98-88e57ec80b9c" containerName="ovnkube-controller" Jan 27 18:53:08 crc kubenswrapper[4853]: I0127 18:53:08.345414 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="ebbc7598-422a-43ad-ae98-88e57ec80b9c" containerName="ovnkube-controller" Jan 27 18:53:08 crc kubenswrapper[4853]: I0127 18:53:08.345431 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="ebbc7598-422a-43ad-ae98-88e57ec80b9c" containerName="ovnkube-controller" Jan 27 18:53:08 crc kubenswrapper[4853]: I0127 18:53:08.345440 4853 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="ebbc7598-422a-43ad-ae98-88e57ec80b9c" containerName="ovn-acl-logging" Jan 27 18:53:08 crc kubenswrapper[4853]: I0127 18:53:08.345450 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="ebbc7598-422a-43ad-ae98-88e57ec80b9c" containerName="northd" Jan 27 18:53:08 crc kubenswrapper[4853]: E0127 18:53:08.345537 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ebbc7598-422a-43ad-ae98-88e57ec80b9c" containerName="ovnkube-controller" Jan 27 18:53:08 crc kubenswrapper[4853]: I0127 18:53:08.345547 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="ebbc7598-422a-43ad-ae98-88e57ec80b9c" containerName="ovnkube-controller" Jan 27 18:53:08 crc kubenswrapper[4853]: I0127 18:53:08.345644 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="ebbc7598-422a-43ad-ae98-88e57ec80b9c" containerName="ovnkube-controller" Jan 27 18:53:08 crc kubenswrapper[4853]: I0127 18:53:08.347053 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-t57v8" Jan 27 18:53:08 crc kubenswrapper[4853]: I0127 18:53:08.395599 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/ebbc7598-422a-43ad-ae98-88e57ec80b9c-host-run-netns\") pod \"ebbc7598-422a-43ad-ae98-88e57ec80b9c\" (UID: \"ebbc7598-422a-43ad-ae98-88e57ec80b9c\") " Jan 27 18:53:08 crc kubenswrapper[4853]: I0127 18:53:08.395661 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/ebbc7598-422a-43ad-ae98-88e57ec80b9c-host-kubelet\") pod \"ebbc7598-422a-43ad-ae98-88e57ec80b9c\" (UID: \"ebbc7598-422a-43ad-ae98-88e57ec80b9c\") " Jan 27 18:53:08 crc kubenswrapper[4853]: I0127 18:53:08.395693 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/ebbc7598-422a-43ad-ae98-88e57ec80b9c-ovnkube-config\") pod \"ebbc7598-422a-43ad-ae98-88e57ec80b9c\" (UID: \"ebbc7598-422a-43ad-ae98-88e57ec80b9c\") " Jan 27 18:53:08 crc kubenswrapper[4853]: I0127 18:53:08.395719 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/ebbc7598-422a-43ad-ae98-88e57ec80b9c-run-systemd\") pod \"ebbc7598-422a-43ad-ae98-88e57ec80b9c\" (UID: \"ebbc7598-422a-43ad-ae98-88e57ec80b9c\") " Jan 27 18:53:08 crc kubenswrapper[4853]: I0127 18:53:08.395744 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/ebbc7598-422a-43ad-ae98-88e57ec80b9c-ovnkube-script-lib\") pod \"ebbc7598-422a-43ad-ae98-88e57ec80b9c\" (UID: \"ebbc7598-422a-43ad-ae98-88e57ec80b9c\") " Jan 27 18:53:08 crc kubenswrapper[4853]: I0127 18:53:08.395736 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ebbc7598-422a-43ad-ae98-88e57ec80b9c-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "ebbc7598-422a-43ad-ae98-88e57ec80b9c" (UID: "ebbc7598-422a-43ad-ae98-88e57ec80b9c"). InnerVolumeSpecName "host-run-netns". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 18:53:08 crc kubenswrapper[4853]: I0127 18:53:08.395783 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cq4vs\" (UniqueName: \"kubernetes.io/projected/ebbc7598-422a-43ad-ae98-88e57ec80b9c-kube-api-access-cq4vs\") pod \"ebbc7598-422a-43ad-ae98-88e57ec80b9c\" (UID: \"ebbc7598-422a-43ad-ae98-88e57ec80b9c\") " Jan 27 18:53:08 crc kubenswrapper[4853]: I0127 18:53:08.395754 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ebbc7598-422a-43ad-ae98-88e57ec80b9c-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "ebbc7598-422a-43ad-ae98-88e57ec80b9c" (UID: "ebbc7598-422a-43ad-ae98-88e57ec80b9c"). InnerVolumeSpecName "host-kubelet". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 18:53:08 crc kubenswrapper[4853]: I0127 18:53:08.395827 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/ebbc7598-422a-43ad-ae98-88e57ec80b9c-ovn-node-metrics-cert\") pod \"ebbc7598-422a-43ad-ae98-88e57ec80b9c\" (UID: \"ebbc7598-422a-43ad-ae98-88e57ec80b9c\") " Jan 27 18:53:08 crc kubenswrapper[4853]: I0127 18:53:08.395859 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/ebbc7598-422a-43ad-ae98-88e57ec80b9c-node-log\") pod \"ebbc7598-422a-43ad-ae98-88e57ec80b9c\" (UID: \"ebbc7598-422a-43ad-ae98-88e57ec80b9c\") " Jan 27 18:53:08 crc kubenswrapper[4853]: I0127 18:53:08.395890 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ebbc7598-422a-43ad-ae98-88e57ec80b9c-host-cni-netd\") pod \"ebbc7598-422a-43ad-ae98-88e57ec80b9c\" (UID: \"ebbc7598-422a-43ad-ae98-88e57ec80b9c\") " Jan 27 18:53:08 crc kubenswrapper[4853]: I0127 18:53:08.395929 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ebbc7598-422a-43ad-ae98-88e57ec80b9c-var-lib-openvswitch\") pod \"ebbc7598-422a-43ad-ae98-88e57ec80b9c\" (UID: \"ebbc7598-422a-43ad-ae98-88e57ec80b9c\") " Jan 27 18:53:08 crc kubenswrapper[4853]: I0127 18:53:08.395961 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/ebbc7598-422a-43ad-ae98-88e57ec80b9c-host-slash\") pod \"ebbc7598-422a-43ad-ae98-88e57ec80b9c\" (UID: \"ebbc7598-422a-43ad-ae98-88e57ec80b9c\") " Jan 27 18:53:08 crc kubenswrapper[4853]: I0127 18:53:08.395987 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ebbc7598-422a-43ad-ae98-88e57ec80b9c-etc-openvswitch\") pod \"ebbc7598-422a-43ad-ae98-88e57ec80b9c\" (UID: \"ebbc7598-422a-43ad-ae98-88e57ec80b9c\") " Jan 27 18:53:08 crc kubenswrapper[4853]: I0127 18:53:08.396010 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/ebbc7598-422a-43ad-ae98-88e57ec80b9c-log-socket\") pod \"ebbc7598-422a-43ad-ae98-88e57ec80b9c\" (UID: \"ebbc7598-422a-43ad-ae98-88e57ec80b9c\") " Jan 27 18:53:08 crc kubenswrapper[4853]: I0127 18:53:08.396012 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ebbc7598-422a-43ad-ae98-88e57ec80b9c-node-log" 
(OuterVolumeSpecName: "node-log") pod "ebbc7598-422a-43ad-ae98-88e57ec80b9c" (UID: "ebbc7598-422a-43ad-ae98-88e57ec80b9c"). InnerVolumeSpecName "node-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 18:53:08 crc kubenswrapper[4853]: I0127 18:53:08.396047 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/ebbc7598-422a-43ad-ae98-88e57ec80b9c-run-ovn\") pod \"ebbc7598-422a-43ad-ae98-88e57ec80b9c\" (UID: \"ebbc7598-422a-43ad-ae98-88e57ec80b9c\") " Jan 27 18:53:08 crc kubenswrapper[4853]: I0127 18:53:08.396086 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/ebbc7598-422a-43ad-ae98-88e57ec80b9c-host-run-ovn-kubernetes\") pod \"ebbc7598-422a-43ad-ae98-88e57ec80b9c\" (UID: \"ebbc7598-422a-43ad-ae98-88e57ec80b9c\") " Jan 27 18:53:08 crc kubenswrapper[4853]: I0127 18:53:08.396113 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ebbc7598-422a-43ad-ae98-88e57ec80b9c-env-overrides\") pod \"ebbc7598-422a-43ad-ae98-88e57ec80b9c\" (UID: \"ebbc7598-422a-43ad-ae98-88e57ec80b9c\") " Jan 27 18:53:08 crc kubenswrapper[4853]: I0127 18:53:08.396175 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ebbc7598-422a-43ad-ae98-88e57ec80b9c-run-openvswitch\") pod \"ebbc7598-422a-43ad-ae98-88e57ec80b9c\" (UID: \"ebbc7598-422a-43ad-ae98-88e57ec80b9c\") " Jan 27 18:53:08 crc kubenswrapper[4853]: I0127 18:53:08.396192 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ebbc7598-422a-43ad-ae98-88e57ec80b9c-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "ebbc7598-422a-43ad-ae98-88e57ec80b9c" (UID: "ebbc7598-422a-43ad-ae98-88e57ec80b9c"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:53:08 crc kubenswrapper[4853]: I0127 18:53:08.396196 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/ebbc7598-422a-43ad-ae98-88e57ec80b9c-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ebbc7598-422a-43ad-ae98-88e57ec80b9c\" (UID: \"ebbc7598-422a-43ad-ae98-88e57ec80b9c\") " Jan 27 18:53:08 crc kubenswrapper[4853]: I0127 18:53:08.396263 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ebbc7598-422a-43ad-ae98-88e57ec80b9c-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "ebbc7598-422a-43ad-ae98-88e57ec80b9c" (UID: "ebbc7598-422a-43ad-ae98-88e57ec80b9c"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 18:53:08 crc kubenswrapper[4853]: I0127 18:53:08.396301 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/ebbc7598-422a-43ad-ae98-88e57ec80b9c-host-cni-bin\") pod \"ebbc7598-422a-43ad-ae98-88e57ec80b9c\" (UID: \"ebbc7598-422a-43ad-ae98-88e57ec80b9c\") " Jan 27 18:53:08 crc kubenswrapper[4853]: I0127 18:53:08.396327 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/ebbc7598-422a-43ad-ae98-88e57ec80b9c-systemd-units\") pod \"ebbc7598-422a-43ad-ae98-88e57ec80b9c\" (UID: \"ebbc7598-422a-43ad-ae98-88e57ec80b9c\") " Jan 27 18:53:08 crc kubenswrapper[4853]: I0127 18:53:08.396305 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ebbc7598-422a-43ad-ae98-88e57ec80b9c-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "ebbc7598-422a-43ad-ae98-88e57ec80b9c" (UID: "ebbc7598-422a-43ad-ae98-88e57ec80b9c"). InnerVolumeSpecName "host-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 18:53:08 crc kubenswrapper[4853]: I0127 18:53:08.396323 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ebbc7598-422a-43ad-ae98-88e57ec80b9c-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "ebbc7598-422a-43ad-ae98-88e57ec80b9c" (UID: "ebbc7598-422a-43ad-ae98-88e57ec80b9c"). InnerVolumeSpecName "var-lib-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 18:53:08 crc kubenswrapper[4853]: I0127 18:53:08.396344 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ebbc7598-422a-43ad-ae98-88e57ec80b9c-host-slash" (OuterVolumeSpecName: "host-slash") pod "ebbc7598-422a-43ad-ae98-88e57ec80b9c" (UID: "ebbc7598-422a-43ad-ae98-88e57ec80b9c"). InnerVolumeSpecName "host-slash". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 18:53:08 crc kubenswrapper[4853]: I0127 18:53:08.396360 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ebbc7598-422a-43ad-ae98-88e57ec80b9c-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "ebbc7598-422a-43ad-ae98-88e57ec80b9c" (UID: "ebbc7598-422a-43ad-ae98-88e57ec80b9c"). InnerVolumeSpecName "etc-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 18:53:08 crc kubenswrapper[4853]: I0127 18:53:08.396754 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ebbc7598-422a-43ad-ae98-88e57ec80b9c-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "ebbc7598-422a-43ad-ae98-88e57ec80b9c" (UID: "ebbc7598-422a-43ad-ae98-88e57ec80b9c"). InnerVolumeSpecName "host-cni-bin". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 18:53:08 crc kubenswrapper[4853]: I0127 18:53:08.396374 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ebbc7598-422a-43ad-ae98-88e57ec80b9c-log-socket" (OuterVolumeSpecName: "log-socket") pod "ebbc7598-422a-43ad-ae98-88e57ec80b9c" (UID: "ebbc7598-422a-43ad-ae98-88e57ec80b9c"). InnerVolumeSpecName "log-socket". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 18:53:08 crc kubenswrapper[4853]: I0127 18:53:08.396388 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ebbc7598-422a-43ad-ae98-88e57ec80b9c-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "ebbc7598-422a-43ad-ae98-88e57ec80b9c" (UID: "ebbc7598-422a-43ad-ae98-88e57ec80b9c"). InnerVolumeSpecName "run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 18:53:08 crc kubenswrapper[4853]: I0127 18:53:08.396786 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ebbc7598-422a-43ad-ae98-88e57ec80b9c-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "ebbc7598-422a-43ad-ae98-88e57ec80b9c" (UID: "ebbc7598-422a-43ad-ae98-88e57ec80b9c"). InnerVolumeSpecName "systemd-units". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 18:53:08 crc kubenswrapper[4853]: I0127 18:53:08.396800 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ebbc7598-422a-43ad-ae98-88e57ec80b9c-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "ebbc7598-422a-43ad-ae98-88e57ec80b9c" (UID: "ebbc7598-422a-43ad-ae98-88e57ec80b9c"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:53:08 crc kubenswrapper[4853]: I0127 18:53:08.396401 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ebbc7598-422a-43ad-ae98-88e57ec80b9c-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "ebbc7598-422a-43ad-ae98-88e57ec80b9c" (UID: "ebbc7598-422a-43ad-ae98-88e57ec80b9c"). InnerVolumeSpecName "host-run-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 18:53:08 crc kubenswrapper[4853]: I0127 18:53:08.396704 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ebbc7598-422a-43ad-ae98-88e57ec80b9c-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "ebbc7598-422a-43ad-ae98-88e57ec80b9c" (UID: "ebbc7598-422a-43ad-ae98-88e57ec80b9c"). InnerVolumeSpecName "run-openvswitch". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 18:53:08 crc kubenswrapper[4853]: I0127 18:53:08.397256 4853 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/ebbc7598-422a-43ad-ae98-88e57ec80b9c-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Jan 27 18:53:08 crc kubenswrapper[4853]: I0127 18:53:08.397278 4853 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ebbc7598-422a-43ad-ae98-88e57ec80b9c-run-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 27 18:53:08 crc kubenswrapper[4853]: I0127 18:53:08.397292 4853 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/ebbc7598-422a-43ad-ae98-88e57ec80b9c-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Jan 27 18:53:08 crc kubenswrapper[4853]: I0127 18:53:08.397307 4853 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/ebbc7598-422a-43ad-ae98-88e57ec80b9c-host-cni-bin\") on node \"crc\" DevicePath \"\"" Jan 27 18:53:08 crc kubenswrapper[4853]: I0127 18:53:08.397318 4853 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/ebbc7598-422a-43ad-ae98-88e57ec80b9c-systemd-units\") on node \"crc\" DevicePath \"\"" Jan 27 18:53:08 crc kubenswrapper[4853]: I0127 18:53:08.397330 4853 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/ebbc7598-422a-43ad-ae98-88e57ec80b9c-host-run-netns\") on node \"crc\" DevicePath \"\"" Jan 27 18:53:08 crc kubenswrapper[4853]: I0127 18:53:08.397341 4853 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/ebbc7598-422a-43ad-ae98-88e57ec80b9c-host-kubelet\") on node \"crc\" DevicePath \"\"" Jan 27 18:53:08 crc kubenswrapper[4853]: I0127 18:53:08.397352 4853 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/ebbc7598-422a-43ad-ae98-88e57ec80b9c-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 27 18:53:08 crc kubenswrapper[4853]: I0127 18:53:08.397364 4853 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/ebbc7598-422a-43ad-ae98-88e57ec80b9c-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Jan 27 18:53:08 crc kubenswrapper[4853]: I0127 18:53:08.397376 4853 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/ebbc7598-422a-43ad-ae98-88e57ec80b9c-node-log\") on node \"crc\" DevicePath \"\"" Jan 27 18:53:08 crc kubenswrapper[4853]: I0127 18:53:08.397388 4853 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ebbc7598-422a-43ad-ae98-88e57ec80b9c-host-cni-netd\") on node \"crc\" DevicePath \"\"" Jan 27 18:53:08 crc kubenswrapper[4853]: I0127 18:53:08.397400 4853 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ebbc7598-422a-43ad-ae98-88e57ec80b9c-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 27 18:53:08 crc kubenswrapper[4853]: I0127 18:53:08.397411 4853 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: 
\"kubernetes.io/host-path/ebbc7598-422a-43ad-ae98-88e57ec80b9c-host-slash\") on node \"crc\" DevicePath \"\"" Jan 27 18:53:08 crc kubenswrapper[4853]: I0127 18:53:08.397422 4853 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ebbc7598-422a-43ad-ae98-88e57ec80b9c-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 27 18:53:08 crc kubenswrapper[4853]: I0127 18:53:08.397433 4853 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/ebbc7598-422a-43ad-ae98-88e57ec80b9c-log-socket\") on node \"crc\" DevicePath \"\"" Jan 27 18:53:08 crc kubenswrapper[4853]: I0127 18:53:08.397446 4853 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/ebbc7598-422a-43ad-ae98-88e57ec80b9c-run-ovn\") on node \"crc\" DevicePath \"\"" Jan 27 18:53:08 crc kubenswrapper[4853]: I0127 18:53:08.397491 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ebbc7598-422a-43ad-ae98-88e57ec80b9c-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "ebbc7598-422a-43ad-ae98-88e57ec80b9c" (UID: "ebbc7598-422a-43ad-ae98-88e57ec80b9c"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:53:08 crc kubenswrapper[4853]: I0127 18:53:08.402546 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ebbc7598-422a-43ad-ae98-88e57ec80b9c-kube-api-access-cq4vs" (OuterVolumeSpecName: "kube-api-access-cq4vs") pod "ebbc7598-422a-43ad-ae98-88e57ec80b9c" (UID: "ebbc7598-422a-43ad-ae98-88e57ec80b9c"). InnerVolumeSpecName "kube-api-access-cq4vs". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:53:08 crc kubenswrapper[4853]: I0127 18:53:08.403975 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ebbc7598-422a-43ad-ae98-88e57ec80b9c-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "ebbc7598-422a-43ad-ae98-88e57ec80b9c" (UID: "ebbc7598-422a-43ad-ae98-88e57ec80b9c"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:53:08 crc kubenswrapper[4853]: I0127 18:53:08.409270 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ebbc7598-422a-43ad-ae98-88e57ec80b9c-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "ebbc7598-422a-43ad-ae98-88e57ec80b9c" (UID: "ebbc7598-422a-43ad-ae98-88e57ec80b9c"). InnerVolumeSpecName "run-systemd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 18:53:08 crc kubenswrapper[4853]: I0127 18:53:08.498587 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/10cda451-2f9e-4aad-90d2-f4ce2b15beeb-ovnkube-config\") pod \"ovnkube-node-t57v8\" (UID: \"10cda451-2f9e-4aad-90d2-f4ce2b15beeb\") " pod="openshift-ovn-kubernetes/ovnkube-node-t57v8" Jan 27 18:53:08 crc kubenswrapper[4853]: I0127 18:53:08.498637 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/10cda451-2f9e-4aad-90d2-f4ce2b15beeb-host-run-netns\") pod \"ovnkube-node-t57v8\" (UID: \"10cda451-2f9e-4aad-90d2-f4ce2b15beeb\") " pod="openshift-ovn-kubernetes/ovnkube-node-t57v8" Jan 27 18:53:08 crc kubenswrapper[4853]: I0127 18:53:08.498662 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/10cda451-2f9e-4aad-90d2-f4ce2b15beeb-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-t57v8\" (UID: \"10cda451-2f9e-4aad-90d2-f4ce2b15beeb\") " pod="openshift-ovn-kubernetes/ovnkube-node-t57v8" Jan 27 18:53:08 crc kubenswrapper[4853]: I0127 18:53:08.498688 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/10cda451-2f9e-4aad-90d2-f4ce2b15beeb-host-cni-netd\") pod \"ovnkube-node-t57v8\" (UID: \"10cda451-2f9e-4aad-90d2-f4ce2b15beeb\") " pod="openshift-ovn-kubernetes/ovnkube-node-t57v8" Jan 27 18:53:08 crc kubenswrapper[4853]: I0127 18:53:08.498730 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/10cda451-2f9e-4aad-90d2-f4ce2b15beeb-etc-openvswitch\") pod \"ovnkube-node-t57v8\" (UID: \"10cda451-2f9e-4aad-90d2-f4ce2b15beeb\") " pod="openshift-ovn-kubernetes/ovnkube-node-t57v8" Jan 27 18:53:08 crc kubenswrapper[4853]: I0127 18:53:08.498759 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/10cda451-2f9e-4aad-90d2-f4ce2b15beeb-run-ovn\") pod \"ovnkube-node-t57v8\" (UID: \"10cda451-2f9e-4aad-90d2-f4ce2b15beeb\") " pod="openshift-ovn-kubernetes/ovnkube-node-t57v8" Jan 27 18:53:08 crc kubenswrapper[4853]: I0127 18:53:08.498836 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/10cda451-2f9e-4aad-90d2-f4ce2b15beeb-host-kubelet\") pod \"ovnkube-node-t57v8\" (UID: \"10cda451-2f9e-4aad-90d2-f4ce2b15beeb\") " pod="openshift-ovn-kubernetes/ovnkube-node-t57v8" Jan 27 18:53:08 crc kubenswrapper[4853]: I0127 18:53:08.498860 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/10cda451-2f9e-4aad-90d2-f4ce2b15beeb-run-openvswitch\") pod \"ovnkube-node-t57v8\" (UID: \"10cda451-2f9e-4aad-90d2-f4ce2b15beeb\") " pod="openshift-ovn-kubernetes/ovnkube-node-t57v8" Jan 27 18:53:08 crc kubenswrapper[4853]: I0127 18:53:08.498876 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: 
\"kubernetes.io/host-path/10cda451-2f9e-4aad-90d2-f4ce2b15beeb-node-log\") pod \"ovnkube-node-t57v8\" (UID: \"10cda451-2f9e-4aad-90d2-f4ce2b15beeb\") " pod="openshift-ovn-kubernetes/ovnkube-node-t57v8" Jan 27 18:53:08 crc kubenswrapper[4853]: I0127 18:53:08.498892 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/10cda451-2f9e-4aad-90d2-f4ce2b15beeb-ovnkube-script-lib\") pod \"ovnkube-node-t57v8\" (UID: \"10cda451-2f9e-4aad-90d2-f4ce2b15beeb\") " pod="openshift-ovn-kubernetes/ovnkube-node-t57v8" Jan 27 18:53:08 crc kubenswrapper[4853]: I0127 18:53:08.498912 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/10cda451-2f9e-4aad-90d2-f4ce2b15beeb-var-lib-openvswitch\") pod \"ovnkube-node-t57v8\" (UID: \"10cda451-2f9e-4aad-90d2-f4ce2b15beeb\") " pod="openshift-ovn-kubernetes/ovnkube-node-t57v8" Jan 27 18:53:08 crc kubenswrapper[4853]: I0127 18:53:08.498993 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/10cda451-2f9e-4aad-90d2-f4ce2b15beeb-ovn-node-metrics-cert\") pod \"ovnkube-node-t57v8\" (UID: \"10cda451-2f9e-4aad-90d2-f4ce2b15beeb\") " pod="openshift-ovn-kubernetes/ovnkube-node-t57v8" Jan 27 18:53:08 crc kubenswrapper[4853]: I0127 18:53:08.499017 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/10cda451-2f9e-4aad-90d2-f4ce2b15beeb-env-overrides\") pod \"ovnkube-node-t57v8\" (UID: \"10cda451-2f9e-4aad-90d2-f4ce2b15beeb\") " pod="openshift-ovn-kubernetes/ovnkube-node-t57v8" Jan 27 18:53:08 crc kubenswrapper[4853]: I0127 18:53:08.499047 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/10cda451-2f9e-4aad-90d2-f4ce2b15beeb-systemd-units\") pod \"ovnkube-node-t57v8\" (UID: \"10cda451-2f9e-4aad-90d2-f4ce2b15beeb\") " pod="openshift-ovn-kubernetes/ovnkube-node-t57v8" Jan 27 18:53:08 crc kubenswrapper[4853]: I0127 18:53:08.499065 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/10cda451-2f9e-4aad-90d2-f4ce2b15beeb-host-slash\") pod \"ovnkube-node-t57v8\" (UID: \"10cda451-2f9e-4aad-90d2-f4ce2b15beeb\") " pod="openshift-ovn-kubernetes/ovnkube-node-t57v8" Jan 27 18:53:08 crc kubenswrapper[4853]: I0127 18:53:08.499078 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/10cda451-2f9e-4aad-90d2-f4ce2b15beeb-run-systemd\") pod \"ovnkube-node-t57v8\" (UID: \"10cda451-2f9e-4aad-90d2-f4ce2b15beeb\") " pod="openshift-ovn-kubernetes/ovnkube-node-t57v8" Jan 27 18:53:08 crc kubenswrapper[4853]: I0127 18:53:08.499091 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/10cda451-2f9e-4aad-90d2-f4ce2b15beeb-log-socket\") pod \"ovnkube-node-t57v8\" (UID: \"10cda451-2f9e-4aad-90d2-f4ce2b15beeb\") " pod="openshift-ovn-kubernetes/ovnkube-node-t57v8" Jan 27 18:53:08 crc kubenswrapper[4853]: I0127 18:53:08.499106 4853 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/10cda451-2f9e-4aad-90d2-f4ce2b15beeb-host-run-ovn-kubernetes\") pod \"ovnkube-node-t57v8\" (UID: \"10cda451-2f9e-4aad-90d2-f4ce2b15beeb\") " pod="openshift-ovn-kubernetes/ovnkube-node-t57v8" Jan 27 18:53:08 crc kubenswrapper[4853]: I0127 18:53:08.499198 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/10cda451-2f9e-4aad-90d2-f4ce2b15beeb-host-cni-bin\") pod \"ovnkube-node-t57v8\" (UID: \"10cda451-2f9e-4aad-90d2-f4ce2b15beeb\") " pod="openshift-ovn-kubernetes/ovnkube-node-t57v8" Jan 27 18:53:08 crc kubenswrapper[4853]: I0127 18:53:08.499314 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qntb6\" (UniqueName: \"kubernetes.io/projected/10cda451-2f9e-4aad-90d2-f4ce2b15beeb-kube-api-access-qntb6\") pod \"ovnkube-node-t57v8\" (UID: \"10cda451-2f9e-4aad-90d2-f4ce2b15beeb\") " pod="openshift-ovn-kubernetes/ovnkube-node-t57v8" Jan 27 18:53:08 crc kubenswrapper[4853]: I0127 18:53:08.499413 4853 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/ebbc7598-422a-43ad-ae98-88e57ec80b9c-run-systemd\") on node \"crc\" DevicePath \"\"" Jan 27 18:53:08 crc kubenswrapper[4853]: I0127 18:53:08.499433 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cq4vs\" (UniqueName: \"kubernetes.io/projected/ebbc7598-422a-43ad-ae98-88e57ec80b9c-kube-api-access-cq4vs\") on node \"crc\" DevicePath \"\"" Jan 27 18:53:08 crc kubenswrapper[4853]: I0127 18:53:08.499448 4853 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/ebbc7598-422a-43ad-ae98-88e57ec80b9c-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 27 18:53:08 crc kubenswrapper[4853]: I0127 18:53:08.499465 4853 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ebbc7598-422a-43ad-ae98-88e57ec80b9c-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 27 18:53:08 crc kubenswrapper[4853]: I0127 18:53:08.600566 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/10cda451-2f9e-4aad-90d2-f4ce2b15beeb-var-lib-openvswitch\") pod \"ovnkube-node-t57v8\" (UID: \"10cda451-2f9e-4aad-90d2-f4ce2b15beeb\") " pod="openshift-ovn-kubernetes/ovnkube-node-t57v8" Jan 27 18:53:08 crc kubenswrapper[4853]: I0127 18:53:08.600617 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/10cda451-2f9e-4aad-90d2-f4ce2b15beeb-ovn-node-metrics-cert\") pod \"ovnkube-node-t57v8\" (UID: \"10cda451-2f9e-4aad-90d2-f4ce2b15beeb\") " pod="openshift-ovn-kubernetes/ovnkube-node-t57v8" Jan 27 18:53:08 crc kubenswrapper[4853]: I0127 18:53:08.600641 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/10cda451-2f9e-4aad-90d2-f4ce2b15beeb-env-overrides\") pod \"ovnkube-node-t57v8\" (UID: \"10cda451-2f9e-4aad-90d2-f4ce2b15beeb\") " pod="openshift-ovn-kubernetes/ovnkube-node-t57v8" Jan 27 18:53:08 crc kubenswrapper[4853]: I0127 18:53:08.600674 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"systemd-units\" (UniqueName: \"kubernetes.io/host-path/10cda451-2f9e-4aad-90d2-f4ce2b15beeb-systemd-units\") pod \"ovnkube-node-t57v8\" (UID: \"10cda451-2f9e-4aad-90d2-f4ce2b15beeb\") " pod="openshift-ovn-kubernetes/ovnkube-node-t57v8" Jan 27 18:53:08 crc kubenswrapper[4853]: I0127 18:53:08.600746 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/10cda451-2f9e-4aad-90d2-f4ce2b15beeb-systemd-units\") pod \"ovnkube-node-t57v8\" (UID: \"10cda451-2f9e-4aad-90d2-f4ce2b15beeb\") " pod="openshift-ovn-kubernetes/ovnkube-node-t57v8" Jan 27 18:53:08 crc kubenswrapper[4853]: I0127 18:53:08.600767 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/10cda451-2f9e-4aad-90d2-f4ce2b15beeb-var-lib-openvswitch\") pod \"ovnkube-node-t57v8\" (UID: \"10cda451-2f9e-4aad-90d2-f4ce2b15beeb\") " pod="openshift-ovn-kubernetes/ovnkube-node-t57v8" Jan 27 18:53:08 crc kubenswrapper[4853]: I0127 18:53:08.600932 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/10cda451-2f9e-4aad-90d2-f4ce2b15beeb-host-slash\") pod \"ovnkube-node-t57v8\" (UID: \"10cda451-2f9e-4aad-90d2-f4ce2b15beeb\") " pod="openshift-ovn-kubernetes/ovnkube-node-t57v8" Jan 27 18:53:08 crc kubenswrapper[4853]: I0127 18:53:08.601275 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/10cda451-2f9e-4aad-90d2-f4ce2b15beeb-env-overrides\") pod \"ovnkube-node-t57v8\" (UID: \"10cda451-2f9e-4aad-90d2-f4ce2b15beeb\") " pod="openshift-ovn-kubernetes/ovnkube-node-t57v8" Jan 27 18:53:08 crc kubenswrapper[4853]: I0127 18:53:08.600693 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/10cda451-2f9e-4aad-90d2-f4ce2b15beeb-host-slash\") pod \"ovnkube-node-t57v8\" (UID: \"10cda451-2f9e-4aad-90d2-f4ce2b15beeb\") " pod="openshift-ovn-kubernetes/ovnkube-node-t57v8" Jan 27 18:53:08 crc kubenswrapper[4853]: I0127 18:53:08.601369 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/10cda451-2f9e-4aad-90d2-f4ce2b15beeb-run-systemd\") pod \"ovnkube-node-t57v8\" (UID: \"10cda451-2f9e-4aad-90d2-f4ce2b15beeb\") " pod="openshift-ovn-kubernetes/ovnkube-node-t57v8" Jan 27 18:53:08 crc kubenswrapper[4853]: I0127 18:53:08.601390 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/10cda451-2f9e-4aad-90d2-f4ce2b15beeb-log-socket\") pod \"ovnkube-node-t57v8\" (UID: \"10cda451-2f9e-4aad-90d2-f4ce2b15beeb\") " pod="openshift-ovn-kubernetes/ovnkube-node-t57v8" Jan 27 18:53:08 crc kubenswrapper[4853]: I0127 18:53:08.601407 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/10cda451-2f9e-4aad-90d2-f4ce2b15beeb-host-run-ovn-kubernetes\") pod \"ovnkube-node-t57v8\" (UID: \"10cda451-2f9e-4aad-90d2-f4ce2b15beeb\") " pod="openshift-ovn-kubernetes/ovnkube-node-t57v8" Jan 27 18:53:08 crc kubenswrapper[4853]: I0127 18:53:08.601424 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/10cda451-2f9e-4aad-90d2-f4ce2b15beeb-host-cni-bin\") pod \"ovnkube-node-t57v8\" 
(UID: \"10cda451-2f9e-4aad-90d2-f4ce2b15beeb\") " pod="openshift-ovn-kubernetes/ovnkube-node-t57v8" Jan 27 18:53:08 crc kubenswrapper[4853]: I0127 18:53:08.601447 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qntb6\" (UniqueName: \"kubernetes.io/projected/10cda451-2f9e-4aad-90d2-f4ce2b15beeb-kube-api-access-qntb6\") pod \"ovnkube-node-t57v8\" (UID: \"10cda451-2f9e-4aad-90d2-f4ce2b15beeb\") " pod="openshift-ovn-kubernetes/ovnkube-node-t57v8" Jan 27 18:53:08 crc kubenswrapper[4853]: I0127 18:53:08.601465 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/10cda451-2f9e-4aad-90d2-f4ce2b15beeb-ovnkube-config\") pod \"ovnkube-node-t57v8\" (UID: \"10cda451-2f9e-4aad-90d2-f4ce2b15beeb\") " pod="openshift-ovn-kubernetes/ovnkube-node-t57v8" Jan 27 18:53:08 crc kubenswrapper[4853]: I0127 18:53:08.601480 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/10cda451-2f9e-4aad-90d2-f4ce2b15beeb-host-run-netns\") pod \"ovnkube-node-t57v8\" (UID: \"10cda451-2f9e-4aad-90d2-f4ce2b15beeb\") " pod="openshift-ovn-kubernetes/ovnkube-node-t57v8" Jan 27 18:53:08 crc kubenswrapper[4853]: I0127 18:53:08.601498 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/10cda451-2f9e-4aad-90d2-f4ce2b15beeb-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-t57v8\" (UID: \"10cda451-2f9e-4aad-90d2-f4ce2b15beeb\") " pod="openshift-ovn-kubernetes/ovnkube-node-t57v8" Jan 27 18:53:08 crc kubenswrapper[4853]: I0127 18:53:08.601512 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/10cda451-2f9e-4aad-90d2-f4ce2b15beeb-host-cni-netd\") pod \"ovnkube-node-t57v8\" (UID: \"10cda451-2f9e-4aad-90d2-f4ce2b15beeb\") " pod="openshift-ovn-kubernetes/ovnkube-node-t57v8" Jan 27 18:53:08 crc kubenswrapper[4853]: I0127 18:53:08.601527 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/10cda451-2f9e-4aad-90d2-f4ce2b15beeb-etc-openvswitch\") pod \"ovnkube-node-t57v8\" (UID: \"10cda451-2f9e-4aad-90d2-f4ce2b15beeb\") " pod="openshift-ovn-kubernetes/ovnkube-node-t57v8" Jan 27 18:53:08 crc kubenswrapper[4853]: I0127 18:53:08.601549 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/10cda451-2f9e-4aad-90d2-f4ce2b15beeb-run-ovn\") pod \"ovnkube-node-t57v8\" (UID: \"10cda451-2f9e-4aad-90d2-f4ce2b15beeb\") " pod="openshift-ovn-kubernetes/ovnkube-node-t57v8" Jan 27 18:53:08 crc kubenswrapper[4853]: I0127 18:53:08.601565 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/10cda451-2f9e-4aad-90d2-f4ce2b15beeb-host-kubelet\") pod \"ovnkube-node-t57v8\" (UID: \"10cda451-2f9e-4aad-90d2-f4ce2b15beeb\") " pod="openshift-ovn-kubernetes/ovnkube-node-t57v8" Jan 27 18:53:08 crc kubenswrapper[4853]: I0127 18:53:08.601588 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/10cda451-2f9e-4aad-90d2-f4ce2b15beeb-run-openvswitch\") pod \"ovnkube-node-t57v8\" (UID: \"10cda451-2f9e-4aad-90d2-f4ce2b15beeb\") 
" pod="openshift-ovn-kubernetes/ovnkube-node-t57v8" Jan 27 18:53:08 crc kubenswrapper[4853]: I0127 18:53:08.601605 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/10cda451-2f9e-4aad-90d2-f4ce2b15beeb-node-log\") pod \"ovnkube-node-t57v8\" (UID: \"10cda451-2f9e-4aad-90d2-f4ce2b15beeb\") " pod="openshift-ovn-kubernetes/ovnkube-node-t57v8" Jan 27 18:53:08 crc kubenswrapper[4853]: I0127 18:53:08.601619 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/10cda451-2f9e-4aad-90d2-f4ce2b15beeb-ovnkube-script-lib\") pod \"ovnkube-node-t57v8\" (UID: \"10cda451-2f9e-4aad-90d2-f4ce2b15beeb\") " pod="openshift-ovn-kubernetes/ovnkube-node-t57v8" Jan 27 18:53:08 crc kubenswrapper[4853]: I0127 18:53:08.601752 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/10cda451-2f9e-4aad-90d2-f4ce2b15beeb-host-run-netns\") pod \"ovnkube-node-t57v8\" (UID: \"10cda451-2f9e-4aad-90d2-f4ce2b15beeb\") " pod="openshift-ovn-kubernetes/ovnkube-node-t57v8" Jan 27 18:53:08 crc kubenswrapper[4853]: I0127 18:53:08.601778 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/10cda451-2f9e-4aad-90d2-f4ce2b15beeb-run-systemd\") pod \"ovnkube-node-t57v8\" (UID: \"10cda451-2f9e-4aad-90d2-f4ce2b15beeb\") " pod="openshift-ovn-kubernetes/ovnkube-node-t57v8" Jan 27 18:53:08 crc kubenswrapper[4853]: I0127 18:53:08.601797 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/10cda451-2f9e-4aad-90d2-f4ce2b15beeb-log-socket\") pod \"ovnkube-node-t57v8\" (UID: \"10cda451-2f9e-4aad-90d2-f4ce2b15beeb\") " pod="openshift-ovn-kubernetes/ovnkube-node-t57v8" Jan 27 18:53:08 crc kubenswrapper[4853]: I0127 18:53:08.601818 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/10cda451-2f9e-4aad-90d2-f4ce2b15beeb-host-run-ovn-kubernetes\") pod \"ovnkube-node-t57v8\" (UID: \"10cda451-2f9e-4aad-90d2-f4ce2b15beeb\") " pod="openshift-ovn-kubernetes/ovnkube-node-t57v8" Jan 27 18:53:08 crc kubenswrapper[4853]: I0127 18:53:08.601843 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/10cda451-2f9e-4aad-90d2-f4ce2b15beeb-host-cni-bin\") pod \"ovnkube-node-t57v8\" (UID: \"10cda451-2f9e-4aad-90d2-f4ce2b15beeb\") " pod="openshift-ovn-kubernetes/ovnkube-node-t57v8" Jan 27 18:53:08 crc kubenswrapper[4853]: I0127 18:53:08.601853 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/10cda451-2f9e-4aad-90d2-f4ce2b15beeb-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-t57v8\" (UID: \"10cda451-2f9e-4aad-90d2-f4ce2b15beeb\") " pod="openshift-ovn-kubernetes/ovnkube-node-t57v8" Jan 27 18:53:08 crc kubenswrapper[4853]: I0127 18:53:08.601894 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/10cda451-2f9e-4aad-90d2-f4ce2b15beeb-host-cni-netd\") pod \"ovnkube-node-t57v8\" (UID: \"10cda451-2f9e-4aad-90d2-f4ce2b15beeb\") " pod="openshift-ovn-kubernetes/ovnkube-node-t57v8" Jan 27 18:53:08 crc kubenswrapper[4853]: I0127 
18:53:08.601924 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/10cda451-2f9e-4aad-90d2-f4ce2b15beeb-etc-openvswitch\") pod \"ovnkube-node-t57v8\" (UID: \"10cda451-2f9e-4aad-90d2-f4ce2b15beeb\") " pod="openshift-ovn-kubernetes/ovnkube-node-t57v8" Jan 27 18:53:08 crc kubenswrapper[4853]: I0127 18:53:08.601954 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/10cda451-2f9e-4aad-90d2-f4ce2b15beeb-run-ovn\") pod \"ovnkube-node-t57v8\" (UID: \"10cda451-2f9e-4aad-90d2-f4ce2b15beeb\") " pod="openshift-ovn-kubernetes/ovnkube-node-t57v8" Jan 27 18:53:08 crc kubenswrapper[4853]: I0127 18:53:08.601988 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/10cda451-2f9e-4aad-90d2-f4ce2b15beeb-host-kubelet\") pod \"ovnkube-node-t57v8\" (UID: \"10cda451-2f9e-4aad-90d2-f4ce2b15beeb\") " pod="openshift-ovn-kubernetes/ovnkube-node-t57v8" Jan 27 18:53:08 crc kubenswrapper[4853]: I0127 18:53:08.602014 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/10cda451-2f9e-4aad-90d2-f4ce2b15beeb-run-openvswitch\") pod \"ovnkube-node-t57v8\" (UID: \"10cda451-2f9e-4aad-90d2-f4ce2b15beeb\") " pod="openshift-ovn-kubernetes/ovnkube-node-t57v8" Jan 27 18:53:08 crc kubenswrapper[4853]: I0127 18:53:08.602044 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/10cda451-2f9e-4aad-90d2-f4ce2b15beeb-node-log\") pod \"ovnkube-node-t57v8\" (UID: \"10cda451-2f9e-4aad-90d2-f4ce2b15beeb\") " pod="openshift-ovn-kubernetes/ovnkube-node-t57v8" Jan 27 18:53:08 crc kubenswrapper[4853]: I0127 18:53:08.602621 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/10cda451-2f9e-4aad-90d2-f4ce2b15beeb-ovnkube-script-lib\") pod \"ovnkube-node-t57v8\" (UID: \"10cda451-2f9e-4aad-90d2-f4ce2b15beeb\") " pod="openshift-ovn-kubernetes/ovnkube-node-t57v8" Jan 27 18:53:08 crc kubenswrapper[4853]: I0127 18:53:08.602867 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/10cda451-2f9e-4aad-90d2-f4ce2b15beeb-ovnkube-config\") pod \"ovnkube-node-t57v8\" (UID: \"10cda451-2f9e-4aad-90d2-f4ce2b15beeb\") " pod="openshift-ovn-kubernetes/ovnkube-node-t57v8" Jan 27 18:53:08 crc kubenswrapper[4853]: I0127 18:53:08.604508 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/10cda451-2f9e-4aad-90d2-f4ce2b15beeb-ovn-node-metrics-cert\") pod \"ovnkube-node-t57v8\" (UID: \"10cda451-2f9e-4aad-90d2-f4ce2b15beeb\") " pod="openshift-ovn-kubernetes/ovnkube-node-t57v8" Jan 27 18:53:08 crc kubenswrapper[4853]: I0127 18:53:08.630205 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qntb6\" (UniqueName: \"kubernetes.io/projected/10cda451-2f9e-4aad-90d2-f4ce2b15beeb-kube-api-access-qntb6\") pod \"ovnkube-node-t57v8\" (UID: \"10cda451-2f9e-4aad-90d2-f4ce2b15beeb\") " pod="openshift-ovn-kubernetes/ovnkube-node-t57v8" Jan 27 18:53:08 crc kubenswrapper[4853]: I0127 18:53:08.660248 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-687f57d79b-b7s9n" Jan 27 18:53:08 
crc kubenswrapper[4853]: I0127 18:53:08.668997 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-t57v8" Jan 27 18:53:09 crc kubenswrapper[4853]: I0127 18:53:09.274488 4853 generic.go:334] "Generic (PLEG): container finished" podID="10cda451-2f9e-4aad-90d2-f4ce2b15beeb" containerID="d75ed82f0f0d16a7f67970aa6355e1ea05ea06f69c7b52066c4040903ae24aad" exitCode=0 Jan 27 18:53:09 crc kubenswrapper[4853]: I0127 18:53:09.274545 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-t57v8" event={"ID":"10cda451-2f9e-4aad-90d2-f4ce2b15beeb","Type":"ContainerDied","Data":"d75ed82f0f0d16a7f67970aa6355e1ea05ea06f69c7b52066c4040903ae24aad"} Jan 27 18:53:09 crc kubenswrapper[4853]: I0127 18:53:09.275928 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-t57v8" event={"ID":"10cda451-2f9e-4aad-90d2-f4ce2b15beeb","Type":"ContainerStarted","Data":"07f8809bdb5770b0fe44385bf83a28cc0409d344cec3158d005f01132697bb57"} Jan 27 18:53:09 crc kubenswrapper[4853]: I0127 18:53:09.279540 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-w4d5n_dd2c07de-2ac9-4074-9fb0-519cfaf37f69/kube-multus/2.log" Jan 27 18:53:09 crc kubenswrapper[4853]: I0127 18:53:09.284379 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-hdtbk_ebbc7598-422a-43ad-ae98-88e57ec80b9c/ovn-acl-logging/0.log" Jan 27 18:53:09 crc kubenswrapper[4853]: I0127 18:53:09.285052 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-hdtbk_ebbc7598-422a-43ad-ae98-88e57ec80b9c/ovn-controller/0.log" Jan 27 18:53:09 crc kubenswrapper[4853]: I0127 18:53:09.285642 4853 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-hdtbk" Jan 27 18:53:09 crc kubenswrapper[4853]: I0127 18:53:09.352884 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-hdtbk"] Jan 27 18:53:09 crc kubenswrapper[4853]: I0127 18:53:09.357185 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-hdtbk"] Jan 27 18:53:10 crc kubenswrapper[4853]: I0127 18:53:10.121009 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ebbc7598-422a-43ad-ae98-88e57ec80b9c" path="/var/lib/kubelet/pods/ebbc7598-422a-43ad-ae98-88e57ec80b9c/volumes" Jan 27 18:53:10 crc kubenswrapper[4853]: I0127 18:53:10.296296 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-t57v8" event={"ID":"10cda451-2f9e-4aad-90d2-f4ce2b15beeb","Type":"ContainerStarted","Data":"117e8e27371d663b6affb6c15e40269d72b84a597b1e18215e5d0862ee3607b7"} Jan 27 18:53:10 crc kubenswrapper[4853]: I0127 18:53:10.296398 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-t57v8" event={"ID":"10cda451-2f9e-4aad-90d2-f4ce2b15beeb","Type":"ContainerStarted","Data":"18bce191ec1b65c127ab8f1a74b533006db3c11bc0c146d92dd8327115310c37"} Jan 27 18:53:10 crc kubenswrapper[4853]: I0127 18:53:10.296435 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-t57v8" event={"ID":"10cda451-2f9e-4aad-90d2-f4ce2b15beeb","Type":"ContainerStarted","Data":"0a291deb22b9bcb37837f4ab57557fd9a7f6952fc08dd786c0f72d96d7ec7b8b"} Jan 27 18:53:10 crc kubenswrapper[4853]: I0127 18:53:10.296458 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-t57v8" event={"ID":"10cda451-2f9e-4aad-90d2-f4ce2b15beeb","Type":"ContainerStarted","Data":"9b12577a64abeb81a0b429d4f237a11d13768e44f85856b622708dae4ccd672a"} Jan 27 18:53:10 crc kubenswrapper[4853]: I0127 18:53:10.296484 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-t57v8" event={"ID":"10cda451-2f9e-4aad-90d2-f4ce2b15beeb","Type":"ContainerStarted","Data":"c64a17dedc8b3cdfdcc8abc03c3220c844378db3627a3d2eb68be0c59b013b39"} Jan 27 18:53:10 crc kubenswrapper[4853]: I0127 18:53:10.296505 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-t57v8" event={"ID":"10cda451-2f9e-4aad-90d2-f4ce2b15beeb","Type":"ContainerStarted","Data":"23e103dc7d1035094463a84344b96746fd9248bfcd59540e81f8214415c40ec5"} Jan 27 18:53:12 crc kubenswrapper[4853]: I0127 18:53:12.311571 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-t57v8" event={"ID":"10cda451-2f9e-4aad-90d2-f4ce2b15beeb","Type":"ContainerStarted","Data":"1bf6d37ef888f32b30a49283063f2564f6059c3925f5e3f02d5f6f8d7726c605"} Jan 27 18:53:15 crc kubenswrapper[4853]: I0127 18:53:15.338891 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-t57v8" event={"ID":"10cda451-2f9e-4aad-90d2-f4ce2b15beeb","Type":"ContainerStarted","Data":"e35a27036d1ac31c2ddb387366aa845c1bcb50343772ecafa0038930f06b44a2"} Jan 27 18:53:15 crc kubenswrapper[4853]: I0127 18:53:15.339903 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-t57v8" Jan 27 18:53:15 crc kubenswrapper[4853]: I0127 18:53:15.339922 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-ovn-kubernetes/ovnkube-node-t57v8" Jan 27 18:53:15 crc kubenswrapper[4853]: I0127 18:53:15.339936 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-t57v8" Jan 27 18:53:15 crc kubenswrapper[4853]: I0127 18:53:15.373572 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-t57v8" Jan 27 18:53:15 crc kubenswrapper[4853]: I0127 18:53:15.375431 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-t57v8" Jan 27 18:53:15 crc kubenswrapper[4853]: I0127 18:53:15.378749 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-t57v8" podStartSLOduration=7.3787246 podStartE2EDuration="7.3787246s" podCreationTimestamp="2026-01-27 18:53:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:53:15.374651312 +0000 UTC m=+637.837194195" watchObservedRunningTime="2026-01-27 18:53:15.3787246 +0000 UTC m=+637.841267483" Jan 27 18:53:22 crc kubenswrapper[4853]: I0127 18:53:22.112767 4853 scope.go:117] "RemoveContainer" containerID="40245ed681744116d224fbfe72f4989b1d9a86abb7c0b6ccbeb606b2d243672c" Jan 27 18:53:22 crc kubenswrapper[4853]: E0127 18:53:22.114158 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-w4d5n_openshift-multus(dd2c07de-2ac9-4074-9fb0-519cfaf37f69)\"" pod="openshift-multus/multus-w4d5n" podUID="dd2c07de-2ac9-4074-9fb0-519cfaf37f69" Jan 27 18:53:36 crc kubenswrapper[4853]: I0127 18:53:36.113646 4853 scope.go:117] "RemoveContainer" containerID="40245ed681744116d224fbfe72f4989b1d9a86abb7c0b6ccbeb606b2d243672c" Jan 27 18:53:36 crc kubenswrapper[4853]: I0127 18:53:36.476689 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-w4d5n_dd2c07de-2ac9-4074-9fb0-519cfaf37f69/kube-multus/2.log" Jan 27 18:53:36 crc kubenswrapper[4853]: I0127 18:53:36.477275 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-w4d5n" event={"ID":"dd2c07de-2ac9-4074-9fb0-519cfaf37f69","Type":"ContainerStarted","Data":"80943ac5d64bd18873e5607526b4a888eecf590ad744b01bae13d728c61d7dc7"} Jan 27 18:53:38 crc kubenswrapper[4853]: I0127 18:53:38.693890 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-t57v8" Jan 27 18:53:50 crc kubenswrapper[4853]: I0127 18:53:50.207088 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713r6vfl"] Jan 27 18:53:50 crc kubenswrapper[4853]: I0127 18:53:50.210190 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713r6vfl" Jan 27 18:53:50 crc kubenswrapper[4853]: I0127 18:53:50.212113 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Jan 27 18:53:50 crc kubenswrapper[4853]: I0127 18:53:50.220003 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713r6vfl"] Jan 27 18:53:50 crc kubenswrapper[4853]: I0127 18:53:50.296242 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-chrbt\" (UniqueName: \"kubernetes.io/projected/495b4ff2-7320-4ab3-b6d6-79c5d575cfe4-kube-api-access-chrbt\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713r6vfl\" (UID: \"495b4ff2-7320-4ab3-b6d6-79c5d575cfe4\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713r6vfl" Jan 27 18:53:50 crc kubenswrapper[4853]: I0127 18:53:50.296283 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/495b4ff2-7320-4ab3-b6d6-79c5d575cfe4-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713r6vfl\" (UID: \"495b4ff2-7320-4ab3-b6d6-79c5d575cfe4\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713r6vfl" Jan 27 18:53:50 crc kubenswrapper[4853]: I0127 18:53:50.296315 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/495b4ff2-7320-4ab3-b6d6-79c5d575cfe4-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713r6vfl\" (UID: \"495b4ff2-7320-4ab3-b6d6-79c5d575cfe4\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713r6vfl" Jan 27 18:53:50 crc kubenswrapper[4853]: I0127 18:53:50.397863 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-chrbt\" (UniqueName: \"kubernetes.io/projected/495b4ff2-7320-4ab3-b6d6-79c5d575cfe4-kube-api-access-chrbt\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713r6vfl\" (UID: \"495b4ff2-7320-4ab3-b6d6-79c5d575cfe4\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713r6vfl" Jan 27 18:53:50 crc kubenswrapper[4853]: I0127 18:53:50.397917 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/495b4ff2-7320-4ab3-b6d6-79c5d575cfe4-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713r6vfl\" (UID: \"495b4ff2-7320-4ab3-b6d6-79c5d575cfe4\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713r6vfl" Jan 27 18:53:50 crc kubenswrapper[4853]: I0127 18:53:50.397951 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/495b4ff2-7320-4ab3-b6d6-79c5d575cfe4-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713r6vfl\" (UID: \"495b4ff2-7320-4ab3-b6d6-79c5d575cfe4\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713r6vfl" Jan 27 18:53:50 crc kubenswrapper[4853]: I0127 18:53:50.398415 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/495b4ff2-7320-4ab3-b6d6-79c5d575cfe4-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713r6vfl\" (UID: \"495b4ff2-7320-4ab3-b6d6-79c5d575cfe4\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713r6vfl" Jan 27 18:53:50 crc kubenswrapper[4853]: I0127 18:53:50.398548 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/495b4ff2-7320-4ab3-b6d6-79c5d575cfe4-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713r6vfl\" (UID: \"495b4ff2-7320-4ab3-b6d6-79c5d575cfe4\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713r6vfl" Jan 27 18:53:50 crc kubenswrapper[4853]: I0127 18:53:50.418003 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-chrbt\" (UniqueName: \"kubernetes.io/projected/495b4ff2-7320-4ab3-b6d6-79c5d575cfe4-kube-api-access-chrbt\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713r6vfl\" (UID: \"495b4ff2-7320-4ab3-b6d6-79c5d575cfe4\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713r6vfl" Jan 27 18:53:50 crc kubenswrapper[4853]: I0127 18:53:50.524210 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713r6vfl" Jan 27 18:53:50 crc kubenswrapper[4853]: I0127 18:53:50.713982 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713r6vfl"] Jan 27 18:53:50 crc kubenswrapper[4853]: W0127 18:53:50.719157 4853 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod495b4ff2_7320_4ab3_b6d6_79c5d575cfe4.slice/crio-79c7e2f28077ee24c61456cdeef18f7f67d23d4e083a674fed8ac386d07484bd WatchSource:0}: Error finding container 79c7e2f28077ee24c61456cdeef18f7f67d23d4e083a674fed8ac386d07484bd: Status 404 returned error can't find the container with id 79c7e2f28077ee24c61456cdeef18f7f67d23d4e083a674fed8ac386d07484bd Jan 27 18:53:50 crc kubenswrapper[4853]: I0127 18:53:50.948051 4853 scope.go:117] "RemoveContainer" containerID="4da02162adae947a3ab62fcbeba04da031f5189c42947da27ec21df5a480b4b5" Jan 27 18:53:50 crc kubenswrapper[4853]: I0127 18:53:50.960743 4853 scope.go:117] "RemoveContainer" containerID="c8de5b4d8d6553f77b012954fddfcb337c9b25ba98d94ef27831b50f63672377" Jan 27 18:53:50 crc kubenswrapper[4853]: I0127 18:53:50.975585 4853 scope.go:117] "RemoveContainer" containerID="4a5e0da6c76e9510cda57fa243b0a721d160745a63e88a9aa736807af73864d8" Jan 27 18:53:50 crc kubenswrapper[4853]: I0127 18:53:50.989424 4853 scope.go:117] "RemoveContainer" containerID="a7937ea08bd25bed35d9386a8c870c88ff3f58eeec1ba1a2c55bdfa260017f9b" Jan 27 18:53:51 crc kubenswrapper[4853]: I0127 18:53:51.003346 4853 scope.go:117] "RemoveContainer" containerID="efa308f95f35833395528dbe46b9e3d8f25800c18126c75d3db793f9c7945d30" Jan 27 18:53:51 crc kubenswrapper[4853]: I0127 18:53:51.015314 4853 scope.go:117] "RemoveContainer" containerID="3f8095ca05481aa2d17d10ae848c2d052452f3bfa83b6ac23a75d0f59d84a604" Jan 27 18:53:51 crc kubenswrapper[4853]: I0127 18:53:51.027269 4853 scope.go:117] "RemoveContainer" containerID="5a23ced79c532f6fcb0f4efcf743b934f7640deb3a7b1b879032416ee2c9b8d7" Jan 27 18:53:51 crc kubenswrapper[4853]: I0127 18:53:51.040584 4853 scope.go:117] "RemoveContainer" 
containerID="9953aae37a35dae2e23f03ff9b1849f9b1bdcf2f8d846e3acbdc93eff3d80a34" Jan 27 18:53:51 crc kubenswrapper[4853]: I0127 18:53:51.056043 4853 scope.go:117] "RemoveContainer" containerID="e81785fa8e88ca85b42840c9f11efd5774320c7b13588e01695e428426ecda4d" Jan 27 18:53:51 crc kubenswrapper[4853]: I0127 18:53:51.559815 4853 generic.go:334] "Generic (PLEG): container finished" podID="495b4ff2-7320-4ab3-b6d6-79c5d575cfe4" containerID="bc89790b607a4bb08d4b2a525588f19039c94a816a89edcdbe5e5d93438bd804" exitCode=0 Jan 27 18:53:51 crc kubenswrapper[4853]: I0127 18:53:51.559869 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713r6vfl" event={"ID":"495b4ff2-7320-4ab3-b6d6-79c5d575cfe4","Type":"ContainerDied","Data":"bc89790b607a4bb08d4b2a525588f19039c94a816a89edcdbe5e5d93438bd804"} Jan 27 18:53:51 crc kubenswrapper[4853]: I0127 18:53:51.560246 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713r6vfl" event={"ID":"495b4ff2-7320-4ab3-b6d6-79c5d575cfe4","Type":"ContainerStarted","Data":"79c7e2f28077ee24c61456cdeef18f7f67d23d4e083a674fed8ac386d07484bd"} Jan 27 18:53:53 crc kubenswrapper[4853]: I0127 18:53:53.577201 4853 generic.go:334] "Generic (PLEG): container finished" podID="495b4ff2-7320-4ab3-b6d6-79c5d575cfe4" containerID="421c7c2f1812bbd124fe68966133295cf38413eeb2f76621e5b454a88fa47124" exitCode=0 Jan 27 18:53:53 crc kubenswrapper[4853]: I0127 18:53:53.577258 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713r6vfl" event={"ID":"495b4ff2-7320-4ab3-b6d6-79c5d575cfe4","Type":"ContainerDied","Data":"421c7c2f1812bbd124fe68966133295cf38413eeb2f76621e5b454a88fa47124"} Jan 27 18:53:54 crc kubenswrapper[4853]: I0127 18:53:54.584791 4853 generic.go:334] "Generic (PLEG): container finished" podID="495b4ff2-7320-4ab3-b6d6-79c5d575cfe4" containerID="61d9c2beb225c734bebefd32d4f1ea2f61dfb055d0d62fbf2a751222e3f2b1d0" exitCode=0 Jan 27 18:53:54 crc kubenswrapper[4853]: I0127 18:53:54.584854 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713r6vfl" event={"ID":"495b4ff2-7320-4ab3-b6d6-79c5d575cfe4","Type":"ContainerDied","Data":"61d9c2beb225c734bebefd32d4f1ea2f61dfb055d0d62fbf2a751222e3f2b1d0"} Jan 27 18:53:55 crc kubenswrapper[4853]: I0127 18:53:55.873433 4853 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713r6vfl" Jan 27 18:53:55 crc kubenswrapper[4853]: I0127 18:53:55.971711 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-chrbt\" (UniqueName: \"kubernetes.io/projected/495b4ff2-7320-4ab3-b6d6-79c5d575cfe4-kube-api-access-chrbt\") pod \"495b4ff2-7320-4ab3-b6d6-79c5d575cfe4\" (UID: \"495b4ff2-7320-4ab3-b6d6-79c5d575cfe4\") " Jan 27 18:53:55 crc kubenswrapper[4853]: I0127 18:53:55.971766 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/495b4ff2-7320-4ab3-b6d6-79c5d575cfe4-util\") pod \"495b4ff2-7320-4ab3-b6d6-79c5d575cfe4\" (UID: \"495b4ff2-7320-4ab3-b6d6-79c5d575cfe4\") " Jan 27 18:53:55 crc kubenswrapper[4853]: I0127 18:53:55.971910 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/495b4ff2-7320-4ab3-b6d6-79c5d575cfe4-bundle\") pod \"495b4ff2-7320-4ab3-b6d6-79c5d575cfe4\" (UID: \"495b4ff2-7320-4ab3-b6d6-79c5d575cfe4\") " Jan 27 18:53:55 crc kubenswrapper[4853]: I0127 18:53:55.972948 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/495b4ff2-7320-4ab3-b6d6-79c5d575cfe4-bundle" (OuterVolumeSpecName: "bundle") pod "495b4ff2-7320-4ab3-b6d6-79c5d575cfe4" (UID: "495b4ff2-7320-4ab3-b6d6-79c5d575cfe4"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 18:53:55 crc kubenswrapper[4853]: I0127 18:53:55.978024 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/495b4ff2-7320-4ab3-b6d6-79c5d575cfe4-kube-api-access-chrbt" (OuterVolumeSpecName: "kube-api-access-chrbt") pod "495b4ff2-7320-4ab3-b6d6-79c5d575cfe4" (UID: "495b4ff2-7320-4ab3-b6d6-79c5d575cfe4"). InnerVolumeSpecName "kube-api-access-chrbt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:53:55 crc kubenswrapper[4853]: I0127 18:53:55.986532 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/495b4ff2-7320-4ab3-b6d6-79c5d575cfe4-util" (OuterVolumeSpecName: "util") pod "495b4ff2-7320-4ab3-b6d6-79c5d575cfe4" (UID: "495b4ff2-7320-4ab3-b6d6-79c5d575cfe4"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 18:53:56 crc kubenswrapper[4853]: I0127 18:53:56.073110 4853 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/495b4ff2-7320-4ab3-b6d6-79c5d575cfe4-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 18:53:56 crc kubenswrapper[4853]: I0127 18:53:56.073184 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-chrbt\" (UniqueName: \"kubernetes.io/projected/495b4ff2-7320-4ab3-b6d6-79c5d575cfe4-kube-api-access-chrbt\") on node \"crc\" DevicePath \"\"" Jan 27 18:53:56 crc kubenswrapper[4853]: I0127 18:53:56.073195 4853 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/495b4ff2-7320-4ab3-b6d6-79c5d575cfe4-util\") on node \"crc\" DevicePath \"\"" Jan 27 18:53:56 crc kubenswrapper[4853]: I0127 18:53:56.597772 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713r6vfl" event={"ID":"495b4ff2-7320-4ab3-b6d6-79c5d575cfe4","Type":"ContainerDied","Data":"79c7e2f28077ee24c61456cdeef18f7f67d23d4e083a674fed8ac386d07484bd"} Jan 27 18:53:56 crc kubenswrapper[4853]: I0127 18:53:56.598107 4853 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="79c7e2f28077ee24c61456cdeef18f7f67d23d4e083a674fed8ac386d07484bd" Jan 27 18:53:56 crc kubenswrapper[4853]: I0127 18:53:56.597812 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713r6vfl" Jan 27 18:53:58 crc kubenswrapper[4853]: I0127 18:53:58.276216 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-mj8nh"] Jan 27 18:53:58 crc kubenswrapper[4853]: E0127 18:53:58.276409 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="495b4ff2-7320-4ab3-b6d6-79c5d575cfe4" containerName="extract" Jan 27 18:53:58 crc kubenswrapper[4853]: I0127 18:53:58.276419 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="495b4ff2-7320-4ab3-b6d6-79c5d575cfe4" containerName="extract" Jan 27 18:53:58 crc kubenswrapper[4853]: E0127 18:53:58.276427 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="495b4ff2-7320-4ab3-b6d6-79c5d575cfe4" containerName="util" Jan 27 18:53:58 crc kubenswrapper[4853]: I0127 18:53:58.276433 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="495b4ff2-7320-4ab3-b6d6-79c5d575cfe4" containerName="util" Jan 27 18:53:58 crc kubenswrapper[4853]: E0127 18:53:58.276448 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="495b4ff2-7320-4ab3-b6d6-79c5d575cfe4" containerName="pull" Jan 27 18:53:58 crc kubenswrapper[4853]: I0127 18:53:58.276454 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="495b4ff2-7320-4ab3-b6d6-79c5d575cfe4" containerName="pull" Jan 27 18:53:58 crc kubenswrapper[4853]: I0127 18:53:58.276538 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="495b4ff2-7320-4ab3-b6d6-79c5d575cfe4" containerName="extract" Jan 27 18:53:58 crc kubenswrapper[4853]: I0127 18:53:58.276855 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-operator-646758c888-mj8nh" Jan 27 18:53:58 crc kubenswrapper[4853]: I0127 18:53:58.278942 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-operator-dockercfg-k84p5" Jan 27 18:53:58 crc kubenswrapper[4853]: I0127 18:53:58.279363 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt" Jan 27 18:53:58 crc kubenswrapper[4853]: I0127 18:53:58.283390 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt" Jan 27 18:53:58 crc kubenswrapper[4853]: I0127 18:53:58.293270 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-mj8nh"] Jan 27 18:53:58 crc kubenswrapper[4853]: I0127 18:53:58.400431 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dth5n\" (UniqueName: \"kubernetes.io/projected/a903dd65-5d9d-48da-b24d-d9ae9ad3a734-kube-api-access-dth5n\") pod \"nmstate-operator-646758c888-mj8nh\" (UID: \"a903dd65-5d9d-48da-b24d-d9ae9ad3a734\") " pod="openshift-nmstate/nmstate-operator-646758c888-mj8nh" Jan 27 18:53:58 crc kubenswrapper[4853]: I0127 18:53:58.501716 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dth5n\" (UniqueName: \"kubernetes.io/projected/a903dd65-5d9d-48da-b24d-d9ae9ad3a734-kube-api-access-dth5n\") pod \"nmstate-operator-646758c888-mj8nh\" (UID: \"a903dd65-5d9d-48da-b24d-d9ae9ad3a734\") " pod="openshift-nmstate/nmstate-operator-646758c888-mj8nh" Jan 27 18:53:58 crc kubenswrapper[4853]: I0127 18:53:58.520413 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dth5n\" (UniqueName: \"kubernetes.io/projected/a903dd65-5d9d-48da-b24d-d9ae9ad3a734-kube-api-access-dth5n\") pod \"nmstate-operator-646758c888-mj8nh\" (UID: \"a903dd65-5d9d-48da-b24d-d9ae9ad3a734\") " pod="openshift-nmstate/nmstate-operator-646758c888-mj8nh" Jan 27 18:53:58 crc kubenswrapper[4853]: I0127 18:53:58.593721 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-operator-646758c888-mj8nh" Jan 27 18:53:58 crc kubenswrapper[4853]: I0127 18:53:58.838928 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-mj8nh"] Jan 27 18:53:59 crc kubenswrapper[4853]: I0127 18:53:59.611493 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-646758c888-mj8nh" event={"ID":"a903dd65-5d9d-48da-b24d-d9ae9ad3a734","Type":"ContainerStarted","Data":"7bdc958d55ea633779d98e3acab74b3b01d245bfbb5cdca78cb18d3b247006a4"} Jan 27 18:54:02 crc kubenswrapper[4853]: I0127 18:54:02.627196 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-646758c888-mj8nh" event={"ID":"a903dd65-5d9d-48da-b24d-d9ae9ad3a734","Type":"ContainerStarted","Data":"698a92992f942d4da7dfbd87a1ecedae5f7dfd58c76ef61d808b6890512b4985"} Jan 27 18:54:02 crc kubenswrapper[4853]: I0127 18:54:02.643390 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-operator-646758c888-mj8nh" podStartSLOduration=1.229749617 podStartE2EDuration="4.643374779s" podCreationTimestamp="2026-01-27 18:53:58 +0000 UTC" firstStartedPulling="2026-01-27 18:53:58.854509898 +0000 UTC m=+681.317052781" lastFinishedPulling="2026-01-27 18:54:02.26813506 +0000 UTC m=+684.730677943" observedRunningTime="2026-01-27 18:54:02.64271735 +0000 UTC m=+685.105260233" watchObservedRunningTime="2026-01-27 18:54:02.643374779 +0000 UTC m=+685.105917662" Jan 27 18:54:03 crc kubenswrapper[4853]: I0127 18:54:03.667576 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-dl7m9"] Jan 27 18:54:03 crc kubenswrapper[4853]: I0127 18:54:03.668884 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-54757c584b-dl7m9" Jan 27 18:54:03 crc kubenswrapper[4853]: I0127 18:54:03.670462 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-handler-dockercfg-vf468" Jan 27 18:54:03 crc kubenswrapper[4853]: I0127 18:54:03.675293 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-znknt"] Jan 27 18:54:03 crc kubenswrapper[4853]: I0127 18:54:03.675924 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-znknt" Jan 27 18:54:03 crc kubenswrapper[4853]: I0127 18:54:03.677063 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook" Jan 27 18:54:03 crc kubenswrapper[4853]: I0127 18:54:03.692788 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-dl7m9"] Jan 27 18:54:03 crc kubenswrapper[4853]: I0127 18:54:03.696213 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-znknt"] Jan 27 18:54:03 crc kubenswrapper[4853]: I0127 18:54:03.751472 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-handler-t77h4"] Jan 27 18:54:03 crc kubenswrapper[4853]: I0127 18:54:03.752487 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-handler-t77h4" Jan 27 18:54:03 crc kubenswrapper[4853]: I0127 18:54:03.758969 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xvgdr\" (UniqueName: \"kubernetes.io/projected/9fd55339-43c5-45d5-9789-2f69da655baf-kube-api-access-xvgdr\") pod \"nmstate-webhook-8474b5b9d8-znknt\" (UID: \"9fd55339-43c5-45d5-9789-2f69da655baf\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-znknt" Jan 27 18:54:03 crc kubenswrapper[4853]: I0127 18:54:03.759237 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tsw7h\" (UniqueName: \"kubernetes.io/projected/b5d820ba-3b41-444c-b92b-1754909e56a0-kube-api-access-tsw7h\") pod \"nmstate-metrics-54757c584b-dl7m9\" (UID: \"b5d820ba-3b41-444c-b92b-1754909e56a0\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-dl7m9" Jan 27 18:54:03 crc kubenswrapper[4853]: I0127 18:54:03.759432 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/9fd55339-43c5-45d5-9789-2f69da655baf-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-znknt\" (UID: \"9fd55339-43c5-45d5-9789-2f69da655baf\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-znknt" Jan 27 18:54:03 crc kubenswrapper[4853]: I0127 18:54:03.861245 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xvgdr\" (UniqueName: \"kubernetes.io/projected/9fd55339-43c5-45d5-9789-2f69da655baf-kube-api-access-xvgdr\") pod \"nmstate-webhook-8474b5b9d8-znknt\" (UID: \"9fd55339-43c5-45d5-9789-2f69da655baf\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-znknt" Jan 27 18:54:03 crc kubenswrapper[4853]: I0127 18:54:03.861295 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tsw7h\" (UniqueName: \"kubernetes.io/projected/b5d820ba-3b41-444c-b92b-1754909e56a0-kube-api-access-tsw7h\") pod \"nmstate-metrics-54757c584b-dl7m9\" (UID: \"b5d820ba-3b41-444c-b92b-1754909e56a0\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-dl7m9" Jan 27 18:54:03 crc kubenswrapper[4853]: I0127 18:54:03.861366 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vzw56\" (UniqueName: \"kubernetes.io/projected/9f0d7951-c2e9-4857-a367-2426f842e3af-kube-api-access-vzw56\") pod \"nmstate-handler-t77h4\" (UID: \"9f0d7951-c2e9-4857-a367-2426f842e3af\") " pod="openshift-nmstate/nmstate-handler-t77h4" Jan 27 18:54:03 crc kubenswrapper[4853]: I0127 18:54:03.861407 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/9f0d7951-c2e9-4857-a367-2426f842e3af-nmstate-lock\") pod \"nmstate-handler-t77h4\" (UID: \"9f0d7951-c2e9-4857-a367-2426f842e3af\") " pod="openshift-nmstate/nmstate-handler-t77h4" Jan 27 18:54:03 crc kubenswrapper[4853]: I0127 18:54:03.861431 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/9fd55339-43c5-45d5-9789-2f69da655baf-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-znknt\" (UID: \"9fd55339-43c5-45d5-9789-2f69da655baf\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-znknt" Jan 27 18:54:03 crc kubenswrapper[4853]: I0127 18:54:03.861584 4853 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/9f0d7951-c2e9-4857-a367-2426f842e3af-dbus-socket\") pod \"nmstate-handler-t77h4\" (UID: \"9f0d7951-c2e9-4857-a367-2426f842e3af\") " pod="openshift-nmstate/nmstate-handler-t77h4" Jan 27 18:54:03 crc kubenswrapper[4853]: I0127 18:54:03.861613 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/9f0d7951-c2e9-4857-a367-2426f842e3af-ovs-socket\") pod \"nmstate-handler-t77h4\" (UID: \"9f0d7951-c2e9-4857-a367-2426f842e3af\") " pod="openshift-nmstate/nmstate-handler-t77h4" Jan 27 18:54:03 crc kubenswrapper[4853]: E0127 18:54:03.861609 4853 secret.go:188] Couldn't get secret openshift-nmstate/openshift-nmstate-webhook: secret "openshift-nmstate-webhook" not found Jan 27 18:54:03 crc kubenswrapper[4853]: E0127 18:54:03.861679 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9fd55339-43c5-45d5-9789-2f69da655baf-tls-key-pair podName:9fd55339-43c5-45d5-9789-2f69da655baf nodeName:}" failed. No retries permitted until 2026-01-27 18:54:04.361658487 +0000 UTC m=+686.824201370 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "tls-key-pair" (UniqueName: "kubernetes.io/secret/9fd55339-43c5-45d5-9789-2f69da655baf-tls-key-pair") pod "nmstate-webhook-8474b5b9d8-znknt" (UID: "9fd55339-43c5-45d5-9789-2f69da655baf") : secret "openshift-nmstate-webhook" not found Jan 27 18:54:03 crc kubenswrapper[4853]: I0127 18:54:03.862147 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-bnd9d"] Jan 27 18:54:03 crc kubenswrapper[4853]: I0127 18:54:03.862878 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-bnd9d" Jan 27 18:54:03 crc kubenswrapper[4853]: I0127 18:54:03.865368 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert" Jan 27 18:54:03 crc kubenswrapper[4853]: I0127 18:54:03.865380 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf" Jan 27 18:54:03 crc kubenswrapper[4853]: I0127 18:54:03.865628 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"default-dockercfg-dxgxf" Jan 27 18:54:03 crc kubenswrapper[4853]: I0127 18:54:03.874229 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-bnd9d"] Jan 27 18:54:03 crc kubenswrapper[4853]: I0127 18:54:03.884006 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tsw7h\" (UniqueName: \"kubernetes.io/projected/b5d820ba-3b41-444c-b92b-1754909e56a0-kube-api-access-tsw7h\") pod \"nmstate-metrics-54757c584b-dl7m9\" (UID: \"b5d820ba-3b41-444c-b92b-1754909e56a0\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-dl7m9" Jan 27 18:54:03 crc kubenswrapper[4853]: I0127 18:54:03.886909 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xvgdr\" (UniqueName: \"kubernetes.io/projected/9fd55339-43c5-45d5-9789-2f69da655baf-kube-api-access-xvgdr\") pod \"nmstate-webhook-8474b5b9d8-znknt\" (UID: \"9fd55339-43c5-45d5-9789-2f69da655baf\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-znknt" Jan 27 18:54:03 crc kubenswrapper[4853]: I0127 18:54:03.966689 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/9f0d7951-c2e9-4857-a367-2426f842e3af-ovs-socket\") pod \"nmstate-handler-t77h4\" (UID: \"9f0d7951-c2e9-4857-a367-2426f842e3af\") " pod="openshift-nmstate/nmstate-handler-t77h4" Jan 27 18:54:03 crc kubenswrapper[4853]: I0127 18:54:03.966771 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dbjqn\" (UniqueName: \"kubernetes.io/projected/fdac6187-5b6f-4375-a09a-42efb7d0eaf6-kube-api-access-dbjqn\") pod \"nmstate-console-plugin-7754f76f8b-bnd9d\" (UID: \"fdac6187-5b6f-4375-a09a-42efb7d0eaf6\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-bnd9d" Jan 27 18:54:03 crc kubenswrapper[4853]: I0127 18:54:03.966795 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/fdac6187-5b6f-4375-a09a-42efb7d0eaf6-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-bnd9d\" (UID: \"fdac6187-5b6f-4375-a09a-42efb7d0eaf6\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-bnd9d" Jan 27 18:54:03 crc kubenswrapper[4853]: I0127 18:54:03.966828 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vzw56\" (UniqueName: \"kubernetes.io/projected/9f0d7951-c2e9-4857-a367-2426f842e3af-kube-api-access-vzw56\") pod \"nmstate-handler-t77h4\" (UID: \"9f0d7951-c2e9-4857-a367-2426f842e3af\") " pod="openshift-nmstate/nmstate-handler-t77h4" Jan 27 18:54:03 crc kubenswrapper[4853]: I0127 18:54:03.966849 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/fdac6187-5b6f-4375-a09a-42efb7d0eaf6-plugin-serving-cert\") pod 
\"nmstate-console-plugin-7754f76f8b-bnd9d\" (UID: \"fdac6187-5b6f-4375-a09a-42efb7d0eaf6\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-bnd9d" Jan 27 18:54:03 crc kubenswrapper[4853]: I0127 18:54:03.966884 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/9f0d7951-c2e9-4857-a367-2426f842e3af-nmstate-lock\") pod \"nmstate-handler-t77h4\" (UID: \"9f0d7951-c2e9-4857-a367-2426f842e3af\") " pod="openshift-nmstate/nmstate-handler-t77h4" Jan 27 18:54:03 crc kubenswrapper[4853]: I0127 18:54:03.966909 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/9f0d7951-c2e9-4857-a367-2426f842e3af-dbus-socket\") pod \"nmstate-handler-t77h4\" (UID: \"9f0d7951-c2e9-4857-a367-2426f842e3af\") " pod="openshift-nmstate/nmstate-handler-t77h4" Jan 27 18:54:03 crc kubenswrapper[4853]: I0127 18:54:03.967295 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/9f0d7951-c2e9-4857-a367-2426f842e3af-dbus-socket\") pod \"nmstate-handler-t77h4\" (UID: \"9f0d7951-c2e9-4857-a367-2426f842e3af\") " pod="openshift-nmstate/nmstate-handler-t77h4" Jan 27 18:54:03 crc kubenswrapper[4853]: I0127 18:54:03.967350 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/9f0d7951-c2e9-4857-a367-2426f842e3af-ovs-socket\") pod \"nmstate-handler-t77h4\" (UID: \"9f0d7951-c2e9-4857-a367-2426f842e3af\") " pod="openshift-nmstate/nmstate-handler-t77h4" Jan 27 18:54:03 crc kubenswrapper[4853]: I0127 18:54:03.967617 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/9f0d7951-c2e9-4857-a367-2426f842e3af-nmstate-lock\") pod \"nmstate-handler-t77h4\" (UID: \"9f0d7951-c2e9-4857-a367-2426f842e3af\") " pod="openshift-nmstate/nmstate-handler-t77h4" Jan 27 18:54:03 crc kubenswrapper[4853]: I0127 18:54:03.988159 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-54757c584b-dl7m9" Jan 27 18:54:03 crc kubenswrapper[4853]: I0127 18:54:03.997904 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vzw56\" (UniqueName: \"kubernetes.io/projected/9f0d7951-c2e9-4857-a367-2426f842e3af-kube-api-access-vzw56\") pod \"nmstate-handler-t77h4\" (UID: \"9f0d7951-c2e9-4857-a367-2426f842e3af\") " pod="openshift-nmstate/nmstate-handler-t77h4" Jan 27 18:54:04 crc kubenswrapper[4853]: I0127 18:54:04.067497 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-handler-t77h4" Jan 27 18:54:04 crc kubenswrapper[4853]: I0127 18:54:04.067865 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/fdac6187-5b6f-4375-a09a-42efb7d0eaf6-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-bnd9d\" (UID: \"fdac6187-5b6f-4375-a09a-42efb7d0eaf6\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-bnd9d" Jan 27 18:54:04 crc kubenswrapper[4853]: I0127 18:54:04.067991 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dbjqn\" (UniqueName: \"kubernetes.io/projected/fdac6187-5b6f-4375-a09a-42efb7d0eaf6-kube-api-access-dbjqn\") pod \"nmstate-console-plugin-7754f76f8b-bnd9d\" (UID: \"fdac6187-5b6f-4375-a09a-42efb7d0eaf6\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-bnd9d" Jan 27 18:54:04 crc kubenswrapper[4853]: I0127 18:54:04.068033 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/fdac6187-5b6f-4375-a09a-42efb7d0eaf6-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-bnd9d\" (UID: \"fdac6187-5b6f-4375-a09a-42efb7d0eaf6\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-bnd9d" Jan 27 18:54:04 crc kubenswrapper[4853]: I0127 18:54:04.069352 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/fdac6187-5b6f-4375-a09a-42efb7d0eaf6-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-bnd9d\" (UID: \"fdac6187-5b6f-4375-a09a-42efb7d0eaf6\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-bnd9d" Jan 27 18:54:04 crc kubenswrapper[4853]: I0127 18:54:04.072101 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/fdac6187-5b6f-4375-a09a-42efb7d0eaf6-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-bnd9d\" (UID: \"fdac6187-5b6f-4375-a09a-42efb7d0eaf6\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-bnd9d" Jan 27 18:54:04 crc kubenswrapper[4853]: I0127 18:54:04.085844 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-67bf585c5f-lvd9w"] Jan 27 18:54:04 crc kubenswrapper[4853]: I0127 18:54:04.087013 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-67bf585c5f-lvd9w" Jan 27 18:54:04 crc kubenswrapper[4853]: I0127 18:54:04.097676 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dbjqn\" (UniqueName: \"kubernetes.io/projected/fdac6187-5b6f-4375-a09a-42efb7d0eaf6-kube-api-access-dbjqn\") pod \"nmstate-console-plugin-7754f76f8b-bnd9d\" (UID: \"fdac6187-5b6f-4375-a09a-42efb7d0eaf6\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-bnd9d" Jan 27 18:54:04 crc kubenswrapper[4853]: I0127 18:54:04.122067 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-67bf585c5f-lvd9w"] Jan 27 18:54:04 crc kubenswrapper[4853]: I0127 18:54:04.169236 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/26e4267d-7248-4639-8a8f-93b8ca307c01-console-config\") pod \"console-67bf585c5f-lvd9w\" (UID: \"26e4267d-7248-4639-8a8f-93b8ca307c01\") " pod="openshift-console/console-67bf585c5f-lvd9w" Jan 27 18:54:04 crc kubenswrapper[4853]: I0127 18:54:04.169589 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/26e4267d-7248-4639-8a8f-93b8ca307c01-oauth-serving-cert\") pod \"console-67bf585c5f-lvd9w\" (UID: \"26e4267d-7248-4639-8a8f-93b8ca307c01\") " pod="openshift-console/console-67bf585c5f-lvd9w" Jan 27 18:54:04 crc kubenswrapper[4853]: I0127 18:54:04.169626 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/26e4267d-7248-4639-8a8f-93b8ca307c01-trusted-ca-bundle\") pod \"console-67bf585c5f-lvd9w\" (UID: \"26e4267d-7248-4639-8a8f-93b8ca307c01\") " pod="openshift-console/console-67bf585c5f-lvd9w" Jan 27 18:54:04 crc kubenswrapper[4853]: I0127 18:54:04.169653 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/26e4267d-7248-4639-8a8f-93b8ca307c01-service-ca\") pod \"console-67bf585c5f-lvd9w\" (UID: \"26e4267d-7248-4639-8a8f-93b8ca307c01\") " pod="openshift-console/console-67bf585c5f-lvd9w" Jan 27 18:54:04 crc kubenswrapper[4853]: I0127 18:54:04.169676 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/26e4267d-7248-4639-8a8f-93b8ca307c01-console-oauth-config\") pod \"console-67bf585c5f-lvd9w\" (UID: \"26e4267d-7248-4639-8a8f-93b8ca307c01\") " pod="openshift-console/console-67bf585c5f-lvd9w" Jan 27 18:54:04 crc kubenswrapper[4853]: I0127 18:54:04.169812 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p2mth\" (UniqueName: \"kubernetes.io/projected/26e4267d-7248-4639-8a8f-93b8ca307c01-kube-api-access-p2mth\") pod \"console-67bf585c5f-lvd9w\" (UID: \"26e4267d-7248-4639-8a8f-93b8ca307c01\") " pod="openshift-console/console-67bf585c5f-lvd9w" Jan 27 18:54:04 crc kubenswrapper[4853]: I0127 18:54:04.169861 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/26e4267d-7248-4639-8a8f-93b8ca307c01-console-serving-cert\") pod \"console-67bf585c5f-lvd9w\" (UID: \"26e4267d-7248-4639-8a8f-93b8ca307c01\") " 
pod="openshift-console/console-67bf585c5f-lvd9w" Jan 27 18:54:04 crc kubenswrapper[4853]: I0127 18:54:04.181389 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-bnd9d" Jan 27 18:54:04 crc kubenswrapper[4853]: I0127 18:54:04.270743 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p2mth\" (UniqueName: \"kubernetes.io/projected/26e4267d-7248-4639-8a8f-93b8ca307c01-kube-api-access-p2mth\") pod \"console-67bf585c5f-lvd9w\" (UID: \"26e4267d-7248-4639-8a8f-93b8ca307c01\") " pod="openshift-console/console-67bf585c5f-lvd9w" Jan 27 18:54:04 crc kubenswrapper[4853]: I0127 18:54:04.270801 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/26e4267d-7248-4639-8a8f-93b8ca307c01-console-serving-cert\") pod \"console-67bf585c5f-lvd9w\" (UID: \"26e4267d-7248-4639-8a8f-93b8ca307c01\") " pod="openshift-console/console-67bf585c5f-lvd9w" Jan 27 18:54:04 crc kubenswrapper[4853]: I0127 18:54:04.270834 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/26e4267d-7248-4639-8a8f-93b8ca307c01-oauth-serving-cert\") pod \"console-67bf585c5f-lvd9w\" (UID: \"26e4267d-7248-4639-8a8f-93b8ca307c01\") " pod="openshift-console/console-67bf585c5f-lvd9w" Jan 27 18:54:04 crc kubenswrapper[4853]: I0127 18:54:04.270851 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/26e4267d-7248-4639-8a8f-93b8ca307c01-console-config\") pod \"console-67bf585c5f-lvd9w\" (UID: \"26e4267d-7248-4639-8a8f-93b8ca307c01\") " pod="openshift-console/console-67bf585c5f-lvd9w" Jan 27 18:54:04 crc kubenswrapper[4853]: I0127 18:54:04.270870 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/26e4267d-7248-4639-8a8f-93b8ca307c01-trusted-ca-bundle\") pod \"console-67bf585c5f-lvd9w\" (UID: \"26e4267d-7248-4639-8a8f-93b8ca307c01\") " pod="openshift-console/console-67bf585c5f-lvd9w" Jan 27 18:54:04 crc kubenswrapper[4853]: I0127 18:54:04.270889 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/26e4267d-7248-4639-8a8f-93b8ca307c01-service-ca\") pod \"console-67bf585c5f-lvd9w\" (UID: \"26e4267d-7248-4639-8a8f-93b8ca307c01\") " pod="openshift-console/console-67bf585c5f-lvd9w" Jan 27 18:54:04 crc kubenswrapper[4853]: I0127 18:54:04.270903 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/26e4267d-7248-4639-8a8f-93b8ca307c01-console-oauth-config\") pod \"console-67bf585c5f-lvd9w\" (UID: \"26e4267d-7248-4639-8a8f-93b8ca307c01\") " pod="openshift-console/console-67bf585c5f-lvd9w" Jan 27 18:54:04 crc kubenswrapper[4853]: I0127 18:54:04.271932 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/26e4267d-7248-4639-8a8f-93b8ca307c01-service-ca\") pod \"console-67bf585c5f-lvd9w\" (UID: \"26e4267d-7248-4639-8a8f-93b8ca307c01\") " pod="openshift-console/console-67bf585c5f-lvd9w" Jan 27 18:54:04 crc kubenswrapper[4853]: I0127 18:54:04.271932 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" 
(UniqueName: \"kubernetes.io/configmap/26e4267d-7248-4639-8a8f-93b8ca307c01-oauth-serving-cert\") pod \"console-67bf585c5f-lvd9w\" (UID: \"26e4267d-7248-4639-8a8f-93b8ca307c01\") " pod="openshift-console/console-67bf585c5f-lvd9w" Jan 27 18:54:04 crc kubenswrapper[4853]: I0127 18:54:04.272038 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/26e4267d-7248-4639-8a8f-93b8ca307c01-console-config\") pod \"console-67bf585c5f-lvd9w\" (UID: \"26e4267d-7248-4639-8a8f-93b8ca307c01\") " pod="openshift-console/console-67bf585c5f-lvd9w" Jan 27 18:54:04 crc kubenswrapper[4853]: I0127 18:54:04.273563 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/26e4267d-7248-4639-8a8f-93b8ca307c01-trusted-ca-bundle\") pod \"console-67bf585c5f-lvd9w\" (UID: \"26e4267d-7248-4639-8a8f-93b8ca307c01\") " pod="openshift-console/console-67bf585c5f-lvd9w" Jan 27 18:54:04 crc kubenswrapper[4853]: I0127 18:54:04.274723 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/26e4267d-7248-4639-8a8f-93b8ca307c01-console-serving-cert\") pod \"console-67bf585c5f-lvd9w\" (UID: \"26e4267d-7248-4639-8a8f-93b8ca307c01\") " pod="openshift-console/console-67bf585c5f-lvd9w" Jan 27 18:54:04 crc kubenswrapper[4853]: I0127 18:54:04.274791 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/26e4267d-7248-4639-8a8f-93b8ca307c01-console-oauth-config\") pod \"console-67bf585c5f-lvd9w\" (UID: \"26e4267d-7248-4639-8a8f-93b8ca307c01\") " pod="openshift-console/console-67bf585c5f-lvd9w" Jan 27 18:54:04 crc kubenswrapper[4853]: I0127 18:54:04.288232 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p2mth\" (UniqueName: \"kubernetes.io/projected/26e4267d-7248-4639-8a8f-93b8ca307c01-kube-api-access-p2mth\") pod \"console-67bf585c5f-lvd9w\" (UID: \"26e4267d-7248-4639-8a8f-93b8ca307c01\") " pod="openshift-console/console-67bf585c5f-lvd9w" Jan 27 18:54:04 crc kubenswrapper[4853]: I0127 18:54:04.372286 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/9fd55339-43c5-45d5-9789-2f69da655baf-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-znknt\" (UID: \"9fd55339-43c5-45d5-9789-2f69da655baf\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-znknt" Jan 27 18:54:04 crc kubenswrapper[4853]: I0127 18:54:04.374899 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/9fd55339-43c5-45d5-9789-2f69da655baf-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-znknt\" (UID: \"9fd55339-43c5-45d5-9789-2f69da655baf\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-znknt" Jan 27 18:54:04 crc kubenswrapper[4853]: I0127 18:54:04.377838 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-bnd9d"] Jan 27 18:54:04 crc kubenswrapper[4853]: W0127 18:54:04.382800 4853 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfdac6187_5b6f_4375_a09a_42efb7d0eaf6.slice/crio-986363e2794ae5b6c089def2591159e258cac34928d1c0339d1b3c8456af5d65 WatchSource:0}: Error finding container 
986363e2794ae5b6c089def2591159e258cac34928d1c0339d1b3c8456af5d65: Status 404 returned error can't find the container with id 986363e2794ae5b6c089def2591159e258cac34928d1c0339d1b3c8456af5d65 Jan 27 18:54:04 crc kubenswrapper[4853]: I0127 18:54:04.420776 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-67bf585c5f-lvd9w" Jan 27 18:54:04 crc kubenswrapper[4853]: I0127 18:54:04.446495 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-dl7m9"] Jan 27 18:54:04 crc kubenswrapper[4853]: W0127 18:54:04.462469 4853 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb5d820ba_3b41_444c_b92b_1754909e56a0.slice/crio-75e1fb79c00952d5a9245ab0e2f4a53cbb285c1efb940a1d4642b9f8fee73faa WatchSource:0}: Error finding container 75e1fb79c00952d5a9245ab0e2f4a53cbb285c1efb940a1d4642b9f8fee73faa: Status 404 returned error can't find the container with id 75e1fb79c00952d5a9245ab0e2f4a53cbb285c1efb940a1d4642b9f8fee73faa Jan 27 18:54:04 crc kubenswrapper[4853]: I0127 18:54:04.606983 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-67bf585c5f-lvd9w"] Jan 27 18:54:04 crc kubenswrapper[4853]: W0127 18:54:04.611492 4853 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod26e4267d_7248_4639_8a8f_93b8ca307c01.slice/crio-59ebd4f3697db22db42da3dd2e7ec3f2c77778102889df772f508e3249c688c4 WatchSource:0}: Error finding container 59ebd4f3697db22db42da3dd2e7ec3f2c77778102889df772f508e3249c688c4: Status 404 returned error can't find the container with id 59ebd4f3697db22db42da3dd2e7ec3f2c77778102889df772f508e3249c688c4 Jan 27 18:54:04 crc kubenswrapper[4853]: I0127 18:54:04.638081 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-67bf585c5f-lvd9w" event={"ID":"26e4267d-7248-4639-8a8f-93b8ca307c01","Type":"ContainerStarted","Data":"59ebd4f3697db22db42da3dd2e7ec3f2c77778102889df772f508e3249c688c4"} Jan 27 18:54:04 crc kubenswrapper[4853]: I0127 18:54:04.639879 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-t77h4" event={"ID":"9f0d7951-c2e9-4857-a367-2426f842e3af","Type":"ContainerStarted","Data":"f5fd0b46e96f2bb0b08af089eff201e77c21e853a3dbd78462cf0b90035fe471"} Jan 27 18:54:04 crc kubenswrapper[4853]: I0127 18:54:04.641108 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-dl7m9" event={"ID":"b5d820ba-3b41-444c-b92b-1754909e56a0","Type":"ContainerStarted","Data":"75e1fb79c00952d5a9245ab0e2f4a53cbb285c1efb940a1d4642b9f8fee73faa"} Jan 27 18:54:04 crc kubenswrapper[4853]: I0127 18:54:04.642173 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-bnd9d" event={"ID":"fdac6187-5b6f-4375-a09a-42efb7d0eaf6","Type":"ContainerStarted","Data":"986363e2794ae5b6c089def2591159e258cac34928d1c0339d1b3c8456af5d65"} Jan 27 18:54:04 crc kubenswrapper[4853]: I0127 18:54:04.648765 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-znknt" Jan 27 18:54:04 crc kubenswrapper[4853]: I0127 18:54:04.829872 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-znknt"] Jan 27 18:54:04 crc kubenswrapper[4853]: W0127 18:54:04.836982 4853 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9fd55339_43c5_45d5_9789_2f69da655baf.slice/crio-c3731ff789b90dfce6ee26426dcfe564594f20d0ae24d9ff7db1e3ebe59d0ad5 WatchSource:0}: Error finding container c3731ff789b90dfce6ee26426dcfe564594f20d0ae24d9ff7db1e3ebe59d0ad5: Status 404 returned error can't find the container with id c3731ff789b90dfce6ee26426dcfe564594f20d0ae24d9ff7db1e3ebe59d0ad5 Jan 27 18:54:05 crc kubenswrapper[4853]: I0127 18:54:05.648591 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-67bf585c5f-lvd9w" event={"ID":"26e4267d-7248-4639-8a8f-93b8ca307c01","Type":"ContainerStarted","Data":"2015f850b35048d506613bbd6c03a1e3d7f1e64d6deaafa8581f364dd61d8b6e"} Jan 27 18:54:05 crc kubenswrapper[4853]: I0127 18:54:05.653923 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-znknt" event={"ID":"9fd55339-43c5-45d5-9789-2f69da655baf","Type":"ContainerStarted","Data":"c3731ff789b90dfce6ee26426dcfe564594f20d0ae24d9ff7db1e3ebe59d0ad5"} Jan 27 18:54:05 crc kubenswrapper[4853]: I0127 18:54:05.665110 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-67bf585c5f-lvd9w" podStartSLOduration=1.6650936619999999 podStartE2EDuration="1.665093662s" podCreationTimestamp="2026-01-27 18:54:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:54:05.664841765 +0000 UTC m=+688.127384648" watchObservedRunningTime="2026-01-27 18:54:05.665093662 +0000 UTC m=+688.127636545" Jan 27 18:54:07 crc kubenswrapper[4853]: I0127 18:54:07.667664 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-t77h4" event={"ID":"9f0d7951-c2e9-4857-a367-2426f842e3af","Type":"ContainerStarted","Data":"f678c04782a0afe6bb1ec0c6fb9ef71a10effcddf17bbf2dc729e68fb3fb3768"} Jan 27 18:54:07 crc kubenswrapper[4853]: I0127 18:54:07.668504 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-t77h4" Jan 27 18:54:07 crc kubenswrapper[4853]: I0127 18:54:07.668857 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-dl7m9" event={"ID":"b5d820ba-3b41-444c-b92b-1754909e56a0","Type":"ContainerStarted","Data":"a85f368f762e59ace04f8e1cf71f4e82d0da7eb12e4159b7bcd25cb48d46f126"} Jan 27 18:54:07 crc kubenswrapper[4853]: I0127 18:54:07.670078 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-bnd9d" event={"ID":"fdac6187-5b6f-4375-a09a-42efb7d0eaf6","Type":"ContainerStarted","Data":"8e12a72175e5c8283df8eb2c6aa48a656bb9996d205d729936dac99b3d19057f"} Jan 27 18:54:07 crc kubenswrapper[4853]: I0127 18:54:07.671702 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-znknt" event={"ID":"9fd55339-43c5-45d5-9789-2f69da655baf","Type":"ContainerStarted","Data":"2496b791e4ceb1a1c72d0b70b634d489b87c3aa9c4785cd90faf443a85308977"} Jan 27 18:54:07 crc kubenswrapper[4853]: I0127 
18:54:07.671865 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-znknt" Jan 27 18:54:07 crc kubenswrapper[4853]: I0127 18:54:07.689640 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-handler-t77h4" podStartSLOduration=1.480883827 podStartE2EDuration="4.689618056s" podCreationTimestamp="2026-01-27 18:54:03 +0000 UTC" firstStartedPulling="2026-01-27 18:54:04.132765101 +0000 UTC m=+686.595307984" lastFinishedPulling="2026-01-27 18:54:07.34149933 +0000 UTC m=+689.804042213" observedRunningTime="2026-01-27 18:54:07.686020762 +0000 UTC m=+690.148563645" watchObservedRunningTime="2026-01-27 18:54:07.689618056 +0000 UTC m=+690.152160949" Jan 27 18:54:07 crc kubenswrapper[4853]: I0127 18:54:07.704437 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-znknt" podStartSLOduration=2.173954679 podStartE2EDuration="4.704410753s" podCreationTimestamp="2026-01-27 18:54:03 +0000 UTC" firstStartedPulling="2026-01-27 18:54:04.839790905 +0000 UTC m=+687.302333788" lastFinishedPulling="2026-01-27 18:54:07.370246969 +0000 UTC m=+689.832789862" observedRunningTime="2026-01-27 18:54:07.70153301 +0000 UTC m=+690.164075903" watchObservedRunningTime="2026-01-27 18:54:07.704410753 +0000 UTC m=+690.166953636" Jan 27 18:54:07 crc kubenswrapper[4853]: I0127 18:54:07.719434 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-bnd9d" podStartSLOduration=1.762907666 podStartE2EDuration="4.719413836s" podCreationTimestamp="2026-01-27 18:54:03 +0000 UTC" firstStartedPulling="2026-01-27 18:54:04.38499488 +0000 UTC m=+686.847537763" lastFinishedPulling="2026-01-27 18:54:07.34150105 +0000 UTC m=+689.804043933" observedRunningTime="2026-01-27 18:54:07.715883164 +0000 UTC m=+690.178426047" watchObservedRunningTime="2026-01-27 18:54:07.719413836 +0000 UTC m=+690.181956709" Jan 27 18:54:10 crc kubenswrapper[4853]: I0127 18:54:10.692315 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-dl7m9" event={"ID":"b5d820ba-3b41-444c-b92b-1754909e56a0","Type":"ContainerStarted","Data":"47fc8d6832ead20e26d6f7c0eeb8d144c8b77a3766ad3875aa07de8bd29eea1a"} Jan 27 18:54:10 crc kubenswrapper[4853]: I0127 18:54:10.711080 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-metrics-54757c584b-dl7m9" podStartSLOduration=1.8888073890000001 podStartE2EDuration="7.71106115s" podCreationTimestamp="2026-01-27 18:54:03 +0000 UTC" firstStartedPulling="2026-01-27 18:54:04.465353509 +0000 UTC m=+686.927896392" lastFinishedPulling="2026-01-27 18:54:10.28760727 +0000 UTC m=+692.750150153" observedRunningTime="2026-01-27 18:54:10.707509418 +0000 UTC m=+693.170052301" watchObservedRunningTime="2026-01-27 18:54:10.71106115 +0000 UTC m=+693.173604033" Jan 27 18:54:14 crc kubenswrapper[4853]: I0127 18:54:14.102892 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-handler-t77h4" Jan 27 18:54:14 crc kubenswrapper[4853]: I0127 18:54:14.421643 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-67bf585c5f-lvd9w" Jan 27 18:54:14 crc kubenswrapper[4853]: I0127 18:54:14.421688 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-67bf585c5f-lvd9w" 
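
The pod_startup_latency_tracker entries above are internally consistent: podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration is that E2E duration minus the image-pull window (lastFinishedPulling − firstStartedPulling). For nmstate-handler-t77h4: 4.689618056 − (18:54:07.34149933 − 18:54:04.132765101) = 1.480883827, exactly as logged; the same holds for nmstate-console-plugin-7754f76f8b-bnd9d (4.719413836 − 2.95650617 = 1.762907666). For console-67bf585c5f-lvd9w both pull timestamps are the zero time, so SLO equals E2E, and the long tail in 1.6650936619999999 is just the float64 seconds value printed at full precision. The tracker's internal accounting isn't visible in this log, so treat that relation as inferred from the logged fields; a minimal Go sketch that recomputes the nmstate-handler-t77h4 numbers (illustrative code, not kubelet source):

package main

import (
	"fmt"
	"time"
)

// Recompute the startup durations logged by pod_startup_latency_tracker for
// nmstate-handler-t77h4, assuming (inferred from the logged fields, not taken
// from kubelet source) that SLO duration = E2E duration - image-pull window.
func main() {
	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
	parse := func(s string) time.Time {
		t, err := time.Parse(layout, s)
		if err != nil {
			panic(err)
		}
		return t
	}

	created := parse("2026-01-27 18:54:03 +0000 UTC")             // podCreationTimestamp
	firstPull := parse("2026-01-27 18:54:04.132765101 +0000 UTC") // firstStartedPulling
	lastPull := parse("2026-01-27 18:54:07.34149933 +0000 UTC")   // lastFinishedPulling
	running := parse("2026-01-27 18:54:07.689618056 +0000 UTC")   // watchObservedRunningTime

	e2e := running.Sub(created)     // 4.689618056s
	pull := lastPull.Sub(firstPull) // 3.208734229s
	slo := e2e - pull               // 1.480883827s

	fmt.Printf("podStartE2EDuration=%v podStartSLOduration=%.9fs\n", e2e, slo.Seconds())
}
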
Jan 27 18:54:14 crc kubenswrapper[4853]: I0127 18:54:14.425731 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-67bf585c5f-lvd9w" Jan 27 18:54:14 crc kubenswrapper[4853]: I0127 18:54:14.724424 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-67bf585c5f-lvd9w" Jan 27 18:54:14 crc kubenswrapper[4853]: I0127 18:54:14.780984 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-9vd4d"] Jan 27 18:54:24 crc kubenswrapper[4853]: I0127 18:54:24.656077 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-znknt" Jan 27 18:54:35 crc kubenswrapper[4853]: I0127 18:54:35.541143 4853 patch_prober.go:28] interesting pod/machine-config-daemon-6gqj2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 18:54:35 crc kubenswrapper[4853]: I0127 18:54:35.541654 4853 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6gqj2" podUID="b8a89b1e-bef8-4cb7-930c-480d3125778c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 18:54:35 crc kubenswrapper[4853]: I0127 18:54:35.579269 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcxd5g7"] Jan 27 18:54:35 crc kubenswrapper[4853]: I0127 18:54:35.580926 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcxd5g7" Jan 27 18:54:35 crc kubenswrapper[4853]: I0127 18:54:35.582831 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Jan 27 18:54:35 crc kubenswrapper[4853]: I0127 18:54:35.589404 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcxd5g7"] Jan 27 18:54:35 crc kubenswrapper[4853]: I0127 18:54:35.683021 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/13e74d47-44ea-4d71-abca-c805139dc4a9-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcxd5g7\" (UID: \"13e74d47-44ea-4d71-abca-c805139dc4a9\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcxd5g7" Jan 27 18:54:35 crc kubenswrapper[4853]: I0127 18:54:35.683093 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/13e74d47-44ea-4d71-abca-c805139dc4a9-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcxd5g7\" (UID: \"13e74d47-44ea-4d71-abca-c805139dc4a9\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcxd5g7" Jan 27 18:54:35 crc kubenswrapper[4853]: I0127 18:54:35.683200 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6m98b\" (UniqueName: \"kubernetes.io/projected/13e74d47-44ea-4d71-abca-c805139dc4a9-kube-api-access-6m98b\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcxd5g7\" (UID: \"13e74d47-44ea-4d71-abca-c805139dc4a9\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcxd5g7" Jan 27 18:54:35 crc kubenswrapper[4853]: I0127 18:54:35.784955 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6m98b\" (UniqueName: \"kubernetes.io/projected/13e74d47-44ea-4d71-abca-c805139dc4a9-kube-api-access-6m98b\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcxd5g7\" (UID: \"13e74d47-44ea-4d71-abca-c805139dc4a9\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcxd5g7" Jan 27 18:54:35 crc kubenswrapper[4853]: I0127 18:54:35.785058 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/13e74d47-44ea-4d71-abca-c805139dc4a9-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcxd5g7\" (UID: \"13e74d47-44ea-4d71-abca-c805139dc4a9\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcxd5g7" Jan 27 18:54:35 crc kubenswrapper[4853]: I0127 18:54:35.785107 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/13e74d47-44ea-4d71-abca-c805139dc4a9-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcxd5g7\" (UID: \"13e74d47-44ea-4d71-abca-c805139dc4a9\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcxd5g7" Jan 27 18:54:35 crc kubenswrapper[4853]: I0127 18:54:35.785650 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/13e74d47-44ea-4d71-abca-c805139dc4a9-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcxd5g7\" (UID: \"13e74d47-44ea-4d71-abca-c805139dc4a9\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcxd5g7" Jan 27 18:54:35 crc kubenswrapper[4853]: I0127 18:54:35.785695 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/13e74d47-44ea-4d71-abca-c805139dc4a9-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcxd5g7\" (UID: \"13e74d47-44ea-4d71-abca-c805139dc4a9\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcxd5g7" Jan 27 18:54:35 crc kubenswrapper[4853]: I0127 18:54:35.805362 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6m98b\" (UniqueName: \"kubernetes.io/projected/13e74d47-44ea-4d71-abca-c805139dc4a9-kube-api-access-6m98b\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcxd5g7\" (UID: \"13e74d47-44ea-4d71-abca-c805139dc4a9\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcxd5g7" Jan 27 18:54:35 crc kubenswrapper[4853]: I0127 18:54:35.901245 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcxd5g7" Jan 27 18:54:36 crc kubenswrapper[4853]: I0127 18:54:36.290551 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcxd5g7"] Jan 27 18:54:36 crc kubenswrapper[4853]: I0127 18:54:36.840501 4853 generic.go:334] "Generic (PLEG): container finished" podID="13e74d47-44ea-4d71-abca-c805139dc4a9" containerID="572813b43d0320e3b57dd014bc1fb1956d1840438774eab70d55fd3b68806654" exitCode=0 Jan 27 18:54:36 crc kubenswrapper[4853]: I0127 18:54:36.840885 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcxd5g7" event={"ID":"13e74d47-44ea-4d71-abca-c805139dc4a9","Type":"ContainerDied","Data":"572813b43d0320e3b57dd014bc1fb1956d1840438774eab70d55fd3b68806654"} Jan 27 18:54:36 crc kubenswrapper[4853]: I0127 18:54:36.841544 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcxd5g7" event={"ID":"13e74d47-44ea-4d71-abca-c805139dc4a9","Type":"ContainerStarted","Data":"263fca588af92b2086a66255a6d0e1b99d697baf3ff6fc3c39390f611b0dc962"} Jan 27 18:54:38 crc kubenswrapper[4853]: I0127 18:54:38.859094 4853 generic.go:334] "Generic (PLEG): container finished" podID="13e74d47-44ea-4d71-abca-c805139dc4a9" containerID="86d97c6fc606ddbbd604eeeba406891988d543358f4019b8ebd775ead8cc31c0" exitCode=0 Jan 27 18:54:38 crc kubenswrapper[4853]: I0127 18:54:38.859309 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcxd5g7" event={"ID":"13e74d47-44ea-4d71-abca-c805139dc4a9","Type":"ContainerDied","Data":"86d97c6fc606ddbbd604eeeba406891988d543358f4019b8ebd775ead8cc31c0"} Jan 27 18:54:39 crc kubenswrapper[4853]: I0127 18:54:39.822084 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-f9d7485db-9vd4d" podUID="2bd6e097-af15-41a1-9ab2-a4e79adef815" containerName="console" 
containerID="cri-o://2d614cf47ff72c2b9eb8ab143b180ed9addf2b99204dddbfab93868fc99f4be8" gracePeriod=15 Jan 27 18:54:39 crc kubenswrapper[4853]: I0127 18:54:39.869759 4853 generic.go:334] "Generic (PLEG): container finished" podID="13e74d47-44ea-4d71-abca-c805139dc4a9" containerID="48bfa8b90f665b6b77bf3d34e859e77bb5a1d1b0931bb115e9b54c626c35ce0a" exitCode=0 Jan 27 18:54:39 crc kubenswrapper[4853]: I0127 18:54:39.869818 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcxd5g7" event={"ID":"13e74d47-44ea-4d71-abca-c805139dc4a9","Type":"ContainerDied","Data":"48bfa8b90f665b6b77bf3d34e859e77bb5a1d1b0931bb115e9b54c626c35ce0a"} Jan 27 18:54:40 crc kubenswrapper[4853]: I0127 18:54:40.180324 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-9vd4d_2bd6e097-af15-41a1-9ab2-a4e79adef815/console/0.log" Jan 27 18:54:40 crc kubenswrapper[4853]: I0127 18:54:40.180676 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-9vd4d" Jan 27 18:54:40 crc kubenswrapper[4853]: I0127 18:54:40.237034 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2bd6e097-af15-41a1-9ab2-a4e79adef815-trusted-ca-bundle\") pod \"2bd6e097-af15-41a1-9ab2-a4e79adef815\" (UID: \"2bd6e097-af15-41a1-9ab2-a4e79adef815\") " Jan 27 18:54:40 crc kubenswrapper[4853]: I0127 18:54:40.237137 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bzpnv\" (UniqueName: \"kubernetes.io/projected/2bd6e097-af15-41a1-9ab2-a4e79adef815-kube-api-access-bzpnv\") pod \"2bd6e097-af15-41a1-9ab2-a4e79adef815\" (UID: \"2bd6e097-af15-41a1-9ab2-a4e79adef815\") " Jan 27 18:54:40 crc kubenswrapper[4853]: I0127 18:54:40.237167 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/2bd6e097-af15-41a1-9ab2-a4e79adef815-service-ca\") pod \"2bd6e097-af15-41a1-9ab2-a4e79adef815\" (UID: \"2bd6e097-af15-41a1-9ab2-a4e79adef815\") " Jan 27 18:54:40 crc kubenswrapper[4853]: I0127 18:54:40.237191 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/2bd6e097-af15-41a1-9ab2-a4e79adef815-console-serving-cert\") pod \"2bd6e097-af15-41a1-9ab2-a4e79adef815\" (UID: \"2bd6e097-af15-41a1-9ab2-a4e79adef815\") " Jan 27 18:54:40 crc kubenswrapper[4853]: I0127 18:54:40.237211 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/2bd6e097-af15-41a1-9ab2-a4e79adef815-oauth-serving-cert\") pod \"2bd6e097-af15-41a1-9ab2-a4e79adef815\" (UID: \"2bd6e097-af15-41a1-9ab2-a4e79adef815\") " Jan 27 18:54:40 crc kubenswrapper[4853]: I0127 18:54:40.237240 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/2bd6e097-af15-41a1-9ab2-a4e79adef815-console-oauth-config\") pod \"2bd6e097-af15-41a1-9ab2-a4e79adef815\" (UID: \"2bd6e097-af15-41a1-9ab2-a4e79adef815\") " Jan 27 18:54:40 crc kubenswrapper[4853]: I0127 18:54:40.237266 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/2bd6e097-af15-41a1-9ab2-a4e79adef815-console-config\") 
pod \"2bd6e097-af15-41a1-9ab2-a4e79adef815\" (UID: \"2bd6e097-af15-41a1-9ab2-a4e79adef815\") " Jan 27 18:54:40 crc kubenswrapper[4853]: I0127 18:54:40.238068 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2bd6e097-af15-41a1-9ab2-a4e79adef815-console-config" (OuterVolumeSpecName: "console-config") pod "2bd6e097-af15-41a1-9ab2-a4e79adef815" (UID: "2bd6e097-af15-41a1-9ab2-a4e79adef815"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:54:40 crc kubenswrapper[4853]: I0127 18:54:40.238080 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2bd6e097-af15-41a1-9ab2-a4e79adef815-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "2bd6e097-af15-41a1-9ab2-a4e79adef815" (UID: "2bd6e097-af15-41a1-9ab2-a4e79adef815"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:54:40 crc kubenswrapper[4853]: I0127 18:54:40.238184 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2bd6e097-af15-41a1-9ab2-a4e79adef815-service-ca" (OuterVolumeSpecName: "service-ca") pod "2bd6e097-af15-41a1-9ab2-a4e79adef815" (UID: "2bd6e097-af15-41a1-9ab2-a4e79adef815"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:54:40 crc kubenswrapper[4853]: I0127 18:54:40.238566 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2bd6e097-af15-41a1-9ab2-a4e79adef815-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "2bd6e097-af15-41a1-9ab2-a4e79adef815" (UID: "2bd6e097-af15-41a1-9ab2-a4e79adef815"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:54:40 crc kubenswrapper[4853]: I0127 18:54:40.243231 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2bd6e097-af15-41a1-9ab2-a4e79adef815-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "2bd6e097-af15-41a1-9ab2-a4e79adef815" (UID: "2bd6e097-af15-41a1-9ab2-a4e79adef815"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:54:40 crc kubenswrapper[4853]: I0127 18:54:40.243388 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2bd6e097-af15-41a1-9ab2-a4e79adef815-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "2bd6e097-af15-41a1-9ab2-a4e79adef815" (UID: "2bd6e097-af15-41a1-9ab2-a4e79adef815"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:54:40 crc kubenswrapper[4853]: I0127 18:54:40.243667 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2bd6e097-af15-41a1-9ab2-a4e79adef815-kube-api-access-bzpnv" (OuterVolumeSpecName: "kube-api-access-bzpnv") pod "2bd6e097-af15-41a1-9ab2-a4e79adef815" (UID: "2bd6e097-af15-41a1-9ab2-a4e79adef815"). InnerVolumeSpecName "kube-api-access-bzpnv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:54:40 crc kubenswrapper[4853]: I0127 18:54:40.338938 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bzpnv\" (UniqueName: \"kubernetes.io/projected/2bd6e097-af15-41a1-9ab2-a4e79adef815-kube-api-access-bzpnv\") on node \"crc\" DevicePath \"\"" Jan 27 18:54:40 crc kubenswrapper[4853]: I0127 18:54:40.338972 4853 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/2bd6e097-af15-41a1-9ab2-a4e79adef815-service-ca\") on node \"crc\" DevicePath \"\"" Jan 27 18:54:40 crc kubenswrapper[4853]: I0127 18:54:40.338986 4853 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/2bd6e097-af15-41a1-9ab2-a4e79adef815-console-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 18:54:40 crc kubenswrapper[4853]: I0127 18:54:40.338999 4853 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/2bd6e097-af15-41a1-9ab2-a4e79adef815-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 18:54:40 crc kubenswrapper[4853]: I0127 18:54:40.339010 4853 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/2bd6e097-af15-41a1-9ab2-a4e79adef815-console-oauth-config\") on node \"crc\" DevicePath \"\"" Jan 27 18:54:40 crc kubenswrapper[4853]: I0127 18:54:40.339022 4853 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/2bd6e097-af15-41a1-9ab2-a4e79adef815-console-config\") on node \"crc\" DevicePath \"\"" Jan 27 18:54:40 crc kubenswrapper[4853]: I0127 18:54:40.339033 4853 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2bd6e097-af15-41a1-9ab2-a4e79adef815-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 18:54:40 crc kubenswrapper[4853]: I0127 18:54:40.877724 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-9vd4d_2bd6e097-af15-41a1-9ab2-a4e79adef815/console/0.log" Jan 27 18:54:40 crc kubenswrapper[4853]: I0127 18:54:40.877782 4853 generic.go:334] "Generic (PLEG): container finished" podID="2bd6e097-af15-41a1-9ab2-a4e79adef815" containerID="2d614cf47ff72c2b9eb8ab143b180ed9addf2b99204dddbfab93868fc99f4be8" exitCode=2 Jan 27 18:54:40 crc kubenswrapper[4853]: I0127 18:54:40.877859 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-9vd4d" event={"ID":"2bd6e097-af15-41a1-9ab2-a4e79adef815","Type":"ContainerDied","Data":"2d614cf47ff72c2b9eb8ab143b180ed9addf2b99204dddbfab93868fc99f4be8"} Jan 27 18:54:40 crc kubenswrapper[4853]: I0127 18:54:40.877881 4853 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-9vd4d" Jan 27 18:54:40 crc kubenswrapper[4853]: I0127 18:54:40.877901 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-9vd4d" event={"ID":"2bd6e097-af15-41a1-9ab2-a4e79adef815","Type":"ContainerDied","Data":"24fbfe2fbed34ea78798d7354d51dfd8fd3503ba9e7e208c46023f4f485f8c7d"} Jan 27 18:54:40 crc kubenswrapper[4853]: I0127 18:54:40.877921 4853 scope.go:117] "RemoveContainer" containerID="2d614cf47ff72c2b9eb8ab143b180ed9addf2b99204dddbfab93868fc99f4be8" Jan 27 18:54:40 crc kubenswrapper[4853]: I0127 18:54:40.909924 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-9vd4d"] Jan 27 18:54:40 crc kubenswrapper[4853]: I0127 18:54:40.915256 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-f9d7485db-9vd4d"] Jan 27 18:54:40 crc kubenswrapper[4853]: I0127 18:54:40.916451 4853 scope.go:117] "RemoveContainer" containerID="2d614cf47ff72c2b9eb8ab143b180ed9addf2b99204dddbfab93868fc99f4be8" Jan 27 18:54:40 crc kubenswrapper[4853]: E0127 18:54:40.917681 4853 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2d614cf47ff72c2b9eb8ab143b180ed9addf2b99204dddbfab93868fc99f4be8\": container with ID starting with 2d614cf47ff72c2b9eb8ab143b180ed9addf2b99204dddbfab93868fc99f4be8 not found: ID does not exist" containerID="2d614cf47ff72c2b9eb8ab143b180ed9addf2b99204dddbfab93868fc99f4be8" Jan 27 18:54:40 crc kubenswrapper[4853]: I0127 18:54:40.917725 4853 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2d614cf47ff72c2b9eb8ab143b180ed9addf2b99204dddbfab93868fc99f4be8"} err="failed to get container status \"2d614cf47ff72c2b9eb8ab143b180ed9addf2b99204dddbfab93868fc99f4be8\": rpc error: code = NotFound desc = could not find container \"2d614cf47ff72c2b9eb8ab143b180ed9addf2b99204dddbfab93868fc99f4be8\": container with ID starting with 2d614cf47ff72c2b9eb8ab143b180ed9addf2b99204dddbfab93868fc99f4be8 not found: ID does not exist" Jan 27 18:54:41 crc kubenswrapper[4853]: I0127 18:54:41.155228 4853 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcxd5g7" Jan 27 18:54:41 crc kubenswrapper[4853]: I0127 18:54:41.250057 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6m98b\" (UniqueName: \"kubernetes.io/projected/13e74d47-44ea-4d71-abca-c805139dc4a9-kube-api-access-6m98b\") pod \"13e74d47-44ea-4d71-abca-c805139dc4a9\" (UID: \"13e74d47-44ea-4d71-abca-c805139dc4a9\") " Jan 27 18:54:41 crc kubenswrapper[4853]: I0127 18:54:41.250107 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/13e74d47-44ea-4d71-abca-c805139dc4a9-bundle\") pod \"13e74d47-44ea-4d71-abca-c805139dc4a9\" (UID: \"13e74d47-44ea-4d71-abca-c805139dc4a9\") " Jan 27 18:54:41 crc kubenswrapper[4853]: I0127 18:54:41.250157 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/13e74d47-44ea-4d71-abca-c805139dc4a9-util\") pod \"13e74d47-44ea-4d71-abca-c805139dc4a9\" (UID: \"13e74d47-44ea-4d71-abca-c805139dc4a9\") " Jan 27 18:54:41 crc kubenswrapper[4853]: I0127 18:54:41.251750 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/13e74d47-44ea-4d71-abca-c805139dc4a9-bundle" (OuterVolumeSpecName: "bundle") pod "13e74d47-44ea-4d71-abca-c805139dc4a9" (UID: "13e74d47-44ea-4d71-abca-c805139dc4a9"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 18:54:41 crc kubenswrapper[4853]: I0127 18:54:41.256492 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/13e74d47-44ea-4d71-abca-c805139dc4a9-kube-api-access-6m98b" (OuterVolumeSpecName: "kube-api-access-6m98b") pod "13e74d47-44ea-4d71-abca-c805139dc4a9" (UID: "13e74d47-44ea-4d71-abca-c805139dc4a9"). InnerVolumeSpecName "kube-api-access-6m98b". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:54:41 crc kubenswrapper[4853]: I0127 18:54:41.264070 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/13e74d47-44ea-4d71-abca-c805139dc4a9-util" (OuterVolumeSpecName: "util") pod "13e74d47-44ea-4d71-abca-c805139dc4a9" (UID: "13e74d47-44ea-4d71-abca-c805139dc4a9"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 18:54:41 crc kubenswrapper[4853]: I0127 18:54:41.352270 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6m98b\" (UniqueName: \"kubernetes.io/projected/13e74d47-44ea-4d71-abca-c805139dc4a9-kube-api-access-6m98b\") on node \"crc\" DevicePath \"\"" Jan 27 18:54:41 crc kubenswrapper[4853]: I0127 18:54:41.352332 4853 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/13e74d47-44ea-4d71-abca-c805139dc4a9-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 18:54:41 crc kubenswrapper[4853]: I0127 18:54:41.352351 4853 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/13e74d47-44ea-4d71-abca-c805139dc4a9-util\") on node \"crc\" DevicePath \"\"" Jan 27 18:54:41 crc kubenswrapper[4853]: I0127 18:54:41.885808 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcxd5g7" event={"ID":"13e74d47-44ea-4d71-abca-c805139dc4a9","Type":"ContainerDied","Data":"263fca588af92b2086a66255a6d0e1b99d697baf3ff6fc3c39390f611b0dc962"} Jan 27 18:54:41 crc kubenswrapper[4853]: I0127 18:54:41.885843 4853 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="263fca588af92b2086a66255a6d0e1b99d697baf3ff6fc3c39390f611b0dc962" Jan 27 18:54:41 crc kubenswrapper[4853]: I0127 18:54:41.885873 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcxd5g7" Jan 27 18:54:42 crc kubenswrapper[4853]: I0127 18:54:42.122195 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2bd6e097-af15-41a1-9ab2-a4e79adef815" path="/var/lib/kubelet/pods/2bd6e097-af15-41a1-9ab2-a4e79adef815/volumes" Jan 27 18:54:50 crc kubenswrapper[4853]: I0127 18:54:50.413840 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-57d46b5cf6-rcn4b"] Jan 27 18:54:50 crc kubenswrapper[4853]: E0127 18:54:50.414678 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="13e74d47-44ea-4d71-abca-c805139dc4a9" containerName="util" Jan 27 18:54:50 crc kubenswrapper[4853]: I0127 18:54:50.414694 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="13e74d47-44ea-4d71-abca-c805139dc4a9" containerName="util" Jan 27 18:54:50 crc kubenswrapper[4853]: E0127 18:54:50.414706 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="13e74d47-44ea-4d71-abca-c805139dc4a9" containerName="extract" Jan 27 18:54:50 crc kubenswrapper[4853]: I0127 18:54:50.414713 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="13e74d47-44ea-4d71-abca-c805139dc4a9" containerName="extract" Jan 27 18:54:50 crc kubenswrapper[4853]: E0127 18:54:50.414732 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2bd6e097-af15-41a1-9ab2-a4e79adef815" containerName="console" Jan 27 18:54:50 crc kubenswrapper[4853]: I0127 18:54:50.414741 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="2bd6e097-af15-41a1-9ab2-a4e79adef815" containerName="console" Jan 27 18:54:50 crc kubenswrapper[4853]: E0127 18:54:50.414757 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="13e74d47-44ea-4d71-abca-c805139dc4a9" containerName="pull" Jan 27 18:54:50 crc kubenswrapper[4853]: I0127 18:54:50.414764 4853 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="13e74d47-44ea-4d71-abca-c805139dc4a9" containerName="pull" Jan 27 18:54:50 crc kubenswrapper[4853]: I0127 18:54:50.414877 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="2bd6e097-af15-41a1-9ab2-a4e79adef815" containerName="console" Jan 27 18:54:50 crc kubenswrapper[4853]: I0127 18:54:50.414889 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="13e74d47-44ea-4d71-abca-c805139dc4a9" containerName="extract" Jan 27 18:54:50 crc kubenswrapper[4853]: I0127 18:54:50.415402 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-57d46b5cf6-rcn4b" Jan 27 18:54:50 crc kubenswrapper[4853]: I0127 18:54:50.417636 4853 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert" Jan 27 18:54:50 crc kubenswrapper[4853]: I0127 18:54:50.418986 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt" Jan 27 18:54:50 crc kubenswrapper[4853]: I0127 18:54:50.419189 4853 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert" Jan 27 18:54:50 crc kubenswrapper[4853]: I0127 18:54:50.419285 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt" Jan 27 18:54:50 crc kubenswrapper[4853]: I0127 18:54:50.419285 4853 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-bd66k" Jan 27 18:54:50 crc kubenswrapper[4853]: I0127 18:54:50.432795 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-57d46b5cf6-rcn4b"] Jan 27 18:54:50 crc kubenswrapper[4853]: I0127 18:54:50.463004 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t92pp\" (UniqueName: \"kubernetes.io/projected/729dbe0f-d26d-4eeb-b813-e4be40033e44-kube-api-access-t92pp\") pod \"metallb-operator-controller-manager-57d46b5cf6-rcn4b\" (UID: \"729dbe0f-d26d-4eeb-b813-e4be40033e44\") " pod="metallb-system/metallb-operator-controller-manager-57d46b5cf6-rcn4b" Jan 27 18:54:50 crc kubenswrapper[4853]: I0127 18:54:50.463111 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/729dbe0f-d26d-4eeb-b813-e4be40033e44-apiservice-cert\") pod \"metallb-operator-controller-manager-57d46b5cf6-rcn4b\" (UID: \"729dbe0f-d26d-4eeb-b813-e4be40033e44\") " pod="metallb-system/metallb-operator-controller-manager-57d46b5cf6-rcn4b" Jan 27 18:54:50 crc kubenswrapper[4853]: I0127 18:54:50.463337 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/729dbe0f-d26d-4eeb-b813-e4be40033e44-webhook-cert\") pod \"metallb-operator-controller-manager-57d46b5cf6-rcn4b\" (UID: \"729dbe0f-d26d-4eeb-b813-e4be40033e44\") " pod="metallb-system/metallb-operator-controller-manager-57d46b5cf6-rcn4b" Jan 27 18:54:50 crc kubenswrapper[4853]: I0127 18:54:50.564798 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/729dbe0f-d26d-4eeb-b813-e4be40033e44-webhook-cert\") pod \"metallb-operator-controller-manager-57d46b5cf6-rcn4b\" (UID: \"729dbe0f-d26d-4eeb-b813-e4be40033e44\") " 
pod="metallb-system/metallb-operator-controller-manager-57d46b5cf6-rcn4b" Jan 27 18:54:50 crc kubenswrapper[4853]: I0127 18:54:50.564865 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t92pp\" (UniqueName: \"kubernetes.io/projected/729dbe0f-d26d-4eeb-b813-e4be40033e44-kube-api-access-t92pp\") pod \"metallb-operator-controller-manager-57d46b5cf6-rcn4b\" (UID: \"729dbe0f-d26d-4eeb-b813-e4be40033e44\") " pod="metallb-system/metallb-operator-controller-manager-57d46b5cf6-rcn4b" Jan 27 18:54:50 crc kubenswrapper[4853]: I0127 18:54:50.564903 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/729dbe0f-d26d-4eeb-b813-e4be40033e44-apiservice-cert\") pod \"metallb-operator-controller-manager-57d46b5cf6-rcn4b\" (UID: \"729dbe0f-d26d-4eeb-b813-e4be40033e44\") " pod="metallb-system/metallb-operator-controller-manager-57d46b5cf6-rcn4b" Jan 27 18:54:50 crc kubenswrapper[4853]: I0127 18:54:50.570377 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/729dbe0f-d26d-4eeb-b813-e4be40033e44-apiservice-cert\") pod \"metallb-operator-controller-manager-57d46b5cf6-rcn4b\" (UID: \"729dbe0f-d26d-4eeb-b813-e4be40033e44\") " pod="metallb-system/metallb-operator-controller-manager-57d46b5cf6-rcn4b" Jan 27 18:54:50 crc kubenswrapper[4853]: I0127 18:54:50.570405 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/729dbe0f-d26d-4eeb-b813-e4be40033e44-webhook-cert\") pod \"metallb-operator-controller-manager-57d46b5cf6-rcn4b\" (UID: \"729dbe0f-d26d-4eeb-b813-e4be40033e44\") " pod="metallb-system/metallb-operator-controller-manager-57d46b5cf6-rcn4b" Jan 27 18:54:50 crc kubenswrapper[4853]: I0127 18:54:50.595936 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t92pp\" (UniqueName: \"kubernetes.io/projected/729dbe0f-d26d-4eeb-b813-e4be40033e44-kube-api-access-t92pp\") pod \"metallb-operator-controller-manager-57d46b5cf6-rcn4b\" (UID: \"729dbe0f-d26d-4eeb-b813-e4be40033e44\") " pod="metallb-system/metallb-operator-controller-manager-57d46b5cf6-rcn4b" Jan 27 18:54:50 crc kubenswrapper[4853]: I0127 18:54:50.735935 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-57d46b5cf6-rcn4b" Jan 27 18:54:50 crc kubenswrapper[4853]: I0127 18:54:50.762616 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-559d6879b9-6w56b"] Jan 27 18:54:50 crc kubenswrapper[4853]: I0127 18:54:50.763462 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-559d6879b9-6w56b" Jan 27 18:54:50 crc kubenswrapper[4853]: W0127 18:54:50.771376 4853 reflector.go:561] object-"metallb-system"/"controller-dockercfg-7rz2l": failed to list *v1.Secret: secrets "controller-dockercfg-7rz2l" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "metallb-system": no relationship found between node 'crc' and this object Jan 27 18:54:50 crc kubenswrapper[4853]: E0127 18:54:50.771423 4853 reflector.go:158] "Unhandled Error" err="object-\"metallb-system\"/\"controller-dockercfg-7rz2l\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"controller-dockercfg-7rz2l\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"metallb-system\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 27 18:54:50 crc kubenswrapper[4853]: W0127 18:54:50.771503 4853 reflector.go:561] object-"metallb-system"/"metallb-operator-webhook-server-service-cert": failed to list *v1.Secret: secrets "metallb-operator-webhook-server-service-cert" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "metallb-system": no relationship found between node 'crc' and this object Jan 27 18:54:50 crc kubenswrapper[4853]: E0127 18:54:50.771515 4853 reflector.go:158] "Unhandled Error" err="object-\"metallb-system\"/\"metallb-operator-webhook-server-service-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"metallb-operator-webhook-server-service-cert\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"metallb-system\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 27 18:54:50 crc kubenswrapper[4853]: W0127 18:54:50.771737 4853 reflector.go:561] object-"metallb-system"/"metallb-webhook-cert": failed to list *v1.Secret: secrets "metallb-webhook-cert" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "metallb-system": no relationship found between node 'crc' and this object Jan 27 18:54:50 crc kubenswrapper[4853]: E0127 18:54:50.771830 4853 reflector.go:158] "Unhandled Error" err="object-\"metallb-system\"/\"metallb-webhook-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"metallb-webhook-cert\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"metallb-system\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 27 18:54:50 crc kubenswrapper[4853]: I0127 18:54:50.781148 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-559d6879b9-6w56b"] Jan 27 18:54:50 crc kubenswrapper[4853]: I0127 18:54:50.869046 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/8a3f66ba-be42-476c-b03b-6ba6c92acd0f-apiservice-cert\") pod \"metallb-operator-webhook-server-559d6879b9-6w56b\" (UID: \"8a3f66ba-be42-476c-b03b-6ba6c92acd0f\") " pod="metallb-system/metallb-operator-webhook-server-559d6879b9-6w56b" Jan 27 18:54:50 crc kubenswrapper[4853]: I0127 18:54:50.869167 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: 
\"kubernetes.io/secret/8a3f66ba-be42-476c-b03b-6ba6c92acd0f-webhook-cert\") pod \"metallb-operator-webhook-server-559d6879b9-6w56b\" (UID: \"8a3f66ba-be42-476c-b03b-6ba6c92acd0f\") " pod="metallb-system/metallb-operator-webhook-server-559d6879b9-6w56b" Jan 27 18:54:50 crc kubenswrapper[4853]: I0127 18:54:50.869239 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c8x4k\" (UniqueName: \"kubernetes.io/projected/8a3f66ba-be42-476c-b03b-6ba6c92acd0f-kube-api-access-c8x4k\") pod \"metallb-operator-webhook-server-559d6879b9-6w56b\" (UID: \"8a3f66ba-be42-476c-b03b-6ba6c92acd0f\") " pod="metallb-system/metallb-operator-webhook-server-559d6879b9-6w56b" Jan 27 18:54:50 crc kubenswrapper[4853]: I0127 18:54:50.969934 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c8x4k\" (UniqueName: \"kubernetes.io/projected/8a3f66ba-be42-476c-b03b-6ba6c92acd0f-kube-api-access-c8x4k\") pod \"metallb-operator-webhook-server-559d6879b9-6w56b\" (UID: \"8a3f66ba-be42-476c-b03b-6ba6c92acd0f\") " pod="metallb-system/metallb-operator-webhook-server-559d6879b9-6w56b" Jan 27 18:54:50 crc kubenswrapper[4853]: I0127 18:54:50.969984 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/8a3f66ba-be42-476c-b03b-6ba6c92acd0f-apiservice-cert\") pod \"metallb-operator-webhook-server-559d6879b9-6w56b\" (UID: \"8a3f66ba-be42-476c-b03b-6ba6c92acd0f\") " pod="metallb-system/metallb-operator-webhook-server-559d6879b9-6w56b" Jan 27 18:54:50 crc kubenswrapper[4853]: I0127 18:54:50.970026 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/8a3f66ba-be42-476c-b03b-6ba6c92acd0f-webhook-cert\") pod \"metallb-operator-webhook-server-559d6879b9-6w56b\" (UID: \"8a3f66ba-be42-476c-b03b-6ba6c92acd0f\") " pod="metallb-system/metallb-operator-webhook-server-559d6879b9-6w56b" Jan 27 18:54:50 crc kubenswrapper[4853]: I0127 18:54:50.993075 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c8x4k\" (UniqueName: \"kubernetes.io/projected/8a3f66ba-be42-476c-b03b-6ba6c92acd0f-kube-api-access-c8x4k\") pod \"metallb-operator-webhook-server-559d6879b9-6w56b\" (UID: \"8a3f66ba-be42-476c-b03b-6ba6c92acd0f\") " pod="metallb-system/metallb-operator-webhook-server-559d6879b9-6w56b" Jan 27 18:54:51 crc kubenswrapper[4853]: I0127 18:54:51.284874 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-57d46b5cf6-rcn4b"] Jan 27 18:54:51 crc kubenswrapper[4853]: W0127 18:54:51.291107 4853 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod729dbe0f_d26d_4eeb_b813_e4be40033e44.slice/crio-ee150b4b5485a679be2a24b12c9db34bce23fcfd932270cb103b1b4397efc8d1 WatchSource:0}: Error finding container ee150b4b5485a679be2a24b12c9db34bce23fcfd932270cb103b1b4397efc8d1: Status 404 returned error can't find the container with id ee150b4b5485a679be2a24b12c9db34bce23fcfd932270cb103b1b4397efc8d1 Jan 27 18:54:51 crc kubenswrapper[4853]: I0127 18:54:51.797898 4853 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-7rz2l" Jan 27 18:54:51 crc kubenswrapper[4853]: I0127 18:54:51.933249 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="metallb-system/metallb-operator-controller-manager-57d46b5cf6-rcn4b" event={"ID":"729dbe0f-d26d-4eeb-b813-e4be40033e44","Type":"ContainerStarted","Data":"ee150b4b5485a679be2a24b12c9db34bce23fcfd932270cb103b1b4397efc8d1"} Jan 27 18:54:51 crc kubenswrapper[4853]: I0127 18:54:51.945272 4853 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert" Jan 27 18:54:51 crc kubenswrapper[4853]: I0127 18:54:51.958077 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/8a3f66ba-be42-476c-b03b-6ba6c92acd0f-webhook-cert\") pod \"metallb-operator-webhook-server-559d6879b9-6w56b\" (UID: \"8a3f66ba-be42-476c-b03b-6ba6c92acd0f\") " pod="metallb-system/metallb-operator-webhook-server-559d6879b9-6w56b" Jan 27 18:54:51 crc kubenswrapper[4853]: I0127 18:54:51.958111 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/8a3f66ba-be42-476c-b03b-6ba6c92acd0f-apiservice-cert\") pod \"metallb-operator-webhook-server-559d6879b9-6w56b\" (UID: \"8a3f66ba-be42-476c-b03b-6ba6c92acd0f\") " pod="metallb-system/metallb-operator-webhook-server-559d6879b9-6w56b" Jan 27 18:54:51 crc kubenswrapper[4853]: I0127 18:54:51.975031 4853 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Jan 27 18:54:52 crc kubenswrapper[4853]: I0127 18:54:52.009555 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-559d6879b9-6w56b" Jan 27 18:54:52 crc kubenswrapper[4853]: I0127 18:54:52.301748 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-559d6879b9-6w56b"] Jan 27 18:54:52 crc kubenswrapper[4853]: W0127 18:54:52.308758 4853 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8a3f66ba_be42_476c_b03b_6ba6c92acd0f.slice/crio-6dbe6ca99eef91369b6e3d35b15bb38449390a4cbc9533e6b7ec72cfe956357d WatchSource:0}: Error finding container 6dbe6ca99eef91369b6e3d35b15bb38449390a4cbc9533e6b7ec72cfe956357d: Status 404 returned error can't find the container with id 6dbe6ca99eef91369b6e3d35b15bb38449390a4cbc9533e6b7ec72cfe956357d Jan 27 18:54:52 crc kubenswrapper[4853]: I0127 18:54:52.940794 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-559d6879b9-6w56b" event={"ID":"8a3f66ba-be42-476c-b03b-6ba6c92acd0f","Type":"ContainerStarted","Data":"6dbe6ca99eef91369b6e3d35b15bb38449390a4cbc9533e6b7ec72cfe956357d"} Jan 27 18:54:53 crc kubenswrapper[4853]: I0127 18:54:53.951438 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-57d46b5cf6-rcn4b" event={"ID":"729dbe0f-d26d-4eeb-b813-e4be40033e44","Type":"ContainerStarted","Data":"982d7474838035f523e52e6dc6ee8b7671a8ced434f205085e7c8b0def6f4895"} Jan 27 18:54:53 crc kubenswrapper[4853]: I0127 18:54:53.952382 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-57d46b5cf6-rcn4b" Jan 27 18:54:53 crc kubenswrapper[4853]: I0127 18:54:53.977937 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-controller-manager-57d46b5cf6-rcn4b" podStartSLOduration=1.521506295 podStartE2EDuration="3.977916221s" 
podCreationTimestamp="2026-01-27 18:54:50 +0000 UTC" firstStartedPulling="2026-01-27 18:54:51.294222543 +0000 UTC m=+733.756765426" lastFinishedPulling="2026-01-27 18:54:53.750632469 +0000 UTC m=+736.213175352" observedRunningTime="2026-01-27 18:54:53.97370407 +0000 UTC m=+736.436246953" watchObservedRunningTime="2026-01-27 18:54:53.977916221 +0000 UTC m=+736.440459114" Jan 27 18:54:57 crc kubenswrapper[4853]: I0127 18:54:57.987896 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-559d6879b9-6w56b" event={"ID":"8a3f66ba-be42-476c-b03b-6ba6c92acd0f","Type":"ContainerStarted","Data":"73c06152b6ea1868e6f7e92342c5dbb4cb543e582a454d339d3aa11072f80a19"} Jan 27 18:54:57 crc kubenswrapper[4853]: I0127 18:54:57.988463 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-559d6879b9-6w56b" Jan 27 18:54:58 crc kubenswrapper[4853]: I0127 18:54:58.008677 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-559d6879b9-6w56b" podStartSLOduration=2.865536552 podStartE2EDuration="8.008662932s" podCreationTimestamp="2026-01-27 18:54:50 +0000 UTC" firstStartedPulling="2026-01-27 18:54:52.312886044 +0000 UTC m=+734.775428927" lastFinishedPulling="2026-01-27 18:54:57.456012424 +0000 UTC m=+739.918555307" observedRunningTime="2026-01-27 18:54:58.005939163 +0000 UTC m=+740.468482046" watchObservedRunningTime="2026-01-27 18:54:58.008662932 +0000 UTC m=+740.471205815" Jan 27 18:55:05 crc kubenswrapper[4853]: I0127 18:55:05.541884 4853 patch_prober.go:28] interesting pod/machine-config-daemon-6gqj2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 18:55:05 crc kubenswrapper[4853]: I0127 18:55:05.542532 4853 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6gqj2" podUID="b8a89b1e-bef8-4cb7-930c-480d3125778c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 18:55:12 crc kubenswrapper[4853]: I0127 18:55:12.015022 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-559d6879b9-6w56b" Jan 27 18:55:12 crc kubenswrapper[4853]: I0127 18:55:12.342831 4853 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Jan 27 18:55:30 crc kubenswrapper[4853]: I0127 18:55:30.738386 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-57d46b5cf6-rcn4b" Jan 27 18:55:31 crc kubenswrapper[4853]: I0127 18:55:31.490701 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-4zj9c"] Jan 27 18:55:31 crc kubenswrapper[4853]: I0127 18:55:31.493414 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-4zj9c" Jan 27 18:55:31 crc kubenswrapper[4853]: I0127 18:55:31.496056 4853 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-9xwcq" Jan 27 18:55:31 crc kubenswrapper[4853]: I0127 18:55:31.496237 4853 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret" Jan 27 18:55:31 crc kubenswrapper[4853]: I0127 18:55:31.496920 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-srh2s"] Jan 27 18:55:31 crc kubenswrapper[4853]: I0127 18:55:31.497944 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-srh2s" Jan 27 18:55:31 crc kubenswrapper[4853]: I0127 18:55:31.500401 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup" Jan 27 18:55:31 crc kubenswrapper[4853]: I0127 18:55:31.500426 4853 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert" Jan 27 18:55:31 crc kubenswrapper[4853]: I0127 18:55:31.514310 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-srh2s"] Jan 27 18:55:31 crc kubenswrapper[4853]: I0127 18:55:31.581920 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-l2pvs"] Jan 27 18:55:31 crc kubenswrapper[4853]: I0127 18:55:31.583661 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-l2pvs" Jan 27 18:55:31 crc kubenswrapper[4853]: I0127 18:55:31.586011 4853 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret" Jan 27 18:55:31 crc kubenswrapper[4853]: I0127 18:55:31.586114 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2" Jan 27 18:55:31 crc kubenswrapper[4853]: I0127 18:55:31.586272 4853 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist" Jan 27 18:55:31 crc kubenswrapper[4853]: I0127 18:55:31.586923 4853 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-tckjr" Jan 27 18:55:31 crc kubenswrapper[4853]: I0127 18:55:31.595672 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/a95a2a56-e8a9-418a-95ce-895b555038fa-frr-conf\") pod \"frr-k8s-4zj9c\" (UID: \"a95a2a56-e8a9-418a-95ce-895b555038fa\") " pod="metallb-system/frr-k8s-4zj9c" Jan 27 18:55:31 crc kubenswrapper[4853]: I0127 18:55:31.595735 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/a95a2a56-e8a9-418a-95ce-895b555038fa-frr-startup\") pod \"frr-k8s-4zj9c\" (UID: \"a95a2a56-e8a9-418a-95ce-895b555038fa\") " pod="metallb-system/frr-k8s-4zj9c" Jan 27 18:55:31 crc kubenswrapper[4853]: I0127 18:55:31.595766 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-swp5x\" (UniqueName: \"kubernetes.io/projected/a95a2a56-e8a9-418a-95ce-895b555038fa-kube-api-access-swp5x\") pod \"frr-k8s-4zj9c\" (UID: \"a95a2a56-e8a9-418a-95ce-895b555038fa\") " pod="metallb-system/frr-k8s-4zj9c" Jan 27 18:55:31 crc kubenswrapper[4853]: I0127 18:55:31.595785 4853 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a95a2a56-e8a9-418a-95ce-895b555038fa-metrics-certs\") pod \"frr-k8s-4zj9c\" (UID: \"a95a2a56-e8a9-418a-95ce-895b555038fa\") " pod="metallb-system/frr-k8s-4zj9c" Jan 27 18:55:31 crc kubenswrapper[4853]: I0127 18:55:31.595801 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g4w8w\" (UniqueName: \"kubernetes.io/projected/5d610e65-a0f1-4304-a7f9-f8b49e86d372-kube-api-access-g4w8w\") pod \"frr-k8s-webhook-server-7df86c4f6c-srh2s\" (UID: \"5d610e65-a0f1-4304-a7f9-f8b49e86d372\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-srh2s" Jan 27 18:55:31 crc kubenswrapper[4853]: I0127 18:55:31.595820 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/5d610e65-a0f1-4304-a7f9-f8b49e86d372-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-srh2s\" (UID: \"5d610e65-a0f1-4304-a7f9-f8b49e86d372\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-srh2s" Jan 27 18:55:31 crc kubenswrapper[4853]: I0127 18:55:31.595992 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/a95a2a56-e8a9-418a-95ce-895b555038fa-frr-sockets\") pod \"frr-k8s-4zj9c\" (UID: \"a95a2a56-e8a9-418a-95ce-895b555038fa\") " pod="metallb-system/frr-k8s-4zj9c" Jan 27 18:55:31 crc kubenswrapper[4853]: I0127 18:55:31.596092 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/a95a2a56-e8a9-418a-95ce-895b555038fa-reloader\") pod \"frr-k8s-4zj9c\" (UID: \"a95a2a56-e8a9-418a-95ce-895b555038fa\") " pod="metallb-system/frr-k8s-4zj9c" Jan 27 18:55:31 crc kubenswrapper[4853]: I0127 18:55:31.596156 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/a95a2a56-e8a9-418a-95ce-895b555038fa-metrics\") pod \"frr-k8s-4zj9c\" (UID: \"a95a2a56-e8a9-418a-95ce-895b555038fa\") " pod="metallb-system/frr-k8s-4zj9c" Jan 27 18:55:31 crc kubenswrapper[4853]: I0127 18:55:31.602341 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-6968d8fdc4-bkdp4"] Jan 27 18:55:31 crc kubenswrapper[4853]: I0127 18:55:31.603388 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-6968d8fdc4-bkdp4" Jan 27 18:55:31 crc kubenswrapper[4853]: I0127 18:55:31.606357 4853 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret" Jan 27 18:55:31 crc kubenswrapper[4853]: I0127 18:55:31.621060 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-6968d8fdc4-bkdp4"] Jan 27 18:55:31 crc kubenswrapper[4853]: I0127 18:55:31.697180 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/a95a2a56-e8a9-418a-95ce-895b555038fa-reloader\") pod \"frr-k8s-4zj9c\" (UID: \"a95a2a56-e8a9-418a-95ce-895b555038fa\") " pod="metallb-system/frr-k8s-4zj9c" Jan 27 18:55:31 crc kubenswrapper[4853]: I0127 18:55:31.697229 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/a95a2a56-e8a9-418a-95ce-895b555038fa-metrics\") pod \"frr-k8s-4zj9c\" (UID: \"a95a2a56-e8a9-418a-95ce-895b555038fa\") " pod="metallb-system/frr-k8s-4zj9c" Jan 27 18:55:31 crc kubenswrapper[4853]: I0127 18:55:31.697305 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/67e6561c-3f3b-45dd-b166-ca67a1abd96b-metallb-excludel2\") pod \"speaker-l2pvs\" (UID: \"67e6561c-3f3b-45dd-b166-ca67a1abd96b\") " pod="metallb-system/speaker-l2pvs" Jan 27 18:55:31 crc kubenswrapper[4853]: I0127 18:55:31.697366 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xkm8l\" (UniqueName: \"kubernetes.io/projected/67e6561c-3f3b-45dd-b166-ca67a1abd96b-kube-api-access-xkm8l\") pod \"speaker-l2pvs\" (UID: \"67e6561c-3f3b-45dd-b166-ca67a1abd96b\") " pod="metallb-system/speaker-l2pvs" Jan 27 18:55:31 crc kubenswrapper[4853]: I0127 18:55:31.697397 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/a95a2a56-e8a9-418a-95ce-895b555038fa-frr-conf\") pod \"frr-k8s-4zj9c\" (UID: \"a95a2a56-e8a9-418a-95ce-895b555038fa\") " pod="metallb-system/frr-k8s-4zj9c" Jan 27 18:55:31 crc kubenswrapper[4853]: I0127 18:55:31.697429 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t8txx\" (UniqueName: \"kubernetes.io/projected/471ac2ca-b99c-449c-b910-80b44e9a7941-kube-api-access-t8txx\") pod \"controller-6968d8fdc4-bkdp4\" (UID: \"471ac2ca-b99c-449c-b910-80b44e9a7941\") " pod="metallb-system/controller-6968d8fdc4-bkdp4" Jan 27 18:55:31 crc kubenswrapper[4853]: I0127 18:55:31.697455 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/471ac2ca-b99c-449c-b910-80b44e9a7941-metrics-certs\") pod \"controller-6968d8fdc4-bkdp4\" (UID: \"471ac2ca-b99c-449c-b910-80b44e9a7941\") " pod="metallb-system/controller-6968d8fdc4-bkdp4" Jan 27 18:55:31 crc kubenswrapper[4853]: I0127 18:55:31.697480 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/67e6561c-3f3b-45dd-b166-ca67a1abd96b-metrics-certs\") pod \"speaker-l2pvs\" (UID: \"67e6561c-3f3b-45dd-b166-ca67a1abd96b\") " pod="metallb-system/speaker-l2pvs" Jan 27 18:55:31 crc kubenswrapper[4853]: I0127 18:55:31.697503 4853 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/67e6561c-3f3b-45dd-b166-ca67a1abd96b-memberlist\") pod \"speaker-l2pvs\" (UID: \"67e6561c-3f3b-45dd-b166-ca67a1abd96b\") " pod="metallb-system/speaker-l2pvs" Jan 27 18:55:31 crc kubenswrapper[4853]: I0127 18:55:31.697578 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/a95a2a56-e8a9-418a-95ce-895b555038fa-frr-startup\") pod \"frr-k8s-4zj9c\" (UID: \"a95a2a56-e8a9-418a-95ce-895b555038fa\") " pod="metallb-system/frr-k8s-4zj9c" Jan 27 18:55:31 crc kubenswrapper[4853]: I0127 18:55:31.697767 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/a95a2a56-e8a9-418a-95ce-895b555038fa-metrics\") pod \"frr-k8s-4zj9c\" (UID: \"a95a2a56-e8a9-418a-95ce-895b555038fa\") " pod="metallb-system/frr-k8s-4zj9c" Jan 27 18:55:31 crc kubenswrapper[4853]: I0127 18:55:31.697792 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-swp5x\" (UniqueName: \"kubernetes.io/projected/a95a2a56-e8a9-418a-95ce-895b555038fa-kube-api-access-swp5x\") pod \"frr-k8s-4zj9c\" (UID: \"a95a2a56-e8a9-418a-95ce-895b555038fa\") " pod="metallb-system/frr-k8s-4zj9c" Jan 27 18:55:31 crc kubenswrapper[4853]: I0127 18:55:31.697847 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g4w8w\" (UniqueName: \"kubernetes.io/projected/5d610e65-a0f1-4304-a7f9-f8b49e86d372-kube-api-access-g4w8w\") pod \"frr-k8s-webhook-server-7df86c4f6c-srh2s\" (UID: \"5d610e65-a0f1-4304-a7f9-f8b49e86d372\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-srh2s" Jan 27 18:55:31 crc kubenswrapper[4853]: I0127 18:55:31.697862 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/a95a2a56-e8a9-418a-95ce-895b555038fa-frr-conf\") pod \"frr-k8s-4zj9c\" (UID: \"a95a2a56-e8a9-418a-95ce-895b555038fa\") " pod="metallb-system/frr-k8s-4zj9c" Jan 27 18:55:31 crc kubenswrapper[4853]: I0127 18:55:31.697872 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a95a2a56-e8a9-418a-95ce-895b555038fa-metrics-certs\") pod \"frr-k8s-4zj9c\" (UID: \"a95a2a56-e8a9-418a-95ce-895b555038fa\") " pod="metallb-system/frr-k8s-4zj9c" Jan 27 18:55:31 crc kubenswrapper[4853]: I0127 18:55:31.697960 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/5d610e65-a0f1-4304-a7f9-f8b49e86d372-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-srh2s\" (UID: \"5d610e65-a0f1-4304-a7f9-f8b49e86d372\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-srh2s" Jan 27 18:55:31 crc kubenswrapper[4853]: E0127 18:55:31.697977 4853 secret.go:188] Couldn't get secret metallb-system/frr-k8s-certs-secret: secret "frr-k8s-certs-secret" not found Jan 27 18:55:31 crc kubenswrapper[4853]: E0127 18:55:31.698039 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a95a2a56-e8a9-418a-95ce-895b555038fa-metrics-certs podName:a95a2a56-e8a9-418a-95ce-895b555038fa nodeName:}" failed. No retries permitted until 2026-01-27 18:55:32.198020723 +0000 UTC m=+774.660563606 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/a95a2a56-e8a9-418a-95ce-895b555038fa-metrics-certs") pod "frr-k8s-4zj9c" (UID: "a95a2a56-e8a9-418a-95ce-895b555038fa") : secret "frr-k8s-certs-secret" not found Jan 27 18:55:31 crc kubenswrapper[4853]: I0127 18:55:31.698059 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/471ac2ca-b99c-449c-b910-80b44e9a7941-cert\") pod \"controller-6968d8fdc4-bkdp4\" (UID: \"471ac2ca-b99c-449c-b910-80b44e9a7941\") " pod="metallb-system/controller-6968d8fdc4-bkdp4" Jan 27 18:55:31 crc kubenswrapper[4853]: I0127 18:55:31.698137 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/a95a2a56-e8a9-418a-95ce-895b555038fa-frr-sockets\") pod \"frr-k8s-4zj9c\" (UID: \"a95a2a56-e8a9-418a-95ce-895b555038fa\") " pod="metallb-system/frr-k8s-4zj9c" Jan 27 18:55:31 crc kubenswrapper[4853]: E0127 18:55:31.698291 4853 secret.go:188] Couldn't get secret metallb-system/frr-k8s-webhook-server-cert: secret "frr-k8s-webhook-server-cert" not found Jan 27 18:55:31 crc kubenswrapper[4853]: E0127 18:55:31.698422 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5d610e65-a0f1-4304-a7f9-f8b49e86d372-cert podName:5d610e65-a0f1-4304-a7f9-f8b49e86d372 nodeName:}" failed. No retries permitted until 2026-01-27 18:55:32.198390844 +0000 UTC m=+774.660933727 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/5d610e65-a0f1-4304-a7f9-f8b49e86d372-cert") pod "frr-k8s-webhook-server-7df86c4f6c-srh2s" (UID: "5d610e65-a0f1-4304-a7f9-f8b49e86d372") : secret "frr-k8s-webhook-server-cert" not found Jan 27 18:55:31 crc kubenswrapper[4853]: I0127 18:55:31.698485 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/a95a2a56-e8a9-418a-95ce-895b555038fa-reloader\") pod \"frr-k8s-4zj9c\" (UID: \"a95a2a56-e8a9-418a-95ce-895b555038fa\") " pod="metallb-system/frr-k8s-4zj9c" Jan 27 18:55:31 crc kubenswrapper[4853]: I0127 18:55:31.698553 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/a95a2a56-e8a9-418a-95ce-895b555038fa-frr-sockets\") pod \"frr-k8s-4zj9c\" (UID: \"a95a2a56-e8a9-418a-95ce-895b555038fa\") " pod="metallb-system/frr-k8s-4zj9c" Jan 27 18:55:31 crc kubenswrapper[4853]: I0127 18:55:31.699219 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/a95a2a56-e8a9-418a-95ce-895b555038fa-frr-startup\") pod \"frr-k8s-4zj9c\" (UID: \"a95a2a56-e8a9-418a-95ce-895b555038fa\") " pod="metallb-system/frr-k8s-4zj9c" Jan 27 18:55:31 crc kubenswrapper[4853]: I0127 18:55:31.717066 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-swp5x\" (UniqueName: \"kubernetes.io/projected/a95a2a56-e8a9-418a-95ce-895b555038fa-kube-api-access-swp5x\") pod \"frr-k8s-4zj9c\" (UID: \"a95a2a56-e8a9-418a-95ce-895b555038fa\") " pod="metallb-system/frr-k8s-4zj9c" Jan 27 18:55:31 crc kubenswrapper[4853]: I0127 18:55:31.717983 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g4w8w\" (UniqueName: \"kubernetes.io/projected/5d610e65-a0f1-4304-a7f9-f8b49e86d372-kube-api-access-g4w8w\") pod 
\"frr-k8s-webhook-server-7df86c4f6c-srh2s\" (UID: \"5d610e65-a0f1-4304-a7f9-f8b49e86d372\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-srh2s" Jan 27 18:55:31 crc kubenswrapper[4853]: I0127 18:55:31.799952 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/67e6561c-3f3b-45dd-b166-ca67a1abd96b-metallb-excludel2\") pod \"speaker-l2pvs\" (UID: \"67e6561c-3f3b-45dd-b166-ca67a1abd96b\") " pod="metallb-system/speaker-l2pvs" Jan 27 18:55:31 crc kubenswrapper[4853]: I0127 18:55:31.800065 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xkm8l\" (UniqueName: \"kubernetes.io/projected/67e6561c-3f3b-45dd-b166-ca67a1abd96b-kube-api-access-xkm8l\") pod \"speaker-l2pvs\" (UID: \"67e6561c-3f3b-45dd-b166-ca67a1abd96b\") " pod="metallb-system/speaker-l2pvs" Jan 27 18:55:31 crc kubenswrapper[4853]: I0127 18:55:31.800105 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t8txx\" (UniqueName: \"kubernetes.io/projected/471ac2ca-b99c-449c-b910-80b44e9a7941-kube-api-access-t8txx\") pod \"controller-6968d8fdc4-bkdp4\" (UID: \"471ac2ca-b99c-449c-b910-80b44e9a7941\") " pod="metallb-system/controller-6968d8fdc4-bkdp4" Jan 27 18:55:31 crc kubenswrapper[4853]: I0127 18:55:31.800158 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/471ac2ca-b99c-449c-b910-80b44e9a7941-metrics-certs\") pod \"controller-6968d8fdc4-bkdp4\" (UID: \"471ac2ca-b99c-449c-b910-80b44e9a7941\") " pod="metallb-system/controller-6968d8fdc4-bkdp4" Jan 27 18:55:31 crc kubenswrapper[4853]: I0127 18:55:31.800184 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/67e6561c-3f3b-45dd-b166-ca67a1abd96b-metrics-certs\") pod \"speaker-l2pvs\" (UID: \"67e6561c-3f3b-45dd-b166-ca67a1abd96b\") " pod="metallb-system/speaker-l2pvs" Jan 27 18:55:31 crc kubenswrapper[4853]: I0127 18:55:31.800213 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/67e6561c-3f3b-45dd-b166-ca67a1abd96b-memberlist\") pod \"speaker-l2pvs\" (UID: \"67e6561c-3f3b-45dd-b166-ca67a1abd96b\") " pod="metallb-system/speaker-l2pvs" Jan 27 18:55:31 crc kubenswrapper[4853]: I0127 18:55:31.800284 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/471ac2ca-b99c-449c-b910-80b44e9a7941-cert\") pod \"controller-6968d8fdc4-bkdp4\" (UID: \"471ac2ca-b99c-449c-b910-80b44e9a7941\") " pod="metallb-system/controller-6968d8fdc4-bkdp4" Jan 27 18:55:31 crc kubenswrapper[4853]: E0127 18:55:31.800790 4853 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Jan 27 18:55:31 crc kubenswrapper[4853]: E0127 18:55:31.800881 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/67e6561c-3f3b-45dd-b166-ca67a1abd96b-memberlist podName:67e6561c-3f3b-45dd-b166-ca67a1abd96b nodeName:}" failed. No retries permitted until 2026-01-27 18:55:32.300855274 +0000 UTC m=+774.763398167 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/67e6561c-3f3b-45dd-b166-ca67a1abd96b-memberlist") pod "speaker-l2pvs" (UID: "67e6561c-3f3b-45dd-b166-ca67a1abd96b") : secret "metallb-memberlist" not found Jan 27 18:55:31 crc kubenswrapper[4853]: I0127 18:55:31.801082 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/67e6561c-3f3b-45dd-b166-ca67a1abd96b-metallb-excludel2\") pod \"speaker-l2pvs\" (UID: \"67e6561c-3f3b-45dd-b166-ca67a1abd96b\") " pod="metallb-system/speaker-l2pvs" Jan 27 18:55:31 crc kubenswrapper[4853]: I0127 18:55:31.803910 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/471ac2ca-b99c-449c-b910-80b44e9a7941-metrics-certs\") pod \"controller-6968d8fdc4-bkdp4\" (UID: \"471ac2ca-b99c-449c-b910-80b44e9a7941\") " pod="metallb-system/controller-6968d8fdc4-bkdp4" Jan 27 18:55:31 crc kubenswrapper[4853]: I0127 18:55:31.803945 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/471ac2ca-b99c-449c-b910-80b44e9a7941-cert\") pod \"controller-6968d8fdc4-bkdp4\" (UID: \"471ac2ca-b99c-449c-b910-80b44e9a7941\") " pod="metallb-system/controller-6968d8fdc4-bkdp4" Jan 27 18:55:31 crc kubenswrapper[4853]: I0127 18:55:31.804866 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/67e6561c-3f3b-45dd-b166-ca67a1abd96b-metrics-certs\") pod \"speaker-l2pvs\" (UID: \"67e6561c-3f3b-45dd-b166-ca67a1abd96b\") " pod="metallb-system/speaker-l2pvs" Jan 27 18:55:31 crc kubenswrapper[4853]: I0127 18:55:31.817329 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xkm8l\" (UniqueName: \"kubernetes.io/projected/67e6561c-3f3b-45dd-b166-ca67a1abd96b-kube-api-access-xkm8l\") pod \"speaker-l2pvs\" (UID: \"67e6561c-3f3b-45dd-b166-ca67a1abd96b\") " pod="metallb-system/speaker-l2pvs" Jan 27 18:55:31 crc kubenswrapper[4853]: I0127 18:55:31.820987 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t8txx\" (UniqueName: \"kubernetes.io/projected/471ac2ca-b99c-449c-b910-80b44e9a7941-kube-api-access-t8txx\") pod \"controller-6968d8fdc4-bkdp4\" (UID: \"471ac2ca-b99c-449c-b910-80b44e9a7941\") " pod="metallb-system/controller-6968d8fdc4-bkdp4" Jan 27 18:55:31 crc kubenswrapper[4853]: I0127 18:55:31.920923 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-6968d8fdc4-bkdp4" Jan 27 18:55:32 crc kubenswrapper[4853]: I0127 18:55:32.111063 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-6968d8fdc4-bkdp4"] Jan 27 18:55:32 crc kubenswrapper[4853]: I0127 18:55:32.177224 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-bkdp4" event={"ID":"471ac2ca-b99c-449c-b910-80b44e9a7941","Type":"ContainerStarted","Data":"ac62d9c5b50f4292ef5459acf9bf3daa48f6025a2f5ded84fb5d41833fa993c7"} Jan 27 18:55:32 crc kubenswrapper[4853]: I0127 18:55:32.209234 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a95a2a56-e8a9-418a-95ce-895b555038fa-metrics-certs\") pod \"frr-k8s-4zj9c\" (UID: \"a95a2a56-e8a9-418a-95ce-895b555038fa\") " pod="metallb-system/frr-k8s-4zj9c" Jan 27 18:55:32 crc kubenswrapper[4853]: I0127 18:55:32.209553 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/5d610e65-a0f1-4304-a7f9-f8b49e86d372-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-srh2s\" (UID: \"5d610e65-a0f1-4304-a7f9-f8b49e86d372\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-srh2s" Jan 27 18:55:32 crc kubenswrapper[4853]: I0127 18:55:32.214571 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a95a2a56-e8a9-418a-95ce-895b555038fa-metrics-certs\") pod \"frr-k8s-4zj9c\" (UID: \"a95a2a56-e8a9-418a-95ce-895b555038fa\") " pod="metallb-system/frr-k8s-4zj9c" Jan 27 18:55:32 crc kubenswrapper[4853]: I0127 18:55:32.214608 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/5d610e65-a0f1-4304-a7f9-f8b49e86d372-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-srh2s\" (UID: \"5d610e65-a0f1-4304-a7f9-f8b49e86d372\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-srh2s" Jan 27 18:55:32 crc kubenswrapper[4853]: I0127 18:55:32.310763 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/67e6561c-3f3b-45dd-b166-ca67a1abd96b-memberlist\") pod \"speaker-l2pvs\" (UID: \"67e6561c-3f3b-45dd-b166-ca67a1abd96b\") " pod="metallb-system/speaker-l2pvs" Jan 27 18:55:32 crc kubenswrapper[4853]: I0127 18:55:32.317955 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/67e6561c-3f3b-45dd-b166-ca67a1abd96b-memberlist\") pod \"speaker-l2pvs\" (UID: \"67e6561c-3f3b-45dd-b166-ca67a1abd96b\") " pod="metallb-system/speaker-l2pvs" Jan 27 18:55:32 crc kubenswrapper[4853]: I0127 18:55:32.412241 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-4zj9c" Jan 27 18:55:32 crc kubenswrapper[4853]: I0127 18:55:32.420305 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-srh2s" Jan 27 18:55:32 crc kubenswrapper[4853]: I0127 18:55:32.503172 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/speaker-l2pvs" Jan 27 18:55:32 crc kubenswrapper[4853]: W0127 18:55:32.519759 4853 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod67e6561c_3f3b_45dd_b166_ca67a1abd96b.slice/crio-ac515ac932fe2674f2676bd2c9f8f97998067a89e3b95442c45229c3f884ee45 WatchSource:0}: Error finding container ac515ac932fe2674f2676bd2c9f8f97998067a89e3b95442c45229c3f884ee45: Status 404 returned error can't find the container with id ac515ac932fe2674f2676bd2c9f8f97998067a89e3b95442c45229c3f884ee45 Jan 27 18:55:32 crc kubenswrapper[4853]: I0127 18:55:32.813763 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-srh2s"] Jan 27 18:55:32 crc kubenswrapper[4853]: W0127 18:55:32.823712 4853 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5d610e65_a0f1_4304_a7f9_f8b49e86d372.slice/crio-7616e232b5de16c521f0866e5ac524d43b6375a74bfc43ca000f83ae97acdcea WatchSource:0}: Error finding container 7616e232b5de16c521f0866e5ac524d43b6375a74bfc43ca000f83ae97acdcea: Status 404 returned error can't find the container with id 7616e232b5de16c521f0866e5ac524d43b6375a74bfc43ca000f83ae97acdcea Jan 27 18:55:33 crc kubenswrapper[4853]: I0127 18:55:33.184702 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-srh2s" event={"ID":"5d610e65-a0f1-4304-a7f9-f8b49e86d372","Type":"ContainerStarted","Data":"7616e232b5de16c521f0866e5ac524d43b6375a74bfc43ca000f83ae97acdcea"} Jan 27 18:55:33 crc kubenswrapper[4853]: I0127 18:55:33.186186 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-l2pvs" event={"ID":"67e6561c-3f3b-45dd-b166-ca67a1abd96b","Type":"ContainerStarted","Data":"6444db7fab705aca0c83e47073068a6b9f220591ed654059ef213f3d673859bc"} Jan 27 18:55:33 crc kubenswrapper[4853]: I0127 18:55:33.186233 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-l2pvs" event={"ID":"67e6561c-3f3b-45dd-b166-ca67a1abd96b","Type":"ContainerStarted","Data":"7f80b1a2b8fce88b1d930d6e19c0f4c809bfadb3a2f46cf4fe81c12a99067d01"} Jan 27 18:55:33 crc kubenswrapper[4853]: I0127 18:55:33.186245 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-l2pvs" event={"ID":"67e6561c-3f3b-45dd-b166-ca67a1abd96b","Type":"ContainerStarted","Data":"ac515ac932fe2674f2676bd2c9f8f97998067a89e3b95442c45229c3f884ee45"} Jan 27 18:55:33 crc kubenswrapper[4853]: I0127 18:55:33.186423 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-l2pvs" Jan 27 18:55:33 crc kubenswrapper[4853]: I0127 18:55:33.187024 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-4zj9c" event={"ID":"a95a2a56-e8a9-418a-95ce-895b555038fa","Type":"ContainerStarted","Data":"a9ad4fc98ffddc95fb69f80ff0ad36b7145a3e388b4021da9acd3f27c2e0447b"} Jan 27 18:55:33 crc kubenswrapper[4853]: I0127 18:55:33.188326 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-bkdp4" event={"ID":"471ac2ca-b99c-449c-b910-80b44e9a7941","Type":"ContainerStarted","Data":"02269961eeb16940b3b74e1d9f53e46a0fe6e26bfafae35d2af3e424bfb6b025"} Jan 27 18:55:33 crc kubenswrapper[4853]: I0127 18:55:33.188357 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-bkdp4" 
event={"ID":"471ac2ca-b99c-449c-b910-80b44e9a7941","Type":"ContainerStarted","Data":"a6522a0e09f72b77090c609a0cbe4d8da179c594d074be6eb2657bbd6193e1c3"} Jan 27 18:55:33 crc kubenswrapper[4853]: I0127 18:55:33.188527 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-6968d8fdc4-bkdp4" Jan 27 18:55:33 crc kubenswrapper[4853]: I0127 18:55:33.202465 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-l2pvs" podStartSLOduration=2.202455758 podStartE2EDuration="2.202455758s" podCreationTimestamp="2026-01-27 18:55:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:55:33.198633218 +0000 UTC m=+775.661176101" watchObservedRunningTime="2026-01-27 18:55:33.202455758 +0000 UTC m=+775.664998641" Jan 27 18:55:33 crc kubenswrapper[4853]: I0127 18:55:33.218309 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-6968d8fdc4-bkdp4" podStartSLOduration=2.218289563 podStartE2EDuration="2.218289563s" podCreationTimestamp="2026-01-27 18:55:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:55:33.216371028 +0000 UTC m=+775.678913911" watchObservedRunningTime="2026-01-27 18:55:33.218289563 +0000 UTC m=+775.680832446" Jan 27 18:55:35 crc kubenswrapper[4853]: I0127 18:55:35.541716 4853 patch_prober.go:28] interesting pod/machine-config-daemon-6gqj2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 18:55:35 crc kubenswrapper[4853]: I0127 18:55:35.542052 4853 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6gqj2" podUID="b8a89b1e-bef8-4cb7-930c-480d3125778c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 18:55:35 crc kubenswrapper[4853]: I0127 18:55:35.542095 4853 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-6gqj2" Jan 27 18:55:35 crc kubenswrapper[4853]: I0127 18:55:35.542643 4853 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"627fd940f35c1ba5723021e5e015bc2d268e6d0901ac54674b747706a8fc058b"} pod="openshift-machine-config-operator/machine-config-daemon-6gqj2" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 27 18:55:35 crc kubenswrapper[4853]: I0127 18:55:35.542719 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-6gqj2" podUID="b8a89b1e-bef8-4cb7-930c-480d3125778c" containerName="machine-config-daemon" containerID="cri-o://627fd940f35c1ba5723021e5e015bc2d268e6d0901ac54674b747706a8fc058b" gracePeriod=600 Jan 27 18:55:36 crc kubenswrapper[4853]: I0127 18:55:36.209028 4853 generic.go:334] "Generic (PLEG): container finished" podID="b8a89b1e-bef8-4cb7-930c-480d3125778c" containerID="627fd940f35c1ba5723021e5e015bc2d268e6d0901ac54674b747706a8fc058b" exitCode=0 Jan 27 18:55:36 crc kubenswrapper[4853]: I0127 
18:55:36.209697 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6gqj2" event={"ID":"b8a89b1e-bef8-4cb7-930c-480d3125778c","Type":"ContainerDied","Data":"627fd940f35c1ba5723021e5e015bc2d268e6d0901ac54674b747706a8fc058b"} Jan 27 18:55:36 crc kubenswrapper[4853]: I0127 18:55:36.209726 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6gqj2" event={"ID":"b8a89b1e-bef8-4cb7-930c-480d3125778c","Type":"ContainerStarted","Data":"31e88473416602e1651b8a73df75f161960712c5955c442cb3ea237f2fe7ca04"} Jan 27 18:55:36 crc kubenswrapper[4853]: I0127 18:55:36.209742 4853 scope.go:117] "RemoveContainer" containerID="5b81ace7e2777535cb6c01efce0eddea8127ab44f1a4252fee29729bdae6ce3c" Jan 27 18:55:40 crc kubenswrapper[4853]: I0127 18:55:40.234569 4853 generic.go:334] "Generic (PLEG): container finished" podID="a95a2a56-e8a9-418a-95ce-895b555038fa" containerID="fc9f100411900f5ce7dcc9485f1b992915ad03dbc79a464abd44f372e97b8d79" exitCode=0 Jan 27 18:55:40 crc kubenswrapper[4853]: I0127 18:55:40.234624 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-4zj9c" event={"ID":"a95a2a56-e8a9-418a-95ce-895b555038fa","Type":"ContainerDied","Data":"fc9f100411900f5ce7dcc9485f1b992915ad03dbc79a464abd44f372e97b8d79"} Jan 27 18:55:40 crc kubenswrapper[4853]: I0127 18:55:40.236771 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-srh2s" event={"ID":"5d610e65-a0f1-4304-a7f9-f8b49e86d372","Type":"ContainerStarted","Data":"1607fbfdfc4a65edf5aab34107a7df3e43e38cb7afc3594e322b2f583b4259eb"} Jan 27 18:55:40 crc kubenswrapper[4853]: I0127 18:55:40.236961 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-srh2s" Jan 27 18:55:40 crc kubenswrapper[4853]: I0127 18:55:40.277921 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-srh2s" podStartSLOduration=2.592951647 podStartE2EDuration="9.277902562s" podCreationTimestamp="2026-01-27 18:55:31 +0000 UTC" firstStartedPulling="2026-01-27 18:55:32.825727224 +0000 UTC m=+775.288270117" lastFinishedPulling="2026-01-27 18:55:39.510678149 +0000 UTC m=+781.973221032" observedRunningTime="2026-01-27 18:55:40.271419546 +0000 UTC m=+782.733962429" watchObservedRunningTime="2026-01-27 18:55:40.277902562 +0000 UTC m=+782.740445445" Jan 27 18:55:41 crc kubenswrapper[4853]: I0127 18:55:41.245375 4853 generic.go:334] "Generic (PLEG): container finished" podID="a95a2a56-e8a9-418a-95ce-895b555038fa" containerID="72fe1e7e5f5e274d6f525f3d4803f1ec21b40ad3dfd621e64d47522cbf29cda7" exitCode=0 Jan 27 18:55:41 crc kubenswrapper[4853]: I0127 18:55:41.245542 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-4zj9c" event={"ID":"a95a2a56-e8a9-418a-95ce-895b555038fa","Type":"ContainerDied","Data":"72fe1e7e5f5e274d6f525f3d4803f1ec21b40ad3dfd621e64d47522cbf29cda7"} Jan 27 18:55:42 crc kubenswrapper[4853]: I0127 18:55:42.252142 4853 generic.go:334] "Generic (PLEG): container finished" podID="a95a2a56-e8a9-418a-95ce-895b555038fa" containerID="50326774233c875fb008ea5068e90c09fd26c89a826fdedc0e4cbab06e8e1f32" exitCode=0 Jan 27 18:55:42 crc kubenswrapper[4853]: I0127 18:55:42.252190 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-4zj9c" 
event={"ID":"a95a2a56-e8a9-418a-95ce-895b555038fa","Type":"ContainerDied","Data":"50326774233c875fb008ea5068e90c09fd26c89a826fdedc0e4cbab06e8e1f32"} Jan 27 18:55:42 crc kubenswrapper[4853]: I0127 18:55:42.506237 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-l2pvs" Jan 27 18:55:43 crc kubenswrapper[4853]: I0127 18:55:43.262468 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-4zj9c" event={"ID":"a95a2a56-e8a9-418a-95ce-895b555038fa","Type":"ContainerStarted","Data":"d06e6f785d1ce80c060faeb9619484cae7a15effc7f7eba831d5b5218c423c41"} Jan 27 18:55:43 crc kubenswrapper[4853]: I0127 18:55:43.262704 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-4zj9c" event={"ID":"a95a2a56-e8a9-418a-95ce-895b555038fa","Type":"ContainerStarted","Data":"8d6f66ad51384d4674d06ef4f148aa3f6c22fab06635b13aed9c882da277d4b7"} Jan 27 18:55:43 crc kubenswrapper[4853]: I0127 18:55:43.262715 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-4zj9c" event={"ID":"a95a2a56-e8a9-418a-95ce-895b555038fa","Type":"ContainerStarted","Data":"78839d4af1718934565fc7e40deb505c7ade8cbb895d281932994f37912a3662"} Jan 27 18:55:43 crc kubenswrapper[4853]: I0127 18:55:43.262730 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-4zj9c" Jan 27 18:55:43 crc kubenswrapper[4853]: I0127 18:55:43.262740 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-4zj9c" event={"ID":"a95a2a56-e8a9-418a-95ce-895b555038fa","Type":"ContainerStarted","Data":"4a484e3c133c3c4f696b90ee4c21ff5767598374790a2cfb9eebdca0f9468987"} Jan 27 18:55:43 crc kubenswrapper[4853]: I0127 18:55:43.262749 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-4zj9c" event={"ID":"a95a2a56-e8a9-418a-95ce-895b555038fa","Type":"ContainerStarted","Data":"67036918304ecd5d7e22de3a88bc84927b729e50f3591afa67df410f1ff19527"} Jan 27 18:55:43 crc kubenswrapper[4853]: I0127 18:55:43.262760 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-4zj9c" event={"ID":"a95a2a56-e8a9-418a-95ce-895b555038fa","Type":"ContainerStarted","Data":"b4266b68159b7f55f487a629f10eb68836e4300051262c5a98c33215edd10155"} Jan 27 18:55:43 crc kubenswrapper[4853]: I0127 18:55:43.290331 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-4zj9c" podStartSLOduration=5.292984275 podStartE2EDuration="12.290307631s" podCreationTimestamp="2026-01-27 18:55:31 +0000 UTC" firstStartedPulling="2026-01-27 18:55:32.522191377 +0000 UTC m=+774.984734250" lastFinishedPulling="2026-01-27 18:55:39.519514723 +0000 UTC m=+781.982057606" observedRunningTime="2026-01-27 18:55:43.28573756 +0000 UTC m=+785.748280453" watchObservedRunningTime="2026-01-27 18:55:43.290307631 +0000 UTC m=+785.752850514" Jan 27 18:55:46 crc kubenswrapper[4853]: I0127 18:55:46.003550 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-fq5pw"] Jan 27 18:55:46 crc kubenswrapper[4853]: I0127 18:55:46.004991 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-fq5pw" Jan 27 18:55:46 crc kubenswrapper[4853]: I0127 18:55:46.006780 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt" Jan 27 18:55:46 crc kubenswrapper[4853]: I0127 18:55:46.006789 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-index-dockercfg-tfct7" Jan 27 18:55:46 crc kubenswrapper[4853]: I0127 18:55:46.006818 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt" Jan 27 18:55:46 crc kubenswrapper[4853]: I0127 18:55:46.013954 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-fq5pw"] Jan 27 18:55:46 crc kubenswrapper[4853]: I0127 18:55:46.091921 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jb9rl\" (UniqueName: \"kubernetes.io/projected/0a06c76b-39d0-4862-ad0d-8037061e0fa2-kube-api-access-jb9rl\") pod \"openstack-operator-index-fq5pw\" (UID: \"0a06c76b-39d0-4862-ad0d-8037061e0fa2\") " pod="openstack-operators/openstack-operator-index-fq5pw" Jan 27 18:55:46 crc kubenswrapper[4853]: I0127 18:55:46.193611 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jb9rl\" (UniqueName: \"kubernetes.io/projected/0a06c76b-39d0-4862-ad0d-8037061e0fa2-kube-api-access-jb9rl\") pod \"openstack-operator-index-fq5pw\" (UID: \"0a06c76b-39d0-4862-ad0d-8037061e0fa2\") " pod="openstack-operators/openstack-operator-index-fq5pw" Jan 27 18:55:46 crc kubenswrapper[4853]: I0127 18:55:46.211925 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jb9rl\" (UniqueName: \"kubernetes.io/projected/0a06c76b-39d0-4862-ad0d-8037061e0fa2-kube-api-access-jb9rl\") pod \"openstack-operator-index-fq5pw\" (UID: \"0a06c76b-39d0-4862-ad0d-8037061e0fa2\") " pod="openstack-operators/openstack-operator-index-fq5pw" Jan 27 18:55:46 crc kubenswrapper[4853]: I0127 18:55:46.334918 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-fq5pw" Jan 27 18:55:46 crc kubenswrapper[4853]: I0127 18:55:46.539089 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-fq5pw"] Jan 27 18:55:47 crc kubenswrapper[4853]: I0127 18:55:47.293043 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-fq5pw" event={"ID":"0a06c76b-39d0-4862-ad0d-8037061e0fa2","Type":"ContainerStarted","Data":"c4f3eb9e003bf71a533d6f68a97bd41fddd89f60ce9b9330fbd1ed36dd93009b"} Jan 27 18:55:47 crc kubenswrapper[4853]: I0127 18:55:47.413163 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-4zj9c" Jan 27 18:55:47 crc kubenswrapper[4853]: I0127 18:55:47.455411 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-4zj9c" Jan 27 18:55:49 crc kubenswrapper[4853]: I0127 18:55:49.181657 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-fq5pw"] Jan 27 18:55:49 crc kubenswrapper[4853]: I0127 18:55:49.891262 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-nc7fr"] Jan 27 18:55:49 crc kubenswrapper[4853]: I0127 18:55:49.895530 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-nc7fr" Jan 27 18:55:49 crc kubenswrapper[4853]: I0127 18:55:49.901924 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-nc7fr"] Jan 27 18:55:50 crc kubenswrapper[4853]: I0127 18:55:50.086018 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lwr2z\" (UniqueName: \"kubernetes.io/projected/7f5aa97a-2a3f-4a6d-8e75-521db38570d9-kube-api-access-lwr2z\") pod \"openstack-operator-index-nc7fr\" (UID: \"7f5aa97a-2a3f-4a6d-8e75-521db38570d9\") " pod="openstack-operators/openstack-operator-index-nc7fr" Jan 27 18:55:50 crc kubenswrapper[4853]: I0127 18:55:50.187579 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lwr2z\" (UniqueName: \"kubernetes.io/projected/7f5aa97a-2a3f-4a6d-8e75-521db38570d9-kube-api-access-lwr2z\") pod \"openstack-operator-index-nc7fr\" (UID: \"7f5aa97a-2a3f-4a6d-8e75-521db38570d9\") " pod="openstack-operators/openstack-operator-index-nc7fr" Jan 27 18:55:50 crc kubenswrapper[4853]: I0127 18:55:50.207313 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lwr2z\" (UniqueName: \"kubernetes.io/projected/7f5aa97a-2a3f-4a6d-8e75-521db38570d9-kube-api-access-lwr2z\") pod \"openstack-operator-index-nc7fr\" (UID: \"7f5aa97a-2a3f-4a6d-8e75-521db38570d9\") " pod="openstack-operators/openstack-operator-index-nc7fr" Jan 27 18:55:50 crc kubenswrapper[4853]: I0127 18:55:50.215238 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-nc7fr" Jan 27 18:55:50 crc kubenswrapper[4853]: I0127 18:55:50.591207 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-nc7fr"] Jan 27 18:55:50 crc kubenswrapper[4853]: I0127 18:55:50.912585 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-nc7fr" event={"ID":"7f5aa97a-2a3f-4a6d-8e75-521db38570d9","Type":"ContainerStarted","Data":"db9811ecaa0753c9f3c5df06a84b7469a09156c23ec13e2d89caa8ebdf069f08"} Jan 27 18:55:50 crc kubenswrapper[4853]: I0127 18:55:50.912636 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-nc7fr" event={"ID":"7f5aa97a-2a3f-4a6d-8e75-521db38570d9","Type":"ContainerStarted","Data":"f219eea20e4255fee5ec6d74d883a630a623c318218a74e9887c4d6d040a1919"} Jan 27 18:55:50 crc kubenswrapper[4853]: I0127 18:55:50.914488 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-fq5pw" event={"ID":"0a06c76b-39d0-4862-ad0d-8037061e0fa2","Type":"ContainerStarted","Data":"4648ebca86236fb9273daa88a4e37066d8ae5a557b2f32d3e455bf1e1535732a"} Jan 27 18:55:50 crc kubenswrapper[4853]: I0127 18:55:50.914620 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/openstack-operator-index-fq5pw" podUID="0a06c76b-39d0-4862-ad0d-8037061e0fa2" containerName="registry-server" containerID="cri-o://4648ebca86236fb9273daa88a4e37066d8ae5a557b2f32d3e455bf1e1535732a" gracePeriod=2 Jan 27 18:55:50 crc kubenswrapper[4853]: I0127 18:55:50.931804 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-nc7fr" podStartSLOduration=1.883975022 podStartE2EDuration="1.931766726s" podCreationTimestamp="2026-01-27 18:55:49 +0000 UTC" firstStartedPulling="2026-01-27 18:55:50.601651337 +0000 UTC m=+793.064194220" lastFinishedPulling="2026-01-27 18:55:50.649443031 +0000 UTC m=+793.111985924" observedRunningTime="2026-01-27 18:55:50.928477682 +0000 UTC m=+793.391020605" watchObservedRunningTime="2026-01-27 18:55:50.931766726 +0000 UTC m=+793.394309649" Jan 27 18:55:50 crc kubenswrapper[4853]: I0127 18:55:50.948491 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-fq5pw" podStartSLOduration=2.559034501 podStartE2EDuration="5.948458736s" podCreationTimestamp="2026-01-27 18:55:45 +0000 UTC" firstStartedPulling="2026-01-27 18:55:46.551076068 +0000 UTC m=+789.013618951" lastFinishedPulling="2026-01-27 18:55:49.940500303 +0000 UTC m=+792.403043186" observedRunningTime="2026-01-27 18:55:50.946825879 +0000 UTC m=+793.409368762" watchObservedRunningTime="2026-01-27 18:55:50.948458736 +0000 UTC m=+793.411001639" Jan 27 18:55:51 crc kubenswrapper[4853]: I0127 18:55:51.343111 4853 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-fq5pw" Jan 27 18:55:51 crc kubenswrapper[4853]: I0127 18:55:51.405004 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jb9rl\" (UniqueName: \"kubernetes.io/projected/0a06c76b-39d0-4862-ad0d-8037061e0fa2-kube-api-access-jb9rl\") pod \"0a06c76b-39d0-4862-ad0d-8037061e0fa2\" (UID: \"0a06c76b-39d0-4862-ad0d-8037061e0fa2\") " Jan 27 18:55:51 crc kubenswrapper[4853]: I0127 18:55:51.410267 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0a06c76b-39d0-4862-ad0d-8037061e0fa2-kube-api-access-jb9rl" (OuterVolumeSpecName: "kube-api-access-jb9rl") pod "0a06c76b-39d0-4862-ad0d-8037061e0fa2" (UID: "0a06c76b-39d0-4862-ad0d-8037061e0fa2"). InnerVolumeSpecName "kube-api-access-jb9rl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:55:51 crc kubenswrapper[4853]: I0127 18:55:51.505914 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jb9rl\" (UniqueName: \"kubernetes.io/projected/0a06c76b-39d0-4862-ad0d-8037061e0fa2-kube-api-access-jb9rl\") on node \"crc\" DevicePath \"\"" Jan 27 18:55:51 crc kubenswrapper[4853]: I0127 18:55:51.922417 4853 generic.go:334] "Generic (PLEG): container finished" podID="0a06c76b-39d0-4862-ad0d-8037061e0fa2" containerID="4648ebca86236fb9273daa88a4e37066d8ae5a557b2f32d3e455bf1e1535732a" exitCode=0 Jan 27 18:55:51 crc kubenswrapper[4853]: I0127 18:55:51.922453 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-fq5pw" Jan 27 18:55:51 crc kubenswrapper[4853]: I0127 18:55:51.922514 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-fq5pw" event={"ID":"0a06c76b-39d0-4862-ad0d-8037061e0fa2","Type":"ContainerDied","Data":"4648ebca86236fb9273daa88a4e37066d8ae5a557b2f32d3e455bf1e1535732a"} Jan 27 18:55:51 crc kubenswrapper[4853]: I0127 18:55:51.922572 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-fq5pw" event={"ID":"0a06c76b-39d0-4862-ad0d-8037061e0fa2","Type":"ContainerDied","Data":"c4f3eb9e003bf71a533d6f68a97bd41fddd89f60ce9b9330fbd1ed36dd93009b"} Jan 27 18:55:51 crc kubenswrapper[4853]: I0127 18:55:51.922592 4853 scope.go:117] "RemoveContainer" containerID="4648ebca86236fb9273daa88a4e37066d8ae5a557b2f32d3e455bf1e1535732a" Jan 27 18:55:51 crc kubenswrapper[4853]: I0127 18:55:51.924652 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-6968d8fdc4-bkdp4" Jan 27 18:55:51 crc kubenswrapper[4853]: I0127 18:55:51.944050 4853 scope.go:117] "RemoveContainer" containerID="4648ebca86236fb9273daa88a4e37066d8ae5a557b2f32d3e455bf1e1535732a" Jan 27 18:55:51 crc kubenswrapper[4853]: E0127 18:55:51.944545 4853 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4648ebca86236fb9273daa88a4e37066d8ae5a557b2f32d3e455bf1e1535732a\": container with ID starting with 4648ebca86236fb9273daa88a4e37066d8ae5a557b2f32d3e455bf1e1535732a not found: ID does not exist" containerID="4648ebca86236fb9273daa88a4e37066d8ae5a557b2f32d3e455bf1e1535732a" Jan 27 18:55:51 crc kubenswrapper[4853]: I0127 18:55:51.944587 4853 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4648ebca86236fb9273daa88a4e37066d8ae5a557b2f32d3e455bf1e1535732a"} err="failed 
to get container status \"4648ebca86236fb9273daa88a4e37066d8ae5a557b2f32d3e455bf1e1535732a\": rpc error: code = NotFound desc = could not find container \"4648ebca86236fb9273daa88a4e37066d8ae5a557b2f32d3e455bf1e1535732a\": container with ID starting with 4648ebca86236fb9273daa88a4e37066d8ae5a557b2f32d3e455bf1e1535732a not found: ID does not exist" Jan 27 18:55:51 crc kubenswrapper[4853]: I0127 18:55:51.959243 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-fq5pw"] Jan 27 18:55:51 crc kubenswrapper[4853]: I0127 18:55:51.964670 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/openstack-operator-index-fq5pw"] Jan 27 18:55:52 crc kubenswrapper[4853]: I0127 18:55:52.120274 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0a06c76b-39d0-4862-ad0d-8037061e0fa2" path="/var/lib/kubelet/pods/0a06c76b-39d0-4862-ad0d-8037061e0fa2/volumes" Jan 27 18:55:52 crc kubenswrapper[4853]: I0127 18:55:52.416448 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-4zj9c" Jan 27 18:55:52 crc kubenswrapper[4853]: I0127 18:55:52.425240 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-srh2s" Jan 27 18:56:00 crc kubenswrapper[4853]: I0127 18:56:00.216091 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-nc7fr" Jan 27 18:56:00 crc kubenswrapper[4853]: I0127 18:56:00.216836 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-nc7fr" Jan 27 18:56:00 crc kubenswrapper[4853]: I0127 18:56:00.243982 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/openstack-operator-index-nc7fr" Jan 27 18:56:01 crc kubenswrapper[4853]: I0127 18:56:01.018156 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-nc7fr" Jan 27 18:56:06 crc kubenswrapper[4853]: I0127 18:56:06.278029 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/73cd039ee0ff571e389a67eb6c1ec9ce429f21c65ba53bd39c09be46278bngd"] Jan 27 18:56:06 crc kubenswrapper[4853]: E0127 18:56:06.278959 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0a06c76b-39d0-4862-ad0d-8037061e0fa2" containerName="registry-server" Jan 27 18:56:06 crc kubenswrapper[4853]: I0127 18:56:06.278983 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="0a06c76b-39d0-4862-ad0d-8037061e0fa2" containerName="registry-server" Jan 27 18:56:06 crc kubenswrapper[4853]: I0127 18:56:06.279182 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="0a06c76b-39d0-4862-ad0d-8037061e0fa2" containerName="registry-server" Jan 27 18:56:06 crc kubenswrapper[4853]: I0127 18:56:06.280324 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/73cd039ee0ff571e389a67eb6c1ec9ce429f21c65ba53bd39c09be46278bngd" Jan 27 18:56:06 crc kubenswrapper[4853]: I0127 18:56:06.283802 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-g2wjz" Jan 27 18:56:06 crc kubenswrapper[4853]: I0127 18:56:06.292580 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/73cd039ee0ff571e389a67eb6c1ec9ce429f21c65ba53bd39c09be46278bngd"] Jan 27 18:56:06 crc kubenswrapper[4853]: I0127 18:56:06.292700 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/3411d3c3-ab77-45ea-af40-a2708164348e-util\") pod \"73cd039ee0ff571e389a67eb6c1ec9ce429f21c65ba53bd39c09be46278bngd\" (UID: \"3411d3c3-ab77-45ea-af40-a2708164348e\") " pod="openstack-operators/73cd039ee0ff571e389a67eb6c1ec9ce429f21c65ba53bd39c09be46278bngd" Jan 27 18:56:06 crc kubenswrapper[4853]: I0127 18:56:06.292792 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/3411d3c3-ab77-45ea-af40-a2708164348e-bundle\") pod \"73cd039ee0ff571e389a67eb6c1ec9ce429f21c65ba53bd39c09be46278bngd\" (UID: \"3411d3c3-ab77-45ea-af40-a2708164348e\") " pod="openstack-operators/73cd039ee0ff571e389a67eb6c1ec9ce429f21c65ba53bd39c09be46278bngd" Jan 27 18:56:06 crc kubenswrapper[4853]: I0127 18:56:06.292949 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9wwdt\" (UniqueName: \"kubernetes.io/projected/3411d3c3-ab77-45ea-af40-a2708164348e-kube-api-access-9wwdt\") pod \"73cd039ee0ff571e389a67eb6c1ec9ce429f21c65ba53bd39c09be46278bngd\" (UID: \"3411d3c3-ab77-45ea-af40-a2708164348e\") " pod="openstack-operators/73cd039ee0ff571e389a67eb6c1ec9ce429f21c65ba53bd39c09be46278bngd" Jan 27 18:56:06 crc kubenswrapper[4853]: I0127 18:56:06.394259 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/3411d3c3-ab77-45ea-af40-a2708164348e-util\") pod \"73cd039ee0ff571e389a67eb6c1ec9ce429f21c65ba53bd39c09be46278bngd\" (UID: \"3411d3c3-ab77-45ea-af40-a2708164348e\") " pod="openstack-operators/73cd039ee0ff571e389a67eb6c1ec9ce429f21c65ba53bd39c09be46278bngd" Jan 27 18:56:06 crc kubenswrapper[4853]: I0127 18:56:06.394505 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/3411d3c3-ab77-45ea-af40-a2708164348e-bundle\") pod \"73cd039ee0ff571e389a67eb6c1ec9ce429f21c65ba53bd39c09be46278bngd\" (UID: \"3411d3c3-ab77-45ea-af40-a2708164348e\") " pod="openstack-operators/73cd039ee0ff571e389a67eb6c1ec9ce429f21c65ba53bd39c09be46278bngd" Jan 27 18:56:06 crc kubenswrapper[4853]: I0127 18:56:06.394629 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9wwdt\" (UniqueName: \"kubernetes.io/projected/3411d3c3-ab77-45ea-af40-a2708164348e-kube-api-access-9wwdt\") pod \"73cd039ee0ff571e389a67eb6c1ec9ce429f21c65ba53bd39c09be46278bngd\" (UID: \"3411d3c3-ab77-45ea-af40-a2708164348e\") " pod="openstack-operators/73cd039ee0ff571e389a67eb6c1ec9ce429f21c65ba53bd39c09be46278bngd" Jan 27 18:56:06 crc kubenswrapper[4853]: I0127 18:56:06.394837 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/3411d3c3-ab77-45ea-af40-a2708164348e-util\") pod \"73cd039ee0ff571e389a67eb6c1ec9ce429f21c65ba53bd39c09be46278bngd\" (UID: \"3411d3c3-ab77-45ea-af40-a2708164348e\") " pod="openstack-operators/73cd039ee0ff571e389a67eb6c1ec9ce429f21c65ba53bd39c09be46278bngd" Jan 27 18:56:06 crc kubenswrapper[4853]: I0127 18:56:06.395043 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/3411d3c3-ab77-45ea-af40-a2708164348e-bundle\") pod \"73cd039ee0ff571e389a67eb6c1ec9ce429f21c65ba53bd39c09be46278bngd\" (UID: \"3411d3c3-ab77-45ea-af40-a2708164348e\") " pod="openstack-operators/73cd039ee0ff571e389a67eb6c1ec9ce429f21c65ba53bd39c09be46278bngd" Jan 27 18:56:06 crc kubenswrapper[4853]: I0127 18:56:06.412784 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9wwdt\" (UniqueName: \"kubernetes.io/projected/3411d3c3-ab77-45ea-af40-a2708164348e-kube-api-access-9wwdt\") pod \"73cd039ee0ff571e389a67eb6c1ec9ce429f21c65ba53bd39c09be46278bngd\" (UID: \"3411d3c3-ab77-45ea-af40-a2708164348e\") " pod="openstack-operators/73cd039ee0ff571e389a67eb6c1ec9ce429f21c65ba53bd39c09be46278bngd" Jan 27 18:56:06 crc kubenswrapper[4853]: I0127 18:56:06.644656 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/73cd039ee0ff571e389a67eb6c1ec9ce429f21c65ba53bd39c09be46278bngd" Jan 27 18:56:07 crc kubenswrapper[4853]: I0127 18:56:07.052091 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/73cd039ee0ff571e389a67eb6c1ec9ce429f21c65ba53bd39c09be46278bngd"] Jan 27 18:56:08 crc kubenswrapper[4853]: I0127 18:56:08.024952 4853 generic.go:334] "Generic (PLEG): container finished" podID="3411d3c3-ab77-45ea-af40-a2708164348e" containerID="e99d25c9c03a3a137467e12064d87273130b50113478e9f3379b7d2754137e63" exitCode=0 Jan 27 18:56:08 crc kubenswrapper[4853]: I0127 18:56:08.025072 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/73cd039ee0ff571e389a67eb6c1ec9ce429f21c65ba53bd39c09be46278bngd" event={"ID":"3411d3c3-ab77-45ea-af40-a2708164348e","Type":"ContainerDied","Data":"e99d25c9c03a3a137467e12064d87273130b50113478e9f3379b7d2754137e63"} Jan 27 18:56:08 crc kubenswrapper[4853]: I0127 18:56:08.025496 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/73cd039ee0ff571e389a67eb6c1ec9ce429f21c65ba53bd39c09be46278bngd" event={"ID":"3411d3c3-ab77-45ea-af40-a2708164348e","Type":"ContainerStarted","Data":"41f689318837d4c4e8f423f16ff98e816412cf5da5a2bfe9fcb1e69dcfab0731"} Jan 27 18:56:09 crc kubenswrapper[4853]: I0127 18:56:09.033562 4853 generic.go:334] "Generic (PLEG): container finished" podID="3411d3c3-ab77-45ea-af40-a2708164348e" containerID="5819a86e78602128ab96a7eeeadb3c510808c3a5437ed470b950df81ddcbdefb" exitCode=0 Jan 27 18:56:09 crc kubenswrapper[4853]: I0127 18:56:09.033609 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/73cd039ee0ff571e389a67eb6c1ec9ce429f21c65ba53bd39c09be46278bngd" event={"ID":"3411d3c3-ab77-45ea-af40-a2708164348e","Type":"ContainerDied","Data":"5819a86e78602128ab96a7eeeadb3c510808c3a5437ed470b950df81ddcbdefb"} Jan 27 18:56:10 crc kubenswrapper[4853]: I0127 18:56:10.048321 4853 generic.go:334] "Generic (PLEG): container finished" podID="3411d3c3-ab77-45ea-af40-a2708164348e" containerID="7d272890d709ee52f57ced720114285dd8d0ed44ddb9e11fef1c0158833949e2" exitCode=0 Jan 27 18:56:10 crc kubenswrapper[4853]: I0127 18:56:10.048393 4853 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/73cd039ee0ff571e389a67eb6c1ec9ce429f21c65ba53bd39c09be46278bngd" event={"ID":"3411d3c3-ab77-45ea-af40-a2708164348e","Type":"ContainerDied","Data":"7d272890d709ee52f57ced720114285dd8d0ed44ddb9e11fef1c0158833949e2"} Jan 27 18:56:11 crc kubenswrapper[4853]: I0127 18:56:11.328698 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/73cd039ee0ff571e389a67eb6c1ec9ce429f21c65ba53bd39c09be46278bngd" Jan 27 18:56:11 crc kubenswrapper[4853]: I0127 18:56:11.466338 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/3411d3c3-ab77-45ea-af40-a2708164348e-bundle\") pod \"3411d3c3-ab77-45ea-af40-a2708164348e\" (UID: \"3411d3c3-ab77-45ea-af40-a2708164348e\") " Jan 27 18:56:11 crc kubenswrapper[4853]: I0127 18:56:11.466415 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/3411d3c3-ab77-45ea-af40-a2708164348e-util\") pod \"3411d3c3-ab77-45ea-af40-a2708164348e\" (UID: \"3411d3c3-ab77-45ea-af40-a2708164348e\") " Jan 27 18:56:11 crc kubenswrapper[4853]: I0127 18:56:11.466445 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9wwdt\" (UniqueName: \"kubernetes.io/projected/3411d3c3-ab77-45ea-af40-a2708164348e-kube-api-access-9wwdt\") pod \"3411d3c3-ab77-45ea-af40-a2708164348e\" (UID: \"3411d3c3-ab77-45ea-af40-a2708164348e\") " Jan 27 18:56:11 crc kubenswrapper[4853]: I0127 18:56:11.467460 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3411d3c3-ab77-45ea-af40-a2708164348e-bundle" (OuterVolumeSpecName: "bundle") pod "3411d3c3-ab77-45ea-af40-a2708164348e" (UID: "3411d3c3-ab77-45ea-af40-a2708164348e"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 18:56:11 crc kubenswrapper[4853]: I0127 18:56:11.472689 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3411d3c3-ab77-45ea-af40-a2708164348e-kube-api-access-9wwdt" (OuterVolumeSpecName: "kube-api-access-9wwdt") pod "3411d3c3-ab77-45ea-af40-a2708164348e" (UID: "3411d3c3-ab77-45ea-af40-a2708164348e"). InnerVolumeSpecName "kube-api-access-9wwdt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:56:11 crc kubenswrapper[4853]: I0127 18:56:11.479501 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3411d3c3-ab77-45ea-af40-a2708164348e-util" (OuterVolumeSpecName: "util") pod "3411d3c3-ab77-45ea-af40-a2708164348e" (UID: "3411d3c3-ab77-45ea-af40-a2708164348e"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 18:56:11 crc kubenswrapper[4853]: I0127 18:56:11.567569 4853 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/3411d3c3-ab77-45ea-af40-a2708164348e-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 18:56:11 crc kubenswrapper[4853]: I0127 18:56:11.567612 4853 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/3411d3c3-ab77-45ea-af40-a2708164348e-util\") on node \"crc\" DevicePath \"\"" Jan 27 18:56:11 crc kubenswrapper[4853]: I0127 18:56:11.567623 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9wwdt\" (UniqueName: \"kubernetes.io/projected/3411d3c3-ab77-45ea-af40-a2708164348e-kube-api-access-9wwdt\") on node \"crc\" DevicePath \"\"" Jan 27 18:56:12 crc kubenswrapper[4853]: I0127 18:56:12.065824 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/73cd039ee0ff571e389a67eb6c1ec9ce429f21c65ba53bd39c09be46278bngd" event={"ID":"3411d3c3-ab77-45ea-af40-a2708164348e","Type":"ContainerDied","Data":"41f689318837d4c4e8f423f16ff98e816412cf5da5a2bfe9fcb1e69dcfab0731"} Jan 27 18:56:12 crc kubenswrapper[4853]: I0127 18:56:12.066104 4853 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="41f689318837d4c4e8f423f16ff98e816412cf5da5a2bfe9fcb1e69dcfab0731" Jan 27 18:56:12 crc kubenswrapper[4853]: I0127 18:56:12.065887 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/73cd039ee0ff571e389a67eb6c1ec9ce429f21c65ba53bd39c09be46278bngd" Jan 27 18:56:14 crc kubenswrapper[4853]: I0127 18:56:14.154390 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-init-67d88b5675-p6llj"] Jan 27 18:56:14 crc kubenswrapper[4853]: E0127 18:56:14.154976 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3411d3c3-ab77-45ea-af40-a2708164348e" containerName="extract" Jan 27 18:56:14 crc kubenswrapper[4853]: I0127 18:56:14.154990 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="3411d3c3-ab77-45ea-af40-a2708164348e" containerName="extract" Jan 27 18:56:14 crc kubenswrapper[4853]: E0127 18:56:14.155002 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3411d3c3-ab77-45ea-af40-a2708164348e" containerName="util" Jan 27 18:56:14 crc kubenswrapper[4853]: I0127 18:56:14.155008 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="3411d3c3-ab77-45ea-af40-a2708164348e" containerName="util" Jan 27 18:56:14 crc kubenswrapper[4853]: E0127 18:56:14.155018 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3411d3c3-ab77-45ea-af40-a2708164348e" containerName="pull" Jan 27 18:56:14 crc kubenswrapper[4853]: I0127 18:56:14.155025 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="3411d3c3-ab77-45ea-af40-a2708164348e" containerName="pull" Jan 27 18:56:14 crc kubenswrapper[4853]: I0127 18:56:14.155150 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="3411d3c3-ab77-45ea-af40-a2708164348e" containerName="extract" Jan 27 18:56:14 crc kubenswrapper[4853]: I0127 18:56:14.155650 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-67d88b5675-p6llj" Jan 27 18:56:14 crc kubenswrapper[4853]: I0127 18:56:14.157537 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-init-dockercfg-8gd5q" Jan 27 18:56:14 crc kubenswrapper[4853]: I0127 18:56:14.184041 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-67d88b5675-p6llj"] Jan 27 18:56:14 crc kubenswrapper[4853]: I0127 18:56:14.199830 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fzvjr\" (UniqueName: \"kubernetes.io/projected/fd2257c2-1b25-4d5f-8953-19f01df9c309-kube-api-access-fzvjr\") pod \"openstack-operator-controller-init-67d88b5675-p6llj\" (UID: \"fd2257c2-1b25-4d5f-8953-19f01df9c309\") " pod="openstack-operators/openstack-operator-controller-init-67d88b5675-p6llj" Jan 27 18:56:14 crc kubenswrapper[4853]: I0127 18:56:14.300567 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fzvjr\" (UniqueName: \"kubernetes.io/projected/fd2257c2-1b25-4d5f-8953-19f01df9c309-kube-api-access-fzvjr\") pod \"openstack-operator-controller-init-67d88b5675-p6llj\" (UID: \"fd2257c2-1b25-4d5f-8953-19f01df9c309\") " pod="openstack-operators/openstack-operator-controller-init-67d88b5675-p6llj" Jan 27 18:56:14 crc kubenswrapper[4853]: I0127 18:56:14.315994 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fzvjr\" (UniqueName: \"kubernetes.io/projected/fd2257c2-1b25-4d5f-8953-19f01df9c309-kube-api-access-fzvjr\") pod \"openstack-operator-controller-init-67d88b5675-p6llj\" (UID: \"fd2257c2-1b25-4d5f-8953-19f01df9c309\") " pod="openstack-operators/openstack-operator-controller-init-67d88b5675-p6llj" Jan 27 18:56:14 crc kubenswrapper[4853]: I0127 18:56:14.471558 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-67d88b5675-p6llj" Jan 27 18:56:14 crc kubenswrapper[4853]: I0127 18:56:14.669244 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-67d88b5675-p6llj"] Jan 27 18:56:15 crc kubenswrapper[4853]: I0127 18:56:15.082598 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-67d88b5675-p6llj" event={"ID":"fd2257c2-1b25-4d5f-8953-19f01df9c309","Type":"ContainerStarted","Data":"5a418cd28d0390a83edf5bb688ec50b2b9ac167c0bd0bebfb610df0127233d04"} Jan 27 18:56:19 crc kubenswrapper[4853]: I0127 18:56:19.113268 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-67d88b5675-p6llj" event={"ID":"fd2257c2-1b25-4d5f-8953-19f01df9c309","Type":"ContainerStarted","Data":"7382b5ed06ecaa3ec52171fcccd1256ae45f770cece4612d27e9758a9a68a8b5"} Jan 27 18:56:19 crc kubenswrapper[4853]: I0127 18:56:19.113854 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-init-67d88b5675-p6llj" Jan 27 18:56:19 crc kubenswrapper[4853]: I0127 18:56:19.167229 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-init-67d88b5675-p6llj" podStartSLOduration=1.5761375640000002 podStartE2EDuration="5.167199105s" podCreationTimestamp="2026-01-27 18:56:14 +0000 UTC" firstStartedPulling="2026-01-27 18:56:14.681014684 +0000 UTC m=+817.143557567" lastFinishedPulling="2026-01-27 18:56:18.272076225 +0000 UTC m=+820.734619108" observedRunningTime="2026-01-27 18:56:19.158142125 +0000 UTC m=+821.620685008" watchObservedRunningTime="2026-01-27 18:56:19.167199105 +0000 UTC m=+821.629742018" Jan 27 18:56:24 crc kubenswrapper[4853]: I0127 18:56:24.475312 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-init-67d88b5675-p6llj" Jan 27 18:56:42 crc kubenswrapper[4853]: I0127 18:56:42.369760 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/barbican-operator-controller-manager-65ff799cfd-jh7mx"] Jan 27 18:56:42 crc kubenswrapper[4853]: I0127 18:56:42.371130 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-65ff799cfd-jh7mx" Jan 27 18:56:42 crc kubenswrapper[4853]: I0127 18:56:42.373602 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-k4xf2" Jan 27 18:56:42 crc kubenswrapper[4853]: I0127 18:56:42.378927 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-65ff799cfd-jh7mx"] Jan 27 18:56:42 crc kubenswrapper[4853]: I0127 18:56:42.383686 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cinder-operator-controller-manager-655bf9cfbb-sj29r"] Jan 27 18:56:42 crc kubenswrapper[4853]: I0127 18:56:42.384519 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-655bf9cfbb-sj29r" Jan 27 18:56:42 crc kubenswrapper[4853]: I0127 18:56:42.390004 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-n5ghk" Jan 27 18:56:42 crc kubenswrapper[4853]: I0127 18:56:42.390654 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/designate-operator-controller-manager-77554cdc5c-mn6nj"] Jan 27 18:56:42 crc kubenswrapper[4853]: I0127 18:56:42.391454 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-77554cdc5c-mn6nj" Jan 27 18:56:42 crc kubenswrapper[4853]: I0127 18:56:42.392656 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"designate-operator-controller-manager-dockercfg-m954m" Jan 27 18:56:42 crc kubenswrapper[4853]: I0127 18:56:42.397848 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-655bf9cfbb-sj29r"] Jan 27 18:56:42 crc kubenswrapper[4853]: I0127 18:56:42.415323 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-77554cdc5c-mn6nj"] Jan 27 18:56:42 crc kubenswrapper[4853]: I0127 18:56:42.419099 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6mwr4\" (UniqueName: \"kubernetes.io/projected/5db9a86f-dff3-4c54-a478-79ce384d78f7-kube-api-access-6mwr4\") pod \"cinder-operator-controller-manager-655bf9cfbb-sj29r\" (UID: \"5db9a86f-dff3-4c54-a478-79ce384d78f7\") " pod="openstack-operators/cinder-operator-controller-manager-655bf9cfbb-sj29r" Jan 27 18:56:42 crc kubenswrapper[4853]: I0127 18:56:42.419186 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8mm7m\" (UniqueName: \"kubernetes.io/projected/f6e35929-3b14-49b4-9e0e-bbebc88c2ce2-kube-api-access-8mm7m\") pod \"barbican-operator-controller-manager-65ff799cfd-jh7mx\" (UID: \"f6e35929-3b14-49b4-9e0e-bbebc88c2ce2\") " pod="openstack-operators/barbican-operator-controller-manager-65ff799cfd-jh7mx" Jan 27 18:56:42 crc kubenswrapper[4853]: I0127 18:56:42.419234 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-56zbk\" (UniqueName: \"kubernetes.io/projected/0ee7eba6-8efe-4de9-bb26-69c5b47d0312-kube-api-access-56zbk\") pod \"designate-operator-controller-manager-77554cdc5c-mn6nj\" (UID: \"0ee7eba6-8efe-4de9-bb26-69c5b47d0312\") " pod="openstack-operators/designate-operator-controller-manager-77554cdc5c-mn6nj" Jan 27 18:56:42 crc kubenswrapper[4853]: I0127 18:56:42.439775 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/glance-operator-controller-manager-67dd55ff59-n89p8"] Jan 27 18:56:42 crc kubenswrapper[4853]: I0127 18:56:42.440485 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-67dd55ff59-n89p8" Jan 27 18:56:42 crc kubenswrapper[4853]: I0127 18:56:42.444012 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"glance-operator-controller-manager-dockercfg-7xcls" Jan 27 18:56:42 crc kubenswrapper[4853]: I0127 18:56:42.457501 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-67dd55ff59-n89p8"] Jan 27 18:56:42 crc kubenswrapper[4853]: I0127 18:56:42.480381 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/heat-operator-controller-manager-575ffb885b-bx595"] Jan 27 18:56:42 crc kubenswrapper[4853]: I0127 18:56:42.495164 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-575ffb885b-bx595" Jan 27 18:56:42 crc kubenswrapper[4853]: I0127 18:56:42.502338 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"heat-operator-controller-manager-dockercfg-fmfmm" Jan 27 18:56:42 crc kubenswrapper[4853]: I0127 18:56:42.522247 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-56zbk\" (UniqueName: \"kubernetes.io/projected/0ee7eba6-8efe-4de9-bb26-69c5b47d0312-kube-api-access-56zbk\") pod \"designate-operator-controller-manager-77554cdc5c-mn6nj\" (UID: \"0ee7eba6-8efe-4de9-bb26-69c5b47d0312\") " pod="openstack-operators/designate-operator-controller-manager-77554cdc5c-mn6nj" Jan 27 18:56:42 crc kubenswrapper[4853]: I0127 18:56:42.522324 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6mwr4\" (UniqueName: \"kubernetes.io/projected/5db9a86f-dff3-4c54-a478-79ce384d78f7-kube-api-access-6mwr4\") pod \"cinder-operator-controller-manager-655bf9cfbb-sj29r\" (UID: \"5db9a86f-dff3-4c54-a478-79ce384d78f7\") " pod="openstack-operators/cinder-operator-controller-manager-655bf9cfbb-sj29r" Jan 27 18:56:42 crc kubenswrapper[4853]: I0127 18:56:42.522366 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8mm7m\" (UniqueName: \"kubernetes.io/projected/f6e35929-3b14-49b4-9e0e-bbebc88c2ce2-kube-api-access-8mm7m\") pod \"barbican-operator-controller-manager-65ff799cfd-jh7mx\" (UID: \"f6e35929-3b14-49b4-9e0e-bbebc88c2ce2\") " pod="openstack-operators/barbican-operator-controller-manager-65ff799cfd-jh7mx" Jan 27 18:56:42 crc kubenswrapper[4853]: I0127 18:56:42.547190 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-575ffb885b-bx595"] Jan 27 18:56:42 crc kubenswrapper[4853]: I0127 18:56:42.551922 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6mwr4\" (UniqueName: \"kubernetes.io/projected/5db9a86f-dff3-4c54-a478-79ce384d78f7-kube-api-access-6mwr4\") pod \"cinder-operator-controller-manager-655bf9cfbb-sj29r\" (UID: \"5db9a86f-dff3-4c54-a478-79ce384d78f7\") " pod="openstack-operators/cinder-operator-controller-manager-655bf9cfbb-sj29r" Jan 27 18:56:42 crc kubenswrapper[4853]: I0127 18:56:42.552032 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-56zbk\" (UniqueName: \"kubernetes.io/projected/0ee7eba6-8efe-4de9-bb26-69c5b47d0312-kube-api-access-56zbk\") pod \"designate-operator-controller-manager-77554cdc5c-mn6nj\" (UID: \"0ee7eba6-8efe-4de9-bb26-69c5b47d0312\") " 
pod="openstack-operators/designate-operator-controller-manager-77554cdc5c-mn6nj" Jan 27 18:56:42 crc kubenswrapper[4853]: I0127 18:56:42.552370 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8mm7m\" (UniqueName: \"kubernetes.io/projected/f6e35929-3b14-49b4-9e0e-bbebc88c2ce2-kube-api-access-8mm7m\") pod \"barbican-operator-controller-manager-65ff799cfd-jh7mx\" (UID: \"f6e35929-3b14-49b4-9e0e-bbebc88c2ce2\") " pod="openstack-operators/barbican-operator-controller-manager-65ff799cfd-jh7mx" Jan 27 18:56:42 crc kubenswrapper[4853]: I0127 18:56:42.555965 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/horizon-operator-controller-manager-77d5c5b54f-4pmv9"] Jan 27 18:56:42 crc kubenswrapper[4853]: I0127 18:56:42.556833 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-4pmv9" Jan 27 18:56:42 crc kubenswrapper[4853]: I0127 18:56:42.559035 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-wtndw" Jan 27 18:56:42 crc kubenswrapper[4853]: I0127 18:56:42.564519 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-77d5c5b54f-4pmv9"] Jan 27 18:56:42 crc kubenswrapper[4853]: I0127 18:56:42.594656 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-7d75bc88d5-qrs25"] Jan 27 18:56:42 crc kubenswrapper[4853]: I0127 18:56:42.595508 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-7d75bc88d5-qrs25" Jan 27 18:56:42 crc kubenswrapper[4853]: I0127 18:56:42.605785 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert" Jan 27 18:56:42 crc kubenswrapper[4853]: I0127 18:56:42.606025 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-grvmx" Jan 27 18:56:42 crc kubenswrapper[4853]: I0127 18:56:42.616945 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ironic-operator-controller-manager-768b776ffb-dj2lw"] Jan 27 18:56:42 crc kubenswrapper[4853]: I0127 18:56:42.617765 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-768b776ffb-dj2lw" Jan 27 18:56:42 crc kubenswrapper[4853]: I0127 18:56:42.621074 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ironic-operator-controller-manager-dockercfg-g8tmb" Jan 27 18:56:42 crc kubenswrapper[4853]: I0127 18:56:42.626858 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9bc8t\" (UniqueName: \"kubernetes.io/projected/7b18ea7d-8f47-450b-aa4b-0b75fc0c0581-kube-api-access-9bc8t\") pod \"glance-operator-controller-manager-67dd55ff59-n89p8\" (UID: \"7b18ea7d-8f47-450b-aa4b-0b75fc0c0581\") " pod="openstack-operators/glance-operator-controller-manager-67dd55ff59-n89p8" Jan 27 18:56:42 crc kubenswrapper[4853]: I0127 18:56:42.626911 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w2bz4\" (UniqueName: \"kubernetes.io/projected/89a3cd80-89b0-41f9-a469-ef001d9be747-kube-api-access-w2bz4\") pod \"heat-operator-controller-manager-575ffb885b-bx595\" (UID: \"89a3cd80-89b0-41f9-a469-ef001d9be747\") " pod="openstack-operators/heat-operator-controller-manager-575ffb885b-bx595" Jan 27 18:56:42 crc kubenswrapper[4853]: I0127 18:56:42.626934 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6lbtv\" (UniqueName: \"kubernetes.io/projected/01b08d09-41bb-4a7a-9af2-7fe597572169-kube-api-access-6lbtv\") pod \"horizon-operator-controller-manager-77d5c5b54f-4pmv9\" (UID: \"01b08d09-41bb-4a7a-9af2-7fe597572169\") " pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-4pmv9" Jan 27 18:56:42 crc kubenswrapper[4853]: I0127 18:56:42.626975 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z5hjz\" (UniqueName: \"kubernetes.io/projected/e279285c-c536-46b4-b133-7c23811a725a-kube-api-access-z5hjz\") pod \"ironic-operator-controller-manager-768b776ffb-dj2lw\" (UID: \"e279285c-c536-46b4-b133-7c23811a725a\") " pod="openstack-operators/ironic-operator-controller-manager-768b776ffb-dj2lw" Jan 27 18:56:42 crc kubenswrapper[4853]: I0127 18:56:42.644187 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-768b776ffb-dj2lw"] Jan 27 18:56:42 crc kubenswrapper[4853]: I0127 18:56:42.653722 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-7d75bc88d5-qrs25"] Jan 27 18:56:42 crc kubenswrapper[4853]: I0127 18:56:42.663465 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-controller-manager-55f684fd56-flf9f"] Jan 27 18:56:42 crc kubenswrapper[4853]: I0127 18:56:42.664401 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-55f684fd56-flf9f" Jan 27 18:56:42 crc kubenswrapper[4853]: I0127 18:56:42.675327 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-55f684fd56-flf9f"] Jan 27 18:56:42 crc kubenswrapper[4853]: I0127 18:56:42.677064 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-8lb8k" Jan 27 18:56:42 crc kubenswrapper[4853]: I0127 18:56:42.688436 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-65ff799cfd-jh7mx" Jan 27 18:56:42 crc kubenswrapper[4853]: I0127 18:56:42.693170 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/manila-operator-controller-manager-849fcfbb6b-dbzp2"] Jan 27 18:56:42 crc kubenswrapper[4853]: I0127 18:56:42.693906 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-849fcfbb6b-dbzp2" Jan 27 18:56:42 crc kubenswrapper[4853]: I0127 18:56:42.695881 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"manila-operator-controller-manager-dockercfg-pv2ct" Jan 27 18:56:42 crc kubenswrapper[4853]: I0127 18:56:42.703168 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-655bf9cfbb-sj29r" Jan 27 18:56:42 crc kubenswrapper[4853]: I0127 18:56:42.711836 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-77554cdc5c-mn6nj" Jan 27 18:56:42 crc kubenswrapper[4853]: I0127 18:56:42.717093 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-849fcfbb6b-dbzp2"] Jan 27 18:56:42 crc kubenswrapper[4853]: I0127 18:56:42.723256 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-qkdmn"] Jan 27 18:56:42 crc kubenswrapper[4853]: I0127 18:56:42.724067 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-qkdmn" Jan 27 18:56:42 crc kubenswrapper[4853]: I0127 18:56:42.729637 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-97jk2" Jan 27 18:56:42 crc kubenswrapper[4853]: I0127 18:56:42.730478 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9bc8t\" (UniqueName: \"kubernetes.io/projected/7b18ea7d-8f47-450b-aa4b-0b75fc0c0581-kube-api-access-9bc8t\") pod \"glance-operator-controller-manager-67dd55ff59-n89p8\" (UID: \"7b18ea7d-8f47-450b-aa4b-0b75fc0c0581\") " pod="openstack-operators/glance-operator-controller-manager-67dd55ff59-n89p8" Jan 27 18:56:42 crc kubenswrapper[4853]: I0127 18:56:42.730514 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w2bz4\" (UniqueName: \"kubernetes.io/projected/89a3cd80-89b0-41f9-a469-ef001d9be747-kube-api-access-w2bz4\") pod \"heat-operator-controller-manager-575ffb885b-bx595\" (UID: \"89a3cd80-89b0-41f9-a469-ef001d9be747\") " pod="openstack-operators/heat-operator-controller-manager-575ffb885b-bx595" Jan 27 18:56:42 crc kubenswrapper[4853]: I0127 18:56:42.730535 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6lbtv\" (UniqueName: \"kubernetes.io/projected/01b08d09-41bb-4a7a-9af2-7fe597572169-kube-api-access-6lbtv\") pod \"horizon-operator-controller-manager-77d5c5b54f-4pmv9\" (UID: \"01b08d09-41bb-4a7a-9af2-7fe597572169\") " pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-4pmv9" Jan 27 18:56:42 crc kubenswrapper[4853]: I0127 18:56:42.730562 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: 
\"kubernetes.io/secret/613d8e60-1314-45a2-8bcc-250151f708d1-cert\") pod \"infra-operator-controller-manager-7d75bc88d5-qrs25\" (UID: \"613d8e60-1314-45a2-8bcc-250151f708d1\") " pod="openstack-operators/infra-operator-controller-manager-7d75bc88d5-qrs25" Jan 27 18:56:42 crc kubenswrapper[4853]: I0127 18:56:42.730594 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z5hjz\" (UniqueName: \"kubernetes.io/projected/e279285c-c536-46b4-b133-7c23811a725a-kube-api-access-z5hjz\") pod \"ironic-operator-controller-manager-768b776ffb-dj2lw\" (UID: \"e279285c-c536-46b4-b133-7c23811a725a\") " pod="openstack-operators/ironic-operator-controller-manager-768b776ffb-dj2lw" Jan 27 18:56:42 crc kubenswrapper[4853]: I0127 18:56:42.730611 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nfdkz\" (UniqueName: \"kubernetes.io/projected/613d8e60-1314-45a2-8bcc-250151f708d1-kube-api-access-nfdkz\") pod \"infra-operator-controller-manager-7d75bc88d5-qrs25\" (UID: \"613d8e60-1314-45a2-8bcc-250151f708d1\") " pod="openstack-operators/infra-operator-controller-manager-7d75bc88d5-qrs25" Jan 27 18:56:42 crc kubenswrapper[4853]: I0127 18:56:42.750575 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-qkdmn"] Jan 27 18:56:42 crc kubenswrapper[4853]: I0127 18:56:42.783739 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9bc8t\" (UniqueName: \"kubernetes.io/projected/7b18ea7d-8f47-450b-aa4b-0b75fc0c0581-kube-api-access-9bc8t\") pod \"glance-operator-controller-manager-67dd55ff59-n89p8\" (UID: \"7b18ea7d-8f47-450b-aa4b-0b75fc0c0581\") " pod="openstack-operators/glance-operator-controller-manager-67dd55ff59-n89p8" Jan 27 18:56:42 crc kubenswrapper[4853]: I0127 18:56:42.792951 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6lbtv\" (UniqueName: \"kubernetes.io/projected/01b08d09-41bb-4a7a-9af2-7fe597572169-kube-api-access-6lbtv\") pod \"horizon-operator-controller-manager-77d5c5b54f-4pmv9\" (UID: \"01b08d09-41bb-4a7a-9af2-7fe597572169\") " pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-4pmv9" Jan 27 18:56:42 crc kubenswrapper[4853]: I0127 18:56:42.797738 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w2bz4\" (UniqueName: \"kubernetes.io/projected/89a3cd80-89b0-41f9-a469-ef001d9be747-kube-api-access-w2bz4\") pod \"heat-operator-controller-manager-575ffb885b-bx595\" (UID: \"89a3cd80-89b0-41f9-a469-ef001d9be747\") " pod="openstack-operators/heat-operator-controller-manager-575ffb885b-bx595" Jan 27 18:56:42 crc kubenswrapper[4853]: I0127 18:56:42.798425 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z5hjz\" (UniqueName: \"kubernetes.io/projected/e279285c-c536-46b4-b133-7c23811a725a-kube-api-access-z5hjz\") pod \"ironic-operator-controller-manager-768b776ffb-dj2lw\" (UID: \"e279285c-c536-46b4-b133-7c23811a725a\") " pod="openstack-operators/ironic-operator-controller-manager-768b776ffb-dj2lw" Jan 27 18:56:42 crc kubenswrapper[4853]: I0127 18:56:42.811183 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/neutron-operator-controller-manager-7ffd8d76d4-gvm5r"] Jan 27 18:56:42 crc kubenswrapper[4853]: I0127 18:56:42.811985 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-7ffd8d76d4-gvm5r" Jan 27 18:56:42 crc kubenswrapper[4853]: I0127 18:56:42.812406 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-7ffd8d76d4-gvm5r"] Jan 27 18:56:42 crc kubenswrapper[4853]: I0127 18:56:42.820685 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"neutron-operator-controller-manager-dockercfg-5xb5j" Jan 27 18:56:42 crc kubenswrapper[4853]: I0127 18:56:42.832900 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9dwkx\" (UniqueName: \"kubernetes.io/projected/bee4ca26-dd1a-4747-8bf3-f152d8236270-kube-api-access-9dwkx\") pod \"keystone-operator-controller-manager-55f684fd56-flf9f\" (UID: \"bee4ca26-dd1a-4747-8bf3-f152d8236270\") " pod="openstack-operators/keystone-operator-controller-manager-55f684fd56-flf9f" Jan 27 18:56:42 crc kubenswrapper[4853]: I0127 18:56:42.837532 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-controller-manager-ddcbfd695-mrb2s"] Jan 27 18:56:42 crc kubenswrapper[4853]: I0127 18:56:42.838378 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-ddcbfd695-mrb2s" Jan 27 18:56:42 crc kubenswrapper[4853]: I0127 18:56:42.844101 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-575ffb885b-bx595" Jan 27 18:56:42 crc kubenswrapper[4853]: I0127 18:56:42.852520 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"nova-operator-controller-manager-dockercfg-pc26s" Jan 27 18:56:42 crc kubenswrapper[4853]: I0127 18:56:42.857329 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/octavia-operator-controller-manager-7875d7675-cq4p4"] Jan 27 18:56:42 crc kubenswrapper[4853]: I0127 18:56:42.858183 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-7875d7675-cq4p4" Jan 27 18:56:42 crc kubenswrapper[4853]: I0127 18:56:42.864509 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"octavia-operator-controller-manager-dockercfg-qrxnl" Jan 27 18:56:42 crc kubenswrapper[4853]: I0127 18:56:42.870443 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f6wjj\" (UniqueName: \"kubernetes.io/projected/0a29796d-a7c3-480a-8379-4d4e7731d5b3-kube-api-access-f6wjj\") pod \"manila-operator-controller-manager-849fcfbb6b-dbzp2\" (UID: \"0a29796d-a7c3-480a-8379-4d4e7731d5b3\") " pod="openstack-operators/manila-operator-controller-manager-849fcfbb6b-dbzp2" Jan 27 18:56:42 crc kubenswrapper[4853]: I0127 18:56:42.870495 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6dc4h\" (UniqueName: \"kubernetes.io/projected/a5adf651-f6c5-4b00-a32f-bbd1ac9d5b43-kube-api-access-6dc4h\") pod \"mariadb-operator-controller-manager-6b9fb5fdcb-qkdmn\" (UID: \"a5adf651-f6c5-4b00-a32f-bbd1ac9d5b43\") " pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-qkdmn" Jan 27 18:56:42 crc kubenswrapper[4853]: I0127 18:56:42.870511 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gxwl5\" (UniqueName: \"kubernetes.io/projected/ace486ae-a8c2-4aca-8719-528ecbed879f-kube-api-access-gxwl5\") pod \"neutron-operator-controller-manager-7ffd8d76d4-gvm5r\" (UID: \"ace486ae-a8c2-4aca-8719-528ecbed879f\") " pod="openstack-operators/neutron-operator-controller-manager-7ffd8d76d4-gvm5r" Jan 27 18:56:42 crc kubenswrapper[4853]: I0127 18:56:42.870580 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/613d8e60-1314-45a2-8bcc-250151f708d1-cert\") pod \"infra-operator-controller-manager-7d75bc88d5-qrs25\" (UID: \"613d8e60-1314-45a2-8bcc-250151f708d1\") " pod="openstack-operators/infra-operator-controller-manager-7d75bc88d5-qrs25" Jan 27 18:56:42 crc kubenswrapper[4853]: I0127 18:56:42.870633 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nfdkz\" (UniqueName: \"kubernetes.io/projected/613d8e60-1314-45a2-8bcc-250151f708d1-kube-api-access-nfdkz\") pod \"infra-operator-controller-manager-7d75bc88d5-qrs25\" (UID: \"613d8e60-1314-45a2-8bcc-250151f708d1\") " pod="openstack-operators/infra-operator-controller-manager-7d75bc88d5-qrs25" Jan 27 18:56:42 crc kubenswrapper[4853]: E0127 18:56:42.870999 4853 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 27 18:56:42 crc kubenswrapper[4853]: E0127 18:56:42.871041 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/613d8e60-1314-45a2-8bcc-250151f708d1-cert podName:613d8e60-1314-45a2-8bcc-250151f708d1 nodeName:}" failed. No retries permitted until 2026-01-27 18:56:43.371026871 +0000 UTC m=+845.833569754 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/613d8e60-1314-45a2-8bcc-250151f708d1-cert") pod "infra-operator-controller-manager-7d75bc88d5-qrs25" (UID: "613d8e60-1314-45a2-8bcc-250151f708d1") : secret "infra-operator-webhook-server-cert" not found Jan 27 18:56:42 crc kubenswrapper[4853]: I0127 18:56:42.919436 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-ddcbfd695-mrb2s"] Jan 27 18:56:42 crc kubenswrapper[4853]: I0127 18:56:42.920818 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-4pmv9" Jan 27 18:56:42 crc kubenswrapper[4853]: I0127 18:56:42.949647 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-7875d7675-cq4p4"] Jan 27 18:56:42 crc kubenswrapper[4853]: I0127 18:56:42.960825 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-768b776ffb-dj2lw" Jan 27 18:56:42 crc kubenswrapper[4853]: I0127 18:56:42.972547 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9dwkx\" (UniqueName: \"kubernetes.io/projected/bee4ca26-dd1a-4747-8bf3-f152d8236270-kube-api-access-9dwkx\") pod \"keystone-operator-controller-manager-55f684fd56-flf9f\" (UID: \"bee4ca26-dd1a-4747-8bf3-f152d8236270\") " pod="openstack-operators/keystone-operator-controller-manager-55f684fd56-flf9f" Jan 27 18:56:42 crc kubenswrapper[4853]: I0127 18:56:42.972662 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f6wjj\" (UniqueName: \"kubernetes.io/projected/0a29796d-a7c3-480a-8379-4d4e7731d5b3-kube-api-access-f6wjj\") pod \"manila-operator-controller-manager-849fcfbb6b-dbzp2\" (UID: \"0a29796d-a7c3-480a-8379-4d4e7731d5b3\") " pod="openstack-operators/manila-operator-controller-manager-849fcfbb6b-dbzp2" Jan 27 18:56:42 crc kubenswrapper[4853]: I0127 18:56:42.972712 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6dc4h\" (UniqueName: \"kubernetes.io/projected/a5adf651-f6c5-4b00-a32f-bbd1ac9d5b43-kube-api-access-6dc4h\") pod \"mariadb-operator-controller-manager-6b9fb5fdcb-qkdmn\" (UID: \"a5adf651-f6c5-4b00-a32f-bbd1ac9d5b43\") " pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-qkdmn" Jan 27 18:56:42 crc kubenswrapper[4853]: I0127 18:56:42.972738 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gxwl5\" (UniqueName: \"kubernetes.io/projected/ace486ae-a8c2-4aca-8719-528ecbed879f-kube-api-access-gxwl5\") pod \"neutron-operator-controller-manager-7ffd8d76d4-gvm5r\" (UID: \"ace486ae-a8c2-4aca-8719-528ecbed879f\") " pod="openstack-operators/neutron-operator-controller-manager-7ffd8d76d4-gvm5r" Jan 27 18:56:42 crc kubenswrapper[4853]: I0127 18:56:42.973193 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-992dg\" (UniqueName: \"kubernetes.io/projected/85e5832e-902f-4f65-b659-60abf5d14654-kube-api-access-992dg\") pod \"octavia-operator-controller-manager-7875d7675-cq4p4\" (UID: \"85e5832e-902f-4f65-b659-60abf5d14654\") " pod="openstack-operators/octavia-operator-controller-manager-7875d7675-cq4p4" Jan 27 18:56:42 crc kubenswrapper[4853]: I0127 18:56:42.973297 4853 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qbnk4\" (UniqueName: \"kubernetes.io/projected/d9757c33-a50c-4fa4-ab8d-270c2bed1459-kube-api-access-qbnk4\") pod \"nova-operator-controller-manager-ddcbfd695-mrb2s\" (UID: \"d9757c33-a50c-4fa4-ab8d-270c2bed1459\") " pod="openstack-operators/nova-operator-controller-manager-ddcbfd695-mrb2s" Jan 27 18:56:42 crc kubenswrapper[4853]: I0127 18:56:42.976202 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854gdmwx"] Jan 27 18:56:42 crc kubenswrapper[4853]: I0127 18:56:42.979616 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854gdmwx" Jan 27 18:56:42 crc kubenswrapper[4853]: I0127 18:56:42.989130 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nfdkz\" (UniqueName: \"kubernetes.io/projected/613d8e60-1314-45a2-8bcc-250151f708d1-kube-api-access-nfdkz\") pod \"infra-operator-controller-manager-7d75bc88d5-qrs25\" (UID: \"613d8e60-1314-45a2-8bcc-250151f708d1\") " pod="openstack-operators/infra-operator-controller-manager-7d75bc88d5-qrs25" Jan 27 18:56:42 crc kubenswrapper[4853]: I0127 18:56:42.997541 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert" Jan 27 18:56:42 crc kubenswrapper[4853]: I0127 18:56:42.997778 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-jjfcv" Jan 27 18:56:43 crc kubenswrapper[4853]: I0127 18:56:43.004401 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ovn-operator-controller-manager-6f75f45d54-qgstv"] Jan 27 18:56:43 crc kubenswrapper[4853]: I0127 18:56:43.014696 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-qgstv" Jan 27 18:56:43 crc kubenswrapper[4853]: I0127 18:56:43.011968 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gxwl5\" (UniqueName: \"kubernetes.io/projected/ace486ae-a8c2-4aca-8719-528ecbed879f-kube-api-access-gxwl5\") pod \"neutron-operator-controller-manager-7ffd8d76d4-gvm5r\" (UID: \"ace486ae-a8c2-4aca-8719-528ecbed879f\") " pod="openstack-operators/neutron-operator-controller-manager-7ffd8d76d4-gvm5r" Jan 27 18:56:43 crc kubenswrapper[4853]: I0127 18:56:43.018155 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-6f75f45d54-qgstv"] Jan 27 18:56:43 crc kubenswrapper[4853]: I0127 18:56:43.019752 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ovn-operator-controller-manager-dockercfg-86gmf" Jan 27 18:56:43 crc kubenswrapper[4853]: I0127 18:56:43.030484 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854gdmwx"] Jan 27 18:56:43 crc kubenswrapper[4853]: I0127 18:56:43.040689 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f6wjj\" (UniqueName: \"kubernetes.io/projected/0a29796d-a7c3-480a-8379-4d4e7731d5b3-kube-api-access-f6wjj\") pod \"manila-operator-controller-manager-849fcfbb6b-dbzp2\" (UID: \"0a29796d-a7c3-480a-8379-4d4e7731d5b3\") " pod="openstack-operators/manila-operator-controller-manager-849fcfbb6b-dbzp2" Jan 27 18:56:43 crc kubenswrapper[4853]: I0127 18:56:43.041890 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9dwkx\" (UniqueName: \"kubernetes.io/projected/bee4ca26-dd1a-4747-8bf3-f152d8236270-kube-api-access-9dwkx\") pod \"keystone-operator-controller-manager-55f684fd56-flf9f\" (UID: \"bee4ca26-dd1a-4747-8bf3-f152d8236270\") " pod="openstack-operators/keystone-operator-controller-manager-55f684fd56-flf9f" Jan 27 18:56:43 crc kubenswrapper[4853]: I0127 18:56:43.046166 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/placement-operator-controller-manager-79d5ccc684-gl44q"] Jan 27 18:56:43 crc kubenswrapper[4853]: I0127 18:56:43.046890 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-gl44q" Jan 27 18:56:43 crc kubenswrapper[4853]: I0127 18:56:43.070780 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-67dd55ff59-n89p8" Jan 27 18:56:43 crc kubenswrapper[4853]: I0127 18:56:43.083821 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"placement-operator-controller-manager-dockercfg-5pg58" Jan 27 18:56:43 crc kubenswrapper[4853]: I0127 18:56:43.084890 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6dc4h\" (UniqueName: \"kubernetes.io/projected/a5adf651-f6c5-4b00-a32f-bbd1ac9d5b43-kube-api-access-6dc4h\") pod \"mariadb-operator-controller-manager-6b9fb5fdcb-qkdmn\" (UID: \"a5adf651-f6c5-4b00-a32f-bbd1ac9d5b43\") " pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-qkdmn" Jan 27 18:56:43 crc kubenswrapper[4853]: I0127 18:56:43.084992 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-79d5ccc684-gl44q"] Jan 27 18:56:43 crc kubenswrapper[4853]: I0127 18:56:43.092352 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-992dg\" (UniqueName: \"kubernetes.io/projected/85e5832e-902f-4f65-b659-60abf5d14654-kube-api-access-992dg\") pod \"octavia-operator-controller-manager-7875d7675-cq4p4\" (UID: \"85e5832e-902f-4f65-b659-60abf5d14654\") " pod="openstack-operators/octavia-operator-controller-manager-7875d7675-cq4p4" Jan 27 18:56:43 crc kubenswrapper[4853]: I0127 18:56:43.092424 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vvkwc\" (UniqueName: \"kubernetes.io/projected/f593e788-ce4a-47ad-a08c-96e1ec0cc92c-kube-api-access-vvkwc\") pod \"ovn-operator-controller-manager-6f75f45d54-qgstv\" (UID: \"f593e788-ce4a-47ad-a08c-96e1ec0cc92c\") " pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-qgstv" Jan 27 18:56:43 crc kubenswrapper[4853]: I0127 18:56:43.092537 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qbnk4\" (UniqueName: \"kubernetes.io/projected/d9757c33-a50c-4fa4-ab8d-270c2bed1459-kube-api-access-qbnk4\") pod \"nova-operator-controller-manager-ddcbfd695-mrb2s\" (UID: \"d9757c33-a50c-4fa4-ab8d-270c2bed1459\") " pod="openstack-operators/nova-operator-controller-manager-ddcbfd695-mrb2s" Jan 27 18:56:43 crc kubenswrapper[4853]: I0127 18:56:43.092644 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ct57m\" (UniqueName: \"kubernetes.io/projected/9bd5a06a-f084-42ba-8f88-9be1cee0554a-kube-api-access-ct57m\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854gdmwx\" (UID: \"9bd5a06a-f084-42ba-8f88-9be1cee0554a\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854gdmwx" Jan 27 18:56:43 crc kubenswrapper[4853]: I0127 18:56:43.092699 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/9bd5a06a-f084-42ba-8f88-9be1cee0554a-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854gdmwx\" (UID: \"9bd5a06a-f084-42ba-8f88-9be1cee0554a\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854gdmwx" Jan 27 18:56:43 crc kubenswrapper[4853]: I0127 18:56:43.138745 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-992dg\" (UniqueName: 
\"kubernetes.io/projected/85e5832e-902f-4f65-b659-60abf5d14654-kube-api-access-992dg\") pod \"octavia-operator-controller-manager-7875d7675-cq4p4\" (UID: \"85e5832e-902f-4f65-b659-60abf5d14654\") " pod="openstack-operators/octavia-operator-controller-manager-7875d7675-cq4p4" Jan 27 18:56:43 crc kubenswrapper[4853]: I0127 18:56:43.147977 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-799bc87c89-mzmbv"] Jan 27 18:56:43 crc kubenswrapper[4853]: I0127 18:56:43.151042 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-799bc87c89-mzmbv" Jan 27 18:56:43 crc kubenswrapper[4853]: I0127 18:56:43.147991 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qbnk4\" (UniqueName: \"kubernetes.io/projected/d9757c33-a50c-4fa4-ab8d-270c2bed1459-kube-api-access-qbnk4\") pod \"nova-operator-controller-manager-ddcbfd695-mrb2s\" (UID: \"d9757c33-a50c-4fa4-ab8d-270c2bed1459\") " pod="openstack-operators/nova-operator-controller-manager-ddcbfd695-mrb2s" Jan 27 18:56:43 crc kubenswrapper[4853]: I0127 18:56:43.157253 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"telemetry-operator-controller-manager-dockercfg-xcpxb" Jan 27 18:56:43 crc kubenswrapper[4853]: I0127 18:56:43.157863 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-qkdmn" Jan 27 18:56:43 crc kubenswrapper[4853]: I0127 18:56:43.166962 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-7ffd8d76d4-gvm5r" Jan 27 18:56:43 crc kubenswrapper[4853]: I0127 18:56:43.196154 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kn499\" (UniqueName: \"kubernetes.io/projected/7d1a71be-07cb-43e0-8584-75e5c48f4175-kube-api-access-kn499\") pod \"placement-operator-controller-manager-79d5ccc684-gl44q\" (UID: \"7d1a71be-07cb-43e0-8584-75e5c48f4175\") " pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-gl44q" Jan 27 18:56:43 crc kubenswrapper[4853]: I0127 18:56:43.197092 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ct57m\" (UniqueName: \"kubernetes.io/projected/9bd5a06a-f084-42ba-8f88-9be1cee0554a-kube-api-access-ct57m\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854gdmwx\" (UID: \"9bd5a06a-f084-42ba-8f88-9be1cee0554a\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854gdmwx" Jan 27 18:56:43 crc kubenswrapper[4853]: I0127 18:56:43.197213 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/9bd5a06a-f084-42ba-8f88-9be1cee0554a-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854gdmwx\" (UID: \"9bd5a06a-f084-42ba-8f88-9be1cee0554a\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854gdmwx" Jan 27 18:56:43 crc kubenswrapper[4853]: I0127 18:56:43.197356 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zxktg\" (UniqueName: \"kubernetes.io/projected/e32b4f39-5c23-4e91-92bc-ffd6b7694a5a-kube-api-access-zxktg\") pod \"telemetry-operator-controller-manager-799bc87c89-mzmbv\" (UID: 
\"e32b4f39-5c23-4e91-92bc-ffd6b7694a5a\") " pod="openstack-operators/telemetry-operator-controller-manager-799bc87c89-mzmbv" Jan 27 18:56:43 crc kubenswrapper[4853]: I0127 18:56:43.197438 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vvkwc\" (UniqueName: \"kubernetes.io/projected/f593e788-ce4a-47ad-a08c-96e1ec0cc92c-kube-api-access-vvkwc\") pod \"ovn-operator-controller-manager-6f75f45d54-qgstv\" (UID: \"f593e788-ce4a-47ad-a08c-96e1ec0cc92c\") " pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-qgstv" Jan 27 18:56:43 crc kubenswrapper[4853]: E0127 18:56:43.197954 4853 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 27 18:56:43 crc kubenswrapper[4853]: E0127 18:56:43.198059 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9bd5a06a-f084-42ba-8f88-9be1cee0554a-cert podName:9bd5a06a-f084-42ba-8f88-9be1cee0554a nodeName:}" failed. No retries permitted until 2026-01-27 18:56:43.698045376 +0000 UTC m=+846.160588259 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/9bd5a06a-f084-42ba-8f88-9be1cee0554a-cert") pod "openstack-baremetal-operator-controller-manager-6b68b8b854gdmwx" (UID: "9bd5a06a-f084-42ba-8f88-9be1cee0554a") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 27 18:56:43 crc kubenswrapper[4853]: I0127 18:56:43.208972 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-controller-manager-547cbdb99f-bn7wr"] Jan 27 18:56:43 crc kubenswrapper[4853]: I0127 18:56:43.209820 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-bn7wr" Jan 27 18:56:43 crc kubenswrapper[4853]: I0127 18:56:43.210353 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-ddcbfd695-mrb2s" Jan 27 18:56:43 crc kubenswrapper[4853]: I0127 18:56:43.214473 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-rxcld" Jan 27 18:56:43 crc kubenswrapper[4853]: I0127 18:56:43.225871 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ct57m\" (UniqueName: \"kubernetes.io/projected/9bd5a06a-f084-42ba-8f88-9be1cee0554a-kube-api-access-ct57m\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854gdmwx\" (UID: \"9bd5a06a-f084-42ba-8f88-9be1cee0554a\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854gdmwx" Jan 27 18:56:43 crc kubenswrapper[4853]: I0127 18:56:43.225930 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-799bc87c89-mzmbv"] Jan 27 18:56:43 crc kubenswrapper[4853]: I0127 18:56:43.238868 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vvkwc\" (UniqueName: \"kubernetes.io/projected/f593e788-ce4a-47ad-a08c-96e1ec0cc92c-kube-api-access-vvkwc\") pod \"ovn-operator-controller-manager-6f75f45d54-qgstv\" (UID: \"f593e788-ce4a-47ad-a08c-96e1ec0cc92c\") " pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-qgstv" Jan 27 18:56:43 crc kubenswrapper[4853]: I0127 18:56:43.239106 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/test-operator-controller-manager-69797bbcbd-cpsgc"] Jan 27 18:56:43 crc kubenswrapper[4853]: I0127 18:56:43.239991 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-cpsgc" Jan 27 18:56:43 crc kubenswrapper[4853]: I0127 18:56:43.241327 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-7875d7675-cq4p4" Jan 27 18:56:43 crc kubenswrapper[4853]: I0127 18:56:43.241905 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"test-operator-controller-manager-dockercfg-db2rl" Jan 27 18:56:43 crc kubenswrapper[4853]: I0127 18:56:43.250069 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-547cbdb99f-bn7wr"] Jan 27 18:56:43 crc kubenswrapper[4853]: I0127 18:56:43.266467 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-69797bbcbd-cpsgc"] Jan 27 18:56:43 crc kubenswrapper[4853]: I0127 18:56:43.279159 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-55f684fd56-flf9f" Jan 27 18:56:43 crc kubenswrapper[4853]: I0127 18:56:43.282779 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-controller-manager-767b8bc766-d4dcp"] Jan 27 18:56:43 crc kubenswrapper[4853]: I0127 18:56:43.284148 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-767b8bc766-d4dcp" Jan 27 18:56:43 crc kubenswrapper[4853]: I0127 18:56:43.287733 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-767b8bc766-d4dcp"] Jan 27 18:56:43 crc kubenswrapper[4853]: I0127 18:56:43.301296 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zdrc7\" (UniqueName: \"kubernetes.io/projected/8621d6dd-2bac-4631-bad9-ed1f5ce6c9b5-kube-api-access-zdrc7\") pod \"test-operator-controller-manager-69797bbcbd-cpsgc\" (UID: \"8621d6dd-2bac-4631-bad9-ed1f5ce6c9b5\") " pod="openstack-operators/test-operator-controller-manager-69797bbcbd-cpsgc" Jan 27 18:56:43 crc kubenswrapper[4853]: I0127 18:56:43.301347 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kgxcx\" (UniqueName: \"kubernetes.io/projected/5b33f408-e905-4298-adfc-b113f89ecd36-kube-api-access-kgxcx\") pod \"swift-operator-controller-manager-547cbdb99f-bn7wr\" (UID: \"5b33f408-e905-4298-adfc-b113f89ecd36\") " pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-bn7wr" Jan 27 18:56:43 crc kubenswrapper[4853]: I0127 18:56:43.301377 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zxktg\" (UniqueName: \"kubernetes.io/projected/e32b4f39-5c23-4e91-92bc-ffd6b7694a5a-kube-api-access-zxktg\") pod \"telemetry-operator-controller-manager-799bc87c89-mzmbv\" (UID: \"e32b4f39-5c23-4e91-92bc-ffd6b7694a5a\") " pod="openstack-operators/telemetry-operator-controller-manager-799bc87c89-mzmbv" Jan 27 18:56:43 crc kubenswrapper[4853]: I0127 18:56:43.301452 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kn499\" (UniqueName: \"kubernetes.io/projected/7d1a71be-07cb-43e0-8584-75e5c48f4175-kube-api-access-kn499\") pod \"placement-operator-controller-manager-79d5ccc684-gl44q\" (UID: \"7d1a71be-07cb-43e0-8584-75e5c48f4175\") " pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-gl44q" Jan 27 18:56:43 crc kubenswrapper[4853]: I0127 18:56:43.313300 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-849fcfbb6b-dbzp2" Jan 27 18:56:43 crc kubenswrapper[4853]: I0127 18:56:43.314605 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-484qr" Jan 27 18:56:43 crc kubenswrapper[4853]: I0127 18:56:43.324593 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-manager-bf776578d-kb6wk"] Jan 27 18:56:43 crc kubenswrapper[4853]: I0127 18:56:43.325806 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-bf776578d-kb6wk" Jan 27 18:56:43 crc kubenswrapper[4853]: I0127 18:56:43.330376 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zxktg\" (UniqueName: \"kubernetes.io/projected/e32b4f39-5c23-4e91-92bc-ffd6b7694a5a-kube-api-access-zxktg\") pod \"telemetry-operator-controller-manager-799bc87c89-mzmbv\" (UID: \"e32b4f39-5c23-4e91-92bc-ffd6b7694a5a\") " pod="openstack-operators/telemetry-operator-controller-manager-799bc87c89-mzmbv" Jan 27 18:56:43 crc kubenswrapper[4853]: I0127 18:56:43.331136 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert" Jan 27 18:56:43 crc kubenswrapper[4853]: I0127 18:56:43.331291 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-wjxtz" Jan 27 18:56:43 crc kubenswrapper[4853]: I0127 18:56:43.331399 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"metrics-server-cert" Jan 27 18:56:43 crc kubenswrapper[4853]: I0127 18:56:43.334625 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-bf776578d-kb6wk"] Jan 27 18:56:43 crc kubenswrapper[4853]: I0127 18:56:43.336156 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kn499\" (UniqueName: \"kubernetes.io/projected/7d1a71be-07cb-43e0-8584-75e5c48f4175-kube-api-access-kn499\") pod \"placement-operator-controller-manager-79d5ccc684-gl44q\" (UID: \"7d1a71be-07cb-43e0-8584-75e5c48f4175\") " pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-gl44q" Jan 27 18:56:43 crc kubenswrapper[4853]: I0127 18:56:43.366184 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-2ls6m"] Jan 27 18:56:43 crc kubenswrapper[4853]: I0127 18:56:43.367178 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-2ls6m" Jan 27 18:56:43 crc kubenswrapper[4853]: I0127 18:56:43.370909 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-controller-manager-dockercfg-xg4ck" Jan 27 18:56:43 crc kubenswrapper[4853]: I0127 18:56:43.371521 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-2ls6m"] Jan 27 18:56:43 crc kubenswrapper[4853]: I0127 18:56:43.402619 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/fede2ab9-a2b5-45f5-bac7-daa8d576d23f-webhook-certs\") pod \"openstack-operator-controller-manager-bf776578d-kb6wk\" (UID: \"fede2ab9-a2b5-45f5-bac7-daa8d576d23f\") " pod="openstack-operators/openstack-operator-controller-manager-bf776578d-kb6wk" Jan 27 18:56:43 crc kubenswrapper[4853]: I0127 18:56:43.402670 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wmj5b\" (UniqueName: \"kubernetes.io/projected/fede2ab9-a2b5-45f5-bac7-daa8d576d23f-kube-api-access-wmj5b\") pod \"openstack-operator-controller-manager-bf776578d-kb6wk\" (UID: \"fede2ab9-a2b5-45f5-bac7-daa8d576d23f\") " pod="openstack-operators/openstack-operator-controller-manager-bf776578d-kb6wk" Jan 27 18:56:43 crc kubenswrapper[4853]: I0127 18:56:43.402720 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zdrc7\" (UniqueName: \"kubernetes.io/projected/8621d6dd-2bac-4631-bad9-ed1f5ce6c9b5-kube-api-access-zdrc7\") pod \"test-operator-controller-manager-69797bbcbd-cpsgc\" (UID: \"8621d6dd-2bac-4631-bad9-ed1f5ce6c9b5\") " pod="openstack-operators/test-operator-controller-manager-69797bbcbd-cpsgc" Jan 27 18:56:43 crc kubenswrapper[4853]: I0127 18:56:43.402769 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kgxcx\" (UniqueName: \"kubernetes.io/projected/5b33f408-e905-4298-adfc-b113f89ecd36-kube-api-access-kgxcx\") pod \"swift-operator-controller-manager-547cbdb99f-bn7wr\" (UID: \"5b33f408-e905-4298-adfc-b113f89ecd36\") " pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-bn7wr" Jan 27 18:56:43 crc kubenswrapper[4853]: I0127 18:56:43.402838 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/613d8e60-1314-45a2-8bcc-250151f708d1-cert\") pod \"infra-operator-controller-manager-7d75bc88d5-qrs25\" (UID: \"613d8e60-1314-45a2-8bcc-250151f708d1\") " pod="openstack-operators/infra-operator-controller-manager-7d75bc88d5-qrs25" Jan 27 18:56:43 crc kubenswrapper[4853]: I0127 18:56:43.402894 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/fede2ab9-a2b5-45f5-bac7-daa8d576d23f-metrics-certs\") pod \"openstack-operator-controller-manager-bf776578d-kb6wk\" (UID: \"fede2ab9-a2b5-45f5-bac7-daa8d576d23f\") " pod="openstack-operators/openstack-operator-controller-manager-bf776578d-kb6wk" Jan 27 18:56:43 crc kubenswrapper[4853]: I0127 18:56:43.402956 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w6f66\" (UniqueName: \"kubernetes.io/projected/aacb2032-25f3-4faf-a0ca-f980411b4ae2-kube-api-access-w6f66\") pod 
\"watcher-operator-controller-manager-767b8bc766-d4dcp\" (UID: \"aacb2032-25f3-4faf-a0ca-f980411b4ae2\") " pod="openstack-operators/watcher-operator-controller-manager-767b8bc766-d4dcp" Jan 27 18:56:43 crc kubenswrapper[4853]: E0127 18:56:43.403024 4853 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 27 18:56:43 crc kubenswrapper[4853]: E0127 18:56:43.403095 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/613d8e60-1314-45a2-8bcc-250151f708d1-cert podName:613d8e60-1314-45a2-8bcc-250151f708d1 nodeName:}" failed. No retries permitted until 2026-01-27 18:56:44.403073326 +0000 UTC m=+846.865616209 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/613d8e60-1314-45a2-8bcc-250151f708d1-cert") pod "infra-operator-controller-manager-7d75bc88d5-qrs25" (UID: "613d8e60-1314-45a2-8bcc-250151f708d1") : secret "infra-operator-webhook-server-cert" not found Jan 27 18:56:43 crc kubenswrapper[4853]: I0127 18:56:43.414334 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-qgstv" Jan 27 18:56:43 crc kubenswrapper[4853]: I0127 18:56:43.425579 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zdrc7\" (UniqueName: \"kubernetes.io/projected/8621d6dd-2bac-4631-bad9-ed1f5ce6c9b5-kube-api-access-zdrc7\") pod \"test-operator-controller-manager-69797bbcbd-cpsgc\" (UID: \"8621d6dd-2bac-4631-bad9-ed1f5ce6c9b5\") " pod="openstack-operators/test-operator-controller-manager-69797bbcbd-cpsgc" Jan 27 18:56:43 crc kubenswrapper[4853]: I0127 18:56:43.429516 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-gl44q" Jan 27 18:56:43 crc kubenswrapper[4853]: I0127 18:56:43.430000 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kgxcx\" (UniqueName: \"kubernetes.io/projected/5b33f408-e905-4298-adfc-b113f89ecd36-kube-api-access-kgxcx\") pod \"swift-operator-controller-manager-547cbdb99f-bn7wr\" (UID: \"5b33f408-e905-4298-adfc-b113f89ecd36\") " pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-bn7wr" Jan 27 18:56:43 crc kubenswrapper[4853]: I0127 18:56:43.507056 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-brx4r\" (UniqueName: \"kubernetes.io/projected/98c9ef8d-ccf0-4c4e-83f3-53451532f0ad-kube-api-access-brx4r\") pod \"rabbitmq-cluster-operator-manager-668c99d594-2ls6m\" (UID: \"98c9ef8d-ccf0-4c4e-83f3-53451532f0ad\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-2ls6m" Jan 27 18:56:43 crc kubenswrapper[4853]: I0127 18:56:43.507136 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w6f66\" (UniqueName: \"kubernetes.io/projected/aacb2032-25f3-4faf-a0ca-f980411b4ae2-kube-api-access-w6f66\") pod \"watcher-operator-controller-manager-767b8bc766-d4dcp\" (UID: \"aacb2032-25f3-4faf-a0ca-f980411b4ae2\") " pod="openstack-operators/watcher-operator-controller-manager-767b8bc766-d4dcp" Jan 27 18:56:43 crc kubenswrapper[4853]: I0127 18:56:43.507173 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/fede2ab9-a2b5-45f5-bac7-daa8d576d23f-webhook-certs\") pod \"openstack-operator-controller-manager-bf776578d-kb6wk\" (UID: \"fede2ab9-a2b5-45f5-bac7-daa8d576d23f\") " pod="openstack-operators/openstack-operator-controller-manager-bf776578d-kb6wk" Jan 27 18:56:43 crc kubenswrapper[4853]: I0127 18:56:43.507199 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wmj5b\" (UniqueName: \"kubernetes.io/projected/fede2ab9-a2b5-45f5-bac7-daa8d576d23f-kube-api-access-wmj5b\") pod \"openstack-operator-controller-manager-bf776578d-kb6wk\" (UID: \"fede2ab9-a2b5-45f5-bac7-daa8d576d23f\") " pod="openstack-operators/openstack-operator-controller-manager-bf776578d-kb6wk" Jan 27 18:56:43 crc kubenswrapper[4853]: I0127 18:56:43.507317 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/fede2ab9-a2b5-45f5-bac7-daa8d576d23f-metrics-certs\") pod \"openstack-operator-controller-manager-bf776578d-kb6wk\" (UID: \"fede2ab9-a2b5-45f5-bac7-daa8d576d23f\") " pod="openstack-operators/openstack-operator-controller-manager-bf776578d-kb6wk" Jan 27 18:56:43 crc kubenswrapper[4853]: E0127 18:56:43.507463 4853 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 27 18:56:43 crc kubenswrapper[4853]: E0127 18:56:43.507519 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fede2ab9-a2b5-45f5-bac7-daa8d576d23f-metrics-certs podName:fede2ab9-a2b5-45f5-bac7-daa8d576d23f nodeName:}" failed. No retries permitted until 2026-01-27 18:56:44.007501245 +0000 UTC m=+846.470044128 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/fede2ab9-a2b5-45f5-bac7-daa8d576d23f-metrics-certs") pod "openstack-operator-controller-manager-bf776578d-kb6wk" (UID: "fede2ab9-a2b5-45f5-bac7-daa8d576d23f") : secret "metrics-server-cert" not found Jan 27 18:56:43 crc kubenswrapper[4853]: E0127 18:56:43.507685 4853 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 27 18:56:43 crc kubenswrapper[4853]: E0127 18:56:43.507716 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fede2ab9-a2b5-45f5-bac7-daa8d576d23f-webhook-certs podName:fede2ab9-a2b5-45f5-bac7-daa8d576d23f nodeName:}" failed. No retries permitted until 2026-01-27 18:56:44.007707921 +0000 UTC m=+846.470250804 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/fede2ab9-a2b5-45f5-bac7-daa8d576d23f-webhook-certs") pod "openstack-operator-controller-manager-bf776578d-kb6wk" (UID: "fede2ab9-a2b5-45f5-bac7-daa8d576d23f") : secret "webhook-server-cert" not found Jan 27 18:56:43 crc kubenswrapper[4853]: I0127 18:56:43.524252 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wmj5b\" (UniqueName: \"kubernetes.io/projected/fede2ab9-a2b5-45f5-bac7-daa8d576d23f-kube-api-access-wmj5b\") pod \"openstack-operator-controller-manager-bf776578d-kb6wk\" (UID: \"fede2ab9-a2b5-45f5-bac7-daa8d576d23f\") " pod="openstack-operators/openstack-operator-controller-manager-bf776578d-kb6wk" Jan 27 18:56:43 crc kubenswrapper[4853]: I0127 18:56:43.532193 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w6f66\" (UniqueName: \"kubernetes.io/projected/aacb2032-25f3-4faf-a0ca-f980411b4ae2-kube-api-access-w6f66\") pod \"watcher-operator-controller-manager-767b8bc766-d4dcp\" (UID: \"aacb2032-25f3-4faf-a0ca-f980411b4ae2\") " pod="openstack-operators/watcher-operator-controller-manager-767b8bc766-d4dcp" Jan 27 18:56:43 crc kubenswrapper[4853]: I0127 18:56:43.583268 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-799bc87c89-mzmbv" Jan 27 18:56:43 crc kubenswrapper[4853]: I0127 18:56:43.589164 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-655bf9cfbb-sj29r"] Jan 27 18:56:43 crc kubenswrapper[4853]: I0127 18:56:43.608601 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-brx4r\" (UniqueName: \"kubernetes.io/projected/98c9ef8d-ccf0-4c4e-83f3-53451532f0ad-kube-api-access-brx4r\") pod \"rabbitmq-cluster-operator-manager-668c99d594-2ls6m\" (UID: \"98c9ef8d-ccf0-4c4e-83f3-53451532f0ad\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-2ls6m" Jan 27 18:56:43 crc kubenswrapper[4853]: I0127 18:56:43.630770 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-bn7wr" Jan 27 18:56:43 crc kubenswrapper[4853]: I0127 18:56:43.633858 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-brx4r\" (UniqueName: \"kubernetes.io/projected/98c9ef8d-ccf0-4c4e-83f3-53451532f0ad-kube-api-access-brx4r\") pod \"rabbitmq-cluster-operator-manager-668c99d594-2ls6m\" (UID: \"98c9ef8d-ccf0-4c4e-83f3-53451532f0ad\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-2ls6m" Jan 27 18:56:43 crc kubenswrapper[4853]: I0127 18:56:43.698589 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-cpsgc" Jan 27 18:56:43 crc kubenswrapper[4853]: I0127 18:56:43.709569 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/9bd5a06a-f084-42ba-8f88-9be1cee0554a-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854gdmwx\" (UID: \"9bd5a06a-f084-42ba-8f88-9be1cee0554a\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854gdmwx" Jan 27 18:56:43 crc kubenswrapper[4853]: E0127 18:56:43.709773 4853 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 27 18:56:43 crc kubenswrapper[4853]: E0127 18:56:43.709847 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9bd5a06a-f084-42ba-8f88-9be1cee0554a-cert podName:9bd5a06a-f084-42ba-8f88-9be1cee0554a nodeName:}" failed. No retries permitted until 2026-01-27 18:56:44.709827478 +0000 UTC m=+847.172370361 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/9bd5a06a-f084-42ba-8f88-9be1cee0554a-cert") pod "openstack-baremetal-operator-controller-manager-6b68b8b854gdmwx" (UID: "9bd5a06a-f084-42ba-8f88-9be1cee0554a") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 27 18:56:43 crc kubenswrapper[4853]: I0127 18:56:43.724415 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-65ff799cfd-jh7mx"] Jan 27 18:56:43 crc kubenswrapper[4853]: I0127 18:56:43.779812 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-767b8bc766-d4dcp" Jan 27 18:56:43 crc kubenswrapper[4853]: W0127 18:56:43.787333 4853 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf6e35929_3b14_49b4_9e0e_bbebc88c2ce2.slice/crio-8ace3038832b3e8809da23761d9f3f415a506958a27798b470b6641416343e23 WatchSource:0}: Error finding container 8ace3038832b3e8809da23761d9f3f415a506958a27798b470b6641416343e23: Status 404 returned error can't find the container with id 8ace3038832b3e8809da23761d9f3f415a506958a27798b470b6641416343e23 Jan 27 18:56:43 crc kubenswrapper[4853]: I0127 18:56:43.807975 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-2ls6m" Jan 27 18:56:43 crc kubenswrapper[4853]: I0127 18:56:43.857207 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-77554cdc5c-mn6nj"] Jan 27 18:56:43 crc kubenswrapper[4853]: I0127 18:56:43.890017 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-768b776ffb-dj2lw"] Jan 27 18:56:43 crc kubenswrapper[4853]: W0127 18:56:43.904639 4853 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode279285c_c536_46b4_b133_7c23811a725a.slice/crio-a54285ae2e70f589adc26577761ff00ba2ca9e11887c91964aa7dc1f120fe539 WatchSource:0}: Error finding container a54285ae2e70f589adc26577761ff00ba2ca9e11887c91964aa7dc1f120fe539: Status 404 returned error can't find the container with id a54285ae2e70f589adc26577761ff00ba2ca9e11887c91964aa7dc1f120fe539 Jan 27 18:56:43 crc kubenswrapper[4853]: I0127 18:56:43.906750 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-575ffb885b-bx595"] Jan 27 18:56:43 crc kubenswrapper[4853]: W0127 18:56:43.916570 4853 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod89a3cd80_89b0_41f9_a469_ef001d9be747.slice/crio-641d02a1a5a9895a9881fbaf230224eaaa99b6eaa6d3a00ff893f46cd8b2a8c2 WatchSource:0}: Error finding container 641d02a1a5a9895a9881fbaf230224eaaa99b6eaa6d3a00ff893f46cd8b2a8c2: Status 404 returned error can't find the container with id 641d02a1a5a9895a9881fbaf230224eaaa99b6eaa6d3a00ff893f46cd8b2a8c2 Jan 27 18:56:43 crc kubenswrapper[4853]: I0127 18:56:43.917881 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-77d5c5b54f-4pmv9"] Jan 27 18:56:44 crc kubenswrapper[4853]: I0127 18:56:44.015474 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/fede2ab9-a2b5-45f5-bac7-daa8d576d23f-metrics-certs\") pod \"openstack-operator-controller-manager-bf776578d-kb6wk\" (UID: \"fede2ab9-a2b5-45f5-bac7-daa8d576d23f\") " pod="openstack-operators/openstack-operator-controller-manager-bf776578d-kb6wk" Jan 27 18:56:44 crc kubenswrapper[4853]: I0127 18:56:44.015565 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/fede2ab9-a2b5-45f5-bac7-daa8d576d23f-webhook-certs\") pod \"openstack-operator-controller-manager-bf776578d-kb6wk\" (UID: \"fede2ab9-a2b5-45f5-bac7-daa8d576d23f\") " pod="openstack-operators/openstack-operator-controller-manager-bf776578d-kb6wk" Jan 27 18:56:44 crc kubenswrapper[4853]: E0127 18:56:44.015687 4853 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 27 18:56:44 crc kubenswrapper[4853]: E0127 18:56:44.015772 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fede2ab9-a2b5-45f5-bac7-daa8d576d23f-metrics-certs podName:fede2ab9-a2b5-45f5-bac7-daa8d576d23f nodeName:}" failed. No retries permitted until 2026-01-27 18:56:45.015752726 +0000 UTC m=+847.478295619 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/fede2ab9-a2b5-45f5-bac7-daa8d576d23f-metrics-certs") pod "openstack-operator-controller-manager-bf776578d-kb6wk" (UID: "fede2ab9-a2b5-45f5-bac7-daa8d576d23f") : secret "metrics-server-cert" not found Jan 27 18:56:44 crc kubenswrapper[4853]: E0127 18:56:44.015697 4853 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 27 18:56:44 crc kubenswrapper[4853]: E0127 18:56:44.015818 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fede2ab9-a2b5-45f5-bac7-daa8d576d23f-webhook-certs podName:fede2ab9-a2b5-45f5-bac7-daa8d576d23f nodeName:}" failed. No retries permitted until 2026-01-27 18:56:45.015810778 +0000 UTC m=+847.478353771 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/fede2ab9-a2b5-45f5-bac7-daa8d576d23f-webhook-certs") pod "openstack-operator-controller-manager-bf776578d-kb6wk" (UID: "fede2ab9-a2b5-45f5-bac7-daa8d576d23f") : secret "webhook-server-cert" not found Jan 27 18:56:44 crc kubenswrapper[4853]: W0127 18:56:44.161798 4853 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod85e5832e_902f_4f65_b659_60abf5d14654.slice/crio-3d0a4622e9a1d2a36c13c614c5e1233d698964dbccdb761f829bce6628093f2a WatchSource:0}: Error finding container 3d0a4622e9a1d2a36c13c614c5e1233d698964dbccdb761f829bce6628093f2a: Status 404 returned error can't find the container with id 3d0a4622e9a1d2a36c13c614c5e1233d698964dbccdb761f829bce6628093f2a Jan 27 18:56:44 crc kubenswrapper[4853]: I0127 18:56:44.167167 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-67dd55ff59-n89p8"] Jan 27 18:56:44 crc kubenswrapper[4853]: I0127 18:56:44.176891 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-7875d7675-cq4p4"] Jan 27 18:56:44 crc kubenswrapper[4853]: I0127 18:56:44.186276 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-79d5ccc684-gl44q"] Jan 27 18:56:44 crc kubenswrapper[4853]: I0127 18:56:44.191434 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-7ffd8d76d4-gvm5r"] Jan 27 18:56:44 crc kubenswrapper[4853]: I0127 18:56:44.195942 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-849fcfbb6b-dbzp2"] Jan 27 18:56:44 crc kubenswrapper[4853]: I0127 18:56:44.279977 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-768b776ffb-dj2lw" event={"ID":"e279285c-c536-46b4-b133-7c23811a725a","Type":"ContainerStarted","Data":"a54285ae2e70f589adc26577761ff00ba2ca9e11887c91964aa7dc1f120fe539"} Jan 27 18:56:44 crc kubenswrapper[4853]: I0127 18:56:44.281349 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-77554cdc5c-mn6nj" event={"ID":"0ee7eba6-8efe-4de9-bb26-69c5b47d0312","Type":"ContainerStarted","Data":"11ed4c573b38cc246100d1511fad936e251a0f1d39c9baf1903b05fa909c4439"} Jan 27 18:56:44 crc kubenswrapper[4853]: I0127 18:56:44.284322 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/octavia-operator-controller-manager-7875d7675-cq4p4" event={"ID":"85e5832e-902f-4f65-b659-60abf5d14654","Type":"ContainerStarted","Data":"3d0a4622e9a1d2a36c13c614c5e1233d698964dbccdb761f829bce6628093f2a"} Jan 27 18:56:44 crc kubenswrapper[4853]: I0127 18:56:44.289219 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-7ffd8d76d4-gvm5r" event={"ID":"ace486ae-a8c2-4aca-8719-528ecbed879f","Type":"ContainerStarted","Data":"11cfcc5123d61beccc1c5bc4bc788c27ccdb2561a3ca03587aa20874647e2bc7"} Jan 27 18:56:44 crc kubenswrapper[4853]: I0127 18:56:44.291695 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-gl44q" event={"ID":"7d1a71be-07cb-43e0-8584-75e5c48f4175","Type":"ContainerStarted","Data":"c7e57cddb9f374fd4668936fba8dc54c0ccb84a626fc5626bf36f3b9acbd7cdc"} Jan 27 18:56:44 crc kubenswrapper[4853]: I0127 18:56:44.292745 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-4pmv9" event={"ID":"01b08d09-41bb-4a7a-9af2-7fe597572169","Type":"ContainerStarted","Data":"64ae7799585fa2ca1b3ba3fcadae426345dc5da52df5bcf74cc6faaf733979aa"} Jan 27 18:56:44 crc kubenswrapper[4853]: I0127 18:56:44.293603 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-655bf9cfbb-sj29r" event={"ID":"5db9a86f-dff3-4c54-a478-79ce384d78f7","Type":"ContainerStarted","Data":"06eeec3e38c97033ce6789f251807f22605099591c4bc15be589ac4833364ed3"} Jan 27 18:56:44 crc kubenswrapper[4853]: I0127 18:56:44.294680 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-65ff799cfd-jh7mx" event={"ID":"f6e35929-3b14-49b4-9e0e-bbebc88c2ce2","Type":"ContainerStarted","Data":"8ace3038832b3e8809da23761d9f3f415a506958a27798b470b6641416343e23"} Jan 27 18:56:44 crc kubenswrapper[4853]: I0127 18:56:44.295824 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-67dd55ff59-n89p8" event={"ID":"7b18ea7d-8f47-450b-aa4b-0b75fc0c0581","Type":"ContainerStarted","Data":"2cd1845cbcf8577fd37561d7bf8e927f16a63a7cc650f528e249848b65b14dc0"} Jan 27 18:56:44 crc kubenswrapper[4853]: I0127 18:56:44.296760 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-849fcfbb6b-dbzp2" event={"ID":"0a29796d-a7c3-480a-8379-4d4e7731d5b3","Type":"ContainerStarted","Data":"c7b9ca09fd185ec7d1c5ae7d179617d06aac1076987678094b9a6d26da871a76"} Jan 27 18:56:44 crc kubenswrapper[4853]: I0127 18:56:44.298244 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-6f75f45d54-qgstv"] Jan 27 18:56:44 crc kubenswrapper[4853]: I0127 18:56:44.298384 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-575ffb885b-bx595" event={"ID":"89a3cd80-89b0-41f9-a469-ef001d9be747","Type":"ContainerStarted","Data":"641d02a1a5a9895a9881fbaf230224eaaa99b6eaa6d3a00ff893f46cd8b2a8c2"} Jan 27 18:56:44 crc kubenswrapper[4853]: W0127 18:56:44.300170 4853 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf593e788_ce4a_47ad_a08c_96e1ec0cc92c.slice/crio-06d16aff2fc7dd42b4d64b1078100212fe06aeb66d59b1e946254d7f9cb4d0a5 WatchSource:0}: Error finding 
container 06d16aff2fc7dd42b4d64b1078100212fe06aeb66d59b1e946254d7f9cb4d0a5: Status 404 returned error can't find the container with id 06d16aff2fc7dd42b4d64b1078100212fe06aeb66d59b1e946254d7f9cb4d0a5 Jan 27 18:56:44 crc kubenswrapper[4853]: I0127 18:56:44.420274 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/613d8e60-1314-45a2-8bcc-250151f708d1-cert\") pod \"infra-operator-controller-manager-7d75bc88d5-qrs25\" (UID: \"613d8e60-1314-45a2-8bcc-250151f708d1\") " pod="openstack-operators/infra-operator-controller-manager-7d75bc88d5-qrs25" Jan 27 18:56:44 crc kubenswrapper[4853]: E0127 18:56:44.420471 4853 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 27 18:56:44 crc kubenswrapper[4853]: E0127 18:56:44.420517 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/613d8e60-1314-45a2-8bcc-250151f708d1-cert podName:613d8e60-1314-45a2-8bcc-250151f708d1 nodeName:}" failed. No retries permitted until 2026-01-27 18:56:46.420502515 +0000 UTC m=+848.883045398 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/613d8e60-1314-45a2-8bcc-250151f708d1-cert") pod "infra-operator-controller-manager-7d75bc88d5-qrs25" (UID: "613d8e60-1314-45a2-8bcc-250151f708d1") : secret "infra-operator-webhook-server-cert" not found Jan 27 18:56:44 crc kubenswrapper[4853]: I0127 18:56:44.631446 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-qkdmn"] Jan 27 18:56:44 crc kubenswrapper[4853]: I0127 18:56:44.653752 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-ddcbfd695-mrb2s"] Jan 27 18:56:44 crc kubenswrapper[4853]: I0127 18:56:44.660752 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-55f684fd56-flf9f"] Jan 27 18:56:44 crc kubenswrapper[4853]: I0127 18:56:44.669398 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-799bc87c89-mzmbv"] Jan 27 18:56:44 crc kubenswrapper[4853]: E0127 18:56:44.669510 4853 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/lmiccini/telemetry-operator@sha256:1f1fea3b7df89b81756eab8e6f4c9bed01ab7e949a6ce2d7692c260f41dfbc20,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-zxktg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod telemetry-operator-controller-manager-799bc87c89-mzmbv_openstack-operators(e32b4f39-5c23-4e91-92bc-ffd6b7694a5a): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 27 18:56:44 crc kubenswrapper[4853]: E0127 18:56:44.670198 4853 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/test-operator@sha256:c8dde42dafd41026ed2e4cfc26efc0fff63c4ba9d31326ae7dc644ccceaafa9d,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-zdrc7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-controller-manager-69797bbcbd-cpsgc_openstack-operators(8621d6dd-2bac-4631-bad9-ed1f5ce6c9b5): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 27 18:56:44 crc kubenswrapper[4853]: E0127 18:56:44.671640 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-cpsgc" podUID="8621d6dd-2bac-4631-bad9-ed1f5ce6c9b5" Jan 27 18:56:44 crc kubenswrapper[4853]: E0127 18:56:44.672141 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/telemetry-operator-controller-manager-799bc87c89-mzmbv" podUID="e32b4f39-5c23-4e91-92bc-ffd6b7694a5a" Jan 27 18:56:44 crc kubenswrapper[4853]: E0127 18:56:44.672950 4853 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/lmiccini/watcher-operator@sha256:35f1eb96f42069bb8f7c33942fb86b41843ba02803464245c16192ccda3d50e4,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-w6f66,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-operator-controller-manager-767b8bc766-d4dcp_openstack-operators(aacb2032-25f3-4faf-a0ca-f980411b4ae2): ErrImagePull: pull QPS exceeded" logger="UnhandledError"
Jan 27 18:56:44 crc kubenswrapper[4853]: E0127 18:56:44.674200 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/watcher-operator-controller-manager-767b8bc766-d4dcp" podUID="aacb2032-25f3-4faf-a0ca-f980411b4ae2"
Jan 27 18:56:44 crc kubenswrapper[4853]: I0127 18:56:44.675628 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-2ls6m"]
Jan 27 18:56:44 crc kubenswrapper[4853]: I0127 18:56:44.682176 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-69797bbcbd-cpsgc"]
Jan 27 18:56:44 crc kubenswrapper[4853]: I0127 18:56:44.694178 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-767b8bc766-d4dcp"]
Jan 27 18:56:44 crc kubenswrapper[4853]: W0127 18:56:44.697590 4853 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod98c9ef8d_ccf0_4c4e_83f3_53451532f0ad.slice/crio-872aaa49f2bffbfe9e8cfa8a1b138b7376405631c5a78e1f6ec9f3fc7c7abccb WatchSource:0}: Error finding container 872aaa49f2bffbfe9e8cfa8a1b138b7376405631c5a78e1f6ec9f3fc7c7abccb: Status 404 returned error can't find the container with id 872aaa49f2bffbfe9e8cfa8a1b138b7376405631c5a78e1f6ec9f3fc7c7abccb
Jan 27 18:56:44 crc kubenswrapper[4853]: I0127 18:56:44.699017 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-547cbdb99f-bn7wr"]
Jan 27 18:56:44 crc kubenswrapper[4853]: W0127 18:56:44.699291 4853 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd9757c33_a50c_4fa4_ab8d_270c2bed1459.slice/crio-689e3fccf45a52b7372ace1c117147bd3153cc6ca6e5f476d1e24532d7ed72f5 WatchSource:0}: Error finding container 689e3fccf45a52b7372ace1c117147bd3153cc6ca6e5f476d1e24532d7ed72f5: Status 404 returned error can't find the container with id 689e3fccf45a52b7372ace1c117147bd3153cc6ca6e5f476d1e24532d7ed72f5
Jan 27 18:56:44 crc kubenswrapper[4853]: E0127 18:56:44.701679 4853 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/lmiccini/nova-operator@sha256:a992613466db3478a00c20c28639c4a12f6326aa52c40a418d1ec40038c83b61,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-qbnk4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod nova-operator-controller-manager-ddcbfd695-mrb2s_openstack-operators(d9757c33-a50c-4fa4-ab8d-270c2bed1459): ErrImagePull: pull QPS exceeded" logger="UnhandledError"
Jan 27 18:56:44 crc kubenswrapper[4853]: W0127 18:56:44.702254 4853 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5b33f408_e905_4298_adfc_b113f89ecd36.slice/crio-a8ddac393cbefd5032d4c4d75f69966fe6ab5bfe5e23212fd1956889f95a524a WatchSource:0}: Error finding container a8ddac393cbefd5032d4c4d75f69966fe6ab5bfe5e23212fd1956889f95a524a: Status 404 returned error can't find the container with id a8ddac393cbefd5032d4c4d75f69966fe6ab5bfe5e23212fd1956889f95a524a
Jan 27 18:56:44 crc kubenswrapper[4853]: E0127 18:56:44.703608 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/nova-operator-controller-manager-ddcbfd695-mrb2s" podUID="d9757c33-a50c-4fa4-ab8d-270c2bed1459"
Jan 27 18:56:44 crc kubenswrapper[4853]: E0127 18:56:44.705812 4853 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/swift-operator@sha256:445e951df2f21df6d33a466f75917e0f6103052ae751ae11887136e8ab165922,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-kgxcx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod swift-operator-controller-manager-547cbdb99f-bn7wr_openstack-operators(5b33f408-e905-4298-adfc-b113f89ecd36): ErrImagePull: pull QPS exceeded" logger="UnhandledError"
Jan 27 18:56:44 crc kubenswrapper[4853]: E0127 18:56:44.706959 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-bn7wr" podUID="5b33f408-e905-4298-adfc-b113f89ecd36"
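The "pull QPS exceeded" failures above come from the kubelet itself, not from quay.io: image pulls are throttled client-side by a token-bucket limiter configured through the KubeletConfiguration fields registryPullQPS and registryBurst (defaults 5 and 10). With this many operator deployments scheduled in the same second, the bucket drains and the remaining pulls fail immediately with ErrImagePull. A minimal sketch of that throttling, assuming the stock limiter from k8s.io/client-go (illustrative of the behaviour, not the kubelet's actual source):

// Sketch of kubelet-style image-pull throttling (assumption: this mirrors
// the mechanism behind "pull QPS exceeded"; not copied from kubelet code).
package main

import (
	"errors"
	"fmt"

	"k8s.io/client-go/util/flowcontrol"
)

func throttledPull(limiter flowcontrol.RateLimiter, image string) error {
	if !limiter.TryAccept() {
		// Same error string as in the log above.
		return errors.New("pull QPS exceeded")
	}
	// ... here the real kubelet would call the CRI ImageService ...
	_ = image
	return nil
}

func main() {
	// KubeletConfiguration defaults: registryPullQPS=5, registryBurst=10.
	limiter := flowcontrol.NewTokenBucketRateLimiter(5, 10)
	for i := 0; i < 15; i++ {
		// Pulls beyond the burst within the same instant fail immediately.
		if err := throttledPull(limiter, "quay.io/lmiccini/nova-operator"); err != nil {
			fmt.Println("pull", i, "failed:", err)
		}
	}
}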
Jan 27 18:56:44 crc kubenswrapper[4853]: I0127 18:56:44.724910 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/9bd5a06a-f084-42ba-8f88-9be1cee0554a-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854gdmwx\" (UID: \"9bd5a06a-f084-42ba-8f88-9be1cee0554a\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854gdmwx"
Jan 27 18:56:44 crc kubenswrapper[4853]: E0127 18:56:44.725061 4853 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found
Jan 27 18:56:44 crc kubenswrapper[4853]: E0127 18:56:44.725137 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9bd5a06a-f084-42ba-8f88-9be1cee0554a-cert podName:9bd5a06a-f084-42ba-8f88-9be1cee0554a nodeName:}" failed. No retries permitted until 2026-01-27 18:56:46.725107706 +0000 UTC m=+849.187650589 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/9bd5a06a-f084-42ba-8f88-9be1cee0554a-cert") pod "openstack-baremetal-operator-controller-manager-6b68b8b854gdmwx" (UID: "9bd5a06a-f084-42ba-8f88-9be1cee0554a") : secret "openstack-baremetal-operator-webhook-server-cert" not found
Jan 27 18:56:45 crc kubenswrapper[4853]: I0127 18:56:45.029581 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/fede2ab9-a2b5-45f5-bac7-daa8d576d23f-metrics-certs\") pod \"openstack-operator-controller-manager-bf776578d-kb6wk\" (UID: \"fede2ab9-a2b5-45f5-bac7-daa8d576d23f\") " pod="openstack-operators/openstack-operator-controller-manager-bf776578d-kb6wk"
Jan 27 18:56:45 crc kubenswrapper[4853]: I0127 18:56:45.029663 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/fede2ab9-a2b5-45f5-bac7-daa8d576d23f-webhook-certs\") pod \"openstack-operator-controller-manager-bf776578d-kb6wk\" (UID: \"fede2ab9-a2b5-45f5-bac7-daa8d576d23f\") " pod="openstack-operators/openstack-operator-controller-manager-bf776578d-kb6wk"
Jan 27 18:56:45 crc kubenswrapper[4853]: E0127 18:56:45.029794 4853 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found
Jan 27 18:56:45 crc kubenswrapper[4853]: E0127 18:56:45.029819 4853 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found
Jan 27 18:56:45 crc kubenswrapper[4853]: E0127 18:56:45.029881 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fede2ab9-a2b5-45f5-bac7-daa8d576d23f-metrics-certs podName:fede2ab9-a2b5-45f5-bac7-daa8d576d23f nodeName:}" failed. No retries permitted until 2026-01-27 18:56:47.029856711 +0000 UTC m=+849.492399594 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/fede2ab9-a2b5-45f5-bac7-daa8d576d23f-metrics-certs") pod "openstack-operator-controller-manager-bf776578d-kb6wk" (UID: "fede2ab9-a2b5-45f5-bac7-daa8d576d23f") : secret "metrics-server-cert" not found
Jan 27 18:56:45 crc kubenswrapper[4853]: E0127 18:56:45.029906 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fede2ab9-a2b5-45f5-bac7-daa8d576d23f-webhook-certs podName:fede2ab9-a2b5-45f5-bac7-daa8d576d23f nodeName:}" failed. No retries permitted until 2026-01-27 18:56:47.029896342 +0000 UTC m=+849.492439445 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/fede2ab9-a2b5-45f5-bac7-daa8d576d23f-webhook-certs") pod "openstack-operator-controller-manager-bf776578d-kb6wk" (UID: "fede2ab9-a2b5-45f5-bac7-daa8d576d23f") : secret "webhook-server-cert" not found
Jan 27 18:56:45 crc kubenswrapper[4853]: I0127 18:56:45.306324 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-55f684fd56-flf9f" event={"ID":"bee4ca26-dd1a-4747-8bf3-f152d8236270","Type":"ContainerStarted","Data":"d2fa5742f4a65b525c96090a3b8a18718d698d4174c3e4d9421dd1e361ef9c3c"}
Jan 27 18:56:45 crc kubenswrapper[4853]: I0127 18:56:45.309994 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-qkdmn" event={"ID":"a5adf651-f6c5-4b00-a32f-bbd1ac9d5b43","Type":"ContainerStarted","Data":"ada2b81d827cd333c55199eba1554dd32eddb1c721797f18af342645ec0dee57"}
Jan 27 18:56:45 crc kubenswrapper[4853]: I0127 18:56:45.312805 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-qgstv" event={"ID":"f593e788-ce4a-47ad-a08c-96e1ec0cc92c","Type":"ContainerStarted","Data":"06d16aff2fc7dd42b4d64b1078100212fe06aeb66d59b1e946254d7f9cb4d0a5"}
Jan 27 18:56:45 crc kubenswrapper[4853]: I0127 18:56:45.315440 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-ddcbfd695-mrb2s" event={"ID":"d9757c33-a50c-4fa4-ab8d-270c2bed1459","Type":"ContainerStarted","Data":"689e3fccf45a52b7372ace1c117147bd3153cc6ca6e5f476d1e24532d7ed72f5"}
Jan 27 18:56:45 crc kubenswrapper[4853]: I0127 18:56:45.317036 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-2ls6m" event={"ID":"98c9ef8d-ccf0-4c4e-83f3-53451532f0ad","Type":"ContainerStarted","Data":"872aaa49f2bffbfe9e8cfa8a1b138b7376405631c5a78e1f6ec9f3fc7c7abccb"}
Jan 27 18:56:45 crc kubenswrapper[4853]: E0127 18:56:45.318491 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/lmiccini/nova-operator@sha256:a992613466db3478a00c20c28639c4a12f6326aa52c40a418d1ec40038c83b61\\\"\"" pod="openstack-operators/nova-operator-controller-manager-ddcbfd695-mrb2s" podUID="d9757c33-a50c-4fa4-ab8d-270c2bed1459"
Jan 27 18:56:45 crc kubenswrapper[4853]: I0127 18:56:45.321515 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-799bc87c89-mzmbv" event={"ID":"e32b4f39-5c23-4e91-92bc-ffd6b7694a5a","Type":"ContainerStarted","Data":"075081f540854222f7da7b884354c29cd2bcdce328324a5e53063ebc9fb33039"}
Jan 27 18:56:45 crc kubenswrapper[4853]: E0127 18:56:45.323419 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/lmiccini/telemetry-operator@sha256:1f1fea3b7df89b81756eab8e6f4c9bed01ab7e949a6ce2d7692c260f41dfbc20\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-799bc87c89-mzmbv" podUID="e32b4f39-5c23-4e91-92bc-ffd6b7694a5a"
Jan 27 18:56:45 crc kubenswrapper[4853]: I0127 18:56:45.323881 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-bn7wr" event={"ID":"5b33f408-e905-4298-adfc-b113f89ecd36","Type":"ContainerStarted","Data":"a8ddac393cbefd5032d4c4d75f69966fe6ab5bfe5e23212fd1956889f95a524a"}
Jan 27 18:56:45 crc kubenswrapper[4853]: E0127 18:56:45.325611 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/swift-operator@sha256:445e951df2f21df6d33a466f75917e0f6103052ae751ae11887136e8ab165922\\\"\"" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-bn7wr" podUID="5b33f408-e905-4298-adfc-b113f89ecd36"
Jan 27 18:56:45 crc kubenswrapper[4853]: I0127 18:56:45.326775 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-cpsgc" event={"ID":"8621d6dd-2bac-4631-bad9-ed1f5ce6c9b5","Type":"ContainerStarted","Data":"95c7bd70d54045c672ea556892da38c4d926136baab2b42bdd0ccbce90984a6a"}
Jan 27 18:56:45 crc kubenswrapper[4853]: E0127 18:56:45.328798 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:c8dde42dafd41026ed2e4cfc26efc0fff63c4ba9d31326ae7dc644ccceaafa9d\\\"\"" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-cpsgc" podUID="8621d6dd-2bac-4631-bad9-ed1f5ce6c9b5"
Jan 27 18:56:45 crc kubenswrapper[4853]: I0127 18:56:45.346357 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-767b8bc766-d4dcp" event={"ID":"aacb2032-25f3-4faf-a0ca-f980411b4ae2","Type":"ContainerStarted","Data":"1d6cec9dcd343edaf16eab745aa76057e41c047d15638a15c689dff2f6c6b13c"}
Jan 27 18:56:45 crc kubenswrapper[4853]: E0127 18:56:45.348671 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/lmiccini/watcher-operator@sha256:35f1eb96f42069bb8f7c33942fb86b41843ba02803464245c16192ccda3d50e4\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-767b8bc766-d4dcp" podUID="aacb2032-25f3-4faf-a0ca-f980411b4ae2"
Jan 27 18:56:46 crc kubenswrapper[4853]: E0127 18:56:46.357069 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/lmiccini/watcher-operator@sha256:35f1eb96f42069bb8f7c33942fb86b41843ba02803464245c16192ccda3d50e4\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-767b8bc766-d4dcp" podUID="aacb2032-25f3-4faf-a0ca-f980411b4ae2"
Jan 27 18:56:46 crc kubenswrapper[4853]: E0127 18:56:46.357596 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/lmiccini/telemetry-operator@sha256:1f1fea3b7df89b81756eab8e6f4c9bed01ab7e949a6ce2d7692c260f41dfbc20\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-799bc87c89-mzmbv" podUID="e32b4f39-5c23-4e91-92bc-ffd6b7694a5a"
pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-bn7wr" podUID="5b33f408-e905-4298-adfc-b113f89ecd36" Jan 27 18:56:46 crc kubenswrapper[4853]: E0127 18:56:46.363042 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:c8dde42dafd41026ed2e4cfc26efc0fff63c4ba9d31326ae7dc644ccceaafa9d\\\"\"" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-cpsgc" podUID="8621d6dd-2bac-4631-bad9-ed1f5ce6c9b5" Jan 27 18:56:46 crc kubenswrapper[4853]: E0127 18:56:46.363077 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/lmiccini/nova-operator@sha256:a992613466db3478a00c20c28639c4a12f6326aa52c40a418d1ec40038c83b61\\\"\"" pod="openstack-operators/nova-operator-controller-manager-ddcbfd695-mrb2s" podUID="d9757c33-a50c-4fa4-ab8d-270c2bed1459" Jan 27 18:56:46 crc kubenswrapper[4853]: I0127 18:56:46.454629 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/613d8e60-1314-45a2-8bcc-250151f708d1-cert\") pod \"infra-operator-controller-manager-7d75bc88d5-qrs25\" (UID: \"613d8e60-1314-45a2-8bcc-250151f708d1\") " pod="openstack-operators/infra-operator-controller-manager-7d75bc88d5-qrs25" Jan 27 18:56:46 crc kubenswrapper[4853]: E0127 18:56:46.455438 4853 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 27 18:56:46 crc kubenswrapper[4853]: E0127 18:56:46.455526 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/613d8e60-1314-45a2-8bcc-250151f708d1-cert podName:613d8e60-1314-45a2-8bcc-250151f708d1 nodeName:}" failed. No retries permitted until 2026-01-27 18:56:50.455492827 +0000 UTC m=+852.918035710 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/613d8e60-1314-45a2-8bcc-250151f708d1-cert") pod "infra-operator-controller-manager-7d75bc88d5-qrs25" (UID: "613d8e60-1314-45a2-8bcc-250151f708d1") : secret "infra-operator-webhook-server-cert" not found Jan 27 18:56:46 crc kubenswrapper[4853]: I0127 18:56:46.758384 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/9bd5a06a-f084-42ba-8f88-9be1cee0554a-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854gdmwx\" (UID: \"9bd5a06a-f084-42ba-8f88-9be1cee0554a\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854gdmwx" Jan 27 18:56:46 crc kubenswrapper[4853]: E0127 18:56:46.758563 4853 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 27 18:56:46 crc kubenswrapper[4853]: E0127 18:56:46.758644 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9bd5a06a-f084-42ba-8f88-9be1cee0554a-cert podName:9bd5a06a-f084-42ba-8f88-9be1cee0554a nodeName:}" failed. No retries permitted until 2026-01-27 18:56:50.758625985 +0000 UTC m=+853.221168868 (durationBeforeRetry 4s). 
Jan 27 18:56:47 crc kubenswrapper[4853]: I0127 18:56:47.062615 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/fede2ab9-a2b5-45f5-bac7-daa8d576d23f-webhook-certs\") pod \"openstack-operator-controller-manager-bf776578d-kb6wk\" (UID: \"fede2ab9-a2b5-45f5-bac7-daa8d576d23f\") " pod="openstack-operators/openstack-operator-controller-manager-bf776578d-kb6wk"
Jan 27 18:56:47 crc kubenswrapper[4853]: I0127 18:56:47.062708 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/fede2ab9-a2b5-45f5-bac7-daa8d576d23f-metrics-certs\") pod \"openstack-operator-controller-manager-bf776578d-kb6wk\" (UID: \"fede2ab9-a2b5-45f5-bac7-daa8d576d23f\") " pod="openstack-operators/openstack-operator-controller-manager-bf776578d-kb6wk"
Jan 27 18:56:47 crc kubenswrapper[4853]: E0127 18:56:47.062797 4853 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found
Jan 27 18:56:47 crc kubenswrapper[4853]: E0127 18:56:47.062838 4853 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found
Jan 27 18:56:47 crc kubenswrapper[4853]: E0127 18:56:47.062867 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fede2ab9-a2b5-45f5-bac7-daa8d576d23f-webhook-certs podName:fede2ab9-a2b5-45f5-bac7-daa8d576d23f nodeName:}" failed. No retries permitted until 2026-01-27 18:56:51.062849305 +0000 UTC m=+853.525392188 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/fede2ab9-a2b5-45f5-bac7-daa8d576d23f-webhook-certs") pod "openstack-operator-controller-manager-bf776578d-kb6wk" (UID: "fede2ab9-a2b5-45f5-bac7-daa8d576d23f") : secret "webhook-server-cert" not found
Jan 27 18:56:47 crc kubenswrapper[4853]: E0127 18:56:47.062888 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fede2ab9-a2b5-45f5-bac7-daa8d576d23f-metrics-certs podName:fede2ab9-a2b5-45f5-bac7-daa8d576d23f nodeName:}" failed. No retries permitted until 2026-01-27 18:56:51.062874646 +0000 UTC m=+853.525417529 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/fede2ab9-a2b5-45f5-bac7-daa8d576d23f-metrics-certs") pod "openstack-operator-controller-manager-bf776578d-kb6wk" (UID: "fede2ab9-a2b5-45f5-bac7-daa8d576d23f") : secret "metrics-server-cert" not found
Jan 27 18:56:50 crc kubenswrapper[4853]: I0127 18:56:50.604786 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/613d8e60-1314-45a2-8bcc-250151f708d1-cert\") pod \"infra-operator-controller-manager-7d75bc88d5-qrs25\" (UID: \"613d8e60-1314-45a2-8bcc-250151f708d1\") " pod="openstack-operators/infra-operator-controller-manager-7d75bc88d5-qrs25"
Jan 27 18:56:50 crc kubenswrapper[4853]: E0127 18:56:50.604993 4853 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found
Jan 27 18:56:50 crc kubenswrapper[4853]: E0127 18:56:50.605245 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/613d8e60-1314-45a2-8bcc-250151f708d1-cert podName:613d8e60-1314-45a2-8bcc-250151f708d1 nodeName:}" failed. No retries permitted until 2026-01-27 18:56:58.605221329 +0000 UTC m=+861.067764212 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/613d8e60-1314-45a2-8bcc-250151f708d1-cert") pod "infra-operator-controller-manager-7d75bc88d5-qrs25" (UID: "613d8e60-1314-45a2-8bcc-250151f708d1") : secret "infra-operator-webhook-server-cert" not found
Jan 27 18:56:50 crc kubenswrapper[4853]: I0127 18:56:50.807469 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/9bd5a06a-f084-42ba-8f88-9be1cee0554a-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854gdmwx\" (UID: \"9bd5a06a-f084-42ba-8f88-9be1cee0554a\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854gdmwx"
Jan 27 18:56:50 crc kubenswrapper[4853]: E0127 18:56:50.807634 4853 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found
Jan 27 18:56:50 crc kubenswrapper[4853]: E0127 18:56:50.807945 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9bd5a06a-f084-42ba-8f88-9be1cee0554a-cert podName:9bd5a06a-f084-42ba-8f88-9be1cee0554a nodeName:}" failed. No retries permitted until 2026-01-27 18:56:58.807924712 +0000 UTC m=+861.270467595 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/9bd5a06a-f084-42ba-8f88-9be1cee0554a-cert") pod "openstack-baremetal-operator-controller-manager-6b68b8b854gdmwx" (UID: "9bd5a06a-f084-42ba-8f88-9be1cee0554a") : secret "openstack-baremetal-operator-webhook-server-cert" not found
Jan 27 18:56:51 crc kubenswrapper[4853]: I0127 18:56:51.111235 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/fede2ab9-a2b5-45f5-bac7-daa8d576d23f-metrics-certs\") pod \"openstack-operator-controller-manager-bf776578d-kb6wk\" (UID: \"fede2ab9-a2b5-45f5-bac7-daa8d576d23f\") " pod="openstack-operators/openstack-operator-controller-manager-bf776578d-kb6wk"
Jan 27 18:56:51 crc kubenswrapper[4853]: I0127 18:56:51.111328 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/fede2ab9-a2b5-45f5-bac7-daa8d576d23f-webhook-certs\") pod \"openstack-operator-controller-manager-bf776578d-kb6wk\" (UID: \"fede2ab9-a2b5-45f5-bac7-daa8d576d23f\") " pod="openstack-operators/openstack-operator-controller-manager-bf776578d-kb6wk"
Jan 27 18:56:51 crc kubenswrapper[4853]: E0127 18:56:51.111392 4853 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found
Jan 27 18:56:51 crc kubenswrapper[4853]: E0127 18:56:51.111439 4853 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found
Jan 27 18:56:51 crc kubenswrapper[4853]: E0127 18:56:51.111465 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fede2ab9-a2b5-45f5-bac7-daa8d576d23f-metrics-certs podName:fede2ab9-a2b5-45f5-bac7-daa8d576d23f nodeName:}" failed. No retries permitted until 2026-01-27 18:56:59.111447912 +0000 UTC m=+861.573990795 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/fede2ab9-a2b5-45f5-bac7-daa8d576d23f-metrics-certs") pod "openstack-operator-controller-manager-bf776578d-kb6wk" (UID: "fede2ab9-a2b5-45f5-bac7-daa8d576d23f") : secret "metrics-server-cert" not found
Jan 27 18:56:51 crc kubenswrapper[4853]: E0127 18:56:51.111511 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fede2ab9-a2b5-45f5-bac7-daa8d576d23f-webhook-certs podName:fede2ab9-a2b5-45f5-bac7-daa8d576d23f nodeName:}" failed. No retries permitted until 2026-01-27 18:56:59.111493624 +0000 UTC m=+861.574036577 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/fede2ab9-a2b5-45f5-bac7-daa8d576d23f-webhook-certs") pod "openstack-operator-controller-manager-bf776578d-kb6wk" (UID: "fede2ab9-a2b5-45f5-bac7-daa8d576d23f") : secret "webhook-server-cert" not found
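Note the durationBeforeRetry progression for the same volumes across these attempts: 2s, then 4s, then 8s here, and 16s below — the volume manager's per-operation exponential backoff, doubling after every failure. The observed schedule can be reproduced with the apimachinery backoff helper (the doubling and the logged values come straight from this log; kubelet's internal cap is not visible here):

// Doubling retry delay as seen in durationBeforeRetry: 2s, 4s, 8s, 16s.
// Sketch using k8s.io/apimachinery's wait.Backoff; the cap kubelet applies
// beyond these steps is not shown in this log.
package main

import (
	"fmt"
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
)

func main() {
	b := wait.Backoff{Duration: 2 * time.Second, Factor: 2, Steps: 4}
	for b.Steps > 0 {
		fmt.Println("durationBeforeRetry:", b.Step()) // 2s, 4s, 8s, 16s
	}
}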
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/fede2ab9-a2b5-45f5-bac7-daa8d576d23f-webhook-certs") pod "openstack-operator-controller-manager-bf776578d-kb6wk" (UID: "fede2ab9-a2b5-45f5-bac7-daa8d576d23f") : secret "webhook-server-cert" not found Jan 27 18:56:57 crc kubenswrapper[4853]: E0127 18:56:57.345328 4853 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/lmiccini/ironic-operator@sha256:30e2224475338d3a02d617ae147dc7dc09867cce4ac3543b313a1923c46299fa" Jan 27 18:56:57 crc kubenswrapper[4853]: E0127 18:56:57.345808 4853 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/lmiccini/ironic-operator@sha256:30e2224475338d3a02d617ae147dc7dc09867cce4ac3543b313a1923c46299fa,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-z5hjz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ironic-operator-controller-manager-768b776ffb-dj2lw_openstack-operators(e279285c-c536-46b4-b133-7c23811a725a): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 27 18:56:57 crc kubenswrapper[4853]: E0127 18:56:57.354346 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/ironic-operator-controller-manager-768b776ffb-dj2lw" 
podUID="e279285c-c536-46b4-b133-7c23811a725a" Jan 27 18:56:57 crc kubenswrapper[4853]: E0127 18:56:57.429139 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/lmiccini/ironic-operator@sha256:30e2224475338d3a02d617ae147dc7dc09867cce4ac3543b313a1923c46299fa\\\"\"" pod="openstack-operators/ironic-operator-controller-manager-768b776ffb-dj2lw" podUID="e279285c-c536-46b4-b133-7c23811a725a" Jan 27 18:56:57 crc kubenswrapper[4853]: E0127 18:56:57.823663 4853 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2" Jan 27 18:56:57 crc kubenswrapper[4853]: E0127 18:56:57.823838 4853 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-brx4r,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-668c99d594-2ls6m_openstack-operators(98c9ef8d-ccf0-4c4e-83f3-53451532f0ad): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 27 18:56:57 crc kubenswrapper[4853]: E0127 18:56:57.825016 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-2ls6m" podUID="98c9ef8d-ccf0-4c4e-83f3-53451532f0ad" Jan 27 18:56:58 crc kubenswrapper[4853]: E0127 18:56:58.386193 4853 log.go:32] "PullImage from image service 
failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/lmiccini/keystone-operator@sha256:008a2e338430e7dd513f81f66320cc5c1332c332a3191b537d75786489d7f487" Jan 27 18:56:58 crc kubenswrapper[4853]: E0127 18:56:58.386682 4853 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/lmiccini/keystone-operator@sha256:008a2e338430e7dd513f81f66320cc5c1332c332a3191b537d75786489d7f487,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-9dwkx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod keystone-operator-controller-manager-55f684fd56-flf9f_openstack-operators(bee4ca26-dd1a-4747-8bf3-f152d8236270): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 27 18:56:58 crc kubenswrapper[4853]: E0127 18:56:58.388611 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/keystone-operator-controller-manager-55f684fd56-flf9f" podUID="bee4ca26-dd1a-4747-8bf3-f152d8236270" Jan 27 18:56:58 crc kubenswrapper[4853]: E0127 18:56:58.435997 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/lmiccini/keystone-operator@sha256:008a2e338430e7dd513f81f66320cc5c1332c332a3191b537d75786489d7f487\\\"\"" 
pod="openstack-operators/keystone-operator-controller-manager-55f684fd56-flf9f" podUID="bee4ca26-dd1a-4747-8bf3-f152d8236270" Jan 27 18:56:58 crc kubenswrapper[4853]: E0127 18:56:58.436069 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-2ls6m" podUID="98c9ef8d-ccf0-4c4e-83f3-53451532f0ad" Jan 27 18:56:58 crc kubenswrapper[4853]: I0127 18:56:58.627155 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/613d8e60-1314-45a2-8bcc-250151f708d1-cert\") pod \"infra-operator-controller-manager-7d75bc88d5-qrs25\" (UID: \"613d8e60-1314-45a2-8bcc-250151f708d1\") " pod="openstack-operators/infra-operator-controller-manager-7d75bc88d5-qrs25" Jan 27 18:56:58 crc kubenswrapper[4853]: I0127 18:56:58.649587 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/613d8e60-1314-45a2-8bcc-250151f708d1-cert\") pod \"infra-operator-controller-manager-7d75bc88d5-qrs25\" (UID: \"613d8e60-1314-45a2-8bcc-250151f708d1\") " pod="openstack-operators/infra-operator-controller-manager-7d75bc88d5-qrs25" Jan 27 18:56:58 crc kubenswrapper[4853]: I0127 18:56:58.829798 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/9bd5a06a-f084-42ba-8f88-9be1cee0554a-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854gdmwx\" (UID: \"9bd5a06a-f084-42ba-8f88-9be1cee0554a\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854gdmwx" Jan 27 18:56:58 crc kubenswrapper[4853]: I0127 18:56:58.834811 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/9bd5a06a-f084-42ba-8f88-9be1cee0554a-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854gdmwx\" (UID: \"9bd5a06a-f084-42ba-8f88-9be1cee0554a\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854gdmwx" Jan 27 18:56:58 crc kubenswrapper[4853]: I0127 18:56:58.842147 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-7d75bc88d5-qrs25" Jan 27 18:56:58 crc kubenswrapper[4853]: I0127 18:56:58.942046 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854gdmwx" Jan 27 18:56:59 crc kubenswrapper[4853]: I0127 18:56:59.133694 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/fede2ab9-a2b5-45f5-bac7-daa8d576d23f-metrics-certs\") pod \"openstack-operator-controller-manager-bf776578d-kb6wk\" (UID: \"fede2ab9-a2b5-45f5-bac7-daa8d576d23f\") " pod="openstack-operators/openstack-operator-controller-manager-bf776578d-kb6wk" Jan 27 18:56:59 crc kubenswrapper[4853]: I0127 18:56:59.133930 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/fede2ab9-a2b5-45f5-bac7-daa8d576d23f-webhook-certs\") pod \"openstack-operator-controller-manager-bf776578d-kb6wk\" (UID: \"fede2ab9-a2b5-45f5-bac7-daa8d576d23f\") " pod="openstack-operators/openstack-operator-controller-manager-bf776578d-kb6wk" Jan 27 18:56:59 crc kubenswrapper[4853]: E0127 18:56:59.134149 4853 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 27 18:56:59 crc kubenswrapper[4853]: E0127 18:56:59.134235 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fede2ab9-a2b5-45f5-bac7-daa8d576d23f-webhook-certs podName:fede2ab9-a2b5-45f5-bac7-daa8d576d23f nodeName:}" failed. No retries permitted until 2026-01-27 18:57:15.134217257 +0000 UTC m=+877.596760140 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/fede2ab9-a2b5-45f5-bac7-daa8d576d23f-webhook-certs") pod "openstack-operator-controller-manager-bf776578d-kb6wk" (UID: "fede2ab9-a2b5-45f5-bac7-daa8d576d23f") : secret "webhook-server-cert" not found Jan 27 18:56:59 crc kubenswrapper[4853]: I0127 18:56:59.138747 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/fede2ab9-a2b5-45f5-bac7-daa8d576d23f-metrics-certs\") pod \"openstack-operator-controller-manager-bf776578d-kb6wk\" (UID: \"fede2ab9-a2b5-45f5-bac7-daa8d576d23f\") " pod="openstack-operators/openstack-operator-controller-manager-bf776578d-kb6wk" Jan 27 18:56:59 crc kubenswrapper[4853]: I0127 18:56:59.441637 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-77554cdc5c-mn6nj" event={"ID":"0ee7eba6-8efe-4de9-bb26-69c5b47d0312","Type":"ContainerStarted","Data":"ec4f380667d1f8bb97a72573c782b6a99b7428681a353aaccff59a8fdd305656"} Jan 27 18:56:59 crc kubenswrapper[4853]: I0127 18:56:59.442418 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-77554cdc5c-mn6nj" Jan 27 18:56:59 crc kubenswrapper[4853]: I0127 18:56:59.445929 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-65ff799cfd-jh7mx" event={"ID":"f6e35929-3b14-49b4-9e0e-bbebc88c2ce2","Type":"ContainerStarted","Data":"0e8a7a2440c72fb16439565bc1370b1cc30afea5bb6fa6a06c217b9fcd542d84"} Jan 27 18:56:59 crc kubenswrapper[4853]: I0127 18:56:59.446040 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-65ff799cfd-jh7mx" Jan 27 18:56:59 crc kubenswrapper[4853]: I0127 18:56:59.449860 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-qgstv" event={"ID":"f593e788-ce4a-47ad-a08c-96e1ec0cc92c","Type":"ContainerStarted","Data":"20bfa5615b9ebe1d4cc49b36d0b5c02c11d554fa9c6a9103e6e2294ccbb95c1c"} Jan 27 18:56:59 crc kubenswrapper[4853]: I0127 18:56:59.449974 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-qgstv" Jan 27 18:56:59 crc kubenswrapper[4853]: I0127 18:56:59.451343 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-7875d7675-cq4p4" event={"ID":"85e5832e-902f-4f65-b659-60abf5d14654","Type":"ContainerStarted","Data":"cd9374c387ec51df0f1ac89b64bc53260f0e3d4180e272709a40dde025b1d39a"} Jan 27 18:56:59 crc kubenswrapper[4853]: I0127 18:56:59.451434 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-7875d7675-cq4p4" Jan 27 18:56:59 crc kubenswrapper[4853]: I0127 18:56:59.452691 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-849fcfbb6b-dbzp2" event={"ID":"0a29796d-a7c3-480a-8379-4d4e7731d5b3","Type":"ContainerStarted","Data":"f7d4eacec9d972c4495251a946ea402eca23b4b389c1fa6841b820e9e2e49c76"} Jan 27 18:56:59 crc kubenswrapper[4853]: I0127 18:56:59.452758 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-849fcfbb6b-dbzp2" Jan 27 18:56:59 crc kubenswrapper[4853]: I0127 18:56:59.455167 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-575ffb885b-bx595" event={"ID":"89a3cd80-89b0-41f9-a469-ef001d9be747","Type":"ContainerStarted","Data":"5c5a159e1cdc643560810f358a753bd757951634c2e1f8325caf256b73806b30"} Jan 27 18:56:59 crc kubenswrapper[4853]: I0127 18:56:59.455369 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-575ffb885b-bx595" Jan 27 18:56:59 crc kubenswrapper[4853]: I0127 18:56:59.458658 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/designate-operator-controller-manager-77554cdc5c-mn6nj" podStartSLOduration=2.989107596 podStartE2EDuration="17.45863545s" podCreationTimestamp="2026-01-27 18:56:42 +0000 UTC" firstStartedPulling="2026-01-27 18:56:43.90085042 +0000 UTC m=+846.363393303" lastFinishedPulling="2026-01-27 18:56:58.370378274 +0000 UTC m=+860.832921157" observedRunningTime="2026-01-27 18:56:59.454685028 +0000 UTC m=+861.917227931" watchObservedRunningTime="2026-01-27 18:56:59.45863545 +0000 UTC m=+861.921178333" Jan 27 18:56:59 crc kubenswrapper[4853]: I0127 18:56:59.459985 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-gl44q" event={"ID":"7d1a71be-07cb-43e0-8584-75e5c48f4175","Type":"ContainerStarted","Data":"5abf98542ff364aec8f77c8bc5f1f12ea76f8f06f88b42250c021ccffb876f8f"} Jan 27 18:56:59 crc kubenswrapper[4853]: I0127 18:56:59.461062 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-gl44q" Jan 27 18:56:59 crc kubenswrapper[4853]: I0127 18:56:59.502106 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/manila-operator-controller-manager-849fcfbb6b-dbzp2" 
podStartSLOduration=3.329213724 podStartE2EDuration="17.502085751s" podCreationTimestamp="2026-01-27 18:56:42 +0000 UTC" firstStartedPulling="2026-01-27 18:56:44.17928401 +0000 UTC m=+846.641826893" lastFinishedPulling="2026-01-27 18:56:58.352156037 +0000 UTC m=+860.814698920" observedRunningTime="2026-01-27 18:56:59.472732659 +0000 UTC m=+861.935275542" watchObservedRunningTime="2026-01-27 18:56:59.502085751 +0000 UTC m=+861.964628634" Jan 27 18:56:59 crc kubenswrapper[4853]: I0127 18:56:59.522533 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/octavia-operator-controller-manager-7875d7675-cq4p4" podStartSLOduration=3.33719803 podStartE2EDuration="17.52250904s" podCreationTimestamp="2026-01-27 18:56:42 +0000 UTC" firstStartedPulling="2026-01-27 18:56:44.169162923 +0000 UTC m=+846.631705806" lastFinishedPulling="2026-01-27 18:56:58.354473933 +0000 UTC m=+860.817016816" observedRunningTime="2026-01-27 18:56:59.494558098 +0000 UTC m=+861.957100981" watchObservedRunningTime="2026-01-27 18:56:59.52250904 +0000 UTC m=+861.985051923" Jan 27 18:56:59 crc kubenswrapper[4853]: I0127 18:56:59.530168 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-qgstv" podStartSLOduration=3.479635186 podStartE2EDuration="17.530144656s" podCreationTimestamp="2026-01-27 18:56:42 +0000 UTC" firstStartedPulling="2026-01-27 18:56:44.301912515 +0000 UTC m=+846.764455418" lastFinishedPulling="2026-01-27 18:56:58.352422005 +0000 UTC m=+860.814964888" observedRunningTime="2026-01-27 18:56:59.51969446 +0000 UTC m=+861.982237343" watchObservedRunningTime="2026-01-27 18:56:59.530144656 +0000 UTC m=+861.992687539" Jan 27 18:56:59 crc kubenswrapper[4853]: I0127 18:56:59.544242 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/barbican-operator-controller-manager-65ff799cfd-jh7mx" podStartSLOduration=3.007171928 podStartE2EDuration="17.544215115s" podCreationTimestamp="2026-01-27 18:56:42 +0000 UTC" firstStartedPulling="2026-01-27 18:56:43.815760399 +0000 UTC m=+846.278303282" lastFinishedPulling="2026-01-27 18:56:58.352803586 +0000 UTC m=+860.815346469" observedRunningTime="2026-01-27 18:56:59.540427907 +0000 UTC m=+862.002970790" watchObservedRunningTime="2026-01-27 18:56:59.544215115 +0000 UTC m=+862.006757998" Jan 27 18:56:59 crc kubenswrapper[4853]: I0127 18:56:59.554031 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/heat-operator-controller-manager-575ffb885b-bx595" podStartSLOduration=3.119278845 podStartE2EDuration="17.554012812s" podCreationTimestamp="2026-01-27 18:56:42 +0000 UTC" firstStartedPulling="2026-01-27 18:56:43.9184804 +0000 UTC m=+846.381023283" lastFinishedPulling="2026-01-27 18:56:58.353214367 +0000 UTC m=+860.815757250" observedRunningTime="2026-01-27 18:56:59.551716647 +0000 UTC m=+862.014259550" watchObservedRunningTime="2026-01-27 18:56:59.554012812 +0000 UTC m=+862.016555705" Jan 27 18:56:59 crc kubenswrapper[4853]: I0127 18:56:59.574274 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-gl44q" podStartSLOduration=3.357137334 podStartE2EDuration="17.574253966s" podCreationTimestamp="2026-01-27 18:56:42 +0000 UTC" firstStartedPulling="2026-01-27 18:56:44.153854429 +0000 UTC m=+846.616397312" lastFinishedPulling="2026-01-27 18:56:58.370971061 +0000 UTC m=+860.833513944" 
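The pod_startup_latency_tracker lines encode one relation: podStartE2EDuration = observedRunningTime − podCreationTimestamp, and podStartSLOduration = podStartE2EDuration − (lastFinishedPulling − firstStartedPulling), i.e. the startup latency with image-pull time excluded. Checked against the designate-operator entry above, using the monotonic m=+ offsets from the log:

// Verifying the designate-operator startup numbers from the log:
// podStartSLOduration = podStartE2EDuration - (lastFinishedPulling - firstStartedPulling).
package main

import "fmt"

func main() {
	const (
		e2e   = 17.45863545   // podStartE2EDuration, seconds
		first = 846.363393303 // firstStartedPulling, m=+ offset
		last  = 860.832921157 // lastFinishedPulling, m=+ offset
	)
	fmt.Printf("podStartSLOduration = %.9f\n", e2e-(last-first))
}

which prints 2.989107596, matching the logged podStartSLOduration exactly. The baremetal and infra operators below show the inverse pattern: nearly all of their ~24s end-to-end time survives into the SLO duration because their pulls only started after the cert mounts succeeded.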
observedRunningTime="2026-01-27 18:56:59.566948689 +0000 UTC m=+862.029491572" watchObservedRunningTime="2026-01-27 18:56:59.574253966 +0000 UTC m=+862.036796849" Jan 27 18:57:01 crc kubenswrapper[4853]: I0127 18:57:01.380882 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854gdmwx"] Jan 27 18:57:01 crc kubenswrapper[4853]: W0127 18:57:01.391097 4853 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9bd5a06a_f084_42ba_8f88_9be1cee0554a.slice/crio-e981d48dac65abb4286387c2bce14c8e51a097e1c1ca30bf43b297954b44e167 WatchSource:0}: Error finding container e981d48dac65abb4286387c2bce14c8e51a097e1c1ca30bf43b297954b44e167: Status 404 returned error can't find the container with id e981d48dac65abb4286387c2bce14c8e51a097e1c1ca30bf43b297954b44e167 Jan 27 18:57:01 crc kubenswrapper[4853]: I0127 18:57:01.427943 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-7d75bc88d5-qrs25"] Jan 27 18:57:01 crc kubenswrapper[4853]: I0127 18:57:01.477102 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854gdmwx" event={"ID":"9bd5a06a-f084-42ba-8f88-9be1cee0554a","Type":"ContainerStarted","Data":"e981d48dac65abb4286387c2bce14c8e51a097e1c1ca30bf43b297954b44e167"} Jan 27 18:57:01 crc kubenswrapper[4853]: I0127 18:57:01.478469 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-7d75bc88d5-qrs25" event={"ID":"613d8e60-1314-45a2-8bcc-250151f708d1","Type":"ContainerStarted","Data":"fe6754173dd33acc58ccefed618d4351c06796f1ef1cf50e63f40977be325940"} Jan 27 18:57:01 crc kubenswrapper[4853]: I0127 18:57:01.479946 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-cpsgc" event={"ID":"8621d6dd-2bac-4631-bad9-ed1f5ce6c9b5","Type":"ContainerStarted","Data":"3e141e912e9c120cd05af89ed5a1e135f62e30b6aa5e1367789d501c2bc7c5a4"} Jan 27 18:57:01 crc kubenswrapper[4853]: I0127 18:57:01.480262 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-cpsgc" Jan 27 18:57:01 crc kubenswrapper[4853]: I0127 18:57:01.481501 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-4pmv9" event={"ID":"01b08d09-41bb-4a7a-9af2-7fe597572169","Type":"ContainerStarted","Data":"e39df332aa894869d9e7ee13a6b63d3692c07287bc8979dd0175a0f4ef985c03"} Jan 27 18:57:01 crc kubenswrapper[4853]: I0127 18:57:01.482322 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-4pmv9" Jan 27 18:57:01 crc kubenswrapper[4853]: I0127 18:57:01.500527 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-655bf9cfbb-sj29r" event={"ID":"5db9a86f-dff3-4c54-a478-79ce384d78f7","Type":"ContainerStarted","Data":"9d6bb910df9e6bceefc721f4ce42ad9d094ecc051d3fb0725a2b59963ee428f9"} Jan 27 18:57:01 crc kubenswrapper[4853]: I0127 18:57:01.501383 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-655bf9cfbb-sj29r" Jan 27 18:57:01 crc kubenswrapper[4853]: I0127 18:57:01.507659 4853 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-qkdmn" event={"ID":"a5adf651-f6c5-4b00-a32f-bbd1ac9d5b43","Type":"ContainerStarted","Data":"16a496b78eb83d19484959e883df7a8208f5aa1c0db8a775bee84a833762d609"} Jan 27 18:57:01 crc kubenswrapper[4853]: I0127 18:57:01.507885 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-qkdmn" Jan 27 18:57:01 crc kubenswrapper[4853]: I0127 18:57:01.509034 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-cpsgc" podStartSLOduration=3.372716525 podStartE2EDuration="19.509019317s" podCreationTimestamp="2026-01-27 18:56:42 +0000 UTC" firstStartedPulling="2026-01-27 18:56:44.670099087 +0000 UTC m=+847.132641970" lastFinishedPulling="2026-01-27 18:57:00.806401889 +0000 UTC m=+863.268944762" observedRunningTime="2026-01-27 18:57:01.508274296 +0000 UTC m=+863.970817179" watchObservedRunningTime="2026-01-27 18:57:01.509019317 +0000 UTC m=+863.971562190" Jan 27 18:57:01 crc kubenswrapper[4853]: I0127 18:57:01.520821 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-67dd55ff59-n89p8" event={"ID":"7b18ea7d-8f47-450b-aa4b-0b75fc0c0581","Type":"ContainerStarted","Data":"6cd68018f6d7be817c5cac45b7dba8dbd0142c2097d9ea8ba3ad128c1352d51d"} Jan 27 18:57:01 crc kubenswrapper[4853]: I0127 18:57:01.521536 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-67dd55ff59-n89p8" Jan 27 18:57:01 crc kubenswrapper[4853]: I0127 18:57:01.530534 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-7ffd8d76d4-gvm5r" event={"ID":"ace486ae-a8c2-4aca-8719-528ecbed879f","Type":"ContainerStarted","Data":"b91210358401a390d051306ee92eed5f8406630db6250da83ba97b7a55f7aab2"} Jan 27 18:57:01 crc kubenswrapper[4853]: I0127 18:57:01.531153 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-7ffd8d76d4-gvm5r" Jan 27 18:57:01 crc kubenswrapper[4853]: I0127 18:57:01.533360 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-799bc87c89-mzmbv" event={"ID":"e32b4f39-5c23-4e91-92bc-ffd6b7694a5a","Type":"ContainerStarted","Data":"1ac6b53fadaff2df49893455293adece313d5e2f852b3b5fe49711fd935b2199"} Jan 27 18:57:01 crc kubenswrapper[4853]: I0127 18:57:01.533790 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-4pmv9" podStartSLOduration=5.137613824 podStartE2EDuration="19.533779649s" podCreationTimestamp="2026-01-27 18:56:42 +0000 UTC" firstStartedPulling="2026-01-27 18:56:43.956222739 +0000 UTC m=+846.418765622" lastFinishedPulling="2026-01-27 18:56:58.352388564 +0000 UTC m=+860.814931447" observedRunningTime="2026-01-27 18:57:01.526657327 +0000 UTC m=+863.989200200" watchObservedRunningTime="2026-01-27 18:57:01.533779649 +0000 UTC m=+863.996322532" Jan 27 18:57:01 crc kubenswrapper[4853]: I0127 18:57:01.533860 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-799bc87c89-mzmbv" Jan 27 18:57:01 crc kubenswrapper[4853]: I0127 18:57:01.565614 4853 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/cinder-operator-controller-manager-655bf9cfbb-sj29r" podStartSLOduration=4.885614894 podStartE2EDuration="19.56559624s" podCreationTimestamp="2026-01-27 18:56:42 +0000 UTC" firstStartedPulling="2026-01-27 18:56:43.673270482 +0000 UTC m=+846.135813365" lastFinishedPulling="2026-01-27 18:56:58.353251828 +0000 UTC m=+860.815794711" observedRunningTime="2026-01-27 18:57:01.552066347 +0000 UTC m=+864.014609230" watchObservedRunningTime="2026-01-27 18:57:01.56559624 +0000 UTC m=+864.028139123" Jan 27 18:57:01 crc kubenswrapper[4853]: I0127 18:57:01.585429 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/telemetry-operator-controller-manager-799bc87c89-mzmbv" podStartSLOduration=3.389551042 podStartE2EDuration="19.585408462s" podCreationTimestamp="2026-01-27 18:56:42 +0000 UTC" firstStartedPulling="2026-01-27 18:56:44.669367066 +0000 UTC m=+847.131909949" lastFinishedPulling="2026-01-27 18:57:00.865224486 +0000 UTC m=+863.327767369" observedRunningTime="2026-01-27 18:57:01.583466397 +0000 UTC m=+864.046009300" watchObservedRunningTime="2026-01-27 18:57:01.585408462 +0000 UTC m=+864.047951345" Jan 27 18:57:01 crc kubenswrapper[4853]: I0127 18:57:01.589551 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-qkdmn" podStartSLOduration=5.863236255 podStartE2EDuration="19.589538279s" podCreationTimestamp="2026-01-27 18:56:42 +0000 UTC" firstStartedPulling="2026-01-27 18:56:44.64336348 +0000 UTC m=+847.105906363" lastFinishedPulling="2026-01-27 18:56:58.369665504 +0000 UTC m=+860.832208387" observedRunningTime="2026-01-27 18:57:01.570405517 +0000 UTC m=+864.032948400" watchObservedRunningTime="2026-01-27 18:57:01.589538279 +0000 UTC m=+864.052081162" Jan 27 18:57:01 crc kubenswrapper[4853]: I0127 18:57:01.649796 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/glance-operator-controller-manager-67dd55ff59-n89p8" podStartSLOduration=5.428223409 podStartE2EDuration="19.649778366s" podCreationTimestamp="2026-01-27 18:56:42 +0000 UTC" firstStartedPulling="2026-01-27 18:56:44.150218756 +0000 UTC m=+846.612761629" lastFinishedPulling="2026-01-27 18:56:58.371773703 +0000 UTC m=+860.834316586" observedRunningTime="2026-01-27 18:57:01.647586054 +0000 UTC m=+864.110128927" watchObservedRunningTime="2026-01-27 18:57:01.649778366 +0000 UTC m=+864.112321249" Jan 27 18:57:01 crc kubenswrapper[4853]: I0127 18:57:01.652558 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/neutron-operator-controller-manager-7ffd8d76d4-gvm5r" podStartSLOduration=5.471392431 podStartE2EDuration="19.652549264s" podCreationTimestamp="2026-01-27 18:56:42 +0000 UTC" firstStartedPulling="2026-01-27 18:56:44.171999483 +0000 UTC m=+846.634542366" lastFinishedPulling="2026-01-27 18:56:58.353156326 +0000 UTC m=+860.815699199" observedRunningTime="2026-01-27 18:57:01.631746985 +0000 UTC m=+864.094289868" watchObservedRunningTime="2026-01-27 18:57:01.652549264 +0000 UTC m=+864.115092147" Jan 27 18:57:03 crc kubenswrapper[4853]: I0127 18:57:03.244835 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-7875d7675-cq4p4" Jan 27 18:57:03 crc kubenswrapper[4853]: I0127 18:57:03.319743 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="ready" pod="openstack-operators/manila-operator-controller-manager-849fcfbb6b-dbzp2" Jan 27 18:57:03 crc kubenswrapper[4853]: I0127 18:57:03.417361 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-qgstv" Jan 27 18:57:03 crc kubenswrapper[4853]: I0127 18:57:03.437367 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-gl44q" Jan 27 18:57:06 crc kubenswrapper[4853]: I0127 18:57:06.609709 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-bn7wr" event={"ID":"5b33f408-e905-4298-adfc-b113f89ecd36","Type":"ContainerStarted","Data":"d044cdfda0ffedbb70b3ece38ebb08d71ab6cd857c4445398898482eef175988"} Jan 27 18:57:06 crc kubenswrapper[4853]: I0127 18:57:06.611343 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854gdmwx" event={"ID":"9bd5a06a-f084-42ba-8f88-9be1cee0554a","Type":"ContainerStarted","Data":"62975b7e1466f502d95728fc816ff920d21228ed5c044188218399c0a23da2c8"} Jan 27 18:57:06 crc kubenswrapper[4853]: I0127 18:57:06.611477 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854gdmwx" Jan 27 18:57:06 crc kubenswrapper[4853]: I0127 18:57:06.613789 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-7d75bc88d5-qrs25" event={"ID":"613d8e60-1314-45a2-8bcc-250151f708d1","Type":"ContainerStarted","Data":"e6dbd5a4ed0c1c930d0b0a8bd933e33769f02c6c5cefbc8aab805911138c5c7f"} Jan 27 18:57:06 crc kubenswrapper[4853]: I0127 18:57:06.613844 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-7d75bc88d5-qrs25" Jan 27 18:57:06 crc kubenswrapper[4853]: I0127 18:57:06.615722 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-ddcbfd695-mrb2s" event={"ID":"d9757c33-a50c-4fa4-ab8d-270c2bed1459","Type":"ContainerStarted","Data":"3e9ed704458c1469bc92d73c19999f1399158e23999821cf70b1ccbdf014d4da"} Jan 27 18:57:06 crc kubenswrapper[4853]: I0127 18:57:06.615941 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-ddcbfd695-mrb2s" Jan 27 18:57:06 crc kubenswrapper[4853]: I0127 18:57:06.617209 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-767b8bc766-d4dcp" event={"ID":"aacb2032-25f3-4faf-a0ca-f980411b4ae2","Type":"ContainerStarted","Data":"d2449790ec9963085f88ce8c68868d266259a026cb01eddf0f9c7351891e9bb9"} Jan 27 18:57:06 crc kubenswrapper[4853]: I0127 18:57:06.617451 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-767b8bc766-d4dcp" Jan 27 18:57:06 crc kubenswrapper[4853]: I0127 18:57:06.630718 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-bn7wr" podStartSLOduration=4.313732421 podStartE2EDuration="24.630701051s" podCreationTimestamp="2026-01-27 18:56:42 +0000 UTC" firstStartedPulling="2026-01-27 18:56:44.705688026 +0000 UTC m=+847.168230909" 
lastFinishedPulling="2026-01-27 18:57:05.022656656 +0000 UTC m=+867.485199539" observedRunningTime="2026-01-27 18:57:06.629922859 +0000 UTC m=+869.092465752" watchObservedRunningTime="2026-01-27 18:57:06.630701051 +0000 UTC m=+869.093243934" Jan 27 18:57:06 crc kubenswrapper[4853]: I0127 18:57:06.649207 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-controller-manager-767b8bc766-d4dcp" podStartSLOduration=3.026062413 podStartE2EDuration="24.64903525s" podCreationTimestamp="2026-01-27 18:56:42 +0000 UTC" firstStartedPulling="2026-01-27 18:56:44.672846565 +0000 UTC m=+847.135389438" lastFinishedPulling="2026-01-27 18:57:06.295819392 +0000 UTC m=+868.758362275" observedRunningTime="2026-01-27 18:57:06.644945254 +0000 UTC m=+869.107488147" watchObservedRunningTime="2026-01-27 18:57:06.64903525 +0000 UTC m=+869.111578133" Jan 27 18:57:06 crc kubenswrapper[4853]: I0127 18:57:06.667736 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854gdmwx" podStartSLOduration=19.816754518 podStartE2EDuration="24.66772066s" podCreationTimestamp="2026-01-27 18:56:42 +0000 UTC" firstStartedPulling="2026-01-27 18:57:01.396209891 +0000 UTC m=+863.858752774" lastFinishedPulling="2026-01-27 18:57:06.247176043 +0000 UTC m=+868.709718916" observedRunningTime="2026-01-27 18:57:06.666915947 +0000 UTC m=+869.129458840" watchObservedRunningTime="2026-01-27 18:57:06.66772066 +0000 UTC m=+869.130263543" Jan 27 18:57:06 crc kubenswrapper[4853]: I0127 18:57:06.686035 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/nova-operator-controller-manager-ddcbfd695-mrb2s" podStartSLOduration=3.165595047 podStartE2EDuration="24.686015628s" podCreationTimestamp="2026-01-27 18:56:42 +0000 UTC" firstStartedPulling="2026-01-27 18:56:44.701578469 +0000 UTC m=+847.164121352" lastFinishedPulling="2026-01-27 18:57:06.22199905 +0000 UTC m=+868.684541933" observedRunningTime="2026-01-27 18:57:06.680945474 +0000 UTC m=+869.143488357" watchObservedRunningTime="2026-01-27 18:57:06.686015628 +0000 UTC m=+869.148558501" Jan 27 18:57:06 crc kubenswrapper[4853]: I0127 18:57:06.702254 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-controller-manager-7d75bc88d5-qrs25" podStartSLOduration=19.879944888 podStartE2EDuration="24.702237818s" podCreationTimestamp="2026-01-27 18:56:42 +0000 UTC" firstStartedPulling="2026-01-27 18:57:01.45052569 +0000 UTC m=+863.913068573" lastFinishedPulling="2026-01-27 18:57:06.27281862 +0000 UTC m=+868.735361503" observedRunningTime="2026-01-27 18:57:06.700669883 +0000 UTC m=+869.163212766" watchObservedRunningTime="2026-01-27 18:57:06.702237818 +0000 UTC m=+869.164780691" Jan 27 18:57:12 crc kubenswrapper[4853]: I0127 18:57:12.693049 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-65ff799cfd-jh7mx" Jan 27 18:57:12 crc kubenswrapper[4853]: I0127 18:57:12.707511 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-655bf9cfbb-sj29r" Jan 27 18:57:12 crc kubenswrapper[4853]: I0127 18:57:12.717339 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-77554cdc5c-mn6nj" Jan 27 18:57:12 crc 
kubenswrapper[4853]: I0127 18:57:12.846960 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-575ffb885b-bx595" Jan 27 18:57:12 crc kubenswrapper[4853]: I0127 18:57:12.923278 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-4pmv9" Jan 27 18:57:13 crc kubenswrapper[4853]: I0127 18:57:13.088210 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-67dd55ff59-n89p8" Jan 27 18:57:13 crc kubenswrapper[4853]: I0127 18:57:13.160609 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-qkdmn" Jan 27 18:57:13 crc kubenswrapper[4853]: I0127 18:57:13.183807 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-7ffd8d76d4-gvm5r" Jan 27 18:57:13 crc kubenswrapper[4853]: I0127 18:57:13.216877 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-ddcbfd695-mrb2s" Jan 27 18:57:13 crc kubenswrapper[4853]: I0127 18:57:13.587371 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/telemetry-operator-controller-manager-799bc87c89-mzmbv" Jan 27 18:57:13 crc kubenswrapper[4853]: I0127 18:57:13.631976 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-bn7wr" Jan 27 18:57:13 crc kubenswrapper[4853]: I0127 18:57:13.633987 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-bn7wr" Jan 27 18:57:13 crc kubenswrapper[4853]: I0127 18:57:13.701930 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-cpsgc" Jan 27 18:57:13 crc kubenswrapper[4853]: I0127 18:57:13.784694 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-767b8bc766-d4dcp" Jan 27 18:57:15 crc kubenswrapper[4853]: I0127 18:57:15.206401 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/fede2ab9-a2b5-45f5-bac7-daa8d576d23f-webhook-certs\") pod \"openstack-operator-controller-manager-bf776578d-kb6wk\" (UID: \"fede2ab9-a2b5-45f5-bac7-daa8d576d23f\") " pod="openstack-operators/openstack-operator-controller-manager-bf776578d-kb6wk" Jan 27 18:57:15 crc kubenswrapper[4853]: I0127 18:57:15.214839 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/fede2ab9-a2b5-45f5-bac7-daa8d576d23f-webhook-certs\") pod \"openstack-operator-controller-manager-bf776578d-kb6wk\" (UID: \"fede2ab9-a2b5-45f5-bac7-daa8d576d23f\") " pod="openstack-operators/openstack-operator-controller-manager-bf776578d-kb6wk" Jan 27 18:57:15 crc kubenswrapper[4853]: I0127 18:57:15.294592 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-bf776578d-kb6wk" Jan 27 18:57:15 crc kubenswrapper[4853]: W0127 18:57:15.534519 4853 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfede2ab9_a2b5_45f5_bac7_daa8d576d23f.slice/crio-5184608d6525028634ec9e7c84724b4f03005256a09ec9e9f18e73b767e4361c WatchSource:0}: Error finding container 5184608d6525028634ec9e7c84724b4f03005256a09ec9e9f18e73b767e4361c: Status 404 returned error can't find the container with id 5184608d6525028634ec9e7c84724b4f03005256a09ec9e9f18e73b767e4361c Jan 27 18:57:15 crc kubenswrapper[4853]: I0127 18:57:15.536458 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-bf776578d-kb6wk"] Jan 27 18:57:15 crc kubenswrapper[4853]: I0127 18:57:15.682306 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-bf776578d-kb6wk" event={"ID":"fede2ab9-a2b5-45f5-bac7-daa8d576d23f","Type":"ContainerStarted","Data":"5184608d6525028634ec9e7c84724b4f03005256a09ec9e9f18e73b767e4361c"} Jan 27 18:57:18 crc kubenswrapper[4853]: I0127 18:57:18.852872 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-7d75bc88d5-qrs25" Jan 27 18:57:18 crc kubenswrapper[4853]: I0127 18:57:18.949560 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854gdmwx" Jan 27 18:57:21 crc kubenswrapper[4853]: I0127 18:57:20.723832 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-bf776578d-kb6wk" event={"ID":"fede2ab9-a2b5-45f5-bac7-daa8d576d23f","Type":"ContainerStarted","Data":"1564258bd5f53a92753d28023d0b4a3c0e5357e15c7ced16ee7d7d3b36df44fc"} Jan 27 18:57:21 crc kubenswrapper[4853]: I0127 18:57:21.728775 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-bf776578d-kb6wk" Jan 27 18:57:21 crc kubenswrapper[4853]: I0127 18:57:21.756021 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-manager-bf776578d-kb6wk" podStartSLOduration=38.755993172 podStartE2EDuration="38.755993172s" podCreationTimestamp="2026-01-27 18:56:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:57:21.753164742 +0000 UTC m=+884.215707625" watchObservedRunningTime="2026-01-27 18:57:21.755993172 +0000 UTC m=+884.218536075" Jan 27 18:57:22 crc kubenswrapper[4853]: I0127 18:57:22.737483 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-2ls6m" event={"ID":"98c9ef8d-ccf0-4c4e-83f3-53451532f0ad","Type":"ContainerStarted","Data":"884adb6b36e44f338bbc222b4e846643e37a3bdcc9cad518916e4a2bf3509eb8"} Jan 27 18:57:22 crc kubenswrapper[4853]: I0127 18:57:22.739286 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-55f684fd56-flf9f" event={"ID":"bee4ca26-dd1a-4747-8bf3-f152d8236270","Type":"ContainerStarted","Data":"eee16f3737b2b8a16d2525ce77d06307e3a8611bafe6c1e5ec32bd2c7d61c656"} Jan 27 18:57:22 crc kubenswrapper[4853]: I0127 
18:57:22.739622 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-55f684fd56-flf9f" Jan 27 18:57:22 crc kubenswrapper[4853]: I0127 18:57:22.741452 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-768b776ffb-dj2lw" event={"ID":"e279285c-c536-46b4-b133-7c23811a725a","Type":"ContainerStarted","Data":"b87489b13ceea1f6d4b7bcec925c6cd43b801e66eecc327aea53a5d1a25ac2c5"} Jan 27 18:57:22 crc kubenswrapper[4853]: I0127 18:57:22.741847 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-768b776ffb-dj2lw" Jan 27 18:57:22 crc kubenswrapper[4853]: I0127 18:57:22.760574 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-2ls6m" podStartSLOduration=2.585561986 podStartE2EDuration="39.760556805s" podCreationTimestamp="2026-01-27 18:56:43 +0000 UTC" firstStartedPulling="2026-01-27 18:56:44.700857969 +0000 UTC m=+847.163400852" lastFinishedPulling="2026-01-27 18:57:21.875852788 +0000 UTC m=+884.338395671" observedRunningTime="2026-01-27 18:57:22.758851187 +0000 UTC m=+885.221394110" watchObservedRunningTime="2026-01-27 18:57:22.760556805 +0000 UTC m=+885.223099688" Jan 27 18:57:22 crc kubenswrapper[4853]: I0127 18:57:22.778637 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-controller-manager-55f684fd56-flf9f" podStartSLOduration=3.565579869 podStartE2EDuration="40.778616907s" podCreationTimestamp="2026-01-27 18:56:42 +0000 UTC" firstStartedPulling="2026-01-27 18:56:44.663429888 +0000 UTC m=+847.125972771" lastFinishedPulling="2026-01-27 18:57:21.876466926 +0000 UTC m=+884.339009809" observedRunningTime="2026-01-27 18:57:22.773932824 +0000 UTC m=+885.236475717" watchObservedRunningTime="2026-01-27 18:57:22.778616907 +0000 UTC m=+885.241159800" Jan 27 18:57:22 crc kubenswrapper[4853]: I0127 18:57:22.791322 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ironic-operator-controller-manager-768b776ffb-dj2lw" podStartSLOduration=2.822107152 podStartE2EDuration="40.791303986s" podCreationTimestamp="2026-01-27 18:56:42 +0000 UTC" firstStartedPulling="2026-01-27 18:56:43.906454819 +0000 UTC m=+846.368997702" lastFinishedPulling="2026-01-27 18:57:21.875651653 +0000 UTC m=+884.338194536" observedRunningTime="2026-01-27 18:57:22.786603513 +0000 UTC m=+885.249146416" watchObservedRunningTime="2026-01-27 18:57:22.791303986 +0000 UTC m=+885.253846869" Jan 27 18:57:25 crc kubenswrapper[4853]: I0127 18:57:25.302943 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-bf776578d-kb6wk" Jan 27 18:57:32 crc kubenswrapper[4853]: I0127 18:57:32.964574 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-768b776ffb-dj2lw" Jan 27 18:57:33 crc kubenswrapper[4853]: I0127 18:57:33.282334 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-55f684fd56-flf9f" Jan 27 18:57:35 crc kubenswrapper[4853]: I0127 18:57:35.541878 4853 patch_prober.go:28] interesting pod/machine-config-daemon-6gqj2 container/machine-config-daemon namespace/openshift-machine-config-operator: 
Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 18:57:35 crc kubenswrapper[4853]: I0127 18:57:35.541977 4853 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6gqj2" podUID="b8a89b1e-bef8-4cb7-930c-480d3125778c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 18:57:49 crc kubenswrapper[4853]: I0127 18:57:49.815460 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-2vxkg"] Jan 27 18:57:49 crc kubenswrapper[4853]: I0127 18:57:49.817371 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-2vxkg" Jan 27 18:57:49 crc kubenswrapper[4853]: I0127 18:57:49.820013 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns" Jan 27 18:57:49 crc kubenswrapper[4853]: I0127 18:57:49.820422 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-vx7n7" Jan 27 18:57:49 crc kubenswrapper[4853]: I0127 18:57:49.820711 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt" Jan 27 18:57:49 crc kubenswrapper[4853]: I0127 18:57:49.824065 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-2vxkg"] Jan 27 18:57:49 crc kubenswrapper[4853]: I0127 18:57:49.824379 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt" Jan 27 18:57:49 crc kubenswrapper[4853]: I0127 18:57:49.867556 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-sfbt8"] Jan 27 18:57:49 crc kubenswrapper[4853]: I0127 18:57:49.868896 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-sfbt8" Jan 27 18:57:49 crc kubenswrapper[4853]: I0127 18:57:49.872035 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc" Jan 27 18:57:49 crc kubenswrapper[4853]: I0127 18:57:49.875395 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-sfbt8"] Jan 27 18:57:50 crc kubenswrapper[4853]: I0127 18:57:50.015743 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ad0731ff-55d1-42e9-993e-606177c224f6-config\") pod \"dnsmasq-dns-78dd6ddcc-sfbt8\" (UID: \"ad0731ff-55d1-42e9-993e-606177c224f6\") " pod="openstack/dnsmasq-dns-78dd6ddcc-sfbt8" Jan 27 18:57:50 crc kubenswrapper[4853]: I0127 18:57:50.016054 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b0d655dc-4bba-4edf-aa27-aa75d0ae83d7-config\") pod \"dnsmasq-dns-675f4bcbfc-2vxkg\" (UID: \"b0d655dc-4bba-4edf-aa27-aa75d0ae83d7\") " pod="openstack/dnsmasq-dns-675f4bcbfc-2vxkg" Jan 27 18:57:50 crc kubenswrapper[4853]: I0127 18:57:50.016181 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-25jcc\" (UniqueName: \"kubernetes.io/projected/ad0731ff-55d1-42e9-993e-606177c224f6-kube-api-access-25jcc\") pod \"dnsmasq-dns-78dd6ddcc-sfbt8\" (UID: \"ad0731ff-55d1-42e9-993e-606177c224f6\") " pod="openstack/dnsmasq-dns-78dd6ddcc-sfbt8" Jan 27 18:57:50 crc kubenswrapper[4853]: I0127 18:57:50.016328 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ad0731ff-55d1-42e9-993e-606177c224f6-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-sfbt8\" (UID: \"ad0731ff-55d1-42e9-993e-606177c224f6\") " pod="openstack/dnsmasq-dns-78dd6ddcc-sfbt8" Jan 27 18:57:50 crc kubenswrapper[4853]: I0127 18:57:50.016430 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9cqkp\" (UniqueName: \"kubernetes.io/projected/b0d655dc-4bba-4edf-aa27-aa75d0ae83d7-kube-api-access-9cqkp\") pod \"dnsmasq-dns-675f4bcbfc-2vxkg\" (UID: \"b0d655dc-4bba-4edf-aa27-aa75d0ae83d7\") " pod="openstack/dnsmasq-dns-675f4bcbfc-2vxkg" Jan 27 18:57:50 crc kubenswrapper[4853]: I0127 18:57:50.117173 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-25jcc\" (UniqueName: \"kubernetes.io/projected/ad0731ff-55d1-42e9-993e-606177c224f6-kube-api-access-25jcc\") pod \"dnsmasq-dns-78dd6ddcc-sfbt8\" (UID: \"ad0731ff-55d1-42e9-993e-606177c224f6\") " pod="openstack/dnsmasq-dns-78dd6ddcc-sfbt8" Jan 27 18:57:50 crc kubenswrapper[4853]: I0127 18:57:50.117237 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ad0731ff-55d1-42e9-993e-606177c224f6-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-sfbt8\" (UID: \"ad0731ff-55d1-42e9-993e-606177c224f6\") " pod="openstack/dnsmasq-dns-78dd6ddcc-sfbt8" Jan 27 18:57:50 crc kubenswrapper[4853]: I0127 18:57:50.117295 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9cqkp\" (UniqueName: \"kubernetes.io/projected/b0d655dc-4bba-4edf-aa27-aa75d0ae83d7-kube-api-access-9cqkp\") pod \"dnsmasq-dns-675f4bcbfc-2vxkg\" (UID: \"b0d655dc-4bba-4edf-aa27-aa75d0ae83d7\") " 
pod="openstack/dnsmasq-dns-675f4bcbfc-2vxkg" Jan 27 18:57:50 crc kubenswrapper[4853]: I0127 18:57:50.117345 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ad0731ff-55d1-42e9-993e-606177c224f6-config\") pod \"dnsmasq-dns-78dd6ddcc-sfbt8\" (UID: \"ad0731ff-55d1-42e9-993e-606177c224f6\") " pod="openstack/dnsmasq-dns-78dd6ddcc-sfbt8" Jan 27 18:57:50 crc kubenswrapper[4853]: I0127 18:57:50.117374 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b0d655dc-4bba-4edf-aa27-aa75d0ae83d7-config\") pod \"dnsmasq-dns-675f4bcbfc-2vxkg\" (UID: \"b0d655dc-4bba-4edf-aa27-aa75d0ae83d7\") " pod="openstack/dnsmasq-dns-675f4bcbfc-2vxkg" Jan 27 18:57:50 crc kubenswrapper[4853]: I0127 18:57:50.118400 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b0d655dc-4bba-4edf-aa27-aa75d0ae83d7-config\") pod \"dnsmasq-dns-675f4bcbfc-2vxkg\" (UID: \"b0d655dc-4bba-4edf-aa27-aa75d0ae83d7\") " pod="openstack/dnsmasq-dns-675f4bcbfc-2vxkg" Jan 27 18:57:50 crc kubenswrapper[4853]: I0127 18:57:50.118414 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ad0731ff-55d1-42e9-993e-606177c224f6-config\") pod \"dnsmasq-dns-78dd6ddcc-sfbt8\" (UID: \"ad0731ff-55d1-42e9-993e-606177c224f6\") " pod="openstack/dnsmasq-dns-78dd6ddcc-sfbt8" Jan 27 18:57:50 crc kubenswrapper[4853]: I0127 18:57:50.118785 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ad0731ff-55d1-42e9-993e-606177c224f6-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-sfbt8\" (UID: \"ad0731ff-55d1-42e9-993e-606177c224f6\") " pod="openstack/dnsmasq-dns-78dd6ddcc-sfbt8" Jan 27 18:57:50 crc kubenswrapper[4853]: I0127 18:57:50.148865 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-25jcc\" (UniqueName: \"kubernetes.io/projected/ad0731ff-55d1-42e9-993e-606177c224f6-kube-api-access-25jcc\") pod \"dnsmasq-dns-78dd6ddcc-sfbt8\" (UID: \"ad0731ff-55d1-42e9-993e-606177c224f6\") " pod="openstack/dnsmasq-dns-78dd6ddcc-sfbt8" Jan 27 18:57:50 crc kubenswrapper[4853]: I0127 18:57:50.148881 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9cqkp\" (UniqueName: \"kubernetes.io/projected/b0d655dc-4bba-4edf-aa27-aa75d0ae83d7-kube-api-access-9cqkp\") pod \"dnsmasq-dns-675f4bcbfc-2vxkg\" (UID: \"b0d655dc-4bba-4edf-aa27-aa75d0ae83d7\") " pod="openstack/dnsmasq-dns-675f4bcbfc-2vxkg" Jan 27 18:57:50 crc kubenswrapper[4853]: I0127 18:57:50.189648 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-sfbt8" Jan 27 18:57:50 crc kubenswrapper[4853]: I0127 18:57:50.442579 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-2vxkg" Jan 27 18:57:50 crc kubenswrapper[4853]: I0127 18:57:50.626942 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-sfbt8"] Jan 27 18:57:50 crc kubenswrapper[4853]: I0127 18:57:50.903461 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-2vxkg"] Jan 27 18:57:50 crc kubenswrapper[4853]: W0127 18:57:50.908726 4853 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb0d655dc_4bba_4edf_aa27_aa75d0ae83d7.slice/crio-39c62da8903ba5abc85a85d07e7797116ae0ddc4a2d4e1ea12f10b1938d614b7 WatchSource:0}: Error finding container 39c62da8903ba5abc85a85d07e7797116ae0ddc4a2d4e1ea12f10b1938d614b7: Status 404 returned error can't find the container with id 39c62da8903ba5abc85a85d07e7797116ae0ddc4a2d4e1ea12f10b1938d614b7 Jan 27 18:57:50 crc kubenswrapper[4853]: I0127 18:57:50.918586 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-sfbt8" event={"ID":"ad0731ff-55d1-42e9-993e-606177c224f6","Type":"ContainerStarted","Data":"7e84e51d542588a179c0334282967538d8a9774b345a1c303ad2de53acaf7f5d"} Jan 27 18:57:50 crc kubenswrapper[4853]: I0127 18:57:50.919458 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-2vxkg" event={"ID":"b0d655dc-4bba-4edf-aa27-aa75d0ae83d7","Type":"ContainerStarted","Data":"39c62da8903ba5abc85a85d07e7797116ae0ddc4a2d4e1ea12f10b1938d614b7"} Jan 27 18:57:51 crc kubenswrapper[4853]: I0127 18:57:51.684791 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-9bpxz"] Jan 27 18:57:51 crc kubenswrapper[4853]: I0127 18:57:51.686053 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-9bpxz" Jan 27 18:57:51 crc kubenswrapper[4853]: I0127 18:57:51.693958 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-9bpxz"] Jan 27 18:57:51 crc kubenswrapper[4853]: I0127 18:57:51.848538 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9549c452-0bc8-4b5c-a4a5-86dfd4572e35-utilities\") pod \"redhat-operators-9bpxz\" (UID: \"9549c452-0bc8-4b5c-a4a5-86dfd4572e35\") " pod="openshift-marketplace/redhat-operators-9bpxz" Jan 27 18:57:51 crc kubenswrapper[4853]: I0127 18:57:51.848584 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9549c452-0bc8-4b5c-a4a5-86dfd4572e35-catalog-content\") pod \"redhat-operators-9bpxz\" (UID: \"9549c452-0bc8-4b5c-a4a5-86dfd4572e35\") " pod="openshift-marketplace/redhat-operators-9bpxz" Jan 27 18:57:51 crc kubenswrapper[4853]: I0127 18:57:51.854039 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nwgmp\" (UniqueName: \"kubernetes.io/projected/9549c452-0bc8-4b5c-a4a5-86dfd4572e35-kube-api-access-nwgmp\") pod \"redhat-operators-9bpxz\" (UID: \"9549c452-0bc8-4b5c-a4a5-86dfd4572e35\") " pod="openshift-marketplace/redhat-operators-9bpxz" Jan 27 18:57:51 crc kubenswrapper[4853]: I0127 18:57:51.956807 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9549c452-0bc8-4b5c-a4a5-86dfd4572e35-utilities\") pod \"redhat-operators-9bpxz\" (UID: \"9549c452-0bc8-4b5c-a4a5-86dfd4572e35\") " pod="openshift-marketplace/redhat-operators-9bpxz" Jan 27 18:57:51 crc kubenswrapper[4853]: I0127 18:57:51.956859 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9549c452-0bc8-4b5c-a4a5-86dfd4572e35-catalog-content\") pod \"redhat-operators-9bpxz\" (UID: \"9549c452-0bc8-4b5c-a4a5-86dfd4572e35\") " pod="openshift-marketplace/redhat-operators-9bpxz" Jan 27 18:57:51 crc kubenswrapper[4853]: I0127 18:57:51.956905 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nwgmp\" (UniqueName: \"kubernetes.io/projected/9549c452-0bc8-4b5c-a4a5-86dfd4572e35-kube-api-access-nwgmp\") pod \"redhat-operators-9bpxz\" (UID: \"9549c452-0bc8-4b5c-a4a5-86dfd4572e35\") " pod="openshift-marketplace/redhat-operators-9bpxz" Jan 27 18:57:51 crc kubenswrapper[4853]: I0127 18:57:51.957408 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9549c452-0bc8-4b5c-a4a5-86dfd4572e35-utilities\") pod \"redhat-operators-9bpxz\" (UID: \"9549c452-0bc8-4b5c-a4a5-86dfd4572e35\") " pod="openshift-marketplace/redhat-operators-9bpxz" Jan 27 18:57:51 crc kubenswrapper[4853]: I0127 18:57:51.957442 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9549c452-0bc8-4b5c-a4a5-86dfd4572e35-catalog-content\") pod \"redhat-operators-9bpxz\" (UID: \"9549c452-0bc8-4b5c-a4a5-86dfd4572e35\") " pod="openshift-marketplace/redhat-operators-9bpxz" Jan 27 18:57:51 crc kubenswrapper[4853]: I0127 18:57:51.981167 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-nwgmp\" (UniqueName: \"kubernetes.io/projected/9549c452-0bc8-4b5c-a4a5-86dfd4572e35-kube-api-access-nwgmp\") pod \"redhat-operators-9bpxz\" (UID: \"9549c452-0bc8-4b5c-a4a5-86dfd4572e35\") " pod="openshift-marketplace/redhat-operators-9bpxz" Jan 27 18:57:52 crc kubenswrapper[4853]: I0127 18:57:52.067198 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-9bpxz" Jan 27 18:57:52 crc kubenswrapper[4853]: I0127 18:57:52.476735 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-2vxkg"] Jan 27 18:57:52 crc kubenswrapper[4853]: I0127 18:57:52.500878 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-z9c58"] Jan 27 18:57:52 crc kubenswrapper[4853]: I0127 18:57:52.501977 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-z9c58" Jan 27 18:57:52 crc kubenswrapper[4853]: I0127 18:57:52.523799 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-z9c58"] Jan 27 18:57:52 crc kubenswrapper[4853]: I0127 18:57:52.567733 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-9bpxz"] Jan 27 18:57:52 crc kubenswrapper[4853]: I0127 18:57:52.571391 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5318f74c-0368-48c1-be29-dbb63a36ba18-dns-svc\") pod \"dnsmasq-dns-666b6646f7-z9c58\" (UID: \"5318f74c-0368-48c1-be29-dbb63a36ba18\") " pod="openstack/dnsmasq-dns-666b6646f7-z9c58" Jan 27 18:57:52 crc kubenswrapper[4853]: I0127 18:57:52.571453 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rrfs8\" (UniqueName: \"kubernetes.io/projected/5318f74c-0368-48c1-be29-dbb63a36ba18-kube-api-access-rrfs8\") pod \"dnsmasq-dns-666b6646f7-z9c58\" (UID: \"5318f74c-0368-48c1-be29-dbb63a36ba18\") " pod="openstack/dnsmasq-dns-666b6646f7-z9c58" Jan 27 18:57:52 crc kubenswrapper[4853]: I0127 18:57:52.571495 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5318f74c-0368-48c1-be29-dbb63a36ba18-config\") pod \"dnsmasq-dns-666b6646f7-z9c58\" (UID: \"5318f74c-0368-48c1-be29-dbb63a36ba18\") " pod="openstack/dnsmasq-dns-666b6646f7-z9c58" Jan 27 18:57:52 crc kubenswrapper[4853]: I0127 18:57:52.671877 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5318f74c-0368-48c1-be29-dbb63a36ba18-dns-svc\") pod \"dnsmasq-dns-666b6646f7-z9c58\" (UID: \"5318f74c-0368-48c1-be29-dbb63a36ba18\") " pod="openstack/dnsmasq-dns-666b6646f7-z9c58" Jan 27 18:57:52 crc kubenswrapper[4853]: I0127 18:57:52.671922 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rrfs8\" (UniqueName: \"kubernetes.io/projected/5318f74c-0368-48c1-be29-dbb63a36ba18-kube-api-access-rrfs8\") pod \"dnsmasq-dns-666b6646f7-z9c58\" (UID: \"5318f74c-0368-48c1-be29-dbb63a36ba18\") " pod="openstack/dnsmasq-dns-666b6646f7-z9c58" Jan 27 18:57:52 crc kubenswrapper[4853]: I0127 18:57:52.671949 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5318f74c-0368-48c1-be29-dbb63a36ba18-config\") pod 
\"dnsmasq-dns-666b6646f7-z9c58\" (UID: \"5318f74c-0368-48c1-be29-dbb63a36ba18\") " pod="openstack/dnsmasq-dns-666b6646f7-z9c58" Jan 27 18:57:52 crc kubenswrapper[4853]: I0127 18:57:52.672787 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5318f74c-0368-48c1-be29-dbb63a36ba18-config\") pod \"dnsmasq-dns-666b6646f7-z9c58\" (UID: \"5318f74c-0368-48c1-be29-dbb63a36ba18\") " pod="openstack/dnsmasq-dns-666b6646f7-z9c58" Jan 27 18:57:52 crc kubenswrapper[4853]: I0127 18:57:52.673084 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5318f74c-0368-48c1-be29-dbb63a36ba18-dns-svc\") pod \"dnsmasq-dns-666b6646f7-z9c58\" (UID: \"5318f74c-0368-48c1-be29-dbb63a36ba18\") " pod="openstack/dnsmasq-dns-666b6646f7-z9c58" Jan 27 18:57:52 crc kubenswrapper[4853]: I0127 18:57:52.691393 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rrfs8\" (UniqueName: \"kubernetes.io/projected/5318f74c-0368-48c1-be29-dbb63a36ba18-kube-api-access-rrfs8\") pod \"dnsmasq-dns-666b6646f7-z9c58\" (UID: \"5318f74c-0368-48c1-be29-dbb63a36ba18\") " pod="openstack/dnsmasq-dns-666b6646f7-z9c58" Jan 27 18:57:52 crc kubenswrapper[4853]: I0127 18:57:52.806624 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-sfbt8"] Jan 27 18:57:52 crc kubenswrapper[4853]: I0127 18:57:52.837522 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-z9c58" Jan 27 18:57:52 crc kubenswrapper[4853]: I0127 18:57:52.842071 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-qhc9l"] Jan 27 18:57:52 crc kubenswrapper[4853]: I0127 18:57:52.843894 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-qhc9l" Jan 27 18:57:52 crc kubenswrapper[4853]: I0127 18:57:52.862415 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-qhc9l"] Jan 27 18:57:52 crc kubenswrapper[4853]: I0127 18:57:52.880865 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/324dd0b6-9b7c-4b19-a069-346afc03f8cc-config\") pod \"dnsmasq-dns-57d769cc4f-qhc9l\" (UID: \"324dd0b6-9b7c-4b19-a069-346afc03f8cc\") " pod="openstack/dnsmasq-dns-57d769cc4f-qhc9l" Jan 27 18:57:52 crc kubenswrapper[4853]: I0127 18:57:52.880938 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vr7wz\" (UniqueName: \"kubernetes.io/projected/324dd0b6-9b7c-4b19-a069-346afc03f8cc-kube-api-access-vr7wz\") pod \"dnsmasq-dns-57d769cc4f-qhc9l\" (UID: \"324dd0b6-9b7c-4b19-a069-346afc03f8cc\") " pod="openstack/dnsmasq-dns-57d769cc4f-qhc9l" Jan 27 18:57:52 crc kubenswrapper[4853]: I0127 18:57:52.880972 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/324dd0b6-9b7c-4b19-a069-346afc03f8cc-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-qhc9l\" (UID: \"324dd0b6-9b7c-4b19-a069-346afc03f8cc\") " pod="openstack/dnsmasq-dns-57d769cc4f-qhc9l" Jan 27 18:57:52 crc kubenswrapper[4853]: I0127 18:57:52.973283 4853 generic.go:334] "Generic (PLEG): container finished" podID="9549c452-0bc8-4b5c-a4a5-86dfd4572e35" containerID="fccea5a0c2cddda404c8371bda30ba89f69e2d470348347e9028c8e44fa37e9b" exitCode=0 Jan 27 18:57:52 crc kubenswrapper[4853]: I0127 18:57:52.973338 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9bpxz" event={"ID":"9549c452-0bc8-4b5c-a4a5-86dfd4572e35","Type":"ContainerDied","Data":"fccea5a0c2cddda404c8371bda30ba89f69e2d470348347e9028c8e44fa37e9b"} Jan 27 18:57:52 crc kubenswrapper[4853]: I0127 18:57:52.973373 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9bpxz" event={"ID":"9549c452-0bc8-4b5c-a4a5-86dfd4572e35","Type":"ContainerStarted","Data":"033330d019ad5567d39c61074818c9ff05223a4cffe739b2e6bbf314f8be482b"} Jan 27 18:57:52 crc kubenswrapper[4853]: I0127 18:57:52.982454 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/324dd0b6-9b7c-4b19-a069-346afc03f8cc-config\") pod \"dnsmasq-dns-57d769cc4f-qhc9l\" (UID: \"324dd0b6-9b7c-4b19-a069-346afc03f8cc\") " pod="openstack/dnsmasq-dns-57d769cc4f-qhc9l" Jan 27 18:57:52 crc kubenswrapper[4853]: I0127 18:57:52.982523 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vr7wz\" (UniqueName: \"kubernetes.io/projected/324dd0b6-9b7c-4b19-a069-346afc03f8cc-kube-api-access-vr7wz\") pod \"dnsmasq-dns-57d769cc4f-qhc9l\" (UID: \"324dd0b6-9b7c-4b19-a069-346afc03f8cc\") " pod="openstack/dnsmasq-dns-57d769cc4f-qhc9l" Jan 27 18:57:52 crc kubenswrapper[4853]: I0127 18:57:52.982553 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/324dd0b6-9b7c-4b19-a069-346afc03f8cc-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-qhc9l\" (UID: \"324dd0b6-9b7c-4b19-a069-346afc03f8cc\") " pod="openstack/dnsmasq-dns-57d769cc4f-qhc9l" Jan 27 18:57:52 crc kubenswrapper[4853]: I0127 18:57:52.983744 
4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/324dd0b6-9b7c-4b19-a069-346afc03f8cc-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-qhc9l\" (UID: \"324dd0b6-9b7c-4b19-a069-346afc03f8cc\") " pod="openstack/dnsmasq-dns-57d769cc4f-qhc9l" Jan 27 18:57:52 crc kubenswrapper[4853]: I0127 18:57:52.984085 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/324dd0b6-9b7c-4b19-a069-346afc03f8cc-config\") pod \"dnsmasq-dns-57d769cc4f-qhc9l\" (UID: \"324dd0b6-9b7c-4b19-a069-346afc03f8cc\") " pod="openstack/dnsmasq-dns-57d769cc4f-qhc9l" Jan 27 18:57:53 crc kubenswrapper[4853]: I0127 18:57:53.006427 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vr7wz\" (UniqueName: \"kubernetes.io/projected/324dd0b6-9b7c-4b19-a069-346afc03f8cc-kube-api-access-vr7wz\") pod \"dnsmasq-dns-57d769cc4f-qhc9l\" (UID: \"324dd0b6-9b7c-4b19-a069-346afc03f8cc\") " pod="openstack/dnsmasq-dns-57d769cc4f-qhc9l" Jan 27 18:57:53 crc kubenswrapper[4853]: I0127 18:57:53.255273 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-qhc9l" Jan 27 18:57:53 crc kubenswrapper[4853]: I0127 18:57:53.399379 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-z9c58"] Jan 27 18:57:53 crc kubenswrapper[4853]: W0127 18:57:53.431973 4853 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5318f74c_0368_48c1_be29_dbb63a36ba18.slice/crio-76253bbb5515da374e92ca2ccb9064f07e816c1ab5f5cef30c03401452db7045 WatchSource:0}: Error finding container 76253bbb5515da374e92ca2ccb9064f07e816c1ab5f5cef30c03401452db7045: Status 404 returned error can't find the container with id 76253bbb5515da374e92ca2ccb9064f07e816c1ab5f5cef30c03401452db7045 Jan 27 18:57:53 crc kubenswrapper[4853]: I0127 18:57:53.551192 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-qhc9l"] Jan 27 18:57:53 crc kubenswrapper[4853]: W0127 18:57:53.564907 4853 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod324dd0b6_9b7c_4b19_a069_346afc03f8cc.slice/crio-095e5c3af8bba63f6d45bf70f05b3de078e7dfee3b326a073aee811f3049d2c5 WatchSource:0}: Error finding container 095e5c3af8bba63f6d45bf70f05b3de078e7dfee3b326a073aee811f3049d2c5: Status 404 returned error can't find the container with id 095e5c3af8bba63f6d45bf70f05b3de078e7dfee3b326a073aee811f3049d2c5 Jan 27 18:57:53 crc kubenswrapper[4853]: I0127 18:57:53.666868 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Jan 27 18:57:53 crc kubenswrapper[4853]: I0127 18:57:53.667912 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 27 18:57:53 crc kubenswrapper[4853]: I0127 18:57:53.671694 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Jan 27 18:57:53 crc kubenswrapper[4853]: I0127 18:57:53.671730 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Jan 27 18:57:53 crc kubenswrapper[4853]: I0127 18:57:53.671877 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Jan 27 18:57:53 crc kubenswrapper[4853]: I0127 18:57:53.672088 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Jan 27 18:57:53 crc kubenswrapper[4853]: I0127 18:57:53.672178 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Jan 27 18:57:53 crc kubenswrapper[4853]: I0127 18:57:53.672535 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-rmpnq" Jan 27 18:57:53 crc kubenswrapper[4853]: I0127 18:57:53.672896 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Jan 27 18:57:53 crc kubenswrapper[4853]: I0127 18:57:53.687332 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 27 18:57:53 crc kubenswrapper[4853]: I0127 18:57:53.794501 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2f56570a-76ed-4182-b147-6288fa56d729-config-data\") pod \"rabbitmq-server-0\" (UID: \"2f56570a-76ed-4182-b147-6288fa56d729\") " pod="openstack/rabbitmq-server-0" Jan 27 18:57:53 crc kubenswrapper[4853]: I0127 18:57:53.794556 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-server-0\" (UID: \"2f56570a-76ed-4182-b147-6288fa56d729\") " pod="openstack/rabbitmq-server-0" Jan 27 18:57:53 crc kubenswrapper[4853]: I0127 18:57:53.794606 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/2f56570a-76ed-4182-b147-6288fa56d729-server-conf\") pod \"rabbitmq-server-0\" (UID: \"2f56570a-76ed-4182-b147-6288fa56d729\") " pod="openstack/rabbitmq-server-0" Jan 27 18:57:53 crc kubenswrapper[4853]: I0127 18:57:53.794638 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/2f56570a-76ed-4182-b147-6288fa56d729-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"2f56570a-76ed-4182-b147-6288fa56d729\") " pod="openstack/rabbitmq-server-0" Jan 27 18:57:53 crc kubenswrapper[4853]: I0127 18:57:53.794671 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/2f56570a-76ed-4182-b147-6288fa56d729-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"2f56570a-76ed-4182-b147-6288fa56d729\") " pod="openstack/rabbitmq-server-0" Jan 27 18:57:53 crc kubenswrapper[4853]: I0127 18:57:53.794698 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: 
\"kubernetes.io/secret/2f56570a-76ed-4182-b147-6288fa56d729-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"2f56570a-76ed-4182-b147-6288fa56d729\") " pod="openstack/rabbitmq-server-0" Jan 27 18:57:53 crc kubenswrapper[4853]: I0127 18:57:53.794739 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/2f56570a-76ed-4182-b147-6288fa56d729-pod-info\") pod \"rabbitmq-server-0\" (UID: \"2f56570a-76ed-4182-b147-6288fa56d729\") " pod="openstack/rabbitmq-server-0" Jan 27 18:57:53 crc kubenswrapper[4853]: I0127 18:57:53.794803 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/2f56570a-76ed-4182-b147-6288fa56d729-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"2f56570a-76ed-4182-b147-6288fa56d729\") " pod="openstack/rabbitmq-server-0" Jan 27 18:57:53 crc kubenswrapper[4853]: I0127 18:57:53.794831 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/2f56570a-76ed-4182-b147-6288fa56d729-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"2f56570a-76ed-4182-b147-6288fa56d729\") " pod="openstack/rabbitmq-server-0" Jan 27 18:57:53 crc kubenswrapper[4853]: I0127 18:57:53.794872 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/2f56570a-76ed-4182-b147-6288fa56d729-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"2f56570a-76ed-4182-b147-6288fa56d729\") " pod="openstack/rabbitmq-server-0" Jan 27 18:57:53 crc kubenswrapper[4853]: I0127 18:57:53.794897 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pvz2l\" (UniqueName: \"kubernetes.io/projected/2f56570a-76ed-4182-b147-6288fa56d729-kube-api-access-pvz2l\") pod \"rabbitmq-server-0\" (UID: \"2f56570a-76ed-4182-b147-6288fa56d729\") " pod="openstack/rabbitmq-server-0" Jan 27 18:57:53 crc kubenswrapper[4853]: I0127 18:57:53.896221 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/2f56570a-76ed-4182-b147-6288fa56d729-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"2f56570a-76ed-4182-b147-6288fa56d729\") " pod="openstack/rabbitmq-server-0" Jan 27 18:57:53 crc kubenswrapper[4853]: I0127 18:57:53.896279 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/2f56570a-76ed-4182-b147-6288fa56d729-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"2f56570a-76ed-4182-b147-6288fa56d729\") " pod="openstack/rabbitmq-server-0" Jan 27 18:57:53 crc kubenswrapper[4853]: I0127 18:57:53.896333 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/2f56570a-76ed-4182-b147-6288fa56d729-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"2f56570a-76ed-4182-b147-6288fa56d729\") " pod="openstack/rabbitmq-server-0" Jan 27 18:57:53 crc kubenswrapper[4853]: I0127 18:57:53.896369 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pvz2l\" (UniqueName: \"kubernetes.io/projected/2f56570a-76ed-4182-b147-6288fa56d729-kube-api-access-pvz2l\") pod \"rabbitmq-server-0\" 
(UID: \"2f56570a-76ed-4182-b147-6288fa56d729\") " pod="openstack/rabbitmq-server-0" Jan 27 18:57:53 crc kubenswrapper[4853]: I0127 18:57:53.896405 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2f56570a-76ed-4182-b147-6288fa56d729-config-data\") pod \"rabbitmq-server-0\" (UID: \"2f56570a-76ed-4182-b147-6288fa56d729\") " pod="openstack/rabbitmq-server-0" Jan 27 18:57:53 crc kubenswrapper[4853]: I0127 18:57:53.896441 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-server-0\" (UID: \"2f56570a-76ed-4182-b147-6288fa56d729\") " pod="openstack/rabbitmq-server-0" Jan 27 18:57:53 crc kubenswrapper[4853]: I0127 18:57:53.896485 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/2f56570a-76ed-4182-b147-6288fa56d729-server-conf\") pod \"rabbitmq-server-0\" (UID: \"2f56570a-76ed-4182-b147-6288fa56d729\") " pod="openstack/rabbitmq-server-0" Jan 27 18:57:53 crc kubenswrapper[4853]: I0127 18:57:53.896504 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/2f56570a-76ed-4182-b147-6288fa56d729-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"2f56570a-76ed-4182-b147-6288fa56d729\") " pod="openstack/rabbitmq-server-0" Jan 27 18:57:53 crc kubenswrapper[4853]: I0127 18:57:53.896535 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/2f56570a-76ed-4182-b147-6288fa56d729-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"2f56570a-76ed-4182-b147-6288fa56d729\") " pod="openstack/rabbitmq-server-0" Jan 27 18:57:53 crc kubenswrapper[4853]: I0127 18:57:53.896557 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/2f56570a-76ed-4182-b147-6288fa56d729-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"2f56570a-76ed-4182-b147-6288fa56d729\") " pod="openstack/rabbitmq-server-0" Jan 27 18:57:53 crc kubenswrapper[4853]: I0127 18:57:53.899232 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/2f56570a-76ed-4182-b147-6288fa56d729-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"2f56570a-76ed-4182-b147-6288fa56d729\") " pod="openstack/rabbitmq-server-0" Jan 27 18:57:53 crc kubenswrapper[4853]: I0127 18:57:53.899569 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/2f56570a-76ed-4182-b147-6288fa56d729-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"2f56570a-76ed-4182-b147-6288fa56d729\") " pod="openstack/rabbitmq-server-0" Jan 27 18:57:53 crc kubenswrapper[4853]: I0127 18:57:53.900653 4853 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-server-0\" (UID: \"2f56570a-76ed-4182-b147-6288fa56d729\") device mount path \"/mnt/openstack/pv03\"" pod="openstack/rabbitmq-server-0" Jan 27 18:57:53 crc kubenswrapper[4853]: I0127 18:57:53.900992 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/configmap/2f56570a-76ed-4182-b147-6288fa56d729-config-data\") pod \"rabbitmq-server-0\" (UID: \"2f56570a-76ed-4182-b147-6288fa56d729\") " pod="openstack/rabbitmq-server-0" Jan 27 18:57:53 crc kubenswrapper[4853]: I0127 18:57:53.901286 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/2f56570a-76ed-4182-b147-6288fa56d729-server-conf\") pod \"rabbitmq-server-0\" (UID: \"2f56570a-76ed-4182-b147-6288fa56d729\") " pod="openstack/rabbitmq-server-0" Jan 27 18:57:53 crc kubenswrapper[4853]: I0127 18:57:53.901290 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/2f56570a-76ed-4182-b147-6288fa56d729-pod-info\") pod \"rabbitmq-server-0\" (UID: \"2f56570a-76ed-4182-b147-6288fa56d729\") " pod="openstack/rabbitmq-server-0" Jan 27 18:57:53 crc kubenswrapper[4853]: I0127 18:57:53.901372 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/2f56570a-76ed-4182-b147-6288fa56d729-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"2f56570a-76ed-4182-b147-6288fa56d729\") " pod="openstack/rabbitmq-server-0" Jan 27 18:57:53 crc kubenswrapper[4853]: I0127 18:57:53.908173 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/2f56570a-76ed-4182-b147-6288fa56d729-pod-info\") pod \"rabbitmq-server-0\" (UID: \"2f56570a-76ed-4182-b147-6288fa56d729\") " pod="openstack/rabbitmq-server-0" Jan 27 18:57:53 crc kubenswrapper[4853]: I0127 18:57:53.913381 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/2f56570a-76ed-4182-b147-6288fa56d729-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"2f56570a-76ed-4182-b147-6288fa56d729\") " pod="openstack/rabbitmq-server-0" Jan 27 18:57:53 crc kubenswrapper[4853]: I0127 18:57:53.914107 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/2f56570a-76ed-4182-b147-6288fa56d729-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"2f56570a-76ed-4182-b147-6288fa56d729\") " pod="openstack/rabbitmq-server-0" Jan 27 18:57:53 crc kubenswrapper[4853]: I0127 18:57:53.914746 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/2f56570a-76ed-4182-b147-6288fa56d729-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"2f56570a-76ed-4182-b147-6288fa56d729\") " pod="openstack/rabbitmq-server-0" Jan 27 18:57:53 crc kubenswrapper[4853]: I0127 18:57:53.922447 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pvz2l\" (UniqueName: \"kubernetes.io/projected/2f56570a-76ed-4182-b147-6288fa56d729-kube-api-access-pvz2l\") pod \"rabbitmq-server-0\" (UID: \"2f56570a-76ed-4182-b147-6288fa56d729\") " pod="openstack/rabbitmq-server-0" Jan 27 18:57:53 crc kubenswrapper[4853]: I0127 18:57:53.958409 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-server-0\" (UID: \"2f56570a-76ed-4182-b147-6288fa56d729\") " pod="openstack/rabbitmq-server-0" Jan 27 18:57:53 crc kubenswrapper[4853]: I0127 18:57:53.966870 4853 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/rabbitmq-cell1-server-0"] Jan 27 18:57:53 crc kubenswrapper[4853]: I0127 18:57:53.968426 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 27 18:57:53 crc kubenswrapper[4853]: I0127 18:57:53.971641 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Jan 27 18:57:53 crc kubenswrapper[4853]: I0127 18:57:53.971898 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Jan 27 18:57:53 crc kubenswrapper[4853]: I0127 18:57:53.972494 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Jan 27 18:57:53 crc kubenswrapper[4853]: I0127 18:57:53.972602 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Jan 27 18:57:53 crc kubenswrapper[4853]: I0127 18:57:53.972722 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Jan 27 18:57:53 crc kubenswrapper[4853]: I0127 18:57:53.972916 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Jan 27 18:57:53 crc kubenswrapper[4853]: I0127 18:57:53.974677 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-rxrkc" Jan 27 18:57:53 crc kubenswrapper[4853]: I0127 18:57:53.990982 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 27 18:57:53 crc kubenswrapper[4853]: I0127 18:57:53.994448 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 27 18:57:53 crc kubenswrapper[4853]: I0127 18:57:53.994636 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9bpxz" event={"ID":"9549c452-0bc8-4b5c-a4a5-86dfd4572e35","Type":"ContainerStarted","Data":"ca7b509fc76d52b70e02e2d2df7444536d8bd5cf242ecd0a63e6549bc8d03725"} Jan 27 18:57:53 crc kubenswrapper[4853]: I0127 18:57:53.996528 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-z9c58" event={"ID":"5318f74c-0368-48c1-be29-dbb63a36ba18","Type":"ContainerStarted","Data":"76253bbb5515da374e92ca2ccb9064f07e816c1ab5f5cef30c03401452db7045"} Jan 27 18:57:54 crc kubenswrapper[4853]: I0127 18:57:54.002167 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-qhc9l" event={"ID":"324dd0b6-9b7c-4b19-a069-346afc03f8cc","Type":"ContainerStarted","Data":"095e5c3af8bba63f6d45bf70f05b3de078e7dfee3b326a073aee811f3049d2c5"} Jan 27 18:57:54 crc kubenswrapper[4853]: I0127 18:57:54.106194 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/525d82bf-e147-429f-8915-365aa48be00b-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"525d82bf-e147-429f-8915-365aa48be00b\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 18:57:54 crc kubenswrapper[4853]: I0127 18:57:54.106250 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/525d82bf-e147-429f-8915-365aa48be00b-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"525d82bf-e147-429f-8915-365aa48be00b\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 18:57:54 crc 
kubenswrapper[4853]: I0127 18:57:54.106276 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/525d82bf-e147-429f-8915-365aa48be00b-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"525d82bf-e147-429f-8915-365aa48be00b\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 18:57:54 crc kubenswrapper[4853]: I0127 18:57:54.106344 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dnfq8\" (UniqueName: \"kubernetes.io/projected/525d82bf-e147-429f-8915-365aa48be00b-kube-api-access-dnfq8\") pod \"rabbitmq-cell1-server-0\" (UID: \"525d82bf-e147-429f-8915-365aa48be00b\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 18:57:54 crc kubenswrapper[4853]: I0127 18:57:54.106361 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/525d82bf-e147-429f-8915-365aa48be00b-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"525d82bf-e147-429f-8915-365aa48be00b\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 18:57:54 crc kubenswrapper[4853]: I0127 18:57:54.106379 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/525d82bf-e147-429f-8915-365aa48be00b-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"525d82bf-e147-429f-8915-365aa48be00b\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 18:57:54 crc kubenswrapper[4853]: I0127 18:57:54.106402 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"525d82bf-e147-429f-8915-365aa48be00b\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 18:57:54 crc kubenswrapper[4853]: I0127 18:57:54.106416 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/525d82bf-e147-429f-8915-365aa48be00b-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"525d82bf-e147-429f-8915-365aa48be00b\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 18:57:54 crc kubenswrapper[4853]: I0127 18:57:54.106431 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/525d82bf-e147-429f-8915-365aa48be00b-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"525d82bf-e147-429f-8915-365aa48be00b\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 18:57:54 crc kubenswrapper[4853]: I0127 18:57:54.106457 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/525d82bf-e147-429f-8915-365aa48be00b-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"525d82bf-e147-429f-8915-365aa48be00b\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 18:57:54 crc kubenswrapper[4853]: I0127 18:57:54.106471 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/525d82bf-e147-429f-8915-365aa48be00b-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"525d82bf-e147-429f-8915-365aa48be00b\") " pod="openstack/rabbitmq-cell1-server-0" 
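The records around this point show the kubelet volume manager's standard three-phase sequence, repeated in this section for every pod (rabbitmq-server-0, rabbitmq-cell1-server-0, openstack-galera-0, openstack-cell1-galera-0, memcached-0, kube-state-metrics-0, ovn-controller-*): the reconciler first verifies each volume as controller-attached (reconciler_common.go:245, "VerifyControllerAttachedVolume started"), then starts the mount (reconciler_common.go:218, "MountVolume started"), and operation_generator.go reports MountVolume.MountDevice for local PVs (logging the device mount path, e.g. /mnt/openstack/pv10) before each per-volume MountVolume.SetUp succeeds. A minimal Go sketch of that ordering, under the caveat that the types and function names below are illustrative stand-ins and not the kubelet's actual operation-executor API:

// Simplified sketch of the mount sequence visible in the surrounding log.
// All names here (volume, mountPodVolumes, ...) are hypothetical stand-ins,
// not kubelet internals.
package main

import "fmt"

type volume struct {
	name   string
	plugin string // e.g. "kubernetes.io/configmap", "kubernetes.io/local-volume"
	device string // device mount path for local/block volumes, "" otherwise
}

func mountPodVolumes(pod string, vols []volume) {
	// Phase 1: the reconciler confirms each volume is controller-attached
	// (the "VerifyControllerAttachedVolume started" lines).
	for _, v := range vols {
		fmt.Printf("VerifyControllerAttachedVolume started for %q pod=%s\n", v.name, pod)
	}
	// Phase 2: the reconciler kicks off the mount for each verified volume
	// (the "MountVolume started" lines).
	for _, v := range vols {
		fmt.Printf("MountVolume started for %q pod=%s\n", v.name, pod)
		// Local-volume-backed PVs need a device-level mount first
		// (the "MountVolume.MountDevice succeeded ... device mount path" lines).
		if v.device != "" {
			fmt.Printf("MountVolume.MountDevice succeeded for %q device mount path %q\n", v.name, v.device)
		}
		// Phase 3: per-pod SetUp, after which the volume is usable by the pod
		// (the "MountVolume.SetUp succeeded" lines).
		fmt.Printf("MountVolume.SetUp succeeded for %q pod=%s\n", v.name, pod)
	}
}

func main() {
	mountPodVolumes("openstack/rabbitmq-cell1-server-0", []volume{
		{name: "server-conf", plugin: "kubernetes.io/configmap"},
		{name: "erlang-cookie-secret", plugin: "kubernetes.io/secret"},
		{name: "local-storage10-crc", plugin: "kubernetes.io/local-volume", device: "/mnt/openstack/pv10"},
	})
}

Unlike this sequential sketch, the real executor runs each volume's mount asynchronously, so SetUp completions interleave and finish out of start order; in the rabbitmq-cell1-server-0 records that follow, kube-api-access-dnfq8 completes at 18:57:54.235 while local-storage10-crc, which also needs the device mount, only completes at 18:57:54.269.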
Jan 27 18:57:54 crc kubenswrapper[4853]: I0127 18:57:54.207362 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dnfq8\" (UniqueName: \"kubernetes.io/projected/525d82bf-e147-429f-8915-365aa48be00b-kube-api-access-dnfq8\") pod \"rabbitmq-cell1-server-0\" (UID: \"525d82bf-e147-429f-8915-365aa48be00b\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 18:57:54 crc kubenswrapper[4853]: I0127 18:57:54.207404 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/525d82bf-e147-429f-8915-365aa48be00b-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"525d82bf-e147-429f-8915-365aa48be00b\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 18:57:54 crc kubenswrapper[4853]: I0127 18:57:54.207443 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/525d82bf-e147-429f-8915-365aa48be00b-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"525d82bf-e147-429f-8915-365aa48be00b\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 18:57:54 crc kubenswrapper[4853]: I0127 18:57:54.207478 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"525d82bf-e147-429f-8915-365aa48be00b\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 18:57:54 crc kubenswrapper[4853]: I0127 18:57:54.207493 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/525d82bf-e147-429f-8915-365aa48be00b-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"525d82bf-e147-429f-8915-365aa48be00b\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 18:57:54 crc kubenswrapper[4853]: I0127 18:57:54.207509 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/525d82bf-e147-429f-8915-365aa48be00b-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"525d82bf-e147-429f-8915-365aa48be00b\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 18:57:54 crc kubenswrapper[4853]: I0127 18:57:54.207536 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/525d82bf-e147-429f-8915-365aa48be00b-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"525d82bf-e147-429f-8915-365aa48be00b\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 18:57:54 crc kubenswrapper[4853]: I0127 18:57:54.207550 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/525d82bf-e147-429f-8915-365aa48be00b-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"525d82bf-e147-429f-8915-365aa48be00b\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 18:57:54 crc kubenswrapper[4853]: I0127 18:57:54.207584 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/525d82bf-e147-429f-8915-365aa48be00b-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"525d82bf-e147-429f-8915-365aa48be00b\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 18:57:54 crc kubenswrapper[4853]: I0127 18:57:54.207627 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" 
(UniqueName: \"kubernetes.io/projected/525d82bf-e147-429f-8915-365aa48be00b-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"525d82bf-e147-429f-8915-365aa48be00b\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 18:57:54 crc kubenswrapper[4853]: I0127 18:57:54.207667 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/525d82bf-e147-429f-8915-365aa48be00b-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"525d82bf-e147-429f-8915-365aa48be00b\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 18:57:54 crc kubenswrapper[4853]: I0127 18:57:54.216883 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/525d82bf-e147-429f-8915-365aa48be00b-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"525d82bf-e147-429f-8915-365aa48be00b\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 18:57:54 crc kubenswrapper[4853]: I0127 18:57:54.218189 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/525d82bf-e147-429f-8915-365aa48be00b-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"525d82bf-e147-429f-8915-365aa48be00b\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 18:57:54 crc kubenswrapper[4853]: I0127 18:57:54.218410 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/525d82bf-e147-429f-8915-365aa48be00b-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"525d82bf-e147-429f-8915-365aa48be00b\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 18:57:54 crc kubenswrapper[4853]: I0127 18:57:54.218979 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/525d82bf-e147-429f-8915-365aa48be00b-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"525d82bf-e147-429f-8915-365aa48be00b\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 18:57:54 crc kubenswrapper[4853]: I0127 18:57:54.219368 4853 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"525d82bf-e147-429f-8915-365aa48be00b\") device mount path \"/mnt/openstack/pv10\"" pod="openstack/rabbitmq-cell1-server-0" Jan 27 18:57:54 crc kubenswrapper[4853]: I0127 18:57:54.230595 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/525d82bf-e147-429f-8915-365aa48be00b-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"525d82bf-e147-429f-8915-365aa48be00b\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 18:57:54 crc kubenswrapper[4853]: I0127 18:57:54.230736 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/525d82bf-e147-429f-8915-365aa48be00b-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"525d82bf-e147-429f-8915-365aa48be00b\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 18:57:54 crc kubenswrapper[4853]: I0127 18:57:54.231100 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/525d82bf-e147-429f-8915-365aa48be00b-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"525d82bf-e147-429f-8915-365aa48be00b\") " 
pod="openstack/rabbitmq-cell1-server-0" Jan 27 18:57:54 crc kubenswrapper[4853]: I0127 18:57:54.233547 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/525d82bf-e147-429f-8915-365aa48be00b-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"525d82bf-e147-429f-8915-365aa48be00b\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 18:57:54 crc kubenswrapper[4853]: I0127 18:57:54.235170 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/525d82bf-e147-429f-8915-365aa48be00b-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"525d82bf-e147-429f-8915-365aa48be00b\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 18:57:54 crc kubenswrapper[4853]: I0127 18:57:54.235447 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dnfq8\" (UniqueName: \"kubernetes.io/projected/525d82bf-e147-429f-8915-365aa48be00b-kube-api-access-dnfq8\") pod \"rabbitmq-cell1-server-0\" (UID: \"525d82bf-e147-429f-8915-365aa48be00b\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 18:57:54 crc kubenswrapper[4853]: I0127 18:57:54.269499 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"525d82bf-e147-429f-8915-365aa48be00b\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 18:57:54 crc kubenswrapper[4853]: I0127 18:57:54.340509 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 27 18:57:54 crc kubenswrapper[4853]: I0127 18:57:54.469141 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 27 18:57:54 crc kubenswrapper[4853]: I0127 18:57:54.931397 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 27 18:57:55 crc kubenswrapper[4853]: I0127 18:57:55.014459 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"525d82bf-e147-429f-8915-365aa48be00b","Type":"ContainerStarted","Data":"d4d4530f5706893cf0d82d13aedeb6830f182eb813ed14c637fa8560ea1ec824"} Jan 27 18:57:55 crc kubenswrapper[4853]: I0127 18:57:55.017867 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"2f56570a-76ed-4182-b147-6288fa56d729","Type":"ContainerStarted","Data":"bf460227b7821518cff8fa0d310537b5b607be2b8bda55d379477cd3ee83de35"} Jan 27 18:57:55 crc kubenswrapper[4853]: I0127 18:57:55.237761 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"] Jan 27 18:57:55 crc kubenswrapper[4853]: I0127 18:57:55.241694 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-galera-0" Jan 27 18:57:55 crc kubenswrapper[4853]: I0127 18:57:55.249903 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Jan 27 18:57:55 crc kubenswrapper[4853]: I0127 18:57:55.251057 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc" Jan 27 18:57:55 crc kubenswrapper[4853]: I0127 18:57:55.251289 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-9jqbd" Jan 27 18:57:55 crc kubenswrapper[4853]: I0127 18:57:55.251844 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts" Jan 27 18:57:55 crc kubenswrapper[4853]: I0127 18:57:55.251992 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data" Jan 27 18:57:55 crc kubenswrapper[4853]: I0127 18:57:55.254093 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle" Jan 27 18:57:55 crc kubenswrapper[4853]: I0127 18:57:55.334027 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/ccbab76c-f034-4f3b-9dfe-fcaf98d45d87-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"ccbab76c-f034-4f3b-9dfe-fcaf98d45d87\") " pod="openstack/openstack-galera-0" Jan 27 18:57:55 crc kubenswrapper[4853]: I0127 18:57:55.334301 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pl7zm\" (UniqueName: \"kubernetes.io/projected/ccbab76c-f034-4f3b-9dfe-fcaf98d45d87-kube-api-access-pl7zm\") pod \"openstack-galera-0\" (UID: \"ccbab76c-f034-4f3b-9dfe-fcaf98d45d87\") " pod="openstack/openstack-galera-0" Jan 27 18:57:55 crc kubenswrapper[4853]: I0127 18:57:55.334369 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ccbab76c-f034-4f3b-9dfe-fcaf98d45d87-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"ccbab76c-f034-4f3b-9dfe-fcaf98d45d87\") " pod="openstack/openstack-galera-0" Jan 27 18:57:55 crc kubenswrapper[4853]: I0127 18:57:55.334425 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ccbab76c-f034-4f3b-9dfe-fcaf98d45d87-operator-scripts\") pod \"openstack-galera-0\" (UID: \"ccbab76c-f034-4f3b-9dfe-fcaf98d45d87\") " pod="openstack/openstack-galera-0" Jan 27 18:57:55 crc kubenswrapper[4853]: I0127 18:57:55.334461 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/ccbab76c-f034-4f3b-9dfe-fcaf98d45d87-config-data-default\") pod \"openstack-galera-0\" (UID: \"ccbab76c-f034-4f3b-9dfe-fcaf98d45d87\") " pod="openstack/openstack-galera-0" Jan 27 18:57:55 crc kubenswrapper[4853]: I0127 18:57:55.334496 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/ccbab76c-f034-4f3b-9dfe-fcaf98d45d87-config-data-generated\") pod \"openstack-galera-0\" (UID: \"ccbab76c-f034-4f3b-9dfe-fcaf98d45d87\") " pod="openstack/openstack-galera-0" Jan 27 18:57:55 crc kubenswrapper[4853]: I0127 18:57:55.334516 4853 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"openstack-galera-0\" (UID: \"ccbab76c-f034-4f3b-9dfe-fcaf98d45d87\") " pod="openstack/openstack-galera-0" Jan 27 18:57:55 crc kubenswrapper[4853]: I0127 18:57:55.334579 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/ccbab76c-f034-4f3b-9dfe-fcaf98d45d87-kolla-config\") pod \"openstack-galera-0\" (UID: \"ccbab76c-f034-4f3b-9dfe-fcaf98d45d87\") " pod="openstack/openstack-galera-0" Jan 27 18:57:55 crc kubenswrapper[4853]: I0127 18:57:55.438088 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/ccbab76c-f034-4f3b-9dfe-fcaf98d45d87-kolla-config\") pod \"openstack-galera-0\" (UID: \"ccbab76c-f034-4f3b-9dfe-fcaf98d45d87\") " pod="openstack/openstack-galera-0" Jan 27 18:57:55 crc kubenswrapper[4853]: I0127 18:57:55.443894 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/ccbab76c-f034-4f3b-9dfe-fcaf98d45d87-kolla-config\") pod \"openstack-galera-0\" (UID: \"ccbab76c-f034-4f3b-9dfe-fcaf98d45d87\") " pod="openstack/openstack-galera-0" Jan 27 18:57:55 crc kubenswrapper[4853]: I0127 18:57:55.444105 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/ccbab76c-f034-4f3b-9dfe-fcaf98d45d87-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"ccbab76c-f034-4f3b-9dfe-fcaf98d45d87\") " pod="openstack/openstack-galera-0" Jan 27 18:57:55 crc kubenswrapper[4853]: I0127 18:57:55.446802 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pl7zm\" (UniqueName: \"kubernetes.io/projected/ccbab76c-f034-4f3b-9dfe-fcaf98d45d87-kube-api-access-pl7zm\") pod \"openstack-galera-0\" (UID: \"ccbab76c-f034-4f3b-9dfe-fcaf98d45d87\") " pod="openstack/openstack-galera-0" Jan 27 18:57:55 crc kubenswrapper[4853]: I0127 18:57:55.446837 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ccbab76c-f034-4f3b-9dfe-fcaf98d45d87-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"ccbab76c-f034-4f3b-9dfe-fcaf98d45d87\") " pod="openstack/openstack-galera-0" Jan 27 18:57:55 crc kubenswrapper[4853]: I0127 18:57:55.446924 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ccbab76c-f034-4f3b-9dfe-fcaf98d45d87-operator-scripts\") pod \"openstack-galera-0\" (UID: \"ccbab76c-f034-4f3b-9dfe-fcaf98d45d87\") " pod="openstack/openstack-galera-0" Jan 27 18:57:55 crc kubenswrapper[4853]: I0127 18:57:55.446997 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/ccbab76c-f034-4f3b-9dfe-fcaf98d45d87-config-data-default\") pod \"openstack-galera-0\" (UID: \"ccbab76c-f034-4f3b-9dfe-fcaf98d45d87\") " pod="openstack/openstack-galera-0" Jan 27 18:57:55 crc kubenswrapper[4853]: I0127 18:57:55.447018 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/ccbab76c-f034-4f3b-9dfe-fcaf98d45d87-config-data-generated\") pod \"openstack-galera-0\" 
(UID: \"ccbab76c-f034-4f3b-9dfe-fcaf98d45d87\") " pod="openstack/openstack-galera-0" Jan 27 18:57:55 crc kubenswrapper[4853]: I0127 18:57:55.447043 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"openstack-galera-0\" (UID: \"ccbab76c-f034-4f3b-9dfe-fcaf98d45d87\") " pod="openstack/openstack-galera-0" Jan 27 18:57:55 crc kubenswrapper[4853]: I0127 18:57:55.447301 4853 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"openstack-galera-0\" (UID: \"ccbab76c-f034-4f3b-9dfe-fcaf98d45d87\") device mount path \"/mnt/openstack/pv09\"" pod="openstack/openstack-galera-0" Jan 27 18:57:55 crc kubenswrapper[4853]: I0127 18:57:55.455562 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/ccbab76c-f034-4f3b-9dfe-fcaf98d45d87-config-data-default\") pod \"openstack-galera-0\" (UID: \"ccbab76c-f034-4f3b-9dfe-fcaf98d45d87\") " pod="openstack/openstack-galera-0" Jan 27 18:57:55 crc kubenswrapper[4853]: I0127 18:57:55.456693 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ccbab76c-f034-4f3b-9dfe-fcaf98d45d87-operator-scripts\") pod \"openstack-galera-0\" (UID: \"ccbab76c-f034-4f3b-9dfe-fcaf98d45d87\") " pod="openstack/openstack-galera-0" Jan 27 18:57:55 crc kubenswrapper[4853]: I0127 18:57:55.463147 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/ccbab76c-f034-4f3b-9dfe-fcaf98d45d87-config-data-generated\") pod \"openstack-galera-0\" (UID: \"ccbab76c-f034-4f3b-9dfe-fcaf98d45d87\") " pod="openstack/openstack-galera-0" Jan 27 18:57:55 crc kubenswrapper[4853]: I0127 18:57:55.472420 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pl7zm\" (UniqueName: \"kubernetes.io/projected/ccbab76c-f034-4f3b-9dfe-fcaf98d45d87-kube-api-access-pl7zm\") pod \"openstack-galera-0\" (UID: \"ccbab76c-f034-4f3b-9dfe-fcaf98d45d87\") " pod="openstack/openstack-galera-0" Jan 27 18:57:55 crc kubenswrapper[4853]: I0127 18:57:55.479405 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ccbab76c-f034-4f3b-9dfe-fcaf98d45d87-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"ccbab76c-f034-4f3b-9dfe-fcaf98d45d87\") " pod="openstack/openstack-galera-0" Jan 27 18:57:55 crc kubenswrapper[4853]: I0127 18:57:55.504433 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/ccbab76c-f034-4f3b-9dfe-fcaf98d45d87-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"ccbab76c-f034-4f3b-9dfe-fcaf98d45d87\") " pod="openstack/openstack-galera-0" Jan 27 18:57:55 crc kubenswrapper[4853]: I0127 18:57:55.541353 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"openstack-galera-0\" (UID: \"ccbab76c-f034-4f3b-9dfe-fcaf98d45d87\") " pod="openstack/openstack-galera-0" Jan 27 18:57:55 crc kubenswrapper[4853]: I0127 18:57:55.570433 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-galera-0" Jan 27 18:57:56 crc kubenswrapper[4853]: I0127 18:57:56.062765 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Jan 27 18:57:56 crc kubenswrapper[4853]: W0127 18:57:56.069275 4853 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podccbab76c_f034_4f3b_9dfe_fcaf98d45d87.slice/crio-5c884d1c3c80f08d8fd979181a987ca0110bcf84f0a006055c0371c41fa293cf WatchSource:0}: Error finding container 5c884d1c3c80f08d8fd979181a987ca0110bcf84f0a006055c0371c41fa293cf: Status 404 returned error can't find the container with id 5c884d1c3c80f08d8fd979181a987ca0110bcf84f0a006055c0371c41fa293cf Jan 27 18:57:56 crc kubenswrapper[4853]: I0127 18:57:56.270659 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-7j77j"] Jan 27 18:57:56 crc kubenswrapper[4853]: I0127 18:57:56.272421 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-7j77j" Jan 27 18:57:56 crc kubenswrapper[4853]: I0127 18:57:56.280588 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-7j77j"] Jan 27 18:57:56 crc kubenswrapper[4853]: I0127 18:57:56.375956 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7aae743e-dcc5-404a-b0db-86933964d549-utilities\") pod \"community-operators-7j77j\" (UID: \"7aae743e-dcc5-404a-b0db-86933964d549\") " pod="openshift-marketplace/community-operators-7j77j" Jan 27 18:57:56 crc kubenswrapper[4853]: I0127 18:57:56.376291 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7aae743e-dcc5-404a-b0db-86933964d549-catalog-content\") pod \"community-operators-7j77j\" (UID: \"7aae743e-dcc5-404a-b0db-86933964d549\") " pod="openshift-marketplace/community-operators-7j77j" Jan 27 18:57:56 crc kubenswrapper[4853]: I0127 18:57:56.376479 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8lsxm\" (UniqueName: \"kubernetes.io/projected/7aae743e-dcc5-404a-b0db-86933964d549-kube-api-access-8lsxm\") pod \"community-operators-7j77j\" (UID: \"7aae743e-dcc5-404a-b0db-86933964d549\") " pod="openshift-marketplace/community-operators-7j77j" Jan 27 18:57:56 crc kubenswrapper[4853]: I0127 18:57:56.478267 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8lsxm\" (UniqueName: \"kubernetes.io/projected/7aae743e-dcc5-404a-b0db-86933964d549-kube-api-access-8lsxm\") pod \"community-operators-7j77j\" (UID: \"7aae743e-dcc5-404a-b0db-86933964d549\") " pod="openshift-marketplace/community-operators-7j77j" Jan 27 18:57:56 crc kubenswrapper[4853]: I0127 18:57:56.478333 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7aae743e-dcc5-404a-b0db-86933964d549-utilities\") pod \"community-operators-7j77j\" (UID: \"7aae743e-dcc5-404a-b0db-86933964d549\") " pod="openshift-marketplace/community-operators-7j77j" Jan 27 18:57:56 crc kubenswrapper[4853]: I0127 18:57:56.478359 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/7aae743e-dcc5-404a-b0db-86933964d549-catalog-content\") pod \"community-operators-7j77j\" (UID: \"7aae743e-dcc5-404a-b0db-86933964d549\") " pod="openshift-marketplace/community-operators-7j77j" Jan 27 18:57:56 crc kubenswrapper[4853]: I0127 18:57:56.478840 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7aae743e-dcc5-404a-b0db-86933964d549-catalog-content\") pod \"community-operators-7j77j\" (UID: \"7aae743e-dcc5-404a-b0db-86933964d549\") " pod="openshift-marketplace/community-operators-7j77j" Jan 27 18:57:56 crc kubenswrapper[4853]: I0127 18:57:56.479111 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7aae743e-dcc5-404a-b0db-86933964d549-utilities\") pod \"community-operators-7j77j\" (UID: \"7aae743e-dcc5-404a-b0db-86933964d549\") " pod="openshift-marketplace/community-operators-7j77j" Jan 27 18:57:56 crc kubenswrapper[4853]: I0127 18:57:56.504203 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8lsxm\" (UniqueName: \"kubernetes.io/projected/7aae743e-dcc5-404a-b0db-86933964d549-kube-api-access-8lsxm\") pod \"community-operators-7j77j\" (UID: \"7aae743e-dcc5-404a-b0db-86933964d549\") " pod="openshift-marketplace/community-operators-7j77j" Jan 27 18:57:56 crc kubenswrapper[4853]: I0127 18:57:56.553773 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 27 18:57:56 crc kubenswrapper[4853]: I0127 18:57:56.557235 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Jan 27 18:57:56 crc kubenswrapper[4853]: I0127 18:57:56.565893 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 27 18:57:56 crc kubenswrapper[4853]: I0127 18:57:56.570099 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts" Jan 27 18:57:56 crc kubenswrapper[4853]: I0127 18:57:56.570318 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc" Jan 27 18:57:56 crc kubenswrapper[4853]: I0127 18:57:56.570388 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data" Jan 27 18:57:56 crc kubenswrapper[4853]: I0127 18:57:56.570327 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell1-dockercfg-pdp96" Jan 27 18:57:56 crc kubenswrapper[4853]: I0127 18:57:56.629325 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-7j77j" Jan 27 18:57:56 crc kubenswrapper[4853]: I0127 18:57:56.691691 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/dbf533bd-2499-4724-b558-cf94c7017f3d-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"dbf533bd-2499-4724-b558-cf94c7017f3d\") " pod="openstack/openstack-cell1-galera-0" Jan 27 18:57:56 crc kubenswrapper[4853]: I0127 18:57:56.691838 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d6hmm\" (UniqueName: \"kubernetes.io/projected/dbf533bd-2499-4724-b558-cf94c7017f3d-kube-api-access-d6hmm\") pod \"openstack-cell1-galera-0\" (UID: \"dbf533bd-2499-4724-b558-cf94c7017f3d\") " pod="openstack/openstack-cell1-galera-0" Jan 27 18:57:56 crc kubenswrapper[4853]: I0127 18:57:56.691970 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/dbf533bd-2499-4724-b558-cf94c7017f3d-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"dbf533bd-2499-4724-b558-cf94c7017f3d\") " pod="openstack/openstack-cell1-galera-0" Jan 27 18:57:56 crc kubenswrapper[4853]: I0127 18:57:56.692012 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"openstack-cell1-galera-0\" (UID: \"dbf533bd-2499-4724-b558-cf94c7017f3d\") " pod="openstack/openstack-cell1-galera-0" Jan 27 18:57:56 crc kubenswrapper[4853]: I0127 18:57:56.692088 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/dbf533bd-2499-4724-b558-cf94c7017f3d-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"dbf533bd-2499-4724-b558-cf94c7017f3d\") " pod="openstack/openstack-cell1-galera-0" Jan 27 18:57:56 crc kubenswrapper[4853]: I0127 18:57:56.692257 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/dbf533bd-2499-4724-b558-cf94c7017f3d-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"dbf533bd-2499-4724-b558-cf94c7017f3d\") " pod="openstack/openstack-cell1-galera-0" Jan 27 18:57:56 crc kubenswrapper[4853]: I0127 18:57:56.692389 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dbf533bd-2499-4724-b558-cf94c7017f3d-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"dbf533bd-2499-4724-b558-cf94c7017f3d\") " pod="openstack/openstack-cell1-galera-0" Jan 27 18:57:56 crc kubenswrapper[4853]: I0127 18:57:56.695199 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dbf533bd-2499-4724-b558-cf94c7017f3d-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"dbf533bd-2499-4724-b558-cf94c7017f3d\") " pod="openstack/openstack-cell1-galera-0" Jan 27 18:57:56 crc kubenswrapper[4853]: I0127 18:57:56.812155 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/dbf533bd-2499-4724-b558-cf94c7017f3d-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"dbf533bd-2499-4724-b558-cf94c7017f3d\") " pod="openstack/openstack-cell1-galera-0" Jan 27 18:57:56 crc kubenswrapper[4853]: I0127 18:57:56.812603 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dbf533bd-2499-4724-b558-cf94c7017f3d-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"dbf533bd-2499-4724-b558-cf94c7017f3d\") " pod="openstack/openstack-cell1-galera-0" Jan 27 18:57:56 crc kubenswrapper[4853]: I0127 18:57:56.812747 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/dbf533bd-2499-4724-b558-cf94c7017f3d-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"dbf533bd-2499-4724-b558-cf94c7017f3d\") " pod="openstack/openstack-cell1-galera-0" Jan 27 18:57:56 crc kubenswrapper[4853]: I0127 18:57:56.812800 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d6hmm\" (UniqueName: \"kubernetes.io/projected/dbf533bd-2499-4724-b558-cf94c7017f3d-kube-api-access-d6hmm\") pod \"openstack-cell1-galera-0\" (UID: \"dbf533bd-2499-4724-b558-cf94c7017f3d\") " pod="openstack/openstack-cell1-galera-0" Jan 27 18:57:56 crc kubenswrapper[4853]: I0127 18:57:56.812853 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/dbf533bd-2499-4724-b558-cf94c7017f3d-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"dbf533bd-2499-4724-b558-cf94c7017f3d\") " pod="openstack/openstack-cell1-galera-0" Jan 27 18:57:56 crc kubenswrapper[4853]: I0127 18:57:56.812881 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"openstack-cell1-galera-0\" (UID: \"dbf533bd-2499-4724-b558-cf94c7017f3d\") " pod="openstack/openstack-cell1-galera-0" Jan 27 18:57:56 crc kubenswrapper[4853]: I0127 18:57:56.812929 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/dbf533bd-2499-4724-b558-cf94c7017f3d-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"dbf533bd-2499-4724-b558-cf94c7017f3d\") " pod="openstack/openstack-cell1-galera-0" Jan 27 18:57:56 crc kubenswrapper[4853]: I0127 18:57:56.813013 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/dbf533bd-2499-4724-b558-cf94c7017f3d-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"dbf533bd-2499-4724-b558-cf94c7017f3d\") " pod="openstack/openstack-cell1-galera-0" Jan 27 18:57:56 crc kubenswrapper[4853]: I0127 18:57:56.813492 4853 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"openstack-cell1-galera-0\" (UID: \"dbf533bd-2499-4724-b558-cf94c7017f3d\") device mount path \"/mnt/openstack/pv07\"" pod="openstack/openstack-cell1-galera-0" Jan 27 18:57:56 crc kubenswrapper[4853]: I0127 18:57:56.813896 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/dbf533bd-2499-4724-b558-cf94c7017f3d-config-data-generated\") pod 
\"openstack-cell1-galera-0\" (UID: \"dbf533bd-2499-4724-b558-cf94c7017f3d\") " pod="openstack/openstack-cell1-galera-0" Jan 27 18:57:56 crc kubenswrapper[4853]: I0127 18:57:56.814039 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/dbf533bd-2499-4724-b558-cf94c7017f3d-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"dbf533bd-2499-4724-b558-cf94c7017f3d\") " pod="openstack/openstack-cell1-galera-0" Jan 27 18:57:56 crc kubenswrapper[4853]: I0127 18:57:56.814291 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dbf533bd-2499-4724-b558-cf94c7017f3d-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"dbf533bd-2499-4724-b558-cf94c7017f3d\") " pod="openstack/openstack-cell1-galera-0" Jan 27 18:57:56 crc kubenswrapper[4853]: I0127 18:57:56.814381 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/dbf533bd-2499-4724-b558-cf94c7017f3d-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"dbf533bd-2499-4724-b558-cf94c7017f3d\") " pod="openstack/openstack-cell1-galera-0" Jan 27 18:57:56 crc kubenswrapper[4853]: I0127 18:57:56.819080 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/dbf533bd-2499-4724-b558-cf94c7017f3d-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"dbf533bd-2499-4724-b558-cf94c7017f3d\") " pod="openstack/openstack-cell1-galera-0" Jan 27 18:57:56 crc kubenswrapper[4853]: I0127 18:57:56.876203 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d6hmm\" (UniqueName: \"kubernetes.io/projected/dbf533bd-2499-4724-b558-cf94c7017f3d-kube-api-access-d6hmm\") pod \"openstack-cell1-galera-0\" (UID: \"dbf533bd-2499-4724-b558-cf94c7017f3d\") " pod="openstack/openstack-cell1-galera-0" Jan 27 18:57:56 crc kubenswrapper[4853]: I0127 18:57:56.894453 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dbf533bd-2499-4724-b558-cf94c7017f3d-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"dbf533bd-2499-4724-b558-cf94c7017f3d\") " pod="openstack/openstack-cell1-galera-0" Jan 27 18:57:56 crc kubenswrapper[4853]: I0127 18:57:56.914777 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"openstack-cell1-galera-0\" (UID: \"dbf533bd-2499-4724-b558-cf94c7017f3d\") " pod="openstack/openstack-cell1-galera-0" Jan 27 18:57:56 crc kubenswrapper[4853]: I0127 18:57:56.987466 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"] Jan 27 18:57:56 crc kubenswrapper[4853]: I0127 18:57:56.988455 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/memcached-0" Jan 27 18:57:56 crc kubenswrapper[4853]: I0127 18:57:56.991988 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-lnrrc" Jan 27 18:57:56 crc kubenswrapper[4853]: I0127 18:57:56.992301 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data" Jan 27 18:57:57 crc kubenswrapper[4853]: I0127 18:57:57.010006 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-memcached-svc" Jan 27 18:57:57 crc kubenswrapper[4853]: I0127 18:57:57.020873 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Jan 27 18:57:57 crc kubenswrapper[4853]: I0127 18:57:57.082715 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"ccbab76c-f034-4f3b-9dfe-fcaf98d45d87","Type":"ContainerStarted","Data":"5c884d1c3c80f08d8fd979181a987ca0110bcf84f0a006055c0371c41fa293cf"} Jan 27 18:57:57 crc kubenswrapper[4853]: I0127 18:57:57.119295 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/94965b7d-5efe-4ef3-aadf-41a550c47752-kolla-config\") pod \"memcached-0\" (UID: \"94965b7d-5efe-4ef3-aadf-41a550c47752\") " pod="openstack/memcached-0" Jan 27 18:57:57 crc kubenswrapper[4853]: I0127 18:57:57.119341 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/94965b7d-5efe-4ef3-aadf-41a550c47752-combined-ca-bundle\") pod \"memcached-0\" (UID: \"94965b7d-5efe-4ef3-aadf-41a550c47752\") " pod="openstack/memcached-0" Jan 27 18:57:57 crc kubenswrapper[4853]: I0127 18:57:57.119402 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/94965b7d-5efe-4ef3-aadf-41a550c47752-memcached-tls-certs\") pod \"memcached-0\" (UID: \"94965b7d-5efe-4ef3-aadf-41a550c47752\") " pod="openstack/memcached-0" Jan 27 18:57:57 crc kubenswrapper[4853]: I0127 18:57:57.119465 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zs2p5\" (UniqueName: \"kubernetes.io/projected/94965b7d-5efe-4ef3-aadf-41a550c47752-kube-api-access-zs2p5\") pod \"memcached-0\" (UID: \"94965b7d-5efe-4ef3-aadf-41a550c47752\") " pod="openstack/memcached-0" Jan 27 18:57:57 crc kubenswrapper[4853]: I0127 18:57:57.119499 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/94965b7d-5efe-4ef3-aadf-41a550c47752-config-data\") pod \"memcached-0\" (UID: \"94965b7d-5efe-4ef3-aadf-41a550c47752\") " pod="openstack/memcached-0" Jan 27 18:57:57 crc kubenswrapper[4853]: I0127 18:57:57.159428 4853 generic.go:334] "Generic (PLEG): container finished" podID="9549c452-0bc8-4b5c-a4a5-86dfd4572e35" containerID="ca7b509fc76d52b70e02e2d2df7444536d8bd5cf242ecd0a63e6549bc8d03725" exitCode=0 Jan 27 18:57:57 crc kubenswrapper[4853]: I0127 18:57:57.159476 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9bpxz" event={"ID":"9549c452-0bc8-4b5c-a4a5-86dfd4572e35","Type":"ContainerDied","Data":"ca7b509fc76d52b70e02e2d2df7444536d8bd5cf242ecd0a63e6549bc8d03725"} Jan 27 18:57:57 crc kubenswrapper[4853]: I0127 18:57:57.190278 4853 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Jan 27 18:57:57 crc kubenswrapper[4853]: I0127 18:57:57.221030 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/94965b7d-5efe-4ef3-aadf-41a550c47752-kolla-config\") pod \"memcached-0\" (UID: \"94965b7d-5efe-4ef3-aadf-41a550c47752\") " pod="openstack/memcached-0" Jan 27 18:57:57 crc kubenswrapper[4853]: I0127 18:57:57.221069 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/94965b7d-5efe-4ef3-aadf-41a550c47752-combined-ca-bundle\") pod \"memcached-0\" (UID: \"94965b7d-5efe-4ef3-aadf-41a550c47752\") " pod="openstack/memcached-0" Jan 27 18:57:57 crc kubenswrapper[4853]: I0127 18:57:57.221152 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/94965b7d-5efe-4ef3-aadf-41a550c47752-memcached-tls-certs\") pod \"memcached-0\" (UID: \"94965b7d-5efe-4ef3-aadf-41a550c47752\") " pod="openstack/memcached-0" Jan 27 18:57:57 crc kubenswrapper[4853]: I0127 18:57:57.221216 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zs2p5\" (UniqueName: \"kubernetes.io/projected/94965b7d-5efe-4ef3-aadf-41a550c47752-kube-api-access-zs2p5\") pod \"memcached-0\" (UID: \"94965b7d-5efe-4ef3-aadf-41a550c47752\") " pod="openstack/memcached-0" Jan 27 18:57:57 crc kubenswrapper[4853]: I0127 18:57:57.221252 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/94965b7d-5efe-4ef3-aadf-41a550c47752-config-data\") pod \"memcached-0\" (UID: \"94965b7d-5efe-4ef3-aadf-41a550c47752\") " pod="openstack/memcached-0" Jan 27 18:57:57 crc kubenswrapper[4853]: I0127 18:57:57.223828 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/94965b7d-5efe-4ef3-aadf-41a550c47752-kolla-config\") pod \"memcached-0\" (UID: \"94965b7d-5efe-4ef3-aadf-41a550c47752\") " pod="openstack/memcached-0" Jan 27 18:57:57 crc kubenswrapper[4853]: I0127 18:57:57.225895 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/94965b7d-5efe-4ef3-aadf-41a550c47752-config-data\") pod \"memcached-0\" (UID: \"94965b7d-5efe-4ef3-aadf-41a550c47752\") " pod="openstack/memcached-0" Jan 27 18:57:57 crc kubenswrapper[4853]: I0127 18:57:57.244402 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/94965b7d-5efe-4ef3-aadf-41a550c47752-memcached-tls-certs\") pod \"memcached-0\" (UID: \"94965b7d-5efe-4ef3-aadf-41a550c47752\") " pod="openstack/memcached-0" Jan 27 18:57:57 crc kubenswrapper[4853]: I0127 18:57:57.244900 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/94965b7d-5efe-4ef3-aadf-41a550c47752-combined-ca-bundle\") pod \"memcached-0\" (UID: \"94965b7d-5efe-4ef3-aadf-41a550c47752\") " pod="openstack/memcached-0" Jan 27 18:57:57 crc kubenswrapper[4853]: I0127 18:57:57.276655 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zs2p5\" (UniqueName: 
\"kubernetes.io/projected/94965b7d-5efe-4ef3-aadf-41a550c47752-kube-api-access-zs2p5\") pod \"memcached-0\" (UID: \"94965b7d-5efe-4ef3-aadf-41a550c47752\") " pod="openstack/memcached-0" Jan 27 18:57:57 crc kubenswrapper[4853]: I0127 18:57:57.370859 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Jan 27 18:57:57 crc kubenswrapper[4853]: I0127 18:57:57.500199 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-7j77j"] Jan 27 18:57:57 crc kubenswrapper[4853]: I0127 18:57:57.751570 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Jan 27 18:57:58 crc kubenswrapper[4853]: I0127 18:57:58.040197 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 27 18:57:58 crc kubenswrapper[4853]: I0127 18:57:58.180432 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9bpxz" event={"ID":"9549c452-0bc8-4b5c-a4a5-86dfd4572e35","Type":"ContainerStarted","Data":"b18e78dc65bf423e6704cd76c2e85a89b6d02e44bee8ad28859800a6faf3e933"} Jan 27 18:57:58 crc kubenswrapper[4853]: I0127 18:57:58.182157 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"94965b7d-5efe-4ef3-aadf-41a550c47752","Type":"ContainerStarted","Data":"0a60469d9e47cb9930a537686fddc4047c20756c2a3ca1418d887a038226ee71"} Jan 27 18:57:58 crc kubenswrapper[4853]: I0127 18:57:58.201577 4853 generic.go:334] "Generic (PLEG): container finished" podID="7aae743e-dcc5-404a-b0db-86933964d549" containerID="df794dc047ffb2659d84137987e91de314294b1ca639da8755d68c350150c0eb" exitCode=0 Jan 27 18:57:58 crc kubenswrapper[4853]: I0127 18:57:58.201655 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7j77j" event={"ID":"7aae743e-dcc5-404a-b0db-86933964d549","Type":"ContainerDied","Data":"df794dc047ffb2659d84137987e91de314294b1ca639da8755d68c350150c0eb"} Jan 27 18:57:58 crc kubenswrapper[4853]: I0127 18:57:58.201679 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7j77j" event={"ID":"7aae743e-dcc5-404a-b0db-86933964d549","Type":"ContainerStarted","Data":"246a74a8796a5295e40c7c6a31e566d8eede9122a755a9c1755446e2948bdad6"} Jan 27 18:57:58 crc kubenswrapper[4853]: I0127 18:57:58.331213 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-9bpxz" podStartSLOduration=2.592717413 podStartE2EDuration="7.331195881s" podCreationTimestamp="2026-01-27 18:57:51 +0000 UTC" firstStartedPulling="2026-01-27 18:57:52.977416452 +0000 UTC m=+915.439959335" lastFinishedPulling="2026-01-27 18:57:57.71589492 +0000 UTC m=+920.178437803" observedRunningTime="2026-01-27 18:57:58.321373093 +0000 UTC m=+920.783915986" watchObservedRunningTime="2026-01-27 18:57:58.331195881 +0000 UTC m=+920.793738754" Jan 27 18:57:59 crc kubenswrapper[4853]: I0127 18:57:59.179325 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Jan 27 18:57:59 crc kubenswrapper[4853]: I0127 18:57:59.180932 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 27 18:57:59 crc kubenswrapper[4853]: I0127 18:57:59.184801 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-ceilometer-dockercfg-55lqm" Jan 27 18:57:59 crc kubenswrapper[4853]: I0127 18:57:59.215991 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 27 18:57:59 crc kubenswrapper[4853]: I0127 18:57:59.278653 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vfj6p\" (UniqueName: \"kubernetes.io/projected/eff8efe8-39b3-4aa6-af17-f40690d3d639-kube-api-access-vfj6p\") pod \"kube-state-metrics-0\" (UID: \"eff8efe8-39b3-4aa6-af17-f40690d3d639\") " pod="openstack/kube-state-metrics-0" Jan 27 18:57:59 crc kubenswrapper[4853]: I0127 18:57:59.380227 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vfj6p\" (UniqueName: \"kubernetes.io/projected/eff8efe8-39b3-4aa6-af17-f40690d3d639-kube-api-access-vfj6p\") pod \"kube-state-metrics-0\" (UID: \"eff8efe8-39b3-4aa6-af17-f40690d3d639\") " pod="openstack/kube-state-metrics-0" Jan 27 18:57:59 crc kubenswrapper[4853]: I0127 18:57:59.423164 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vfj6p\" (UniqueName: \"kubernetes.io/projected/eff8efe8-39b3-4aa6-af17-f40690d3d639-kube-api-access-vfj6p\") pod \"kube-state-metrics-0\" (UID: \"eff8efe8-39b3-4aa6-af17-f40690d3d639\") " pod="openstack/kube-state-metrics-0" Jan 27 18:57:59 crc kubenswrapper[4853]: I0127 18:57:59.528191 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 27 18:58:02 crc kubenswrapper[4853]: I0127 18:58:02.068042 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-9bpxz" Jan 27 18:58:02 crc kubenswrapper[4853]: I0127 18:58:02.068455 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-9bpxz" Jan 27 18:58:02 crc kubenswrapper[4853]: I0127 18:58:02.249668 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-xkd2q"] Jan 27 18:58:02 crc kubenswrapper[4853]: I0127 18:58:02.250690 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-xkd2q" Jan 27 18:58:02 crc kubenswrapper[4853]: I0127 18:58:02.254724 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncontroller-ovncontroller-dockercfg-flbrw" Jan 27 18:58:02 crc kubenswrapper[4853]: I0127 18:58:02.255143 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts" Jan 27 18:58:02 crc kubenswrapper[4853]: I0127 18:58:02.256458 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovncontroller-ovndbs" Jan 27 18:58:02 crc kubenswrapper[4853]: I0127 18:58:02.260073 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-xkd2q"] Jan 27 18:58:02 crc kubenswrapper[4853]: I0127 18:58:02.274761 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ovs-qgd5v"] Jan 27 18:58:02 crc kubenswrapper[4853]: I0127 18:58:02.296961 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-qgd5v"] Jan 27 18:58:02 crc kubenswrapper[4853]: I0127 18:58:02.297207 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-qgd5v" Jan 27 18:58:02 crc kubenswrapper[4853]: I0127 18:58:02.331401 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/4d52eb59-75a5-4074-8bfb-c9dab8b0c97f-var-run\") pod \"ovn-controller-xkd2q\" (UID: \"4d52eb59-75a5-4074-8bfb-c9dab8b0c97f\") " pod="openstack/ovn-controller-xkd2q" Jan 27 18:58:02 crc kubenswrapper[4853]: I0127 18:58:02.331468 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2b33e2a4-b173-4d0a-b3b4-9ee1c3b92704-scripts\") pod \"ovn-controller-ovs-qgd5v\" (UID: \"2b33e2a4-b173-4d0a-b3b4-9ee1c3b92704\") " pod="openstack/ovn-controller-ovs-qgd5v" Jan 27 18:58:02 crc kubenswrapper[4853]: I0127 18:58:02.331496 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dfcfl\" (UniqueName: \"kubernetes.io/projected/2b33e2a4-b173-4d0a-b3b4-9ee1c3b92704-kube-api-access-dfcfl\") pod \"ovn-controller-ovs-qgd5v\" (UID: \"2b33e2a4-b173-4d0a-b3b4-9ee1c3b92704\") " pod="openstack/ovn-controller-ovs-qgd5v" Jan 27 18:58:02 crc kubenswrapper[4853]: I0127 18:58:02.331540 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/4d52eb59-75a5-4074-8bfb-c9dab8b0c97f-ovn-controller-tls-certs\") pod \"ovn-controller-xkd2q\" (UID: \"4d52eb59-75a5-4074-8bfb-c9dab8b0c97f\") " pod="openstack/ovn-controller-xkd2q" Jan 27 18:58:02 crc kubenswrapper[4853]: I0127 18:58:02.331612 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/2b33e2a4-b173-4d0a-b3b4-9ee1c3b92704-var-lib\") pod \"ovn-controller-ovs-qgd5v\" (UID: \"2b33e2a4-b173-4d0a-b3b4-9ee1c3b92704\") " pod="openstack/ovn-controller-ovs-qgd5v" Jan 27 18:58:02 crc kubenswrapper[4853]: I0127 18:58:02.331667 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/4d52eb59-75a5-4074-8bfb-c9dab8b0c97f-var-log-ovn\") pod \"ovn-controller-xkd2q\" (UID: 
\"4d52eb59-75a5-4074-8bfb-c9dab8b0c97f\") " pod="openstack/ovn-controller-xkd2q" Jan 27 18:58:02 crc kubenswrapper[4853]: I0127 18:58:02.331704 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/2b33e2a4-b173-4d0a-b3b4-9ee1c3b92704-var-log\") pod \"ovn-controller-ovs-qgd5v\" (UID: \"2b33e2a4-b173-4d0a-b3b4-9ee1c3b92704\") " pod="openstack/ovn-controller-ovs-qgd5v" Jan 27 18:58:02 crc kubenswrapper[4853]: I0127 18:58:02.331730 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/2b33e2a4-b173-4d0a-b3b4-9ee1c3b92704-etc-ovs\") pod \"ovn-controller-ovs-qgd5v\" (UID: \"2b33e2a4-b173-4d0a-b3b4-9ee1c3b92704\") " pod="openstack/ovn-controller-ovs-qgd5v" Jan 27 18:58:02 crc kubenswrapper[4853]: I0127 18:58:02.331758 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/4d52eb59-75a5-4074-8bfb-c9dab8b0c97f-var-run-ovn\") pod \"ovn-controller-xkd2q\" (UID: \"4d52eb59-75a5-4074-8bfb-c9dab8b0c97f\") " pod="openstack/ovn-controller-xkd2q" Jan 27 18:58:02 crc kubenswrapper[4853]: I0127 18:58:02.331791 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4d52eb59-75a5-4074-8bfb-c9dab8b0c97f-scripts\") pod \"ovn-controller-xkd2q\" (UID: \"4d52eb59-75a5-4074-8bfb-c9dab8b0c97f\") " pod="openstack/ovn-controller-xkd2q" Jan 27 18:58:02 crc kubenswrapper[4853]: I0127 18:58:02.331820 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-67rws\" (UniqueName: \"kubernetes.io/projected/4d52eb59-75a5-4074-8bfb-c9dab8b0c97f-kube-api-access-67rws\") pod \"ovn-controller-xkd2q\" (UID: \"4d52eb59-75a5-4074-8bfb-c9dab8b0c97f\") " pod="openstack/ovn-controller-xkd2q" Jan 27 18:58:02 crc kubenswrapper[4853]: I0127 18:58:02.331856 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/2b33e2a4-b173-4d0a-b3b4-9ee1c3b92704-var-run\") pod \"ovn-controller-ovs-qgd5v\" (UID: \"2b33e2a4-b173-4d0a-b3b4-9ee1c3b92704\") " pod="openstack/ovn-controller-ovs-qgd5v" Jan 27 18:58:02 crc kubenswrapper[4853]: I0127 18:58:02.331885 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4d52eb59-75a5-4074-8bfb-c9dab8b0c97f-combined-ca-bundle\") pod \"ovn-controller-xkd2q\" (UID: \"4d52eb59-75a5-4074-8bfb-c9dab8b0c97f\") " pod="openstack/ovn-controller-xkd2q" Jan 27 18:58:02 crc kubenswrapper[4853]: I0127 18:58:02.434039 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/2b33e2a4-b173-4d0a-b3b4-9ee1c3b92704-var-log\") pod \"ovn-controller-ovs-qgd5v\" (UID: \"2b33e2a4-b173-4d0a-b3b4-9ee1c3b92704\") " pod="openstack/ovn-controller-ovs-qgd5v" Jan 27 18:58:02 crc kubenswrapper[4853]: I0127 18:58:02.434106 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/2b33e2a4-b173-4d0a-b3b4-9ee1c3b92704-etc-ovs\") pod \"ovn-controller-ovs-qgd5v\" (UID: \"2b33e2a4-b173-4d0a-b3b4-9ee1c3b92704\") " pod="openstack/ovn-controller-ovs-qgd5v" Jan 27 
18:58:02 crc kubenswrapper[4853]: I0127 18:58:02.434143 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/4d52eb59-75a5-4074-8bfb-c9dab8b0c97f-var-run-ovn\") pod \"ovn-controller-xkd2q\" (UID: \"4d52eb59-75a5-4074-8bfb-c9dab8b0c97f\") " pod="openstack/ovn-controller-xkd2q" Jan 27 18:58:02 crc kubenswrapper[4853]: I0127 18:58:02.434170 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4d52eb59-75a5-4074-8bfb-c9dab8b0c97f-scripts\") pod \"ovn-controller-xkd2q\" (UID: \"4d52eb59-75a5-4074-8bfb-c9dab8b0c97f\") " pod="openstack/ovn-controller-xkd2q" Jan 27 18:58:02 crc kubenswrapper[4853]: I0127 18:58:02.434191 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-67rws\" (UniqueName: \"kubernetes.io/projected/4d52eb59-75a5-4074-8bfb-c9dab8b0c97f-kube-api-access-67rws\") pod \"ovn-controller-xkd2q\" (UID: \"4d52eb59-75a5-4074-8bfb-c9dab8b0c97f\") " pod="openstack/ovn-controller-xkd2q" Jan 27 18:58:02 crc kubenswrapper[4853]: I0127 18:58:02.434267 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/2b33e2a4-b173-4d0a-b3b4-9ee1c3b92704-var-run\") pod \"ovn-controller-ovs-qgd5v\" (UID: \"2b33e2a4-b173-4d0a-b3b4-9ee1c3b92704\") " pod="openstack/ovn-controller-ovs-qgd5v" Jan 27 18:58:02 crc kubenswrapper[4853]: I0127 18:58:02.434288 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4d52eb59-75a5-4074-8bfb-c9dab8b0c97f-combined-ca-bundle\") pod \"ovn-controller-xkd2q\" (UID: \"4d52eb59-75a5-4074-8bfb-c9dab8b0c97f\") " pod="openstack/ovn-controller-xkd2q" Jan 27 18:58:02 crc kubenswrapper[4853]: I0127 18:58:02.434316 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/4d52eb59-75a5-4074-8bfb-c9dab8b0c97f-var-run\") pod \"ovn-controller-xkd2q\" (UID: \"4d52eb59-75a5-4074-8bfb-c9dab8b0c97f\") " pod="openstack/ovn-controller-xkd2q" Jan 27 18:58:02 crc kubenswrapper[4853]: I0127 18:58:02.434337 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2b33e2a4-b173-4d0a-b3b4-9ee1c3b92704-scripts\") pod \"ovn-controller-ovs-qgd5v\" (UID: \"2b33e2a4-b173-4d0a-b3b4-9ee1c3b92704\") " pod="openstack/ovn-controller-ovs-qgd5v" Jan 27 18:58:02 crc kubenswrapper[4853]: I0127 18:58:02.434354 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dfcfl\" (UniqueName: \"kubernetes.io/projected/2b33e2a4-b173-4d0a-b3b4-9ee1c3b92704-kube-api-access-dfcfl\") pod \"ovn-controller-ovs-qgd5v\" (UID: \"2b33e2a4-b173-4d0a-b3b4-9ee1c3b92704\") " pod="openstack/ovn-controller-ovs-qgd5v" Jan 27 18:58:02 crc kubenswrapper[4853]: I0127 18:58:02.434371 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/4d52eb59-75a5-4074-8bfb-c9dab8b0c97f-ovn-controller-tls-certs\") pod \"ovn-controller-xkd2q\" (UID: \"4d52eb59-75a5-4074-8bfb-c9dab8b0c97f\") " pod="openstack/ovn-controller-xkd2q" Jan 27 18:58:02 crc kubenswrapper[4853]: I0127 18:58:02.434405 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib\" (UniqueName: 
\"kubernetes.io/host-path/2b33e2a4-b173-4d0a-b3b4-9ee1c3b92704-var-lib\") pod \"ovn-controller-ovs-qgd5v\" (UID: \"2b33e2a4-b173-4d0a-b3b4-9ee1c3b92704\") " pod="openstack/ovn-controller-ovs-qgd5v" Jan 27 18:58:02 crc kubenswrapper[4853]: I0127 18:58:02.434427 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/4d52eb59-75a5-4074-8bfb-c9dab8b0c97f-var-log-ovn\") pod \"ovn-controller-xkd2q\" (UID: \"4d52eb59-75a5-4074-8bfb-c9dab8b0c97f\") " pod="openstack/ovn-controller-xkd2q" Jan 27 18:58:02 crc kubenswrapper[4853]: I0127 18:58:02.435015 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/4d52eb59-75a5-4074-8bfb-c9dab8b0c97f-var-log-ovn\") pod \"ovn-controller-xkd2q\" (UID: \"4d52eb59-75a5-4074-8bfb-c9dab8b0c97f\") " pod="openstack/ovn-controller-xkd2q" Jan 27 18:58:02 crc kubenswrapper[4853]: I0127 18:58:02.435531 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/2b33e2a4-b173-4d0a-b3b4-9ee1c3b92704-var-run\") pod \"ovn-controller-ovs-qgd5v\" (UID: \"2b33e2a4-b173-4d0a-b3b4-9ee1c3b92704\") " pod="openstack/ovn-controller-ovs-qgd5v" Jan 27 18:58:02 crc kubenswrapper[4853]: I0127 18:58:02.435655 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/2b33e2a4-b173-4d0a-b3b4-9ee1c3b92704-var-log\") pod \"ovn-controller-ovs-qgd5v\" (UID: \"2b33e2a4-b173-4d0a-b3b4-9ee1c3b92704\") " pod="openstack/ovn-controller-ovs-qgd5v" Jan 27 18:58:02 crc kubenswrapper[4853]: I0127 18:58:02.435789 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/2b33e2a4-b173-4d0a-b3b4-9ee1c3b92704-etc-ovs\") pod \"ovn-controller-ovs-qgd5v\" (UID: \"2b33e2a4-b173-4d0a-b3b4-9ee1c3b92704\") " pod="openstack/ovn-controller-ovs-qgd5v" Jan 27 18:58:02 crc kubenswrapper[4853]: I0127 18:58:02.435990 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/4d52eb59-75a5-4074-8bfb-c9dab8b0c97f-var-run-ovn\") pod \"ovn-controller-xkd2q\" (UID: \"4d52eb59-75a5-4074-8bfb-c9dab8b0c97f\") " pod="openstack/ovn-controller-xkd2q" Jan 27 18:58:02 crc kubenswrapper[4853]: I0127 18:58:02.440469 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/2b33e2a4-b173-4d0a-b3b4-9ee1c3b92704-var-lib\") pod \"ovn-controller-ovs-qgd5v\" (UID: \"2b33e2a4-b173-4d0a-b3b4-9ee1c3b92704\") " pod="openstack/ovn-controller-ovs-qgd5v" Jan 27 18:58:02 crc kubenswrapper[4853]: I0127 18:58:02.440542 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/4d52eb59-75a5-4074-8bfb-c9dab8b0c97f-var-run\") pod \"ovn-controller-xkd2q\" (UID: \"4d52eb59-75a5-4074-8bfb-c9dab8b0c97f\") " pod="openstack/ovn-controller-xkd2q" Jan 27 18:58:02 crc kubenswrapper[4853]: I0127 18:58:02.442807 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4d52eb59-75a5-4074-8bfb-c9dab8b0c97f-combined-ca-bundle\") pod \"ovn-controller-xkd2q\" (UID: \"4d52eb59-75a5-4074-8bfb-c9dab8b0c97f\") " pod="openstack/ovn-controller-xkd2q" Jan 27 18:58:02 crc kubenswrapper[4853]: I0127 18:58:02.444736 4853 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2b33e2a4-b173-4d0a-b3b4-9ee1c3b92704-scripts\") pod \"ovn-controller-ovs-qgd5v\" (UID: \"2b33e2a4-b173-4d0a-b3b4-9ee1c3b92704\") " pod="openstack/ovn-controller-ovs-qgd5v" Jan 27 18:58:02 crc kubenswrapper[4853]: I0127 18:58:02.454079 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4d52eb59-75a5-4074-8bfb-c9dab8b0c97f-scripts\") pod \"ovn-controller-xkd2q\" (UID: \"4d52eb59-75a5-4074-8bfb-c9dab8b0c97f\") " pod="openstack/ovn-controller-xkd2q" Jan 27 18:58:02 crc kubenswrapper[4853]: I0127 18:58:02.460007 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-67rws\" (UniqueName: \"kubernetes.io/projected/4d52eb59-75a5-4074-8bfb-c9dab8b0c97f-kube-api-access-67rws\") pod \"ovn-controller-xkd2q\" (UID: \"4d52eb59-75a5-4074-8bfb-c9dab8b0c97f\") " pod="openstack/ovn-controller-xkd2q" Jan 27 18:58:02 crc kubenswrapper[4853]: I0127 18:58:02.470397 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/4d52eb59-75a5-4074-8bfb-c9dab8b0c97f-ovn-controller-tls-certs\") pod \"ovn-controller-xkd2q\" (UID: \"4d52eb59-75a5-4074-8bfb-c9dab8b0c97f\") " pod="openstack/ovn-controller-xkd2q" Jan 27 18:58:02 crc kubenswrapper[4853]: I0127 18:58:02.475862 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dfcfl\" (UniqueName: \"kubernetes.io/projected/2b33e2a4-b173-4d0a-b3b4-9ee1c3b92704-kube-api-access-dfcfl\") pod \"ovn-controller-ovs-qgd5v\" (UID: \"2b33e2a4-b173-4d0a-b3b4-9ee1c3b92704\") " pod="openstack/ovn-controller-ovs-qgd5v" Jan 27 18:58:02 crc kubenswrapper[4853]: I0127 18:58:02.569983 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-xkd2q" Jan 27 18:58:02 crc kubenswrapper[4853]: I0127 18:58:02.622486 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-qgd5v" Jan 27 18:58:03 crc kubenswrapper[4853]: I0127 18:58:03.156264 4853 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-9bpxz" podUID="9549c452-0bc8-4b5c-a4a5-86dfd4572e35" containerName="registry-server" probeResult="failure" output=< Jan 27 18:58:03 crc kubenswrapper[4853]: timeout: failed to connect service ":50051" within 1s Jan 27 18:58:03 crc kubenswrapper[4853]: > Jan 27 18:58:03 crc kubenswrapper[4853]: I0127 18:58:03.258034 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-h84g7"] Jan 27 18:58:03 crc kubenswrapper[4853]: I0127 18:58:03.259413 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-h84g7" Jan 27 18:58:03 crc kubenswrapper[4853]: I0127 18:58:03.322401 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-h84g7"] Jan 27 18:58:03 crc kubenswrapper[4853]: I0127 18:58:03.372618 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d714f652-b46e-4843-89a9-0503e169cc42-catalog-content\") pod \"redhat-marketplace-h84g7\" (UID: \"d714f652-b46e-4843-89a9-0503e169cc42\") " pod="openshift-marketplace/redhat-marketplace-h84g7" Jan 27 18:58:03 crc kubenswrapper[4853]: I0127 18:58:03.372660 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d714f652-b46e-4843-89a9-0503e169cc42-utilities\") pod \"redhat-marketplace-h84g7\" (UID: \"d714f652-b46e-4843-89a9-0503e169cc42\") " pod="openshift-marketplace/redhat-marketplace-h84g7" Jan 27 18:58:03 crc kubenswrapper[4853]: I0127 18:58:03.372698 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6t2dk\" (UniqueName: \"kubernetes.io/projected/d714f652-b46e-4843-89a9-0503e169cc42-kube-api-access-6t2dk\") pod \"redhat-marketplace-h84g7\" (UID: \"d714f652-b46e-4843-89a9-0503e169cc42\") " pod="openshift-marketplace/redhat-marketplace-h84g7" Jan 27 18:58:03 crc kubenswrapper[4853]: I0127 18:58:03.473923 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d714f652-b46e-4843-89a9-0503e169cc42-catalog-content\") pod \"redhat-marketplace-h84g7\" (UID: \"d714f652-b46e-4843-89a9-0503e169cc42\") " pod="openshift-marketplace/redhat-marketplace-h84g7" Jan 27 18:58:03 crc kubenswrapper[4853]: I0127 18:58:03.473980 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d714f652-b46e-4843-89a9-0503e169cc42-utilities\") pod \"redhat-marketplace-h84g7\" (UID: \"d714f652-b46e-4843-89a9-0503e169cc42\") " pod="openshift-marketplace/redhat-marketplace-h84g7" Jan 27 18:58:03 crc kubenswrapper[4853]: I0127 18:58:03.474032 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6t2dk\" (UniqueName: \"kubernetes.io/projected/d714f652-b46e-4843-89a9-0503e169cc42-kube-api-access-6t2dk\") pod \"redhat-marketplace-h84g7\" (UID: \"d714f652-b46e-4843-89a9-0503e169cc42\") " pod="openshift-marketplace/redhat-marketplace-h84g7" Jan 27 18:58:03 crc kubenswrapper[4853]: I0127 18:58:03.474494 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d714f652-b46e-4843-89a9-0503e169cc42-utilities\") pod \"redhat-marketplace-h84g7\" (UID: \"d714f652-b46e-4843-89a9-0503e169cc42\") " pod="openshift-marketplace/redhat-marketplace-h84g7" Jan 27 18:58:03 crc kubenswrapper[4853]: I0127 18:58:03.474576 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d714f652-b46e-4843-89a9-0503e169cc42-catalog-content\") pod \"redhat-marketplace-h84g7\" (UID: \"d714f652-b46e-4843-89a9-0503e169cc42\") " pod="openshift-marketplace/redhat-marketplace-h84g7" Jan 27 18:58:03 crc kubenswrapper[4853]: I0127 18:58:03.497671 4853 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-6t2dk\" (UniqueName: \"kubernetes.io/projected/d714f652-b46e-4843-89a9-0503e169cc42-kube-api-access-6t2dk\") pod \"redhat-marketplace-h84g7\" (UID: \"d714f652-b46e-4843-89a9-0503e169cc42\") " pod="openshift-marketplace/redhat-marketplace-h84g7" Jan 27 18:58:03 crc kubenswrapper[4853]: I0127 18:58:03.592231 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-h84g7" Jan 27 18:58:05 crc kubenswrapper[4853]: I0127 18:58:05.205959 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 27 18:58:05 crc kubenswrapper[4853]: I0127 18:58:05.207612 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Jan 27 18:58:05 crc kubenswrapper[4853]: I0127 18:58:05.212219 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts" Jan 27 18:58:05 crc kubenswrapper[4853]: I0127 18:58:05.212452 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovn-metrics" Jan 27 18:58:05 crc kubenswrapper[4853]: I0127 18:58:05.212612 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-d62vg" Jan 27 18:58:05 crc kubenswrapper[4853]: I0127 18:58:05.212780 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config" Jan 27 18:58:05 crc kubenswrapper[4853]: I0127 18:58:05.213016 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-sb-ovndbs" Jan 27 18:58:05 crc kubenswrapper[4853]: I0127 18:58:05.224080 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 27 18:58:05 crc kubenswrapper[4853]: I0127 18:58:05.299542 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"ovsdbserver-sb-0\" (UID: \"368a8f46-825c-43ad-803b-c7fdf6ca048c\") " pod="openstack/ovsdbserver-sb-0" Jan 27 18:58:05 crc kubenswrapper[4853]: I0127 18:58:05.299594 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/368a8f46-825c-43ad-803b-c7fdf6ca048c-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"368a8f46-825c-43ad-803b-c7fdf6ca048c\") " pod="openstack/ovsdbserver-sb-0" Jan 27 18:58:05 crc kubenswrapper[4853]: I0127 18:58:05.299617 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/368a8f46-825c-43ad-803b-c7fdf6ca048c-config\") pod \"ovsdbserver-sb-0\" (UID: \"368a8f46-825c-43ad-803b-c7fdf6ca048c\") " pod="openstack/ovsdbserver-sb-0" Jan 27 18:58:05 crc kubenswrapper[4853]: I0127 18:58:05.299787 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/368a8f46-825c-43ad-803b-c7fdf6ca048c-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"368a8f46-825c-43ad-803b-c7fdf6ca048c\") " pod="openstack/ovsdbserver-sb-0" Jan 27 18:58:05 crc kubenswrapper[4853]: I0127 18:58:05.299902 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jdjt5\" (UniqueName: 
\"kubernetes.io/projected/368a8f46-825c-43ad-803b-c7fdf6ca048c-kube-api-access-jdjt5\") pod \"ovsdbserver-sb-0\" (UID: \"368a8f46-825c-43ad-803b-c7fdf6ca048c\") " pod="openstack/ovsdbserver-sb-0" Jan 27 18:58:05 crc kubenswrapper[4853]: I0127 18:58:05.299935 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/368a8f46-825c-43ad-803b-c7fdf6ca048c-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"368a8f46-825c-43ad-803b-c7fdf6ca048c\") " pod="openstack/ovsdbserver-sb-0" Jan 27 18:58:05 crc kubenswrapper[4853]: I0127 18:58:05.299966 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/368a8f46-825c-43ad-803b-c7fdf6ca048c-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"368a8f46-825c-43ad-803b-c7fdf6ca048c\") " pod="openstack/ovsdbserver-sb-0" Jan 27 18:58:05 crc kubenswrapper[4853]: I0127 18:58:05.299985 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/368a8f46-825c-43ad-803b-c7fdf6ca048c-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"368a8f46-825c-43ad-803b-c7fdf6ca048c\") " pod="openstack/ovsdbserver-sb-0" Jan 27 18:58:05 crc kubenswrapper[4853]: I0127 18:58:05.401899 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/368a8f46-825c-43ad-803b-c7fdf6ca048c-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"368a8f46-825c-43ad-803b-c7fdf6ca048c\") " pod="openstack/ovsdbserver-sb-0" Jan 27 18:58:05 crc kubenswrapper[4853]: I0127 18:58:05.402213 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jdjt5\" (UniqueName: \"kubernetes.io/projected/368a8f46-825c-43ad-803b-c7fdf6ca048c-kube-api-access-jdjt5\") pod \"ovsdbserver-sb-0\" (UID: \"368a8f46-825c-43ad-803b-c7fdf6ca048c\") " pod="openstack/ovsdbserver-sb-0" Jan 27 18:58:05 crc kubenswrapper[4853]: I0127 18:58:05.402308 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/368a8f46-825c-43ad-803b-c7fdf6ca048c-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"368a8f46-825c-43ad-803b-c7fdf6ca048c\") " pod="openstack/ovsdbserver-sb-0" Jan 27 18:58:05 crc kubenswrapper[4853]: I0127 18:58:05.402386 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/368a8f46-825c-43ad-803b-c7fdf6ca048c-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"368a8f46-825c-43ad-803b-c7fdf6ca048c\") " pod="openstack/ovsdbserver-sb-0" Jan 27 18:58:05 crc kubenswrapper[4853]: I0127 18:58:05.402462 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/368a8f46-825c-43ad-803b-c7fdf6ca048c-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"368a8f46-825c-43ad-803b-c7fdf6ca048c\") " pod="openstack/ovsdbserver-sb-0" Jan 27 18:58:05 crc kubenswrapper[4853]: I0127 18:58:05.402542 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"ovsdbserver-sb-0\" (UID: \"368a8f46-825c-43ad-803b-c7fdf6ca048c\") " pod="openstack/ovsdbserver-sb-0" Jan 27 
18:58:05 crc kubenswrapper[4853]: I0127 18:58:05.402638 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/368a8f46-825c-43ad-803b-c7fdf6ca048c-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"368a8f46-825c-43ad-803b-c7fdf6ca048c\") " pod="openstack/ovsdbserver-sb-0" Jan 27 18:58:05 crc kubenswrapper[4853]: I0127 18:58:05.402740 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/368a8f46-825c-43ad-803b-c7fdf6ca048c-config\") pod \"ovsdbserver-sb-0\" (UID: \"368a8f46-825c-43ad-803b-c7fdf6ca048c\") " pod="openstack/ovsdbserver-sb-0" Jan 27 18:58:05 crc kubenswrapper[4853]: I0127 18:58:05.402813 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/368a8f46-825c-43ad-803b-c7fdf6ca048c-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"368a8f46-825c-43ad-803b-c7fdf6ca048c\") " pod="openstack/ovsdbserver-sb-0" Jan 27 18:58:05 crc kubenswrapper[4853]: I0127 18:58:05.402897 4853 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"ovsdbserver-sb-0\" (UID: \"368a8f46-825c-43ad-803b-c7fdf6ca048c\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/ovsdbserver-sb-0" Jan 27 18:58:05 crc kubenswrapper[4853]: I0127 18:58:05.403314 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/368a8f46-825c-43ad-803b-c7fdf6ca048c-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"368a8f46-825c-43ad-803b-c7fdf6ca048c\") " pod="openstack/ovsdbserver-sb-0" Jan 27 18:58:05 crc kubenswrapper[4853]: I0127 18:58:05.403925 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/368a8f46-825c-43ad-803b-c7fdf6ca048c-config\") pod \"ovsdbserver-sb-0\" (UID: \"368a8f46-825c-43ad-803b-c7fdf6ca048c\") " pod="openstack/ovsdbserver-sb-0" Jan 27 18:58:05 crc kubenswrapper[4853]: I0127 18:58:05.407945 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/368a8f46-825c-43ad-803b-c7fdf6ca048c-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"368a8f46-825c-43ad-803b-c7fdf6ca048c\") " pod="openstack/ovsdbserver-sb-0" Jan 27 18:58:05 crc kubenswrapper[4853]: I0127 18:58:05.409015 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/368a8f46-825c-43ad-803b-c7fdf6ca048c-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"368a8f46-825c-43ad-803b-c7fdf6ca048c\") " pod="openstack/ovsdbserver-sb-0" Jan 27 18:58:05 crc kubenswrapper[4853]: I0127 18:58:05.409189 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/368a8f46-825c-43ad-803b-c7fdf6ca048c-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"368a8f46-825c-43ad-803b-c7fdf6ca048c\") " pod="openstack/ovsdbserver-sb-0" Jan 27 18:58:05 crc kubenswrapper[4853]: I0127 18:58:05.422956 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"ovsdbserver-sb-0\" (UID: \"368a8f46-825c-43ad-803b-c7fdf6ca048c\") " 
pod="openstack/ovsdbserver-sb-0" Jan 27 18:58:05 crc kubenswrapper[4853]: I0127 18:58:05.422995 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jdjt5\" (UniqueName: \"kubernetes.io/projected/368a8f46-825c-43ad-803b-c7fdf6ca048c-kube-api-access-jdjt5\") pod \"ovsdbserver-sb-0\" (UID: \"368a8f46-825c-43ad-803b-c7fdf6ca048c\") " pod="openstack/ovsdbserver-sb-0" Jan 27 18:58:05 crc kubenswrapper[4853]: I0127 18:58:05.541444 4853 patch_prober.go:28] interesting pod/machine-config-daemon-6gqj2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 18:58:05 crc kubenswrapper[4853]: I0127 18:58:05.541508 4853 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6gqj2" podUID="b8a89b1e-bef8-4cb7-930c-480d3125778c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 18:58:05 crc kubenswrapper[4853]: I0127 18:58:05.541907 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Jan 27 18:58:06 crc kubenswrapper[4853]: I0127 18:58:06.187354 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 27 18:58:06 crc kubenswrapper[4853]: I0127 18:58:06.188824 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Jan 27 18:58:06 crc kubenswrapper[4853]: I0127 18:58:06.193034 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config" Jan 27 18:58:06 crc kubenswrapper[4853]: I0127 18:58:06.193111 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-nb-ovndbs" Jan 27 18:58:06 crc kubenswrapper[4853]: I0127 18:58:06.193034 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts" Jan 27 18:58:06 crc kubenswrapper[4853]: I0127 18:58:06.193042 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-jzhks" Jan 27 18:58:06 crc kubenswrapper[4853]: I0127 18:58:06.197580 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 27 18:58:06 crc kubenswrapper[4853]: I0127 18:58:06.320244 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c1d29cf4-2fdf-46ef-8470-e42a8226dd7c-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"c1d29cf4-2fdf-46ef-8470-e42a8226dd7c\") " pod="openstack/ovsdbserver-nb-0" Jan 27 18:58:06 crc kubenswrapper[4853]: I0127 18:58:06.320284 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/c1d29cf4-2fdf-46ef-8470-e42a8226dd7c-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"c1d29cf4-2fdf-46ef-8470-e42a8226dd7c\") " pod="openstack/ovsdbserver-nb-0" Jan 27 18:58:06 crc kubenswrapper[4853]: I0127 18:58:06.320321 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c1d29cf4-2fdf-46ef-8470-e42a8226dd7c-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" 
(UID: \"c1d29cf4-2fdf-46ef-8470-e42a8226dd7c\") " pod="openstack/ovsdbserver-nb-0" Jan 27 18:58:06 crc kubenswrapper[4853]: I0127 18:58:06.320352 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"ovsdbserver-nb-0\" (UID: \"c1d29cf4-2fdf-46ef-8470-e42a8226dd7c\") " pod="openstack/ovsdbserver-nb-0" Jan 27 18:58:06 crc kubenswrapper[4853]: I0127 18:58:06.320566 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/c1d29cf4-2fdf-46ef-8470-e42a8226dd7c-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"c1d29cf4-2fdf-46ef-8470-e42a8226dd7c\") " pod="openstack/ovsdbserver-nb-0" Jan 27 18:58:06 crc kubenswrapper[4853]: I0127 18:58:06.320653 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j97d9\" (UniqueName: \"kubernetes.io/projected/c1d29cf4-2fdf-46ef-8470-e42a8226dd7c-kube-api-access-j97d9\") pod \"ovsdbserver-nb-0\" (UID: \"c1d29cf4-2fdf-46ef-8470-e42a8226dd7c\") " pod="openstack/ovsdbserver-nb-0" Jan 27 18:58:06 crc kubenswrapper[4853]: I0127 18:58:06.320725 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/c1d29cf4-2fdf-46ef-8470-e42a8226dd7c-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"c1d29cf4-2fdf-46ef-8470-e42a8226dd7c\") " pod="openstack/ovsdbserver-nb-0" Jan 27 18:58:06 crc kubenswrapper[4853]: I0127 18:58:06.320780 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c1d29cf4-2fdf-46ef-8470-e42a8226dd7c-config\") pod \"ovsdbserver-nb-0\" (UID: \"c1d29cf4-2fdf-46ef-8470-e42a8226dd7c\") " pod="openstack/ovsdbserver-nb-0" Jan 27 18:58:06 crc kubenswrapper[4853]: I0127 18:58:06.421786 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c1d29cf4-2fdf-46ef-8470-e42a8226dd7c-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"c1d29cf4-2fdf-46ef-8470-e42a8226dd7c\") " pod="openstack/ovsdbserver-nb-0" Jan 27 18:58:06 crc kubenswrapper[4853]: I0127 18:58:06.421827 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/c1d29cf4-2fdf-46ef-8470-e42a8226dd7c-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"c1d29cf4-2fdf-46ef-8470-e42a8226dd7c\") " pod="openstack/ovsdbserver-nb-0" Jan 27 18:58:06 crc kubenswrapper[4853]: I0127 18:58:06.421862 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c1d29cf4-2fdf-46ef-8470-e42a8226dd7c-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"c1d29cf4-2fdf-46ef-8470-e42a8226dd7c\") " pod="openstack/ovsdbserver-nb-0" Jan 27 18:58:06 crc kubenswrapper[4853]: I0127 18:58:06.421892 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"ovsdbserver-nb-0\" (UID: \"c1d29cf4-2fdf-46ef-8470-e42a8226dd7c\") " pod="openstack/ovsdbserver-nb-0" Jan 27 18:58:06 crc kubenswrapper[4853]: I0127 18:58:06.421943 4853 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/c1d29cf4-2fdf-46ef-8470-e42a8226dd7c-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"c1d29cf4-2fdf-46ef-8470-e42a8226dd7c\") " pod="openstack/ovsdbserver-nb-0" Jan 27 18:58:06 crc kubenswrapper[4853]: I0127 18:58:06.421969 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j97d9\" (UniqueName: \"kubernetes.io/projected/c1d29cf4-2fdf-46ef-8470-e42a8226dd7c-kube-api-access-j97d9\") pod \"ovsdbserver-nb-0\" (UID: \"c1d29cf4-2fdf-46ef-8470-e42a8226dd7c\") " pod="openstack/ovsdbserver-nb-0" Jan 27 18:58:06 crc kubenswrapper[4853]: I0127 18:58:06.421997 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/c1d29cf4-2fdf-46ef-8470-e42a8226dd7c-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"c1d29cf4-2fdf-46ef-8470-e42a8226dd7c\") " pod="openstack/ovsdbserver-nb-0" Jan 27 18:58:06 crc kubenswrapper[4853]: I0127 18:58:06.422024 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c1d29cf4-2fdf-46ef-8470-e42a8226dd7c-config\") pod \"ovsdbserver-nb-0\" (UID: \"c1d29cf4-2fdf-46ef-8470-e42a8226dd7c\") " pod="openstack/ovsdbserver-nb-0" Jan 27 18:58:06 crc kubenswrapper[4853]: I0127 18:58:06.422161 4853 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"ovsdbserver-nb-0\" (UID: \"c1d29cf4-2fdf-46ef-8470-e42a8226dd7c\") device mount path \"/mnt/openstack/pv02\"" pod="openstack/ovsdbserver-nb-0" Jan 27 18:58:06 crc kubenswrapper[4853]: I0127 18:58:06.422546 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/c1d29cf4-2fdf-46ef-8470-e42a8226dd7c-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"c1d29cf4-2fdf-46ef-8470-e42a8226dd7c\") " pod="openstack/ovsdbserver-nb-0" Jan 27 18:58:06 crc kubenswrapper[4853]: I0127 18:58:06.427295 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/c1d29cf4-2fdf-46ef-8470-e42a8226dd7c-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"c1d29cf4-2fdf-46ef-8470-e42a8226dd7c\") " pod="openstack/ovsdbserver-nb-0" Jan 27 18:58:06 crc kubenswrapper[4853]: I0127 18:58:06.429316 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c1d29cf4-2fdf-46ef-8470-e42a8226dd7c-config\") pod \"ovsdbserver-nb-0\" (UID: \"c1d29cf4-2fdf-46ef-8470-e42a8226dd7c\") " pod="openstack/ovsdbserver-nb-0" Jan 27 18:58:06 crc kubenswrapper[4853]: I0127 18:58:06.429318 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c1d29cf4-2fdf-46ef-8470-e42a8226dd7c-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"c1d29cf4-2fdf-46ef-8470-e42a8226dd7c\") " pod="openstack/ovsdbserver-nb-0" Jan 27 18:58:06 crc kubenswrapper[4853]: I0127 18:58:06.433321 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c1d29cf4-2fdf-46ef-8470-e42a8226dd7c-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"c1d29cf4-2fdf-46ef-8470-e42a8226dd7c\") " pod="openstack/ovsdbserver-nb-0" Jan 27 
18:58:06 crc kubenswrapper[4853]: I0127 18:58:06.440034 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/c1d29cf4-2fdf-46ef-8470-e42a8226dd7c-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"c1d29cf4-2fdf-46ef-8470-e42a8226dd7c\") " pod="openstack/ovsdbserver-nb-0" Jan 27 18:58:06 crc kubenswrapper[4853]: I0127 18:58:06.444683 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"ovsdbserver-nb-0\" (UID: \"c1d29cf4-2fdf-46ef-8470-e42a8226dd7c\") " pod="openstack/ovsdbserver-nb-0" Jan 27 18:58:06 crc kubenswrapper[4853]: I0127 18:58:06.446907 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j97d9\" (UniqueName: \"kubernetes.io/projected/c1d29cf4-2fdf-46ef-8470-e42a8226dd7c-kube-api-access-j97d9\") pod \"ovsdbserver-nb-0\" (UID: \"c1d29cf4-2fdf-46ef-8470-e42a8226dd7c\") " pod="openstack/ovsdbserver-nb-0" Jan 27 18:58:06 crc kubenswrapper[4853]: I0127 18:58:06.504923 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Jan 27 18:58:09 crc kubenswrapper[4853]: I0127 18:58:09.873211 4853 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 27 18:58:10 crc kubenswrapper[4853]: I0127 18:58:10.312263 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"dbf533bd-2499-4724-b558-cf94c7017f3d","Type":"ContainerStarted","Data":"543ab36705ee398608ab07252fde80c3d14ef7baf6c7283aa12f1633730432cc"} Jan 27 18:58:12 crc kubenswrapper[4853]: I0127 18:58:12.137606 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-9bpxz" Jan 27 18:58:12 crc kubenswrapper[4853]: I0127 18:58:12.191303 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-9bpxz" Jan 27 18:58:12 crc kubenswrapper[4853]: I0127 18:58:12.373234 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-9bpxz"] Jan 27 18:58:13 crc kubenswrapper[4853]: I0127 18:58:13.332296 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-9bpxz" podUID="9549c452-0bc8-4b5c-a4a5-86dfd4572e35" containerName="registry-server" containerID="cri-o://b18e78dc65bf423e6704cd76c2e85a89b6d02e44bee8ad28859800a6faf3e933" gracePeriod=2 Jan 27 18:58:14 crc kubenswrapper[4853]: I0127 18:58:14.340621 4853 generic.go:334] "Generic (PLEG): container finished" podID="9549c452-0bc8-4b5c-a4a5-86dfd4572e35" containerID="b18e78dc65bf423e6704cd76c2e85a89b6d02e44bee8ad28859800a6faf3e933" exitCode=0 Jan 27 18:58:14 crc kubenswrapper[4853]: I0127 18:58:14.340881 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9bpxz" event={"ID":"9549c452-0bc8-4b5c-a4a5-86dfd4572e35","Type":"ContainerDied","Data":"b18e78dc65bf423e6704cd76c2e85a89b6d02e44bee8ad28859800a6faf3e933"} Jan 27 18:58:14 crc kubenswrapper[4853]: I0127 18:58:14.776682 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-42tzg"] Jan 27 18:58:14 crc kubenswrapper[4853]: I0127 18:58:14.779970 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-42tzg" Jan 27 18:58:14 crc kubenswrapper[4853]: I0127 18:58:14.787001 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-42tzg"] Jan 27 18:58:14 crc kubenswrapper[4853]: I0127 18:58:14.902393 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fssv9\" (UniqueName: \"kubernetes.io/projected/0b26c871-8544-4e5c-b341-95ebab807234-kube-api-access-fssv9\") pod \"certified-operators-42tzg\" (UID: \"0b26c871-8544-4e5c-b341-95ebab807234\") " pod="openshift-marketplace/certified-operators-42tzg" Jan 27 18:58:14 crc kubenswrapper[4853]: I0127 18:58:14.902567 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0b26c871-8544-4e5c-b341-95ebab807234-utilities\") pod \"certified-operators-42tzg\" (UID: \"0b26c871-8544-4e5c-b341-95ebab807234\") " pod="openshift-marketplace/certified-operators-42tzg" Jan 27 18:58:14 crc kubenswrapper[4853]: I0127 18:58:14.902595 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0b26c871-8544-4e5c-b341-95ebab807234-catalog-content\") pod \"certified-operators-42tzg\" (UID: \"0b26c871-8544-4e5c-b341-95ebab807234\") " pod="openshift-marketplace/certified-operators-42tzg" Jan 27 18:58:15 crc kubenswrapper[4853]: I0127 18:58:15.004396 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0b26c871-8544-4e5c-b341-95ebab807234-utilities\") pod \"certified-operators-42tzg\" (UID: \"0b26c871-8544-4e5c-b341-95ebab807234\") " pod="openshift-marketplace/certified-operators-42tzg" Jan 27 18:58:15 crc kubenswrapper[4853]: I0127 18:58:15.004445 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0b26c871-8544-4e5c-b341-95ebab807234-catalog-content\") pod \"certified-operators-42tzg\" (UID: \"0b26c871-8544-4e5c-b341-95ebab807234\") " pod="openshift-marketplace/certified-operators-42tzg" Jan 27 18:58:15 crc kubenswrapper[4853]: I0127 18:58:15.004468 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fssv9\" (UniqueName: \"kubernetes.io/projected/0b26c871-8544-4e5c-b341-95ebab807234-kube-api-access-fssv9\") pod \"certified-operators-42tzg\" (UID: \"0b26c871-8544-4e5c-b341-95ebab807234\") " pod="openshift-marketplace/certified-operators-42tzg" Jan 27 18:58:15 crc kubenswrapper[4853]: I0127 18:58:15.005341 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0b26c871-8544-4e5c-b341-95ebab807234-utilities\") pod \"certified-operators-42tzg\" (UID: \"0b26c871-8544-4e5c-b341-95ebab807234\") " pod="openshift-marketplace/certified-operators-42tzg" Jan 27 18:58:15 crc kubenswrapper[4853]: I0127 18:58:15.005606 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0b26c871-8544-4e5c-b341-95ebab807234-catalog-content\") pod \"certified-operators-42tzg\" (UID: \"0b26c871-8544-4e5c-b341-95ebab807234\") " pod="openshift-marketplace/certified-operators-42tzg" Jan 27 18:58:15 crc kubenswrapper[4853]: I0127 18:58:15.033464 4853 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-fssv9\" (UniqueName: \"kubernetes.io/projected/0b26c871-8544-4e5c-b341-95ebab807234-kube-api-access-fssv9\") pod \"certified-operators-42tzg\" (UID: \"0b26c871-8544-4e5c-b341-95ebab807234\") " pod="openshift-marketplace/certified-operators-42tzg" Jan 27 18:58:15 crc kubenswrapper[4853]: I0127 18:58:15.125312 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-42tzg" Jan 27 18:58:15 crc kubenswrapper[4853]: E0127 18:58:15.192649 4853 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified" Jan 27 18:58:15 crc kubenswrapper[4853]: E0127 18:58:15.192815 4853 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:setup-container,Image:quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified,Command:[sh -c cp /tmp/erlang-cookie-secret/.erlang.cookie /var/lib/rabbitmq/.erlang.cookie && chmod 600 /var/lib/rabbitmq/.erlang.cookie ; cp /tmp/rabbitmq-plugins/enabled_plugins /operator/enabled_plugins ; echo '[default]' > /var/lib/rabbitmq/.rabbitmqadmin.conf && sed -e 's/default_user/username/' -e 's/default_pass/password/' /tmp/default_user.conf >> /var/lib/rabbitmq/.rabbitmqadmin.conf && chmod 600 /var/lib/rabbitmq/.rabbitmqadmin.conf ; sleep 30],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:plugins-conf,ReadOnly:false,MountPath:/tmp/rabbitmq-plugins/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-erlang-cookie,ReadOnly:false,MountPath:/var/lib/rabbitmq/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:erlang-cookie-secret,ReadOnly:false,MountPath:/tmp/erlang-cookie-secret/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-plugins,ReadOnly:false,MountPath:/operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:persistence,ReadOnly:false,MountPath:/var/lib/rabbitmq/mnesia/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-confd,ReadOnly:false,MountPath:/tmp/default_user.conf,SubPath:default_user.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pvz2l,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
rabbitmq-server-0_openstack(2f56570a-76ed-4182-b147-6288fa56d729): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 27 18:58:15 crc kubenswrapper[4853]: E0127 18:58:15.196252 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/rabbitmq-server-0" podUID="2f56570a-76ed-4182-b147-6288fa56d729" Jan 27 18:58:15 crc kubenswrapper[4853]: E0127 18:58:15.351437 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified\\\"\"" pod="openstack/rabbitmq-server-0" podUID="2f56570a-76ed-4182-b147-6288fa56d729" Jan 27 18:58:22 crc kubenswrapper[4853]: E0127 18:58:22.068573 4853 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of b18e78dc65bf423e6704cd76c2e85a89b6d02e44bee8ad28859800a6faf3e933 is running failed: container process not found" containerID="b18e78dc65bf423e6704cd76c2e85a89b6d02e44bee8ad28859800a6faf3e933" cmd=["grpc_health_probe","-addr=:50051"] Jan 27 18:58:22 crc kubenswrapper[4853]: E0127 18:58:22.069072 4853 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of b18e78dc65bf423e6704cd76c2e85a89b6d02e44bee8ad28859800a6faf3e933 is running failed: container process not found" containerID="b18e78dc65bf423e6704cd76c2e85a89b6d02e44bee8ad28859800a6faf3e933" cmd=["grpc_health_probe","-addr=:50051"] Jan 27 18:58:22 crc kubenswrapper[4853]: E0127 18:58:22.069415 4853 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of b18e78dc65bf423e6704cd76c2e85a89b6d02e44bee8ad28859800a6faf3e933 is running failed: container process not found" containerID="b18e78dc65bf423e6704cd76c2e85a89b6d02e44bee8ad28859800a6faf3e933" cmd=["grpc_health_probe","-addr=:50051"] Jan 27 18:58:22 crc kubenswrapper[4853]: E0127 18:58:22.069440 4853 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of b18e78dc65bf423e6704cd76c2e85a89b6d02e44bee8ad28859800a6faf3e933 is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/redhat-operators-9bpxz" podUID="9549c452-0bc8-4b5c-a4a5-86dfd4572e35" containerName="registry-server" Jan 27 18:58:22 crc kubenswrapper[4853]: E0127 18:58:22.605564 4853 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified" Jan 27 18:58:22 crc kubenswrapper[4853]: E0127 18:58:22.605724 4853 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:setup-container,Image:quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified,Command:[sh -c cp /tmp/erlang-cookie-secret/.erlang.cookie /var/lib/rabbitmq/.erlang.cookie && chmod 600 /var/lib/rabbitmq/.erlang.cookie ; cp /tmp/rabbitmq-plugins/enabled_plugins /operator/enabled_plugins ; echo '[default]' > /var/lib/rabbitmq/.rabbitmqadmin.conf && sed -e 's/default_user/username/' -e 
's/default_pass/password/' /tmp/default_user.conf >> /var/lib/rabbitmq/.rabbitmqadmin.conf && chmod 600 /var/lib/rabbitmq/.rabbitmqadmin.conf ; sleep 30],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:plugins-conf,ReadOnly:false,MountPath:/tmp/rabbitmq-plugins/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-erlang-cookie,ReadOnly:false,MountPath:/var/lib/rabbitmq/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:erlang-cookie-secret,ReadOnly:false,MountPath:/tmp/erlang-cookie-secret/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-plugins,ReadOnly:false,MountPath:/operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:persistence,ReadOnly:false,MountPath:/var/lib/rabbitmq/mnesia/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-confd,ReadOnly:false,MountPath:/tmp/default_user.conf,SubPath:default_user.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dnfq8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cell1-server-0_openstack(525d82bf-e147-429f-8915-365aa48be00b): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 27 18:58:22 crc kubenswrapper[4853]: E0127 18:58:22.607248 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/rabbitmq-cell1-server-0" podUID="525d82bf-e147-429f-8915-365aa48be00b" Jan 27 18:58:22 crc kubenswrapper[4853]: I0127 18:58:22.690859 4853 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-9bpxz" Jan 27 18:58:22 crc kubenswrapper[4853]: I0127 18:58:22.753087 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9549c452-0bc8-4b5c-a4a5-86dfd4572e35-catalog-content\") pod \"9549c452-0bc8-4b5c-a4a5-86dfd4572e35\" (UID: \"9549c452-0bc8-4b5c-a4a5-86dfd4572e35\") " Jan 27 18:58:22 crc kubenswrapper[4853]: I0127 18:58:22.753144 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9549c452-0bc8-4b5c-a4a5-86dfd4572e35-utilities\") pod \"9549c452-0bc8-4b5c-a4a5-86dfd4572e35\" (UID: \"9549c452-0bc8-4b5c-a4a5-86dfd4572e35\") " Jan 27 18:58:22 crc kubenswrapper[4853]: I0127 18:58:22.753195 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nwgmp\" (UniqueName: \"kubernetes.io/projected/9549c452-0bc8-4b5c-a4a5-86dfd4572e35-kube-api-access-nwgmp\") pod \"9549c452-0bc8-4b5c-a4a5-86dfd4572e35\" (UID: \"9549c452-0bc8-4b5c-a4a5-86dfd4572e35\") " Jan 27 18:58:22 crc kubenswrapper[4853]: I0127 18:58:22.755186 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9549c452-0bc8-4b5c-a4a5-86dfd4572e35-utilities" (OuterVolumeSpecName: "utilities") pod "9549c452-0bc8-4b5c-a4a5-86dfd4572e35" (UID: "9549c452-0bc8-4b5c-a4a5-86dfd4572e35"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 18:58:22 crc kubenswrapper[4853]: I0127 18:58:22.759493 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9549c452-0bc8-4b5c-a4a5-86dfd4572e35-kube-api-access-nwgmp" (OuterVolumeSpecName: "kube-api-access-nwgmp") pod "9549c452-0bc8-4b5c-a4a5-86dfd4572e35" (UID: "9549c452-0bc8-4b5c-a4a5-86dfd4572e35"). InnerVolumeSpecName "kube-api-access-nwgmp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:58:22 crc kubenswrapper[4853]: I0127 18:58:22.855073 4853 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9549c452-0bc8-4b5c-a4a5-86dfd4572e35-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 18:58:22 crc kubenswrapper[4853]: I0127 18:58:22.855103 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nwgmp\" (UniqueName: \"kubernetes.io/projected/9549c452-0bc8-4b5c-a4a5-86dfd4572e35-kube-api-access-nwgmp\") on node \"crc\" DevicePath \"\"" Jan 27 18:58:22 crc kubenswrapper[4853]: I0127 18:58:22.890475 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9549c452-0bc8-4b5c-a4a5-86dfd4572e35-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9549c452-0bc8-4b5c-a4a5-86dfd4572e35" (UID: "9549c452-0bc8-4b5c-a4a5-86dfd4572e35"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 18:58:22 crc kubenswrapper[4853]: I0127 18:58:22.956758 4853 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9549c452-0bc8-4b5c-a4a5-86dfd4572e35-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 18:58:23 crc kubenswrapper[4853]: I0127 18:58:23.457906 4853 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-9bpxz" Jan 27 18:58:23 crc kubenswrapper[4853]: I0127 18:58:23.457916 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9bpxz" event={"ID":"9549c452-0bc8-4b5c-a4a5-86dfd4572e35","Type":"ContainerDied","Data":"033330d019ad5567d39c61074818c9ff05223a4cffe739b2e6bbf314f8be482b"} Jan 27 18:58:23 crc kubenswrapper[4853]: I0127 18:58:23.458733 4853 scope.go:117] "RemoveContainer" containerID="b18e78dc65bf423e6704cd76c2e85a89b6d02e44bee8ad28859800a6faf3e933" Jan 27 18:58:23 crc kubenswrapper[4853]: E0127 18:58:23.462511 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified\\\"\"" pod="openstack/rabbitmq-cell1-server-0" podUID="525d82bf-e147-429f-8915-365aa48be00b" Jan 27 18:58:23 crc kubenswrapper[4853]: I0127 18:58:23.524184 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-9bpxz"] Jan 27 18:58:23 crc kubenswrapper[4853]: I0127 18:58:23.530016 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-9bpxz"] Jan 27 18:58:24 crc kubenswrapper[4853]: I0127 18:58:24.121239 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9549c452-0bc8-4b5c-a4a5-86dfd4572e35" path="/var/lib/kubelet/pods/9549c452-0bc8-4b5c-a4a5-86dfd4572e35/volumes" Jan 27 18:58:24 crc kubenswrapper[4853]: E0127 18:58:24.239935 4853 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Jan 27 18:58:24 crc kubenswrapper[4853]: E0127 18:58:24.240503 4853 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n68chd6h679hbfh55fhc6h5ffh5d8h94h56ch589hb4hc5h57bh677hcdh655h8dh667h675h654h66ch567h8fh659h5b4h675h566h55bh54h67dh6dq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rrfs8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-666b6646f7-z9c58_openstack(5318f74c-0368-48c1-be29-dbb63a36ba18): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 27 18:58:24 crc kubenswrapper[4853]: E0127 18:58:24.241710 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-666b6646f7-z9c58" podUID="5318f74c-0368-48c1-be29-dbb63a36ba18" Jan 27 18:58:24 crc kubenswrapper[4853]: E0127 18:58:24.296433 4853 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Jan 27 18:58:24 crc kubenswrapper[4853]: E0127 18:58:24.296597 4853 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n659h4h664hbh658h587h67ch89h587h8fh679hc6hf9h55fh644h5d5h698h68dh5cdh5ffh669h54ch9h689hb8hd4h5bfhd8h5d7h5fh665h574q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vr7wz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-57d769cc4f-qhc9l_openstack(324dd0b6-9b7c-4b19-a069-346afc03f8cc): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 27 18:58:24 crc kubenswrapper[4853]: E0127 18:58:24.297912 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-57d769cc4f-qhc9l" podUID="324dd0b6-9b7c-4b19-a069-346afc03f8cc" Jan 27 18:58:24 crc kubenswrapper[4853]: E0127 18:58:24.299165 4853 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Jan 27 18:58:24 crc kubenswrapper[4853]: E0127 18:58:24.299301 4853 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nffh5bdhf4h5f8h79h55h77h58fh56dh7bh6fh578hbch55dh68h56bhd9h65dh57ch658hc9h566h666h688h58h65dh684h5d7h6ch575h5d6h88q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9cqkp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-675f4bcbfc-2vxkg_openstack(b0d655dc-4bba-4edf-aa27-aa75d0ae83d7): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 27 18:58:24 crc kubenswrapper[4853]: E0127 18:58:24.300462 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-675f4bcbfc-2vxkg" podUID="b0d655dc-4bba-4edf-aa27-aa75d0ae83d7" Jan 27 18:58:24 crc kubenswrapper[4853]: E0127 18:58:24.307778 4853 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Jan 27 18:58:24 crc kubenswrapper[4853]: E0127 18:58:24.307951 4853 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:ndfhb5h667h568h584h5f9h58dh565h664h587h597h577h64bh5c4h66fh647hbdh68ch5c5h68dh686h5f7h64hd7hc6h55fh57bh98h57fh87h5fh57fq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-25jcc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-78dd6ddcc-sfbt8_openstack(ad0731ff-55d1-42e9-993e-606177c224f6): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 27 18:58:24 crc kubenswrapper[4853]: E0127 18:58:24.309415 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-78dd6ddcc-sfbt8" podUID="ad0731ff-55d1-42e9-993e-606177c224f6" Jan 27 18:58:24 crc kubenswrapper[4853]: I0127 18:58:24.320302 4853 scope.go:117] "RemoveContainer" containerID="ca7b509fc76d52b70e02e2d2df7444536d8bd5cf242ecd0a63e6549bc8d03725" Jan 27 18:58:24 crc kubenswrapper[4853]: I0127 18:58:24.489640 4853 scope.go:117] "RemoveContainer" containerID="fccea5a0c2cddda404c8371bda30ba89f69e2d470348347e9028c8e44fa37e9b" Jan 27 18:58:24 crc kubenswrapper[4853]: E0127 18:58:24.508576 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified\\\"\"" pod="openstack/dnsmasq-dns-57d769cc4f-qhc9l" podUID="324dd0b6-9b7c-4b19-a069-346afc03f8cc" Jan 27 18:58:24 crc kubenswrapper[4853]: E0127 18:58:24.508829 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified\\\"\"" pod="openstack/dnsmasq-dns-666b6646f7-z9c58" podUID="5318f74c-0368-48c1-be29-dbb63a36ba18" Jan 27 18:58:24 crc kubenswrapper[4853]: I0127 18:58:24.953848 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-xkd2q"] Jan 27 
18:58:24 crc kubenswrapper[4853]: W0127 18:58:24.958776 4853 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4d52eb59_75a5_4074_8bfb_c9dab8b0c97f.slice/crio-0303977bad9731c4cdf7aa8a3c5f942c38a662f8371e5e8f2084af674f0c6bbe WatchSource:0}: Error finding container 0303977bad9731c4cdf7aa8a3c5f942c38a662f8371e5e8f2084af674f0c6bbe: Status 404 returned error can't find the container with id 0303977bad9731c4cdf7aa8a3c5f942c38a662f8371e5e8f2084af674f0c6bbe Jan 27 18:58:25 crc kubenswrapper[4853]: I0127 18:58:25.153174 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-h84g7"] Jan 27 18:58:25 crc kubenswrapper[4853]: I0127 18:58:25.178418 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-sfbt8" Jan 27 18:58:25 crc kubenswrapper[4853]: W0127 18:58:25.183040 4853 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd714f652_b46e_4843_89a9_0503e169cc42.slice/crio-46312511e4959c7c6952973789432fa079267af62e03dda97e4cedc6191daa52 WatchSource:0}: Error finding container 46312511e4959c7c6952973789432fa079267af62e03dda97e4cedc6191daa52: Status 404 returned error can't find the container with id 46312511e4959c7c6952973789432fa079267af62e03dda97e4cedc6191daa52 Jan 27 18:58:25 crc kubenswrapper[4853]: I0127 18:58:25.185523 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-2vxkg" Jan 27 18:58:25 crc kubenswrapper[4853]: I0127 18:58:25.196286 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 27 18:58:25 crc kubenswrapper[4853]: I0127 18:58:25.215169 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-25jcc\" (UniqueName: \"kubernetes.io/projected/ad0731ff-55d1-42e9-993e-606177c224f6-kube-api-access-25jcc\") pod \"ad0731ff-55d1-42e9-993e-606177c224f6\" (UID: \"ad0731ff-55d1-42e9-993e-606177c224f6\") " Jan 27 18:58:25 crc kubenswrapper[4853]: I0127 18:58:25.215519 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ad0731ff-55d1-42e9-993e-606177c224f6-config\") pod \"ad0731ff-55d1-42e9-993e-606177c224f6\" (UID: \"ad0731ff-55d1-42e9-993e-606177c224f6\") " Jan 27 18:58:25 crc kubenswrapper[4853]: I0127 18:58:25.215616 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ad0731ff-55d1-42e9-993e-606177c224f6-dns-svc\") pod \"ad0731ff-55d1-42e9-993e-606177c224f6\" (UID: \"ad0731ff-55d1-42e9-993e-606177c224f6\") " Jan 27 18:58:25 crc kubenswrapper[4853]: I0127 18:58:25.215677 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b0d655dc-4bba-4edf-aa27-aa75d0ae83d7-config\") pod \"b0d655dc-4bba-4edf-aa27-aa75d0ae83d7\" (UID: \"b0d655dc-4bba-4edf-aa27-aa75d0ae83d7\") " Jan 27 18:58:25 crc kubenswrapper[4853]: I0127 18:58:25.215742 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9cqkp\" (UniqueName: \"kubernetes.io/projected/b0d655dc-4bba-4edf-aa27-aa75d0ae83d7-kube-api-access-9cqkp\") pod \"b0d655dc-4bba-4edf-aa27-aa75d0ae83d7\" (UID: \"b0d655dc-4bba-4edf-aa27-aa75d0ae83d7\") " Jan 27 18:58:25 crc 
kubenswrapper[4853]: I0127 18:58:25.216010 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ad0731ff-55d1-42e9-993e-606177c224f6-config" (OuterVolumeSpecName: "config") pod "ad0731ff-55d1-42e9-993e-606177c224f6" (UID: "ad0731ff-55d1-42e9-993e-606177c224f6"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:58:25 crc kubenswrapper[4853]: I0127 18:58:25.216037 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ad0731ff-55d1-42e9-993e-606177c224f6-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "ad0731ff-55d1-42e9-993e-606177c224f6" (UID: "ad0731ff-55d1-42e9-993e-606177c224f6"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:58:25 crc kubenswrapper[4853]: I0127 18:58:25.216322 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b0d655dc-4bba-4edf-aa27-aa75d0ae83d7-config" (OuterVolumeSpecName: "config") pod "b0d655dc-4bba-4edf-aa27-aa75d0ae83d7" (UID: "b0d655dc-4bba-4edf-aa27-aa75d0ae83d7"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:58:25 crc kubenswrapper[4853]: I0127 18:58:25.216783 4853 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b0d655dc-4bba-4edf-aa27-aa75d0ae83d7-config\") on node \"crc\" DevicePath \"\"" Jan 27 18:58:25 crc kubenswrapper[4853]: I0127 18:58:25.216801 4853 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ad0731ff-55d1-42e9-993e-606177c224f6-config\") on node \"crc\" DevicePath \"\"" Jan 27 18:58:25 crc kubenswrapper[4853]: I0127 18:58:25.216813 4853 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ad0731ff-55d1-42e9-993e-606177c224f6-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 27 18:58:25 crc kubenswrapper[4853]: I0127 18:58:25.245622 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b0d655dc-4bba-4edf-aa27-aa75d0ae83d7-kube-api-access-9cqkp" (OuterVolumeSpecName: "kube-api-access-9cqkp") pod "b0d655dc-4bba-4edf-aa27-aa75d0ae83d7" (UID: "b0d655dc-4bba-4edf-aa27-aa75d0ae83d7"). InnerVolumeSpecName "kube-api-access-9cqkp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:58:25 crc kubenswrapper[4853]: I0127 18:58:25.256324 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ad0731ff-55d1-42e9-993e-606177c224f6-kube-api-access-25jcc" (OuterVolumeSpecName: "kube-api-access-25jcc") pod "ad0731ff-55d1-42e9-993e-606177c224f6" (UID: "ad0731ff-55d1-42e9-993e-606177c224f6"). InnerVolumeSpecName "kube-api-access-25jcc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:58:25 crc kubenswrapper[4853]: I0127 18:58:25.319182 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9cqkp\" (UniqueName: \"kubernetes.io/projected/b0d655dc-4bba-4edf-aa27-aa75d0ae83d7-kube-api-access-9cqkp\") on node \"crc\" DevicePath \"\"" Jan 27 18:58:25 crc kubenswrapper[4853]: I0127 18:58:25.319214 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-25jcc\" (UniqueName: \"kubernetes.io/projected/ad0731ff-55d1-42e9-993e-606177c224f6-kube-api-access-25jcc\") on node \"crc\" DevicePath \"\"" Jan 27 18:58:25 crc kubenswrapper[4853]: I0127 18:58:25.330757 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-42tzg"] Jan 27 18:58:25 crc kubenswrapper[4853]: I0127 18:58:25.337253 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 27 18:58:25 crc kubenswrapper[4853]: W0127 18:58:25.362843 4853 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc1d29cf4_2fdf_46ef_8470_e42a8226dd7c.slice/crio-de6e3c54585aeebeecbd7236759d5daeac1f6aa40f367d361474795f814e012b WatchSource:0}: Error finding container de6e3c54585aeebeecbd7236759d5daeac1f6aa40f367d361474795f814e012b: Status 404 returned error can't find the container with id de6e3c54585aeebeecbd7236759d5daeac1f6aa40f367d361474795f814e012b Jan 27 18:58:25 crc kubenswrapper[4853]: I0127 18:58:25.419723 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 27 18:58:25 crc kubenswrapper[4853]: I0127 18:58:25.506489 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-sfbt8" Jan 27 18:58:25 crc kubenswrapper[4853]: I0127 18:58:25.506494 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-sfbt8" event={"ID":"ad0731ff-55d1-42e9-993e-606177c224f6","Type":"ContainerDied","Data":"7e84e51d542588a179c0334282967538d8a9774b345a1c303ad2de53acaf7f5d"} Jan 27 18:58:25 crc kubenswrapper[4853]: I0127 18:58:25.509779 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"ccbab76c-f034-4f3b-9dfe-fcaf98d45d87","Type":"ContainerStarted","Data":"a540c44d28da9da11ebfc910936f10053208c82ad5bf85f94ff31615561edfa3"} Jan 27 18:58:25 crc kubenswrapper[4853]: I0127 18:58:25.518359 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-xkd2q" event={"ID":"4d52eb59-75a5-4074-8bfb-c9dab8b0c97f","Type":"ContainerStarted","Data":"0303977bad9731c4cdf7aa8a3c5f942c38a662f8371e5e8f2084af674f0c6bbe"} Jan 27 18:58:25 crc kubenswrapper[4853]: I0127 18:58:25.525479 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-h84g7" event={"ID":"d714f652-b46e-4843-89a9-0503e169cc42","Type":"ContainerStarted","Data":"46312511e4959c7c6952973789432fa079267af62e03dda97e4cedc6191daa52"} Jan 27 18:58:25 crc kubenswrapper[4853]: I0127 18:58:25.528246 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"94965b7d-5efe-4ef3-aadf-41a550c47752","Type":"ContainerStarted","Data":"2b9ff64bdeab3acd709318b8b7777e41d118c378e3c7ce26c4ca852f4448b4b1"} Jan 27 18:58:25 crc kubenswrapper[4853]: I0127 18:58:25.530492 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/memcached-0" Jan 27 18:58:25 crc 
kubenswrapper[4853]: I0127 18:58:25.539688 4853 generic.go:334] "Generic (PLEG): container finished" podID="7aae743e-dcc5-404a-b0db-86933964d549" containerID="a91b981619f0dec1014ed4a1acca62c59360d3657e9447cb0fa549a4475e08e0" exitCode=0 Jan 27 18:58:25 crc kubenswrapper[4853]: I0127 18:58:25.539777 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7j77j" event={"ID":"7aae743e-dcc5-404a-b0db-86933964d549","Type":"ContainerDied","Data":"a91b981619f0dec1014ed4a1acca62c59360d3657e9447cb0fa549a4475e08e0"} Jan 27 18:58:25 crc kubenswrapper[4853]: I0127 18:58:25.541213 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"eff8efe8-39b3-4aa6-af17-f40690d3d639","Type":"ContainerStarted","Data":"019bc8b3535a1722c74c7318bdb972b30ddb50384af1e501a7efaee1bd94ed13"} Jan 27 18:58:25 crc kubenswrapper[4853]: I0127 18:58:25.543800 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-42tzg" event={"ID":"0b26c871-8544-4e5c-b341-95ebab807234","Type":"ContainerStarted","Data":"888b8e51f6951f8dd6be6f2cfacf0814224d7a01b0853d686fcf17da9c681954"} Jan 27 18:58:25 crc kubenswrapper[4853]: I0127 18:58:25.547638 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"dbf533bd-2499-4724-b558-cf94c7017f3d","Type":"ContainerStarted","Data":"d429cbcfd888050183d78c055ab94a031b74dae7b3ab31ebd63b91157edb02f6"} Jan 27 18:58:25 crc kubenswrapper[4853]: I0127 18:58:25.551987 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"c1d29cf4-2fdf-46ef-8470-e42a8226dd7c","Type":"ContainerStarted","Data":"de6e3c54585aeebeecbd7236759d5daeac1f6aa40f367d361474795f814e012b"} Jan 27 18:58:25 crc kubenswrapper[4853]: I0127 18:58:25.552690 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-0" podStartSLOduration=3.090809266 podStartE2EDuration="29.552672511s" podCreationTimestamp="2026-01-27 18:57:56 +0000 UTC" firstStartedPulling="2026-01-27 18:57:57.869530505 +0000 UTC m=+920.332073388" lastFinishedPulling="2026-01-27 18:58:24.33139375 +0000 UTC m=+946.793936633" observedRunningTime="2026-01-27 18:58:25.551607241 +0000 UTC m=+948.014150144" watchObservedRunningTime="2026-01-27 18:58:25.552672511 +0000 UTC m=+948.015215394" Jan 27 18:58:25 crc kubenswrapper[4853]: I0127 18:58:25.557861 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"368a8f46-825c-43ad-803b-c7fdf6ca048c","Type":"ContainerStarted","Data":"6b6ff330c4ba27a1e9270d721426df6044e021244b6814d2e32c27ffe7a0ba3f"} Jan 27 18:58:25 crc kubenswrapper[4853]: I0127 18:58:25.558972 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-2vxkg" event={"ID":"b0d655dc-4bba-4edf-aa27-aa75d0ae83d7","Type":"ContainerDied","Data":"39c62da8903ba5abc85a85d07e7797116ae0ddc4a2d4e1ea12f10b1938d614b7"} Jan 27 18:58:25 crc kubenswrapper[4853]: I0127 18:58:25.559048 4853 util.go:48] "No ready sandbox for pod can be found. 
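
The pod_startup_latency_tracker entry above for memcached-0 is worth decoding: podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp, while podStartSLOduration additionally excludes the image-pull window (firstStartedPulling to lastFinishedPulling), which is how 29.55s end to end collapses to about 3.09s. A quick check of that arithmetic (Python; timestamps truncated to microseconds for strptime):

    from datetime import datetime

    # Re-derive memcached-0's startup numbers from the tracker entry above.
    fmt = "%Y-%m-%d %H:%M:%S.%f"
    created   = datetime.strptime("2026-01-27 18:57:56.000000", fmt)  # podCreationTimestamp
    running   = datetime.strptime("2026-01-27 18:58:25.552672", fmt)  # watchObservedRunningTime
    pull_from = datetime.strptime("2026-01-27 18:57:57.869530", fmt)  # firstStartedPulling
    pull_to   = datetime.strptime("2026-01-27 18:58:24.331393", fmt)  # lastFinishedPulling

    e2e  = (running - created).total_seconds()
    pull = (pull_to - pull_from).total_seconds()
    print(f"e2e={e2e:.2f}s pull={pull:.2f}s slo={e2e - pull:.2f}s")
    # e2e=29.55s pull=26.46s slo=3.09s, matching the logged values
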
Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-2vxkg" Jan 27 18:58:25 crc kubenswrapper[4853]: I0127 18:58:25.687976 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-2vxkg"] Jan 27 18:58:25 crc kubenswrapper[4853]: I0127 18:58:25.693526 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-2vxkg"] Jan 27 18:58:25 crc kubenswrapper[4853]: I0127 18:58:25.731484 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-sfbt8"] Jan 27 18:58:25 crc kubenswrapper[4853]: I0127 18:58:25.740138 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-sfbt8"] Jan 27 18:58:26 crc kubenswrapper[4853]: I0127 18:58:26.138583 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ad0731ff-55d1-42e9-993e-606177c224f6" path="/var/lib/kubelet/pods/ad0731ff-55d1-42e9-993e-606177c224f6/volumes" Jan 27 18:58:26 crc kubenswrapper[4853]: I0127 18:58:26.139005 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b0d655dc-4bba-4edf-aa27-aa75d0ae83d7" path="/var/lib/kubelet/pods/b0d655dc-4bba-4edf-aa27-aa75d0ae83d7/volumes" Jan 27 18:58:26 crc kubenswrapper[4853]: I0127 18:58:26.450405 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-qgd5v"] Jan 27 18:58:26 crc kubenswrapper[4853]: W0127 18:58:26.508555 4853 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2b33e2a4_b173_4d0a_b3b4_9ee1c3b92704.slice/crio-5997a0c8c5b0da478c6954dba21c4307bdc4190be11efd5205f88d2d2b5bfe10 WatchSource:0}: Error finding container 5997a0c8c5b0da478c6954dba21c4307bdc4190be11efd5205f88d2d2b5bfe10: Status 404 returned error can't find the container with id 5997a0c8c5b0da478c6954dba21c4307bdc4190be11efd5205f88d2d2b5bfe10 Jan 27 18:58:26 crc kubenswrapper[4853]: I0127 18:58:26.592516 4853 generic.go:334] "Generic (PLEG): container finished" podID="d714f652-b46e-4843-89a9-0503e169cc42" containerID="c005e59a1cd54ad5321354601b24185eacff748d530059e0477027d6cfb9c31d" exitCode=0 Jan 27 18:58:26 crc kubenswrapper[4853]: I0127 18:58:26.593079 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-h84g7" event={"ID":"d714f652-b46e-4843-89a9-0503e169cc42","Type":"ContainerDied","Data":"c005e59a1cd54ad5321354601b24185eacff748d530059e0477027d6cfb9c31d"} Jan 27 18:58:26 crc kubenswrapper[4853]: I0127 18:58:26.599357 4853 generic.go:334] "Generic (PLEG): container finished" podID="0b26c871-8544-4e5c-b341-95ebab807234" containerID="ba05453d834bb2d6f7e063e52a6204df3beaa4787b68a1c79b9614cce6037e59" exitCode=0 Jan 27 18:58:26 crc kubenswrapper[4853]: I0127 18:58:26.599480 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-42tzg" event={"ID":"0b26c871-8544-4e5c-b341-95ebab807234","Type":"ContainerDied","Data":"ba05453d834bb2d6f7e063e52a6204df3beaa4787b68a1c79b9614cce6037e59"} Jan 27 18:58:26 crc kubenswrapper[4853]: I0127 18:58:26.603565 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-qgd5v" event={"ID":"2b33e2a4-b173-4d0a-b3b4-9ee1c3b92704","Type":"ContainerStarted","Data":"5997a0c8c5b0da478c6954dba21c4307bdc4190be11efd5205f88d2d2b5bfe10"} Jan 27 18:58:28 crc kubenswrapper[4853]: I0127 18:58:28.617431 4853 generic.go:334] "Generic (PLEG): container finished" podID="dbf533bd-2499-4724-b558-cf94c7017f3d" 
containerID="d429cbcfd888050183d78c055ab94a031b74dae7b3ab31ebd63b91157edb02f6" exitCode=0 Jan 27 18:58:28 crc kubenswrapper[4853]: I0127 18:58:28.617511 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"dbf533bd-2499-4724-b558-cf94c7017f3d","Type":"ContainerDied","Data":"d429cbcfd888050183d78c055ab94a031b74dae7b3ab31ebd63b91157edb02f6"} Jan 27 18:58:28 crc kubenswrapper[4853]: I0127 18:58:28.620362 4853 generic.go:334] "Generic (PLEG): container finished" podID="ccbab76c-f034-4f3b-9dfe-fcaf98d45d87" containerID="a540c44d28da9da11ebfc910936f10053208c82ad5bf85f94ff31615561edfa3" exitCode=0 Jan 27 18:58:28 crc kubenswrapper[4853]: I0127 18:58:28.620412 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"ccbab76c-f034-4f3b-9dfe-fcaf98d45d87","Type":"ContainerDied","Data":"a540c44d28da9da11ebfc910936f10053208c82ad5bf85f94ff31615561edfa3"} Jan 27 18:58:30 crc kubenswrapper[4853]: I0127 18:58:30.644376 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"ccbab76c-f034-4f3b-9dfe-fcaf98d45d87","Type":"ContainerStarted","Data":"ad2a7a8801498f6d346bfc38f9ef6a03e2c78fde4bb16d6332f3cbc32ac19146"} Jan 27 18:58:30 crc kubenswrapper[4853]: I0127 18:58:30.646682 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"368a8f46-825c-43ad-803b-c7fdf6ca048c","Type":"ContainerStarted","Data":"cb951197749143d1534e779b961cef17e5dd4255ddc85cb46c66815f27c3c949"} Jan 27 18:58:30 crc kubenswrapper[4853]: I0127 18:58:30.651240 4853 generic.go:334] "Generic (PLEG): container finished" podID="d714f652-b46e-4843-89a9-0503e169cc42" containerID="c431c883bc871bd956051bb74438b9dd533e67d2aa71b4456f89aada3b724ab4" exitCode=0 Jan 27 18:58:30 crc kubenswrapper[4853]: I0127 18:58:30.651335 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-h84g7" event={"ID":"d714f652-b46e-4843-89a9-0503e169cc42","Type":"ContainerDied","Data":"c431c883bc871bd956051bb74438b9dd533e67d2aa71b4456f89aada3b724ab4"} Jan 27 18:58:30 crc kubenswrapper[4853]: I0127 18:58:30.654888 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7j77j" event={"ID":"7aae743e-dcc5-404a-b0db-86933964d549","Type":"ContainerStarted","Data":"8f4efe9c86ef20478baa94371a9cc4a0b682b57afe56cf502843af135d441897"} Jan 27 18:58:30 crc kubenswrapper[4853]: I0127 18:58:30.664601 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"eff8efe8-39b3-4aa6-af17-f40690d3d639","Type":"ContainerStarted","Data":"e3254e87d7b580952e660ddcc82a0b8104ea01ae613b0f53522b2154e14c56c9"} Jan 27 18:58:30 crc kubenswrapper[4853]: I0127 18:58:30.665375 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Jan 27 18:58:30 crc kubenswrapper[4853]: I0127 18:58:30.667635 4853 generic.go:334] "Generic (PLEG): container finished" podID="0b26c871-8544-4e5c-b341-95ebab807234" containerID="cfa825982ed017dc7cca517fdca19495d1fe991c89c310a4fc9d6c10784bd758" exitCode=0 Jan 27 18:58:30 crc kubenswrapper[4853]: I0127 18:58:30.667693 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-42tzg" event={"ID":"0b26c871-8544-4e5c-b341-95ebab807234","Type":"ContainerDied","Data":"cfa825982ed017dc7cca517fdca19495d1fe991c89c310a4fc9d6c10784bd758"} Jan 27 18:58:30 crc 
kubenswrapper[4853]: I0127 18:58:30.670934 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-0" podStartSLOduration=8.518823854 podStartE2EDuration="36.670914604s" podCreationTimestamp="2026-01-27 18:57:54 +0000 UTC" firstStartedPulling="2026-01-27 18:57:56.079055408 +0000 UTC m=+918.541598291" lastFinishedPulling="2026-01-27 18:58:24.231146158 +0000 UTC m=+946.693689041" observedRunningTime="2026-01-27 18:58:30.665088569 +0000 UTC m=+953.127631452" watchObservedRunningTime="2026-01-27 18:58:30.670914604 +0000 UTC m=+953.133457487" Jan 27 18:58:30 crc kubenswrapper[4853]: I0127 18:58:30.676657 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"dbf533bd-2499-4724-b558-cf94c7017f3d","Type":"ContainerStarted","Data":"c0f2f23c75696b9f80b527db430d3174417a03a439e5ca7adf9bf7302b15a4b3"} Jan 27 18:58:30 crc kubenswrapper[4853]: I0127 18:58:30.689435 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-7j77j" podStartSLOduration=17.654837068 podStartE2EDuration="34.689418218s" podCreationTimestamp="2026-01-27 18:57:56 +0000 UTC" firstStartedPulling="2026-01-27 18:58:09.872646891 +0000 UTC m=+932.335189804" lastFinishedPulling="2026-01-27 18:58:26.907228071 +0000 UTC m=+949.369770954" observedRunningTime="2026-01-27 18:58:30.6887722 +0000 UTC m=+953.151315093" watchObservedRunningTime="2026-01-27 18:58:30.689418218 +0000 UTC m=+953.151961101" Jan 27 18:58:30 crc kubenswrapper[4853]: I0127 18:58:30.733689 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=26.962139238 podStartE2EDuration="31.733666243s" podCreationTimestamp="2026-01-27 18:57:59 +0000 UTC" firstStartedPulling="2026-01-27 18:58:25.245344749 +0000 UTC m=+947.707887632" lastFinishedPulling="2026-01-27 18:58:30.016871754 +0000 UTC m=+952.479414637" observedRunningTime="2026-01-27 18:58:30.712530824 +0000 UTC m=+953.175073727" watchObservedRunningTime="2026-01-27 18:58:30.733666243 +0000 UTC m=+953.196209126" Jan 27 18:58:30 crc kubenswrapper[4853]: I0127 18:58:30.785589 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-0" podStartSLOduration=21.317931633 podStartE2EDuration="35.785564294s" podCreationTimestamp="2026-01-27 18:57:55 +0000 UTC" firstStartedPulling="2026-01-27 18:58:09.890534348 +0000 UTC m=+932.353077231" lastFinishedPulling="2026-01-27 18:58:24.358167009 +0000 UTC m=+946.820709892" observedRunningTime="2026-01-27 18:58:30.775871219 +0000 UTC m=+953.238414102" watchObservedRunningTime="2026-01-27 18:58:30.785564294 +0000 UTC m=+953.248107177" Jan 27 18:58:31 crc kubenswrapper[4853]: I0127 18:58:31.693594 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-42tzg" event={"ID":"0b26c871-8544-4e5c-b341-95ebab807234","Type":"ContainerStarted","Data":"79e379dee8fe8dd6cf75e7b13c778671d55b8576f9b25c06ae3385de52c893ac"} Jan 27 18:58:31 crc kubenswrapper[4853]: I0127 18:58:31.699673 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"c1d29cf4-2fdf-46ef-8470-e42a8226dd7c","Type":"ContainerStarted","Data":"85c01448aa1fd8da8117d028cbe8ef37715eeee66f8ad54e692da16c3f2097e0"} Jan 27 18:58:31 crc kubenswrapper[4853]: I0127 18:58:31.703748 4853 generic.go:334] "Generic (PLEG): container finished" 
podID="2b33e2a4-b173-4d0a-b3b4-9ee1c3b92704" containerID="276b1aa07b17d9101e7299362302af02ed575577d2d1ea24401bedd8d7a2b67c" exitCode=0 Jan 27 18:58:31 crc kubenswrapper[4853]: I0127 18:58:31.703893 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-qgd5v" event={"ID":"2b33e2a4-b173-4d0a-b3b4-9ee1c3b92704","Type":"ContainerDied","Data":"276b1aa07b17d9101e7299362302af02ed575577d2d1ea24401bedd8d7a2b67c"} Jan 27 18:58:31 crc kubenswrapper[4853]: I0127 18:58:31.708576 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-xkd2q" event={"ID":"4d52eb59-75a5-4074-8bfb-c9dab8b0c97f","Type":"ContainerStarted","Data":"0211f760fcb9359dd4bbf32bd992b9ad677aed1c7d14a8c0a5894f6aad01d72a"} Jan 27 18:58:31 crc kubenswrapper[4853]: I0127 18:58:31.708922 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-xkd2q" Jan 27 18:58:31 crc kubenswrapper[4853]: I0127 18:58:31.739112 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-42tzg" podStartSLOduration=12.224330562 podStartE2EDuration="17.730492201s" podCreationTimestamp="2026-01-27 18:58:14 +0000 UTC" firstStartedPulling="2026-01-27 18:58:25.546030913 +0000 UTC m=+948.008573796" lastFinishedPulling="2026-01-27 18:58:31.052192552 +0000 UTC m=+953.514735435" observedRunningTime="2026-01-27 18:58:31.713820079 +0000 UTC m=+954.176362962" watchObservedRunningTime="2026-01-27 18:58:31.730492201 +0000 UTC m=+954.193035084" Jan 27 18:58:31 crc kubenswrapper[4853]: I0127 18:58:31.747322 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-h84g7" event={"ID":"d714f652-b46e-4843-89a9-0503e169cc42","Type":"ContainerStarted","Data":"e4cdc1f572511f1fcc43e48c70199665df037cd2d27b84cddded5e5f4290ab5b"} Jan 27 18:58:31 crc kubenswrapper[4853]: I0127 18:58:31.774052 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"2f56570a-76ed-4182-b147-6288fa56d729","Type":"ContainerStarted","Data":"86b479d5fa65aa088a46498e1a0e6a8485fbcdd68b9fc36a9d1790fcd627a629"} Jan 27 18:58:31 crc kubenswrapper[4853]: I0127 18:58:31.800579 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-xkd2q" podStartSLOduration=24.767906981 podStartE2EDuration="29.800554827s" podCreationTimestamp="2026-01-27 18:58:02 +0000 UTC" firstStartedPulling="2026-01-27 18:58:24.960756442 +0000 UTC m=+947.423299325" lastFinishedPulling="2026-01-27 18:58:29.993404288 +0000 UTC m=+952.455947171" observedRunningTime="2026-01-27 18:58:31.753981827 +0000 UTC m=+954.216524700" watchObservedRunningTime="2026-01-27 18:58:31.800554827 +0000 UTC m=+954.263097710" Jan 27 18:58:31 crc kubenswrapper[4853]: I0127 18:58:31.820976 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-h84g7" podStartSLOduration=24.385038276 podStartE2EDuration="28.820944295s" podCreationTimestamp="2026-01-27 18:58:03 +0000 UTC" firstStartedPulling="2026-01-27 18:58:26.626306457 +0000 UTC m=+949.088849340" lastFinishedPulling="2026-01-27 18:58:31.062212476 +0000 UTC m=+953.524755359" observedRunningTime="2026-01-27 18:58:31.795653858 +0000 UTC m=+954.258196751" watchObservedRunningTime="2026-01-27 18:58:31.820944295 +0000 UTC m=+954.283487168" Jan 27 18:58:32 crc kubenswrapper[4853]: I0127 18:58:32.374965 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="ready" pod="openstack/memcached-0" Jan 27 18:58:32 crc kubenswrapper[4853]: I0127 18:58:32.791068 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-qgd5v" event={"ID":"2b33e2a4-b173-4d0a-b3b4-9ee1c3b92704","Type":"ContainerStarted","Data":"697221de4cfe24295c31f84ad62700f99195648bb0a68007b88b153472943c67"} Jan 27 18:58:32 crc kubenswrapper[4853]: I0127 18:58:32.792086 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-qgd5v" Jan 27 18:58:32 crc kubenswrapper[4853]: I0127 18:58:32.792138 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-qgd5v" event={"ID":"2b33e2a4-b173-4d0a-b3b4-9ee1c3b92704","Type":"ContainerStarted","Data":"781a4b6443651b0b063aeb9c2b9f9fce52ac497eb94990c766405fe0f715a509"} Jan 27 18:58:32 crc kubenswrapper[4853]: I0127 18:58:32.815207 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-ovs-qgd5v" podStartSLOduration=27.333135811 podStartE2EDuration="30.815183911s" podCreationTimestamp="2026-01-27 18:58:02 +0000 UTC" firstStartedPulling="2026-01-27 18:58:26.511447131 +0000 UTC m=+948.973990024" lastFinishedPulling="2026-01-27 18:58:29.993495241 +0000 UTC m=+952.456038124" observedRunningTime="2026-01-27 18:58:32.812233137 +0000 UTC m=+955.274776040" watchObservedRunningTime="2026-01-27 18:58:32.815183911 +0000 UTC m=+955.277726794" Jan 27 18:58:33 crc kubenswrapper[4853]: I0127 18:58:33.594983 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-h84g7" Jan 27 18:58:33 crc kubenswrapper[4853]: I0127 18:58:33.595249 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-h84g7" Jan 27 18:58:33 crc kubenswrapper[4853]: I0127 18:58:33.638389 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-h84g7" Jan 27 18:58:33 crc kubenswrapper[4853]: I0127 18:58:33.817241 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-qgd5v" Jan 27 18:58:35 crc kubenswrapper[4853]: I0127 18:58:35.126467 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-42tzg" Jan 27 18:58:35 crc kubenswrapper[4853]: I0127 18:58:35.127222 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-42tzg" Jan 27 18:58:35 crc kubenswrapper[4853]: I0127 18:58:35.184309 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-42tzg" Jan 27 18:58:35 crc kubenswrapper[4853]: I0127 18:58:35.541587 4853 patch_prober.go:28] interesting pod/machine-config-daemon-6gqj2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 18:58:35 crc kubenswrapper[4853]: I0127 18:58:35.541639 4853 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6gqj2" podUID="b8a89b1e-bef8-4cb7-930c-480d3125778c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 18:58:35 crc 
kubenswrapper[4853]: I0127 18:58:35.541682 4853 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-6gqj2" Jan 27 18:58:35 crc kubenswrapper[4853]: I0127 18:58:35.542586 4853 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"31e88473416602e1651b8a73df75f161960712c5955c442cb3ea237f2fe7ca04"} pod="openshift-machine-config-operator/machine-config-daemon-6gqj2" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 27 18:58:35 crc kubenswrapper[4853]: I0127 18:58:35.542653 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-6gqj2" podUID="b8a89b1e-bef8-4cb7-930c-480d3125778c" containerName="machine-config-daemon" containerID="cri-o://31e88473416602e1651b8a73df75f161960712c5955c442cb3ea237f2fe7ca04" gracePeriod=600 Jan 27 18:58:35 crc kubenswrapper[4853]: I0127 18:58:35.571543 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0" Jan 27 18:58:35 crc kubenswrapper[4853]: I0127 18:58:35.571585 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0" Jan 27 18:58:35 crc kubenswrapper[4853]: I0127 18:58:35.634745 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0" Jan 27 18:58:35 crc kubenswrapper[4853]: I0127 18:58:35.831104 4853 generic.go:334] "Generic (PLEG): container finished" podID="b8a89b1e-bef8-4cb7-930c-480d3125778c" containerID="31e88473416602e1651b8a73df75f161960712c5955c442cb3ea237f2fe7ca04" exitCode=0 Jan 27 18:58:35 crc kubenswrapper[4853]: I0127 18:58:35.831276 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6gqj2" event={"ID":"b8a89b1e-bef8-4cb7-930c-480d3125778c","Type":"ContainerDied","Data":"31e88473416602e1651b8a73df75f161960712c5955c442cb3ea237f2fe7ca04"} Jan 27 18:58:35 crc kubenswrapper[4853]: I0127 18:58:35.831511 4853 scope.go:117] "RemoveContainer" containerID="627fd940f35c1ba5723021e5e015bc2d268e6d0901ac54674b747706a8fc058b" Jan 27 18:58:35 crc kubenswrapper[4853]: I0127 18:58:35.901930 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0" Jan 27 18:58:36 crc kubenswrapper[4853]: I0127 18:58:36.630503 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-7j77j" Jan 27 18:58:36 crc kubenswrapper[4853]: I0127 18:58:36.630563 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-7j77j" Jan 27 18:58:36 crc kubenswrapper[4853]: I0127 18:58:36.673296 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-7j77j" Jan 27 18:58:36 crc kubenswrapper[4853]: I0127 18:58:36.875973 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-7j77j" Jan 27 18:58:36 crc kubenswrapper[4853]: I0127 18:58:36.881265 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-42tzg" Jan 27 18:58:37 crc kubenswrapper[4853]: I0127 18:58:37.190775 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack/openstack-cell1-galera-0" Jan 27 18:58:37 crc kubenswrapper[4853]: I0127 18:58:37.190830 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0" Jan 27 18:58:37 crc kubenswrapper[4853]: I0127 18:58:37.525763 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-42tzg"] Jan 27 18:58:38 crc kubenswrapper[4853]: I0127 18:58:38.854493 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-42tzg" podUID="0b26c871-8544-4e5c-b341-95ebab807234" containerName="registry-server" containerID="cri-o://79e379dee8fe8dd6cf75e7b13c778671d55b8576f9b25c06ae3385de52c893ac" gracePeriod=2 Jan 27 18:58:39 crc kubenswrapper[4853]: I0127 18:58:39.443089 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-qhc9l"] Jan 27 18:58:39 crc kubenswrapper[4853]: I0127 18:58:39.492494 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7cb5889db5-5s7fk"] Jan 27 18:58:39 crc kubenswrapper[4853]: E0127 18:58:39.497500 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9549c452-0bc8-4b5c-a4a5-86dfd4572e35" containerName="registry-server" Jan 27 18:58:39 crc kubenswrapper[4853]: I0127 18:58:39.497542 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="9549c452-0bc8-4b5c-a4a5-86dfd4572e35" containerName="registry-server" Jan 27 18:58:39 crc kubenswrapper[4853]: E0127 18:58:39.497569 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9549c452-0bc8-4b5c-a4a5-86dfd4572e35" containerName="extract-utilities" Jan 27 18:58:39 crc kubenswrapper[4853]: I0127 18:58:39.497579 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="9549c452-0bc8-4b5c-a4a5-86dfd4572e35" containerName="extract-utilities" Jan 27 18:58:39 crc kubenswrapper[4853]: E0127 18:58:39.497603 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9549c452-0bc8-4b5c-a4a5-86dfd4572e35" containerName="extract-content" Jan 27 18:58:39 crc kubenswrapper[4853]: I0127 18:58:39.497610 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="9549c452-0bc8-4b5c-a4a5-86dfd4572e35" containerName="extract-content" Jan 27 18:58:39 crc kubenswrapper[4853]: I0127 18:58:39.497828 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="9549c452-0bc8-4b5c-a4a5-86dfd4572e35" containerName="registry-server" Jan 27 18:58:39 crc kubenswrapper[4853]: I0127 18:58:39.498910 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7cb5889db5-5s7fk" Jan 27 18:58:39 crc kubenswrapper[4853]: I0127 18:58:39.501460 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7cb5889db5-5s7fk"] Jan 27 18:58:39 crc kubenswrapper[4853]: I0127 18:58:39.553862 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Jan 27 18:58:39 crc kubenswrapper[4853]: I0127 18:58:39.591494 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8e4aeda9-50a7-4f90-b69b-1f02a34e5f89-config\") pod \"dnsmasq-dns-7cb5889db5-5s7fk\" (UID: \"8e4aeda9-50a7-4f90-b69b-1f02a34e5f89\") " pod="openstack/dnsmasq-dns-7cb5889db5-5s7fk" Jan 27 18:58:39 crc kubenswrapper[4853]: I0127 18:58:39.591565 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2pz86\" (UniqueName: \"kubernetes.io/projected/8e4aeda9-50a7-4f90-b69b-1f02a34e5f89-kube-api-access-2pz86\") pod \"dnsmasq-dns-7cb5889db5-5s7fk\" (UID: \"8e4aeda9-50a7-4f90-b69b-1f02a34e5f89\") " pod="openstack/dnsmasq-dns-7cb5889db5-5s7fk" Jan 27 18:58:39 crc kubenswrapper[4853]: I0127 18:58:39.591651 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8e4aeda9-50a7-4f90-b69b-1f02a34e5f89-dns-svc\") pod \"dnsmasq-dns-7cb5889db5-5s7fk\" (UID: \"8e4aeda9-50a7-4f90-b69b-1f02a34e5f89\") " pod="openstack/dnsmasq-dns-7cb5889db5-5s7fk" Jan 27 18:58:39 crc kubenswrapper[4853]: I0127 18:58:39.693731 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8e4aeda9-50a7-4f90-b69b-1f02a34e5f89-dns-svc\") pod \"dnsmasq-dns-7cb5889db5-5s7fk\" (UID: \"8e4aeda9-50a7-4f90-b69b-1f02a34e5f89\") " pod="openstack/dnsmasq-dns-7cb5889db5-5s7fk" Jan 27 18:58:39 crc kubenswrapper[4853]: I0127 18:58:39.693866 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8e4aeda9-50a7-4f90-b69b-1f02a34e5f89-config\") pod \"dnsmasq-dns-7cb5889db5-5s7fk\" (UID: \"8e4aeda9-50a7-4f90-b69b-1f02a34e5f89\") " pod="openstack/dnsmasq-dns-7cb5889db5-5s7fk" Jan 27 18:58:39 crc kubenswrapper[4853]: I0127 18:58:39.693912 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2pz86\" (UniqueName: \"kubernetes.io/projected/8e4aeda9-50a7-4f90-b69b-1f02a34e5f89-kube-api-access-2pz86\") pod \"dnsmasq-dns-7cb5889db5-5s7fk\" (UID: \"8e4aeda9-50a7-4f90-b69b-1f02a34e5f89\") " pod="openstack/dnsmasq-dns-7cb5889db5-5s7fk" Jan 27 18:58:39 crc kubenswrapper[4853]: I0127 18:58:39.741458 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8e4aeda9-50a7-4f90-b69b-1f02a34e5f89-dns-svc\") pod \"dnsmasq-dns-7cb5889db5-5s7fk\" (UID: \"8e4aeda9-50a7-4f90-b69b-1f02a34e5f89\") " pod="openstack/dnsmasq-dns-7cb5889db5-5s7fk" Jan 27 18:58:39 crc kubenswrapper[4853]: I0127 18:58:39.741576 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8e4aeda9-50a7-4f90-b69b-1f02a34e5f89-config\") pod \"dnsmasq-dns-7cb5889db5-5s7fk\" (UID: \"8e4aeda9-50a7-4f90-b69b-1f02a34e5f89\") " pod="openstack/dnsmasq-dns-7cb5889db5-5s7fk" Jan 27 18:58:39 crc kubenswrapper[4853]: I0127 
18:58:39.750087 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2pz86\" (UniqueName: \"kubernetes.io/projected/8e4aeda9-50a7-4f90-b69b-1f02a34e5f89-kube-api-access-2pz86\") pod \"dnsmasq-dns-7cb5889db5-5s7fk\" (UID: \"8e4aeda9-50a7-4f90-b69b-1f02a34e5f89\") " pod="openstack/dnsmasq-dns-7cb5889db5-5s7fk" Jan 27 18:58:39 crc kubenswrapper[4853]: I0127 18:58:39.819610 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7cb5889db5-5s7fk" Jan 27 18:58:39 crc kubenswrapper[4853]: I0127 18:58:39.922078 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-7j77j"] Jan 27 18:58:39 crc kubenswrapper[4853]: I0127 18:58:39.923434 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-7j77j" podUID="7aae743e-dcc5-404a-b0db-86933964d549" containerName="registry-server" containerID="cri-o://8f4efe9c86ef20478baa94371a9cc4a0b682b57afe56cf502843af135d441897" gracePeriod=2 Jan 27 18:58:40 crc kubenswrapper[4853]: I0127 18:58:40.589958 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-storage-0"] Jan 27 18:58:40 crc kubenswrapper[4853]: I0127 18:58:40.609386 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0" Jan 27 18:58:40 crc kubenswrapper[4853]: I0127 18:58:40.628942 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Jan 27 18:58:40 crc kubenswrapper[4853]: I0127 18:58:40.656749 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-conf" Jan 27 18:58:40 crc kubenswrapper[4853]: I0127 18:58:40.657055 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-swift-dockercfg-7qthm" Jan 27 18:58:40 crc kubenswrapper[4853]: I0127 18:58:40.657759 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-files" Jan 27 18:58:40 crc kubenswrapper[4853]: I0127 18:58:40.657897 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-storage-config-data" Jan 27 18:58:40 crc kubenswrapper[4853]: I0127 18:58:40.712438 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/b1859766-1c8c-471c-bae5-4ae46086e8a5-cache\") pod \"swift-storage-0\" (UID: \"b1859766-1c8c-471c-bae5-4ae46086e8a5\") " pod="openstack/swift-storage-0" Jan 27 18:58:40 crc kubenswrapper[4853]: I0127 18:58:40.712483 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/b1859766-1c8c-471c-bae5-4ae46086e8a5-lock\") pod \"swift-storage-0\" (UID: \"b1859766-1c8c-471c-bae5-4ae46086e8a5\") " pod="openstack/swift-storage-0" Jan 27 18:58:40 crc kubenswrapper[4853]: I0127 18:58:40.712625 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5gg4f\" (UniqueName: \"kubernetes.io/projected/b1859766-1c8c-471c-bae5-4ae46086e8a5-kube-api-access-5gg4f\") pod \"swift-storage-0\" (UID: \"b1859766-1c8c-471c-bae5-4ae46086e8a5\") " pod="openstack/swift-storage-0" Jan 27 18:58:40 crc kubenswrapper[4853]: I0127 18:58:40.712762 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: 
\"kubernetes.io/projected/b1859766-1c8c-471c-bae5-4ae46086e8a5-etc-swift\") pod \"swift-storage-0\" (UID: \"b1859766-1c8c-471c-bae5-4ae46086e8a5\") " pod="openstack/swift-storage-0" Jan 27 18:58:40 crc kubenswrapper[4853]: I0127 18:58:40.712857 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"swift-storage-0\" (UID: \"b1859766-1c8c-471c-bae5-4ae46086e8a5\") " pod="openstack/swift-storage-0" Jan 27 18:58:40 crc kubenswrapper[4853]: I0127 18:58:40.712969 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b1859766-1c8c-471c-bae5-4ae46086e8a5-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"b1859766-1c8c-471c-bae5-4ae46086e8a5\") " pod="openstack/swift-storage-0" Jan 27 18:58:40 crc kubenswrapper[4853]: I0127 18:58:40.816380 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/b1859766-1c8c-471c-bae5-4ae46086e8a5-etc-swift\") pod \"swift-storage-0\" (UID: \"b1859766-1c8c-471c-bae5-4ae46086e8a5\") " pod="openstack/swift-storage-0" Jan 27 18:58:40 crc kubenswrapper[4853]: I0127 18:58:40.816498 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"swift-storage-0\" (UID: \"b1859766-1c8c-471c-bae5-4ae46086e8a5\") " pod="openstack/swift-storage-0" Jan 27 18:58:40 crc kubenswrapper[4853]: I0127 18:58:40.816564 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b1859766-1c8c-471c-bae5-4ae46086e8a5-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"b1859766-1c8c-471c-bae5-4ae46086e8a5\") " pod="openstack/swift-storage-0" Jan 27 18:58:40 crc kubenswrapper[4853]: I0127 18:58:40.816592 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/b1859766-1c8c-471c-bae5-4ae46086e8a5-cache\") pod \"swift-storage-0\" (UID: \"b1859766-1c8c-471c-bae5-4ae46086e8a5\") " pod="openstack/swift-storage-0" Jan 27 18:58:40 crc kubenswrapper[4853]: E0127 18:58:40.816804 4853 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 27 18:58:40 crc kubenswrapper[4853]: E0127 18:58:40.816845 4853 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 27 18:58:40 crc kubenswrapper[4853]: E0127 18:58:40.816913 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/b1859766-1c8c-471c-bae5-4ae46086e8a5-etc-swift podName:b1859766-1c8c-471c-bae5-4ae46086e8a5 nodeName:}" failed. No retries permitted until 2026-01-27 18:58:41.316890725 +0000 UTC m=+963.779433598 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/b1859766-1c8c-471c-bae5-4ae46086e8a5-etc-swift") pod "swift-storage-0" (UID: "b1859766-1c8c-471c-bae5-4ae46086e8a5") : configmap "swift-ring-files" not found Jan 27 18:58:40 crc kubenswrapper[4853]: I0127 18:58:40.816845 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/b1859766-1c8c-471c-bae5-4ae46086e8a5-lock\") pod \"swift-storage-0\" (UID: \"b1859766-1c8c-471c-bae5-4ae46086e8a5\") " pod="openstack/swift-storage-0" Jan 27 18:58:40 crc kubenswrapper[4853]: I0127 18:58:40.817186 4853 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"swift-storage-0\" (UID: \"b1859766-1c8c-471c-bae5-4ae46086e8a5\") device mount path \"/mnt/openstack/pv12\"" pod="openstack/swift-storage-0" Jan 27 18:58:40 crc kubenswrapper[4853]: I0127 18:58:40.817374 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/b1859766-1c8c-471c-bae5-4ae46086e8a5-lock\") pod \"swift-storage-0\" (UID: \"b1859766-1c8c-471c-bae5-4ae46086e8a5\") " pod="openstack/swift-storage-0" Jan 27 18:58:40 crc kubenswrapper[4853]: I0127 18:58:40.817692 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/b1859766-1c8c-471c-bae5-4ae46086e8a5-cache\") pod \"swift-storage-0\" (UID: \"b1859766-1c8c-471c-bae5-4ae46086e8a5\") " pod="openstack/swift-storage-0" Jan 27 18:58:40 crc kubenswrapper[4853]: I0127 18:58:40.817730 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5gg4f\" (UniqueName: \"kubernetes.io/projected/b1859766-1c8c-471c-bae5-4ae46086e8a5-kube-api-access-5gg4f\") pod \"swift-storage-0\" (UID: \"b1859766-1c8c-471c-bae5-4ae46086e8a5\") " pod="openstack/swift-storage-0" Jan 27 18:58:40 crc kubenswrapper[4853]: I0127 18:58:40.832739 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b1859766-1c8c-471c-bae5-4ae46086e8a5-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"b1859766-1c8c-471c-bae5-4ae46086e8a5\") " pod="openstack/swift-storage-0" Jan 27 18:58:40 crc kubenswrapper[4853]: I0127 18:58:40.837985 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"swift-storage-0\" (UID: \"b1859766-1c8c-471c-bae5-4ae46086e8a5\") " pod="openstack/swift-storage-0" Jan 27 18:58:40 crc kubenswrapper[4853]: I0127 18:58:40.844148 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5gg4f\" (UniqueName: \"kubernetes.io/projected/b1859766-1c8c-471c-bae5-4ae46086e8a5-kube-api-access-5gg4f\") pod \"swift-storage-0\" (UID: \"b1859766-1c8c-471c-bae5-4ae46086e8a5\") " pod="openstack/swift-storage-0" Jan 27 18:58:40 crc kubenswrapper[4853]: I0127 18:58:40.872327 4853 generic.go:334] "Generic (PLEG): container finished" podID="7aae743e-dcc5-404a-b0db-86933964d549" containerID="8f4efe9c86ef20478baa94371a9cc4a0b682b57afe56cf502843af135d441897" exitCode=0 Jan 27 18:58:40 crc kubenswrapper[4853]: I0127 18:58:40.872401 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7j77j" 
event={"ID":"7aae743e-dcc5-404a-b0db-86933964d549","Type":"ContainerDied","Data":"8f4efe9c86ef20478baa94371a9cc4a0b682b57afe56cf502843af135d441897"} Jan 27 18:58:40 crc kubenswrapper[4853]: I0127 18:58:40.875451 4853 generic.go:334] "Generic (PLEG): container finished" podID="0b26c871-8544-4e5c-b341-95ebab807234" containerID="79e379dee8fe8dd6cf75e7b13c778671d55b8576f9b25c06ae3385de52c893ac" exitCode=0 Jan 27 18:58:40 crc kubenswrapper[4853]: I0127 18:58:40.875501 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-42tzg" event={"ID":"0b26c871-8544-4e5c-b341-95ebab807234","Type":"ContainerDied","Data":"79e379dee8fe8dd6cf75e7b13c778671d55b8576f9b25c06ae3385de52c893ac"} Jan 27 18:58:41 crc kubenswrapper[4853]: I0127 18:58:41.054775 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-7j77j" Jan 27 18:58:41 crc kubenswrapper[4853]: I0127 18:58:41.122932 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8lsxm\" (UniqueName: \"kubernetes.io/projected/7aae743e-dcc5-404a-b0db-86933964d549-kube-api-access-8lsxm\") pod \"7aae743e-dcc5-404a-b0db-86933964d549\" (UID: \"7aae743e-dcc5-404a-b0db-86933964d549\") " Jan 27 18:58:41 crc kubenswrapper[4853]: I0127 18:58:41.123405 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7aae743e-dcc5-404a-b0db-86933964d549-catalog-content\") pod \"7aae743e-dcc5-404a-b0db-86933964d549\" (UID: \"7aae743e-dcc5-404a-b0db-86933964d549\") " Jan 27 18:58:41 crc kubenswrapper[4853]: I0127 18:58:41.124508 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7aae743e-dcc5-404a-b0db-86933964d549-utilities" (OuterVolumeSpecName: "utilities") pod "7aae743e-dcc5-404a-b0db-86933964d549" (UID: "7aae743e-dcc5-404a-b0db-86933964d549"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 18:58:41 crc kubenswrapper[4853]: I0127 18:58:41.124559 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7aae743e-dcc5-404a-b0db-86933964d549-utilities\") pod \"7aae743e-dcc5-404a-b0db-86933964d549\" (UID: \"7aae743e-dcc5-404a-b0db-86933964d549\") " Jan 27 18:58:41 crc kubenswrapper[4853]: I0127 18:58:41.124955 4853 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7aae743e-dcc5-404a-b0db-86933964d549-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 18:58:41 crc kubenswrapper[4853]: I0127 18:58:41.126900 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7aae743e-dcc5-404a-b0db-86933964d549-kube-api-access-8lsxm" (OuterVolumeSpecName: "kube-api-access-8lsxm") pod "7aae743e-dcc5-404a-b0db-86933964d549" (UID: "7aae743e-dcc5-404a-b0db-86933964d549"). InnerVolumeSpecName "kube-api-access-8lsxm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:58:41 crc kubenswrapper[4853]: I0127 18:58:41.205332 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7aae743e-dcc5-404a-b0db-86933964d549-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7aae743e-dcc5-404a-b0db-86933964d549" (UID: "7aae743e-dcc5-404a-b0db-86933964d549"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 18:58:41 crc kubenswrapper[4853]: I0127 18:58:41.227005 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8lsxm\" (UniqueName: \"kubernetes.io/projected/7aae743e-dcc5-404a-b0db-86933964d549-kube-api-access-8lsxm\") on node \"crc\" DevicePath \"\"" Jan 27 18:58:41 crc kubenswrapper[4853]: I0127 18:58:41.227039 4853 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7aae743e-dcc5-404a-b0db-86933964d549-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 18:58:41 crc kubenswrapper[4853]: I0127 18:58:41.236134 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-pfpph"] Jan 27 18:58:41 crc kubenswrapper[4853]: E0127 18:58:41.236528 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7aae743e-dcc5-404a-b0db-86933964d549" containerName="registry-server" Jan 27 18:58:41 crc kubenswrapper[4853]: I0127 18:58:41.236543 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="7aae743e-dcc5-404a-b0db-86933964d549" containerName="registry-server" Jan 27 18:58:41 crc kubenswrapper[4853]: E0127 18:58:41.236576 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7aae743e-dcc5-404a-b0db-86933964d549" containerName="extract-utilities" Jan 27 18:58:41 crc kubenswrapper[4853]: I0127 18:58:41.236583 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="7aae743e-dcc5-404a-b0db-86933964d549" containerName="extract-utilities" Jan 27 18:58:41 crc kubenswrapper[4853]: E0127 18:58:41.236597 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7aae743e-dcc5-404a-b0db-86933964d549" containerName="extract-content" Jan 27 18:58:41 crc kubenswrapper[4853]: I0127 18:58:41.236603 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="7aae743e-dcc5-404a-b0db-86933964d549" containerName="extract-content" Jan 27 18:58:41 crc kubenswrapper[4853]: I0127 18:58:41.236923 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="7aae743e-dcc5-404a-b0db-86933964d549" containerName="registry-server" Jan 27 18:58:41 crc kubenswrapper[4853]: I0127 18:58:41.237489 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-pfpph" Jan 27 18:58:41 crc kubenswrapper[4853]: I0127 18:58:41.242249 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-scripts" Jan 27 18:58:41 crc kubenswrapper[4853]: I0127 18:58:41.242758 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Jan 27 18:58:41 crc kubenswrapper[4853]: I0127 18:58:41.243239 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-config-data" Jan 27 18:58:41 crc kubenswrapper[4853]: I0127 18:58:41.243700 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-pfpph"] Jan 27 18:58:41 crc kubenswrapper[4853]: I0127 18:58:41.330153 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g4x47\" (UniqueName: \"kubernetes.io/projected/119564cc-719b-4691-91d5-672513ed9acf-kube-api-access-g4x47\") pod \"swift-ring-rebalance-pfpph\" (UID: \"119564cc-719b-4691-91d5-672513ed9acf\") " pod="openstack/swift-ring-rebalance-pfpph" Jan 27 18:58:41 crc kubenswrapper[4853]: I0127 18:58:41.330211 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/119564cc-719b-4691-91d5-672513ed9acf-scripts\") pod \"swift-ring-rebalance-pfpph\" (UID: \"119564cc-719b-4691-91d5-672513ed9acf\") " pod="openstack/swift-ring-rebalance-pfpph" Jan 27 18:58:41 crc kubenswrapper[4853]: I0127 18:58:41.330231 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/119564cc-719b-4691-91d5-672513ed9acf-dispersionconf\") pod \"swift-ring-rebalance-pfpph\" (UID: \"119564cc-719b-4691-91d5-672513ed9acf\") " pod="openstack/swift-ring-rebalance-pfpph" Jan 27 18:58:41 crc kubenswrapper[4853]: I0127 18:58:41.330261 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/119564cc-719b-4691-91d5-672513ed9acf-ring-data-devices\") pod \"swift-ring-rebalance-pfpph\" (UID: \"119564cc-719b-4691-91d5-672513ed9acf\") " pod="openstack/swift-ring-rebalance-pfpph" Jan 27 18:58:41 crc kubenswrapper[4853]: I0127 18:58:41.330296 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/119564cc-719b-4691-91d5-672513ed9acf-etc-swift\") pod \"swift-ring-rebalance-pfpph\" (UID: \"119564cc-719b-4691-91d5-672513ed9acf\") " pod="openstack/swift-ring-rebalance-pfpph" Jan 27 18:58:41 crc kubenswrapper[4853]: I0127 18:58:41.330321 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/119564cc-719b-4691-91d5-672513ed9acf-swiftconf\") pod \"swift-ring-rebalance-pfpph\" (UID: \"119564cc-719b-4691-91d5-672513ed9acf\") " pod="openstack/swift-ring-rebalance-pfpph" Jan 27 18:58:41 crc kubenswrapper[4853]: I0127 18:58:41.330375 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/b1859766-1c8c-471c-bae5-4ae46086e8a5-etc-swift\") pod \"swift-storage-0\" (UID: \"b1859766-1c8c-471c-bae5-4ae46086e8a5\") " pod="openstack/swift-storage-0" Jan 27 18:58:41 crc kubenswrapper[4853]: I0127 18:58:41.330413 
4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/119564cc-719b-4691-91d5-672513ed9acf-combined-ca-bundle\") pod \"swift-ring-rebalance-pfpph\" (UID: \"119564cc-719b-4691-91d5-672513ed9acf\") " pod="openstack/swift-ring-rebalance-pfpph" Jan 27 18:58:41 crc kubenswrapper[4853]: E0127 18:58:41.330616 4853 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 27 18:58:41 crc kubenswrapper[4853]: E0127 18:58:41.330630 4853 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 27 18:58:41 crc kubenswrapper[4853]: E0127 18:58:41.330672 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/b1859766-1c8c-471c-bae5-4ae46086e8a5-etc-swift podName:b1859766-1c8c-471c-bae5-4ae46086e8a5 nodeName:}" failed. No retries permitted until 2026-01-27 18:58:42.330658339 +0000 UTC m=+964.793201222 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/b1859766-1c8c-471c-bae5-4ae46086e8a5-etc-swift") pod "swift-storage-0" (UID: "b1859766-1c8c-471c-bae5-4ae46086e8a5") : configmap "swift-ring-files" not found Jan 27 18:58:41 crc kubenswrapper[4853]: I0127 18:58:41.399674 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0" Jan 27 18:58:41 crc kubenswrapper[4853]: I0127 18:58:41.403875 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-42tzg" Jan 27 18:58:41 crc kubenswrapper[4853]: I0127 18:58:41.432064 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g4x47\" (UniqueName: \"kubernetes.io/projected/119564cc-719b-4691-91d5-672513ed9acf-kube-api-access-g4x47\") pod \"swift-ring-rebalance-pfpph\" (UID: \"119564cc-719b-4691-91d5-672513ed9acf\") " pod="openstack/swift-ring-rebalance-pfpph" Jan 27 18:58:41 crc kubenswrapper[4853]: I0127 18:58:41.432133 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/119564cc-719b-4691-91d5-672513ed9acf-scripts\") pod \"swift-ring-rebalance-pfpph\" (UID: \"119564cc-719b-4691-91d5-672513ed9acf\") " pod="openstack/swift-ring-rebalance-pfpph" Jan 27 18:58:41 crc kubenswrapper[4853]: I0127 18:58:41.432164 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/119564cc-719b-4691-91d5-672513ed9acf-dispersionconf\") pod \"swift-ring-rebalance-pfpph\" (UID: \"119564cc-719b-4691-91d5-672513ed9acf\") " pod="openstack/swift-ring-rebalance-pfpph" Jan 27 18:58:41 crc kubenswrapper[4853]: I0127 18:58:41.432195 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/119564cc-719b-4691-91d5-672513ed9acf-ring-data-devices\") pod \"swift-ring-rebalance-pfpph\" (UID: \"119564cc-719b-4691-91d5-672513ed9acf\") " pod="openstack/swift-ring-rebalance-pfpph" Jan 27 18:58:41 crc kubenswrapper[4853]: I0127 18:58:41.432227 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/119564cc-719b-4691-91d5-672513ed9acf-etc-swift\") pod 
\"swift-ring-rebalance-pfpph\" (UID: \"119564cc-719b-4691-91d5-672513ed9acf\") " pod="openstack/swift-ring-rebalance-pfpph" Jan 27 18:58:41 crc kubenswrapper[4853]: I0127 18:58:41.432362 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/119564cc-719b-4691-91d5-672513ed9acf-swiftconf\") pod \"swift-ring-rebalance-pfpph\" (UID: \"119564cc-719b-4691-91d5-672513ed9acf\") " pod="openstack/swift-ring-rebalance-pfpph" Jan 27 18:58:41 crc kubenswrapper[4853]: I0127 18:58:41.432470 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/119564cc-719b-4691-91d5-672513ed9acf-combined-ca-bundle\") pod \"swift-ring-rebalance-pfpph\" (UID: \"119564cc-719b-4691-91d5-672513ed9acf\") " pod="openstack/swift-ring-rebalance-pfpph" Jan 27 18:58:41 crc kubenswrapper[4853]: I0127 18:58:41.433392 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/119564cc-719b-4691-91d5-672513ed9acf-scripts\") pod \"swift-ring-rebalance-pfpph\" (UID: \"119564cc-719b-4691-91d5-672513ed9acf\") " pod="openstack/swift-ring-rebalance-pfpph" Jan 27 18:58:41 crc kubenswrapper[4853]: I0127 18:58:41.433481 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/119564cc-719b-4691-91d5-672513ed9acf-ring-data-devices\") pod \"swift-ring-rebalance-pfpph\" (UID: \"119564cc-719b-4691-91d5-672513ed9acf\") " pod="openstack/swift-ring-rebalance-pfpph" Jan 27 18:58:41 crc kubenswrapper[4853]: I0127 18:58:41.433735 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/119564cc-719b-4691-91d5-672513ed9acf-etc-swift\") pod \"swift-ring-rebalance-pfpph\" (UID: \"119564cc-719b-4691-91d5-672513ed9acf\") " pod="openstack/swift-ring-rebalance-pfpph" Jan 27 18:58:41 crc kubenswrapper[4853]: I0127 18:58:41.438896 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/119564cc-719b-4691-91d5-672513ed9acf-dispersionconf\") pod \"swift-ring-rebalance-pfpph\" (UID: \"119564cc-719b-4691-91d5-672513ed9acf\") " pod="openstack/swift-ring-rebalance-pfpph" Jan 27 18:58:41 crc kubenswrapper[4853]: I0127 18:58:41.439272 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/119564cc-719b-4691-91d5-672513ed9acf-swiftconf\") pod \"swift-ring-rebalance-pfpph\" (UID: \"119564cc-719b-4691-91d5-672513ed9acf\") " pod="openstack/swift-ring-rebalance-pfpph" Jan 27 18:58:41 crc kubenswrapper[4853]: I0127 18:58:41.439233 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/119564cc-719b-4691-91d5-672513ed9acf-combined-ca-bundle\") pod \"swift-ring-rebalance-pfpph\" (UID: \"119564cc-719b-4691-91d5-672513ed9acf\") " pod="openstack/swift-ring-rebalance-pfpph" Jan 27 18:58:41 crc kubenswrapper[4853]: I0127 18:58:41.455763 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g4x47\" (UniqueName: \"kubernetes.io/projected/119564cc-719b-4691-91d5-672513ed9acf-kube-api-access-g4x47\") pod \"swift-ring-rebalance-pfpph\" (UID: \"119564cc-719b-4691-91d5-672513ed9acf\") " pod="openstack/swift-ring-rebalance-pfpph" Jan 27 18:58:41 crc kubenswrapper[4853]: 
I0127 18:58:41.535437 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0" Jan 27 18:58:41 crc kubenswrapper[4853]: I0127 18:58:41.535860 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fssv9\" (UniqueName: \"kubernetes.io/projected/0b26c871-8544-4e5c-b341-95ebab807234-kube-api-access-fssv9\") pod \"0b26c871-8544-4e5c-b341-95ebab807234\" (UID: \"0b26c871-8544-4e5c-b341-95ebab807234\") " Jan 27 18:58:41 crc kubenswrapper[4853]: I0127 18:58:41.536069 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0b26c871-8544-4e5c-b341-95ebab807234-catalog-content\") pod \"0b26c871-8544-4e5c-b341-95ebab807234\" (UID: \"0b26c871-8544-4e5c-b341-95ebab807234\") " Jan 27 18:58:41 crc kubenswrapper[4853]: I0127 18:58:41.537054 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0b26c871-8544-4e5c-b341-95ebab807234-utilities\") pod \"0b26c871-8544-4e5c-b341-95ebab807234\" (UID: \"0b26c871-8544-4e5c-b341-95ebab807234\") " Jan 27 18:58:41 crc kubenswrapper[4853]: I0127 18:58:41.538888 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0b26c871-8544-4e5c-b341-95ebab807234-utilities" (OuterVolumeSpecName: "utilities") pod "0b26c871-8544-4e5c-b341-95ebab807234" (UID: "0b26c871-8544-4e5c-b341-95ebab807234"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 18:58:41 crc kubenswrapper[4853]: I0127 18:58:41.539095 4853 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0b26c871-8544-4e5c-b341-95ebab807234-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 18:58:41 crc kubenswrapper[4853]: I0127 18:58:41.540792 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b26c871-8544-4e5c-b341-95ebab807234-kube-api-access-fssv9" (OuterVolumeSpecName: "kube-api-access-fssv9") pod "0b26c871-8544-4e5c-b341-95ebab807234" (UID: "0b26c871-8544-4e5c-b341-95ebab807234"). InnerVolumeSpecName "kube-api-access-fssv9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:58:41 crc kubenswrapper[4853]: I0127 18:58:41.563447 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-pfpph" Jan 27 18:58:41 crc kubenswrapper[4853]: I0127 18:58:41.630502 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0b26c871-8544-4e5c-b341-95ebab807234-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0b26c871-8544-4e5c-b341-95ebab807234" (UID: "0b26c871-8544-4e5c-b341-95ebab807234"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 18:58:41 crc kubenswrapper[4853]: I0127 18:58:41.640301 4853 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0b26c871-8544-4e5c-b341-95ebab807234-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 18:58:41 crc kubenswrapper[4853]: I0127 18:58:41.640495 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fssv9\" (UniqueName: \"kubernetes.io/projected/0b26c871-8544-4e5c-b341-95ebab807234-kube-api-access-fssv9\") on node \"crc\" DevicePath \"\"" Jan 27 18:58:41 crc kubenswrapper[4853]: I0127 18:58:41.899057 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7cb5889db5-5s7fk"] Jan 27 18:58:41 crc kubenswrapper[4853]: I0127 18:58:41.911641 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7j77j" event={"ID":"7aae743e-dcc5-404a-b0db-86933964d549","Type":"ContainerDied","Data":"246a74a8796a5295e40c7c6a31e566d8eede9122a755a9c1755446e2948bdad6"} Jan 27 18:58:41 crc kubenswrapper[4853]: I0127 18:58:41.911702 4853 scope.go:117] "RemoveContainer" containerID="8f4efe9c86ef20478baa94371a9cc4a0b682b57afe56cf502843af135d441897" Jan 27 18:58:41 crc kubenswrapper[4853]: I0127 18:58:41.911822 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-7j77j" Jan 27 18:58:41 crc kubenswrapper[4853]: I0127 18:58:41.917287 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-42tzg" Jan 27 18:58:41 crc kubenswrapper[4853]: I0127 18:58:41.917419 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-42tzg" event={"ID":"0b26c871-8544-4e5c-b341-95ebab807234","Type":"ContainerDied","Data":"888b8e51f6951f8dd6be6f2cfacf0814224d7a01b0853d686fcf17da9c681954"} Jan 27 18:58:41 crc kubenswrapper[4853]: I0127 18:58:41.925942 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6gqj2" event={"ID":"b8a89b1e-bef8-4cb7-930c-480d3125778c","Type":"ContainerStarted","Data":"a0719d2d74e31dba5f0b13e64100839f15049069456c6041563b2a237f331790"} Jan 27 18:58:42 crc kubenswrapper[4853]: I0127 18:58:42.040176 4853 scope.go:117] "RemoveContainer" containerID="a91b981619f0dec1014ed4a1acca62c59360d3657e9447cb0fa549a4475e08e0" Jan 27 18:58:42 crc kubenswrapper[4853]: I0127 18:58:42.062871 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-42tzg"] Jan 27 18:58:42 crc kubenswrapper[4853]: I0127 18:58:42.069329 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-42tzg"] Jan 27 18:58:42 crc kubenswrapper[4853]: I0127 18:58:42.078654 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-7j77j"] Jan 27 18:58:42 crc kubenswrapper[4853]: I0127 18:58:42.084779 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-7j77j"] Jan 27 18:58:42 crc kubenswrapper[4853]: I0127 18:58:42.122623 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b26c871-8544-4e5c-b341-95ebab807234" path="/var/lib/kubelet/pods/0b26c871-8544-4e5c-b341-95ebab807234/volumes" Jan 27 18:58:42 crc kubenswrapper[4853]: I0127 18:58:42.123605 4853 kubelet_volumes.go:163] "Cleaned up 
orphaned pod volumes dir" podUID="7aae743e-dcc5-404a-b0db-86933964d549" path="/var/lib/kubelet/pods/7aae743e-dcc5-404a-b0db-86933964d549/volumes" Jan 27 18:58:42 crc kubenswrapper[4853]: I0127 18:58:42.195919 4853 scope.go:117] "RemoveContainer" containerID="df794dc047ffb2659d84137987e91de314294b1ca639da8755d68c350150c0eb" Jan 27 18:58:42 crc kubenswrapper[4853]: I0127 18:58:42.235350 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-pfpph"] Jan 27 18:58:42 crc kubenswrapper[4853]: I0127 18:58:42.360696 4853 scope.go:117] "RemoveContainer" containerID="79e379dee8fe8dd6cf75e7b13c778671d55b8576f9b25c06ae3385de52c893ac" Jan 27 18:58:42 crc kubenswrapper[4853]: I0127 18:58:42.361178 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/b1859766-1c8c-471c-bae5-4ae46086e8a5-etc-swift\") pod \"swift-storage-0\" (UID: \"b1859766-1c8c-471c-bae5-4ae46086e8a5\") " pod="openstack/swift-storage-0" Jan 27 18:58:42 crc kubenswrapper[4853]: E0127 18:58:42.361365 4853 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 27 18:58:42 crc kubenswrapper[4853]: E0127 18:58:42.361384 4853 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 27 18:58:42 crc kubenswrapper[4853]: E0127 18:58:42.361426 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/b1859766-1c8c-471c-bae5-4ae46086e8a5-etc-swift podName:b1859766-1c8c-471c-bae5-4ae46086e8a5 nodeName:}" failed. No retries permitted until 2026-01-27 18:58:44.361411639 +0000 UTC m=+966.823954522 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/b1859766-1c8c-471c-bae5-4ae46086e8a5-etc-swift") pod "swift-storage-0" (UID: "b1859766-1c8c-471c-bae5-4ae46086e8a5") : configmap "swift-ring-files" not found Jan 27 18:58:42 crc kubenswrapper[4853]: W0127 18:58:42.367562 4853 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod119564cc_719b_4691_91d5_672513ed9acf.slice/crio-3c268dcbf64f8d579dfcdd8b0f80e4545ecc8fa68833d64aa21709a2b7b522c3 WatchSource:0}: Error finding container 3c268dcbf64f8d579dfcdd8b0f80e4545ecc8fa68833d64aa21709a2b7b522c3: Status 404 returned error can't find the container with id 3c268dcbf64f8d579dfcdd8b0f80e4545ecc8fa68833d64aa21709a2b7b522c3 Jan 27 18:58:42 crc kubenswrapper[4853]: I0127 18:58:42.390760 4853 scope.go:117] "RemoveContainer" containerID="cfa825982ed017dc7cca517fdca19495d1fe991c89c310a4fc9d6c10784bd758" Jan 27 18:58:42 crc kubenswrapper[4853]: I0127 18:58:42.414646 4853 scope.go:117] "RemoveContainer" containerID="ba05453d834bb2d6f7e063e52a6204df3beaa4787b68a1c79b9614cce6037e59" Jan 27 18:58:42 crc kubenswrapper[4853]: I0127 18:58:42.628786 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-create-chjgp"] Jan 27 18:58:42 crc kubenswrapper[4853]: E0127 18:58:42.629445 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0b26c871-8544-4e5c-b341-95ebab807234" containerName="extract-content" Jan 27 18:58:42 crc kubenswrapper[4853]: I0127 18:58:42.629466 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="0b26c871-8544-4e5c-b341-95ebab807234" containerName="extract-content" Jan 27 18:58:42 crc kubenswrapper[4853]: E0127 18:58:42.629496 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0b26c871-8544-4e5c-b341-95ebab807234" containerName="registry-server" Jan 27 18:58:42 crc kubenswrapper[4853]: I0127 18:58:42.629505 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="0b26c871-8544-4e5c-b341-95ebab807234" containerName="registry-server" Jan 27 18:58:42 crc kubenswrapper[4853]: E0127 18:58:42.629533 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0b26c871-8544-4e5c-b341-95ebab807234" containerName="extract-utilities" Jan 27 18:58:42 crc kubenswrapper[4853]: I0127 18:58:42.629544 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="0b26c871-8544-4e5c-b341-95ebab807234" containerName="extract-utilities" Jan 27 18:58:42 crc kubenswrapper[4853]: I0127 18:58:42.629752 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="0b26c871-8544-4e5c-b341-95ebab807234" containerName="registry-server" Jan 27 18:58:42 crc kubenswrapper[4853]: I0127 18:58:42.630324 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-chjgp" Jan 27 18:58:42 crc kubenswrapper[4853]: I0127 18:58:42.656190 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-c6b2-account-create-update-56lhn"] Jan 27 18:58:42 crc kubenswrapper[4853]: I0127 18:58:42.658492 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-c6b2-account-create-update-56lhn" Jan 27 18:58:42 crc kubenswrapper[4853]: I0127 18:58:42.660717 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret" Jan 27 18:58:42 crc kubenswrapper[4853]: I0127 18:58:42.681989 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-c6b2-account-create-update-56lhn"] Jan 27 18:58:42 crc kubenswrapper[4853]: I0127 18:58:42.694688 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-chjgp"] Jan 27 18:58:42 crc kubenswrapper[4853]: I0127 18:58:42.768167 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r4tn7\" (UniqueName: \"kubernetes.io/projected/6e2ce04c-5d13-464f-9018-29c34c1b5d35-kube-api-access-r4tn7\") pod \"glance-c6b2-account-create-update-56lhn\" (UID: \"6e2ce04c-5d13-464f-9018-29c34c1b5d35\") " pod="openstack/glance-c6b2-account-create-update-56lhn" Jan 27 18:58:42 crc kubenswrapper[4853]: I0127 18:58:42.768218 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3bd47d73-75f6-4d7b-92be-e6efc1a44297-operator-scripts\") pod \"glance-db-create-chjgp\" (UID: \"3bd47d73-75f6-4d7b-92be-e6efc1a44297\") " pod="openstack/glance-db-create-chjgp" Jan 27 18:58:42 crc kubenswrapper[4853]: I0127 18:58:42.768256 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6e2ce04c-5d13-464f-9018-29c34c1b5d35-operator-scripts\") pod \"glance-c6b2-account-create-update-56lhn\" (UID: \"6e2ce04c-5d13-464f-9018-29c34c1b5d35\") " pod="openstack/glance-c6b2-account-create-update-56lhn" Jan 27 18:58:42 crc kubenswrapper[4853]: I0127 18:58:42.768345 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zrm9c\" (UniqueName: \"kubernetes.io/projected/3bd47d73-75f6-4d7b-92be-e6efc1a44297-kube-api-access-zrm9c\") pod \"glance-db-create-chjgp\" (UID: \"3bd47d73-75f6-4d7b-92be-e6efc1a44297\") " pod="openstack/glance-db-create-chjgp" Jan 27 18:58:42 crc kubenswrapper[4853]: I0127 18:58:42.869881 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r4tn7\" (UniqueName: \"kubernetes.io/projected/6e2ce04c-5d13-464f-9018-29c34c1b5d35-kube-api-access-r4tn7\") pod \"glance-c6b2-account-create-update-56lhn\" (UID: \"6e2ce04c-5d13-464f-9018-29c34c1b5d35\") " pod="openstack/glance-c6b2-account-create-update-56lhn" Jan 27 18:58:42 crc kubenswrapper[4853]: I0127 18:58:42.869948 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3bd47d73-75f6-4d7b-92be-e6efc1a44297-operator-scripts\") pod \"glance-db-create-chjgp\" (UID: \"3bd47d73-75f6-4d7b-92be-e6efc1a44297\") " pod="openstack/glance-db-create-chjgp" Jan 27 18:58:42 crc kubenswrapper[4853]: I0127 18:58:42.869993 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6e2ce04c-5d13-464f-9018-29c34c1b5d35-operator-scripts\") pod \"glance-c6b2-account-create-update-56lhn\" (UID: \"6e2ce04c-5d13-464f-9018-29c34c1b5d35\") " pod="openstack/glance-c6b2-account-create-update-56lhn" Jan 27 18:58:42 crc kubenswrapper[4853]: I0127 18:58:42.870031 4853 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zrm9c\" (UniqueName: \"kubernetes.io/projected/3bd47d73-75f6-4d7b-92be-e6efc1a44297-kube-api-access-zrm9c\") pod \"glance-db-create-chjgp\" (UID: \"3bd47d73-75f6-4d7b-92be-e6efc1a44297\") " pod="openstack/glance-db-create-chjgp" Jan 27 18:58:42 crc kubenswrapper[4853]: I0127 18:58:42.871245 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3bd47d73-75f6-4d7b-92be-e6efc1a44297-operator-scripts\") pod \"glance-db-create-chjgp\" (UID: \"3bd47d73-75f6-4d7b-92be-e6efc1a44297\") " pod="openstack/glance-db-create-chjgp" Jan 27 18:58:42 crc kubenswrapper[4853]: I0127 18:58:42.871308 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6e2ce04c-5d13-464f-9018-29c34c1b5d35-operator-scripts\") pod \"glance-c6b2-account-create-update-56lhn\" (UID: \"6e2ce04c-5d13-464f-9018-29c34c1b5d35\") " pod="openstack/glance-c6b2-account-create-update-56lhn" Jan 27 18:58:42 crc kubenswrapper[4853]: I0127 18:58:42.891822 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zrm9c\" (UniqueName: \"kubernetes.io/projected/3bd47d73-75f6-4d7b-92be-e6efc1a44297-kube-api-access-zrm9c\") pod \"glance-db-create-chjgp\" (UID: \"3bd47d73-75f6-4d7b-92be-e6efc1a44297\") " pod="openstack/glance-db-create-chjgp" Jan 27 18:58:42 crc kubenswrapper[4853]: I0127 18:58:42.891877 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r4tn7\" (UniqueName: \"kubernetes.io/projected/6e2ce04c-5d13-464f-9018-29c34c1b5d35-kube-api-access-r4tn7\") pod \"glance-c6b2-account-create-update-56lhn\" (UID: \"6e2ce04c-5d13-464f-9018-29c34c1b5d35\") " pod="openstack/glance-c6b2-account-create-update-56lhn" Jan 27 18:58:42 crc kubenswrapper[4853]: I0127 18:58:42.934320 4853 generic.go:334] "Generic (PLEG): container finished" podID="8e4aeda9-50a7-4f90-b69b-1f02a34e5f89" containerID="ccfd0d8370f201b0ee1aac81d3faff1fdea161c380ab5e38f1d132b0eb44915f" exitCode=0 Jan 27 18:58:42 crc kubenswrapper[4853]: I0127 18:58:42.934377 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7cb5889db5-5s7fk" event={"ID":"8e4aeda9-50a7-4f90-b69b-1f02a34e5f89","Type":"ContainerDied","Data":"ccfd0d8370f201b0ee1aac81d3faff1fdea161c380ab5e38f1d132b0eb44915f"} Jan 27 18:58:42 crc kubenswrapper[4853]: I0127 18:58:42.934462 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7cb5889db5-5s7fk" event={"ID":"8e4aeda9-50a7-4f90-b69b-1f02a34e5f89","Type":"ContainerStarted","Data":"9f25af11a73c8e8f742b5b4f72bf65d99b7fe9d71287d006e5a02a4629ac9773"} Jan 27 18:58:42 crc kubenswrapper[4853]: I0127 18:58:42.938294 4853 generic.go:334] "Generic (PLEG): container finished" podID="324dd0b6-9b7c-4b19-a069-346afc03f8cc" containerID="e84aa75b0a294d040d8b00b6a1e48960291f3807ce7d97a258c157fa5c4ce8f6" exitCode=0 Jan 27 18:58:42 crc kubenswrapper[4853]: I0127 18:58:42.938383 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-qhc9l" event={"ID":"324dd0b6-9b7c-4b19-a069-346afc03f8cc","Type":"ContainerDied","Data":"e84aa75b0a294d040d8b00b6a1e48960291f3807ce7d97a258c157fa5c4ce8f6"} Jan 27 18:58:42 crc kubenswrapper[4853]: I0127 18:58:42.944155 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" 
event={"ID":"525d82bf-e147-429f-8915-365aa48be00b","Type":"ContainerStarted","Data":"c73bc95dd91720244c3ddaa31f873d1265c42efb60f1009c6891cbb6af55f779"} Jan 27 18:58:42 crc kubenswrapper[4853]: I0127 18:58:42.970893 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-pfpph" event={"ID":"119564cc-719b-4691-91d5-672513ed9acf","Type":"ContainerStarted","Data":"3c268dcbf64f8d579dfcdd8b0f80e4545ecc8fa68833d64aa21709a2b7b522c3"} Jan 27 18:58:42 crc kubenswrapper[4853]: I0127 18:58:42.974234 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"c1d29cf4-2fdf-46ef-8470-e42a8226dd7c","Type":"ContainerStarted","Data":"ca7108ca0d746a513caf2fc1cc7c692b776e0ed34105975dc0b44d3f07aa16b1"} Jan 27 18:58:42 crc kubenswrapper[4853]: I0127 18:58:42.979899 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-chjgp" Jan 27 18:58:42 crc kubenswrapper[4853]: I0127 18:58:42.989453 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-c6b2-account-create-update-56lhn" Jan 27 18:58:42 crc kubenswrapper[4853]: I0127 18:58:42.990304 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"368a8f46-825c-43ad-803b-c7fdf6ca048c","Type":"ContainerStarted","Data":"b72b2d9ce2839d12f09134c41207bdc7f4d04f57d501e26aeec7ec0a304205f7"} Jan 27 18:58:42 crc kubenswrapper[4853]: I0127 18:58:42.996614 4853 generic.go:334] "Generic (PLEG): container finished" podID="5318f74c-0368-48c1-be29-dbb63a36ba18" containerID="9f57f2eb0b60e29cd18a88f21c11dbdafd79b324b6aa786869d899dc38baff84" exitCode=0 Jan 27 18:58:42 crc kubenswrapper[4853]: I0127 18:58:42.996762 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-z9c58" event={"ID":"5318f74c-0368-48c1-be29-dbb63a36ba18","Type":"ContainerDied","Data":"9f57f2eb0b60e29cd18a88f21c11dbdafd79b324b6aa786869d899dc38baff84"} Jan 27 18:58:43 crc kubenswrapper[4853]: I0127 18:58:43.040414 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-0" podStartSLOduration=22.698163304 podStartE2EDuration="39.040393297s" podCreationTimestamp="2026-01-27 18:58:04 +0000 UTC" firstStartedPulling="2026-01-27 18:58:25.42739298 +0000 UTC m=+947.889935863" lastFinishedPulling="2026-01-27 18:58:41.769622973 +0000 UTC m=+964.232165856" observedRunningTime="2026-01-27 18:58:43.037496455 +0000 UTC m=+965.500039348" watchObservedRunningTime="2026-01-27 18:58:43.040393297 +0000 UTC m=+965.502936180" Jan 27 18:58:43 crc kubenswrapper[4853]: I0127 18:58:43.060055 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-0" podStartSLOduration=21.615949613 podStartE2EDuration="38.060036824s" podCreationTimestamp="2026-01-27 18:58:05 +0000 UTC" firstStartedPulling="2026-01-27 18:58:25.36497367 +0000 UTC m=+947.827516553" lastFinishedPulling="2026-01-27 18:58:41.809060891 +0000 UTC m=+964.271603764" observedRunningTime="2026-01-27 18:58:43.057525793 +0000 UTC m=+965.520068676" watchObservedRunningTime="2026-01-27 18:58:43.060036824 +0000 UTC m=+965.522579707" Jan 27 18:58:43 crc kubenswrapper[4853]: I0127 18:58:43.287629 4853 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-qhc9l" Jan 27 18:58:43 crc kubenswrapper[4853]: I0127 18:58:43.388995 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/324dd0b6-9b7c-4b19-a069-346afc03f8cc-config\") pod \"324dd0b6-9b7c-4b19-a069-346afc03f8cc\" (UID: \"324dd0b6-9b7c-4b19-a069-346afc03f8cc\") " Jan 27 18:58:43 crc kubenswrapper[4853]: I0127 18:58:43.389080 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vr7wz\" (UniqueName: \"kubernetes.io/projected/324dd0b6-9b7c-4b19-a069-346afc03f8cc-kube-api-access-vr7wz\") pod \"324dd0b6-9b7c-4b19-a069-346afc03f8cc\" (UID: \"324dd0b6-9b7c-4b19-a069-346afc03f8cc\") " Jan 27 18:58:43 crc kubenswrapper[4853]: I0127 18:58:43.389218 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/324dd0b6-9b7c-4b19-a069-346afc03f8cc-dns-svc\") pod \"324dd0b6-9b7c-4b19-a069-346afc03f8cc\" (UID: \"324dd0b6-9b7c-4b19-a069-346afc03f8cc\") " Jan 27 18:58:43 crc kubenswrapper[4853]: I0127 18:58:43.408835 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/324dd0b6-9b7c-4b19-a069-346afc03f8cc-kube-api-access-vr7wz" (OuterVolumeSpecName: "kube-api-access-vr7wz") pod "324dd0b6-9b7c-4b19-a069-346afc03f8cc" (UID: "324dd0b6-9b7c-4b19-a069-346afc03f8cc"). InnerVolumeSpecName "kube-api-access-vr7wz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:58:43 crc kubenswrapper[4853]: I0127 18:58:43.416988 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/324dd0b6-9b7c-4b19-a069-346afc03f8cc-config" (OuterVolumeSpecName: "config") pod "324dd0b6-9b7c-4b19-a069-346afc03f8cc" (UID: "324dd0b6-9b7c-4b19-a069-346afc03f8cc"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:58:43 crc kubenswrapper[4853]: I0127 18:58:43.435324 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/324dd0b6-9b7c-4b19-a069-346afc03f8cc-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "324dd0b6-9b7c-4b19-a069-346afc03f8cc" (UID: "324dd0b6-9b7c-4b19-a069-346afc03f8cc"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:58:43 crc kubenswrapper[4853]: I0127 18:58:43.491355 4853 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/324dd0b6-9b7c-4b19-a069-346afc03f8cc-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 27 18:58:43 crc kubenswrapper[4853]: I0127 18:58:43.491401 4853 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/324dd0b6-9b7c-4b19-a069-346afc03f8cc-config\") on node \"crc\" DevicePath \"\"" Jan 27 18:58:43 crc kubenswrapper[4853]: I0127 18:58:43.491415 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vr7wz\" (UniqueName: \"kubernetes.io/projected/324dd0b6-9b7c-4b19-a069-346afc03f8cc-kube-api-access-vr7wz\") on node \"crc\" DevicePath \"\"" Jan 27 18:58:43 crc kubenswrapper[4853]: I0127 18:58:43.642036 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-h84g7" Jan 27 18:58:43 crc kubenswrapper[4853]: I0127 18:58:43.719502 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-c6b2-account-create-update-56lhn"] Jan 27 18:58:43 crc kubenswrapper[4853]: I0127 18:58:43.822581 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-chjgp"] Jan 27 18:58:43 crc kubenswrapper[4853]: W0127 18:58:43.830257 4853 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3bd47d73_75f6_4d7b_92be_e6efc1a44297.slice/crio-7b6e207dbbe3ffd6031715ce930ecdc2a84f325eb0c3b3b90dbf6b33e7d0a2b6 WatchSource:0}: Error finding container 7b6e207dbbe3ffd6031715ce930ecdc2a84f325eb0c3b3b90dbf6b33e7d0a2b6: Status 404 returned error can't find the container with id 7b6e207dbbe3ffd6031715ce930ecdc2a84f325eb0c3b3b90dbf6b33e7d0a2b6 Jan 27 18:58:44 crc kubenswrapper[4853]: I0127 18:58:44.009926 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-qhc9l" event={"ID":"324dd0b6-9b7c-4b19-a069-346afc03f8cc","Type":"ContainerDied","Data":"095e5c3af8bba63f6d45bf70f05b3de078e7dfee3b326a073aee811f3049d2c5"} Jan 27 18:58:44 crc kubenswrapper[4853]: I0127 18:58:44.009984 4853 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-qhc9l" Jan 27 18:58:44 crc kubenswrapper[4853]: I0127 18:58:44.010299 4853 scope.go:117] "RemoveContainer" containerID="e84aa75b0a294d040d8b00b6a1e48960291f3807ce7d97a258c157fa5c4ce8f6" Jan 27 18:58:44 crc kubenswrapper[4853]: I0127 18:58:44.012675 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-c6b2-account-create-update-56lhn" event={"ID":"6e2ce04c-5d13-464f-9018-29c34c1b5d35","Type":"ContainerStarted","Data":"d8f4b99d43a1c4307efc38e93556f351c97343e0d7d996e6e801ac943bc4d921"} Jan 27 18:58:44 crc kubenswrapper[4853]: I0127 18:58:44.012719 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-c6b2-account-create-update-56lhn" event={"ID":"6e2ce04c-5d13-464f-9018-29c34c1b5d35","Type":"ContainerStarted","Data":"ee998744e3148ab056f8909ad80e2029bb2b8151be6ce8a9489fba9428a6e3eb"} Jan 27 18:58:44 crc kubenswrapper[4853]: I0127 18:58:44.017266 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-chjgp" event={"ID":"3bd47d73-75f6-4d7b-92be-e6efc1a44297","Type":"ContainerStarted","Data":"7b6e207dbbe3ffd6031715ce930ecdc2a84f325eb0c3b3b90dbf6b33e7d0a2b6"} Jan 27 18:58:44 crc kubenswrapper[4853]: I0127 18:58:44.022495 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7cb5889db5-5s7fk" event={"ID":"8e4aeda9-50a7-4f90-b69b-1f02a34e5f89","Type":"ContainerStarted","Data":"8d61276950143c86cbb95cf5d6c72c4f16799951a340ec49400797887b8f50f3"} Jan 27 18:58:44 crc kubenswrapper[4853]: I0127 18:58:44.023467 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7cb5889db5-5s7fk" Jan 27 18:58:44 crc kubenswrapper[4853]: I0127 18:58:44.028193 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-z9c58" event={"ID":"5318f74c-0368-48c1-be29-dbb63a36ba18","Type":"ContainerStarted","Data":"52d2b2319cb6c03ab2daa5920966a7b81b666b8980f03012c2bc07dc72929939"} Jan 27 18:58:44 crc kubenswrapper[4853]: I0127 18:58:44.029490 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-666b6646f7-z9c58" Jan 27 18:58:44 crc kubenswrapper[4853]: I0127 18:58:44.055416 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-c6b2-account-create-update-56lhn" podStartSLOduration=2.055394451 podStartE2EDuration="2.055394451s" podCreationTimestamp="2026-01-27 18:58:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:58:44.03806906 +0000 UTC m=+966.500611943" watchObservedRunningTime="2026-01-27 18:58:44.055394451 +0000 UTC m=+966.517937334" Jan 27 18:58:44 crc kubenswrapper[4853]: I0127 18:58:44.057781 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-666b6646f7-z9c58" podStartSLOduration=3.728271625 podStartE2EDuration="52.057773098s" podCreationTimestamp="2026-01-27 18:57:52 +0000 UTC" firstStartedPulling="2026-01-27 18:57:53.44295776 +0000 UTC m=+915.905500643" lastFinishedPulling="2026-01-27 18:58:41.772459233 +0000 UTC m=+964.235002116" observedRunningTime="2026-01-27 18:58:44.054941938 +0000 UTC m=+966.517484821" watchObservedRunningTime="2026-01-27 18:58:44.057773098 +0000 UTC m=+966.520315981" Jan 27 18:58:44 crc kubenswrapper[4853]: I0127 18:58:44.077973 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/dnsmasq-dns-7cb5889db5-5s7fk" podStartSLOduration=5.077954561 podStartE2EDuration="5.077954561s" podCreationTimestamp="2026-01-27 18:58:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:58:44.07016557 +0000 UTC m=+966.532708453" watchObservedRunningTime="2026-01-27 18:58:44.077954561 +0000 UTC m=+966.540497444" Jan 27 18:58:44 crc kubenswrapper[4853]: I0127 18:58:44.108774 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-qhc9l"] Jan 27 18:58:44 crc kubenswrapper[4853]: I0127 18:58:44.115800 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-qhc9l"] Jan 27 18:58:44 crc kubenswrapper[4853]: I0127 18:58:44.166974 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="324dd0b6-9b7c-4b19-a069-346afc03f8cc" path="/var/lib/kubelet/pods/324dd0b6-9b7c-4b19-a069-346afc03f8cc/volumes" Jan 27 18:58:44 crc kubenswrapper[4853]: I0127 18:58:44.223679 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-gs4p8"] Jan 27 18:58:44 crc kubenswrapper[4853]: E0127 18:58:44.223999 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="324dd0b6-9b7c-4b19-a069-346afc03f8cc" containerName="init" Jan 27 18:58:44 crc kubenswrapper[4853]: I0127 18:58:44.224012 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="324dd0b6-9b7c-4b19-a069-346afc03f8cc" containerName="init" Jan 27 18:58:44 crc kubenswrapper[4853]: I0127 18:58:44.224190 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="324dd0b6-9b7c-4b19-a069-346afc03f8cc" containerName="init" Jan 27 18:58:44 crc kubenswrapper[4853]: I0127 18:58:44.224685 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-gs4p8" Jan 27 18:58:44 crc kubenswrapper[4853]: I0127 18:58:44.226600 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret" Jan 27 18:58:44 crc kubenswrapper[4853]: I0127 18:58:44.239296 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-gs4p8"] Jan 27 18:58:44 crc kubenswrapper[4853]: I0127 18:58:44.336404 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7c317124-dbd6-4397-a5a0-3cb4d48cfa0d-operator-scripts\") pod \"root-account-create-update-gs4p8\" (UID: \"7c317124-dbd6-4397-a5a0-3cb4d48cfa0d\") " pod="openstack/root-account-create-update-gs4p8" Jan 27 18:58:44 crc kubenswrapper[4853]: I0127 18:58:44.336585 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cp84b\" (UniqueName: \"kubernetes.io/projected/7c317124-dbd6-4397-a5a0-3cb4d48cfa0d-kube-api-access-cp84b\") pod \"root-account-create-update-gs4p8\" (UID: \"7c317124-dbd6-4397-a5a0-3cb4d48cfa0d\") " pod="openstack/root-account-create-update-gs4p8" Jan 27 18:58:44 crc kubenswrapper[4853]: I0127 18:58:44.438475 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cp84b\" (UniqueName: \"kubernetes.io/projected/7c317124-dbd6-4397-a5a0-3cb4d48cfa0d-kube-api-access-cp84b\") pod \"root-account-create-update-gs4p8\" (UID: \"7c317124-dbd6-4397-a5a0-3cb4d48cfa0d\") " pod="openstack/root-account-create-update-gs4p8" Jan 27 18:58:44 crc kubenswrapper[4853]: I0127 18:58:44.438628 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7c317124-dbd6-4397-a5a0-3cb4d48cfa0d-operator-scripts\") pod \"root-account-create-update-gs4p8\" (UID: \"7c317124-dbd6-4397-a5a0-3cb4d48cfa0d\") " pod="openstack/root-account-create-update-gs4p8" Jan 27 18:58:44 crc kubenswrapper[4853]: I0127 18:58:44.438704 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/b1859766-1c8c-471c-bae5-4ae46086e8a5-etc-swift\") pod \"swift-storage-0\" (UID: \"b1859766-1c8c-471c-bae5-4ae46086e8a5\") " pod="openstack/swift-storage-0" Jan 27 18:58:44 crc kubenswrapper[4853]: E0127 18:58:44.438857 4853 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 27 18:58:44 crc kubenswrapper[4853]: E0127 18:58:44.438878 4853 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 27 18:58:44 crc kubenswrapper[4853]: E0127 18:58:44.438931 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/b1859766-1c8c-471c-bae5-4ae46086e8a5-etc-swift podName:b1859766-1c8c-471c-bae5-4ae46086e8a5 nodeName:}" failed. No retries permitted until 2026-01-27 18:58:48.438916243 +0000 UTC m=+970.901459126 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/b1859766-1c8c-471c-bae5-4ae46086e8a5-etc-swift") pod "swift-storage-0" (UID: "b1859766-1c8c-471c-bae5-4ae46086e8a5") : configmap "swift-ring-files" not found Jan 27 18:58:44 crc kubenswrapper[4853]: I0127 18:58:44.439508 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7c317124-dbd6-4397-a5a0-3cb4d48cfa0d-operator-scripts\") pod \"root-account-create-update-gs4p8\" (UID: \"7c317124-dbd6-4397-a5a0-3cb4d48cfa0d\") " pod="openstack/root-account-create-update-gs4p8" Jan 27 18:58:44 crc kubenswrapper[4853]: I0127 18:58:44.480974 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cp84b\" (UniqueName: \"kubernetes.io/projected/7c317124-dbd6-4397-a5a0-3cb4d48cfa0d-kube-api-access-cp84b\") pod \"root-account-create-update-gs4p8\" (UID: \"7c317124-dbd6-4397-a5a0-3cb4d48cfa0d\") " pod="openstack/root-account-create-update-gs4p8" Jan 27 18:58:44 crc kubenswrapper[4853]: I0127 18:58:44.543026 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-0" Jan 27 18:58:44 crc kubenswrapper[4853]: I0127 18:58:44.565094 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-gs4p8" Jan 27 18:58:44 crc kubenswrapper[4853]: I0127 18:58:44.583622 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0" Jan 27 18:58:45 crc kubenswrapper[4853]: I0127 18:58:45.035817 4853 generic.go:334] "Generic (PLEG): container finished" podID="3bd47d73-75f6-4d7b-92be-e6efc1a44297" containerID="3d97c0b027163cf31d180c7276e19f5ec785eed80a0b1c800115600da12685f8" exitCode=0 Jan 27 18:58:45 crc kubenswrapper[4853]: I0127 18:58:45.035888 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-chjgp" event={"ID":"3bd47d73-75f6-4d7b-92be-e6efc1a44297","Type":"ContainerDied","Data":"3d97c0b027163cf31d180c7276e19f5ec785eed80a0b1c800115600da12685f8"} Jan 27 18:58:45 crc kubenswrapper[4853]: I0127 18:58:45.038016 4853 generic.go:334] "Generic (PLEG): container finished" podID="6e2ce04c-5d13-464f-9018-29c34c1b5d35" containerID="d8f4b99d43a1c4307efc38e93556f351c97343e0d7d996e6e801ac943bc4d921" exitCode=0 Jan 27 18:58:45 crc kubenswrapper[4853]: I0127 18:58:45.038246 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-c6b2-account-create-update-56lhn" event={"ID":"6e2ce04c-5d13-464f-9018-29c34c1b5d35","Type":"ContainerDied","Data":"d8f4b99d43a1c4307efc38e93556f351c97343e0d7d996e6e801ac943bc4d921"} Jan 27 18:58:45 crc kubenswrapper[4853]: I0127 18:58:45.038785 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-0" Jan 27 18:58:45 crc kubenswrapper[4853]: I0127 18:58:45.084955 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0" Jan 27 18:58:45 crc kubenswrapper[4853]: I0127 18:58:45.362427 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-z9c58"] Jan 27 18:58:45 crc kubenswrapper[4853]: I0127 18:58:45.394700 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6c89d5d749-hp9jm"] Jan 27 18:58:45 crc kubenswrapper[4853]: I0127 18:58:45.396595 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6c89d5d749-hp9jm" Jan 27 18:58:45 crc kubenswrapper[4853]: I0127 18:58:45.404585 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb" Jan 27 18:58:45 crc kubenswrapper[4853]: I0127 18:58:45.406569 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6c89d5d749-hp9jm"] Jan 27 18:58:45 crc kubenswrapper[4853]: I0127 18:58:45.458219 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-metrics-99cll"] Jan 27 18:58:45 crc kubenswrapper[4853]: I0127 18:58:45.459848 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-99cll" Jan 27 18:58:45 crc kubenswrapper[4853]: I0127 18:58:45.463069 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config" Jan 27 18:58:45 crc kubenswrapper[4853]: I0127 18:58:45.468707 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-99cll"] Jan 27 18:58:45 crc kubenswrapper[4853]: I0127 18:58:45.505473 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-0" Jan 27 18:58:45 crc kubenswrapper[4853]: I0127 18:58:45.568466 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-0" Jan 27 18:58:45 crc kubenswrapper[4853]: I0127 18:58:45.571518 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/98d689c7-0b2e-46b3-95f7-5c43aafac340-combined-ca-bundle\") pod \"ovn-controller-metrics-99cll\" (UID: \"98d689c7-0b2e-46b3-95f7-5c43aafac340\") " pod="openstack/ovn-controller-metrics-99cll" Jan 27 18:58:45 crc kubenswrapper[4853]: I0127 18:58:45.571663 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e8e56ba6-8066-4618-9792-23f022be8786-dns-svc\") pod \"dnsmasq-dns-6c89d5d749-hp9jm\" (UID: \"e8e56ba6-8066-4618-9792-23f022be8786\") " pod="openstack/dnsmasq-dns-6c89d5d749-hp9jm" Jan 27 18:58:45 crc kubenswrapper[4853]: I0127 18:58:45.571712 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2jwmz\" (UniqueName: \"kubernetes.io/projected/98d689c7-0b2e-46b3-95f7-5c43aafac340-kube-api-access-2jwmz\") pod \"ovn-controller-metrics-99cll\" (UID: \"98d689c7-0b2e-46b3-95f7-5c43aafac340\") " pod="openstack/ovn-controller-metrics-99cll" Jan 27 18:58:45 crc kubenswrapper[4853]: I0127 18:58:45.571736 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e8e56ba6-8066-4618-9792-23f022be8786-config\") pod \"dnsmasq-dns-6c89d5d749-hp9jm\" (UID: \"e8e56ba6-8066-4618-9792-23f022be8786\") " pod="openstack/dnsmasq-dns-6c89d5d749-hp9jm" Jan 27 18:58:45 crc kubenswrapper[4853]: I0127 18:58:45.571789 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/98d689c7-0b2e-46b3-95f7-5c43aafac340-ovn-rundir\") pod \"ovn-controller-metrics-99cll\" (UID: \"98d689c7-0b2e-46b3-95f7-5c43aafac340\") " pod="openstack/ovn-controller-metrics-99cll" Jan 27 18:58:45 crc kubenswrapper[4853]: I0127 18:58:45.571807 4853 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e8e56ba6-8066-4618-9792-23f022be8786-ovsdbserver-sb\") pod \"dnsmasq-dns-6c89d5d749-hp9jm\" (UID: \"e8e56ba6-8066-4618-9792-23f022be8786\") " pod="openstack/dnsmasq-dns-6c89d5d749-hp9jm" Jan 27 18:58:45 crc kubenswrapper[4853]: I0127 18:58:45.571857 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/98d689c7-0b2e-46b3-95f7-5c43aafac340-config\") pod \"ovn-controller-metrics-99cll\" (UID: \"98d689c7-0b2e-46b3-95f7-5c43aafac340\") " pod="openstack/ovn-controller-metrics-99cll" Jan 27 18:58:45 crc kubenswrapper[4853]: I0127 18:58:45.571937 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/98d689c7-0b2e-46b3-95f7-5c43aafac340-ovs-rundir\") pod \"ovn-controller-metrics-99cll\" (UID: \"98d689c7-0b2e-46b3-95f7-5c43aafac340\") " pod="openstack/ovn-controller-metrics-99cll" Jan 27 18:58:45 crc kubenswrapper[4853]: I0127 18:58:45.571967 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/98d689c7-0b2e-46b3-95f7-5c43aafac340-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-99cll\" (UID: \"98d689c7-0b2e-46b3-95f7-5c43aafac340\") " pod="openstack/ovn-controller-metrics-99cll" Jan 27 18:58:45 crc kubenswrapper[4853]: I0127 18:58:45.572201 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7vqq9\" (UniqueName: \"kubernetes.io/projected/e8e56ba6-8066-4618-9792-23f022be8786-kube-api-access-7vqq9\") pod \"dnsmasq-dns-6c89d5d749-hp9jm\" (UID: \"e8e56ba6-8066-4618-9792-23f022be8786\") " pod="openstack/dnsmasq-dns-6c89d5d749-hp9jm" Jan 27 18:58:45 crc kubenswrapper[4853]: I0127 18:58:45.673376 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2jwmz\" (UniqueName: \"kubernetes.io/projected/98d689c7-0b2e-46b3-95f7-5c43aafac340-kube-api-access-2jwmz\") pod \"ovn-controller-metrics-99cll\" (UID: \"98d689c7-0b2e-46b3-95f7-5c43aafac340\") " pod="openstack/ovn-controller-metrics-99cll" Jan 27 18:58:45 crc kubenswrapper[4853]: I0127 18:58:45.673425 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e8e56ba6-8066-4618-9792-23f022be8786-config\") pod \"dnsmasq-dns-6c89d5d749-hp9jm\" (UID: \"e8e56ba6-8066-4618-9792-23f022be8786\") " pod="openstack/dnsmasq-dns-6c89d5d749-hp9jm" Jan 27 18:58:45 crc kubenswrapper[4853]: I0127 18:58:45.673463 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/98d689c7-0b2e-46b3-95f7-5c43aafac340-ovn-rundir\") pod \"ovn-controller-metrics-99cll\" (UID: \"98d689c7-0b2e-46b3-95f7-5c43aafac340\") " pod="openstack/ovn-controller-metrics-99cll" Jan 27 18:58:45 crc kubenswrapper[4853]: I0127 18:58:45.673479 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e8e56ba6-8066-4618-9792-23f022be8786-ovsdbserver-sb\") pod \"dnsmasq-dns-6c89d5d749-hp9jm\" (UID: \"e8e56ba6-8066-4618-9792-23f022be8786\") " pod="openstack/dnsmasq-dns-6c89d5d749-hp9jm" Jan 27 18:58:45 crc kubenswrapper[4853]: I0127 
18:58:45.673524 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/98d689c7-0b2e-46b3-95f7-5c43aafac340-config\") pod \"ovn-controller-metrics-99cll\" (UID: \"98d689c7-0b2e-46b3-95f7-5c43aafac340\") " pod="openstack/ovn-controller-metrics-99cll" Jan 27 18:58:45 crc kubenswrapper[4853]: I0127 18:58:45.673595 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/98d689c7-0b2e-46b3-95f7-5c43aafac340-ovs-rundir\") pod \"ovn-controller-metrics-99cll\" (UID: \"98d689c7-0b2e-46b3-95f7-5c43aafac340\") " pod="openstack/ovn-controller-metrics-99cll" Jan 27 18:58:45 crc kubenswrapper[4853]: I0127 18:58:45.673618 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/98d689c7-0b2e-46b3-95f7-5c43aafac340-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-99cll\" (UID: \"98d689c7-0b2e-46b3-95f7-5c43aafac340\") " pod="openstack/ovn-controller-metrics-99cll" Jan 27 18:58:45 crc kubenswrapper[4853]: I0127 18:58:45.673649 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7vqq9\" (UniqueName: \"kubernetes.io/projected/e8e56ba6-8066-4618-9792-23f022be8786-kube-api-access-7vqq9\") pod \"dnsmasq-dns-6c89d5d749-hp9jm\" (UID: \"e8e56ba6-8066-4618-9792-23f022be8786\") " pod="openstack/dnsmasq-dns-6c89d5d749-hp9jm" Jan 27 18:58:45 crc kubenswrapper[4853]: I0127 18:58:45.673745 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/98d689c7-0b2e-46b3-95f7-5c43aafac340-combined-ca-bundle\") pod \"ovn-controller-metrics-99cll\" (UID: \"98d689c7-0b2e-46b3-95f7-5c43aafac340\") " pod="openstack/ovn-controller-metrics-99cll" Jan 27 18:58:45 crc kubenswrapper[4853]: I0127 18:58:45.673770 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e8e56ba6-8066-4618-9792-23f022be8786-dns-svc\") pod \"dnsmasq-dns-6c89d5d749-hp9jm\" (UID: \"e8e56ba6-8066-4618-9792-23f022be8786\") " pod="openstack/dnsmasq-dns-6c89d5d749-hp9jm" Jan 27 18:58:45 crc kubenswrapper[4853]: I0127 18:58:45.674574 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e8e56ba6-8066-4618-9792-23f022be8786-dns-svc\") pod \"dnsmasq-dns-6c89d5d749-hp9jm\" (UID: \"e8e56ba6-8066-4618-9792-23f022be8786\") " pod="openstack/dnsmasq-dns-6c89d5d749-hp9jm" Jan 27 18:58:45 crc kubenswrapper[4853]: I0127 18:58:45.674804 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/98d689c7-0b2e-46b3-95f7-5c43aafac340-ovs-rundir\") pod \"ovn-controller-metrics-99cll\" (UID: \"98d689c7-0b2e-46b3-95f7-5c43aafac340\") " pod="openstack/ovn-controller-metrics-99cll" Jan 27 18:58:45 crc kubenswrapper[4853]: I0127 18:58:45.675406 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/98d689c7-0b2e-46b3-95f7-5c43aafac340-ovn-rundir\") pod \"ovn-controller-metrics-99cll\" (UID: \"98d689c7-0b2e-46b3-95f7-5c43aafac340\") " pod="openstack/ovn-controller-metrics-99cll" Jan 27 18:58:45 crc kubenswrapper[4853]: I0127 18:58:45.675675 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" 
(UniqueName: \"kubernetes.io/configmap/e8e56ba6-8066-4618-9792-23f022be8786-config\") pod \"dnsmasq-dns-6c89d5d749-hp9jm\" (UID: \"e8e56ba6-8066-4618-9792-23f022be8786\") " pod="openstack/dnsmasq-dns-6c89d5d749-hp9jm" Jan 27 18:58:45 crc kubenswrapper[4853]: I0127 18:58:45.676194 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e8e56ba6-8066-4618-9792-23f022be8786-ovsdbserver-sb\") pod \"dnsmasq-dns-6c89d5d749-hp9jm\" (UID: \"e8e56ba6-8066-4618-9792-23f022be8786\") " pod="openstack/dnsmasq-dns-6c89d5d749-hp9jm" Jan 27 18:58:45 crc kubenswrapper[4853]: I0127 18:58:45.676739 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/98d689c7-0b2e-46b3-95f7-5c43aafac340-config\") pod \"ovn-controller-metrics-99cll\" (UID: \"98d689c7-0b2e-46b3-95f7-5c43aafac340\") " pod="openstack/ovn-controller-metrics-99cll" Jan 27 18:58:45 crc kubenswrapper[4853]: I0127 18:58:45.680357 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/98d689c7-0b2e-46b3-95f7-5c43aafac340-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-99cll\" (UID: \"98d689c7-0b2e-46b3-95f7-5c43aafac340\") " pod="openstack/ovn-controller-metrics-99cll" Jan 27 18:58:45 crc kubenswrapper[4853]: I0127 18:58:45.689538 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/98d689c7-0b2e-46b3-95f7-5c43aafac340-combined-ca-bundle\") pod \"ovn-controller-metrics-99cll\" (UID: \"98d689c7-0b2e-46b3-95f7-5c43aafac340\") " pod="openstack/ovn-controller-metrics-99cll" Jan 27 18:58:45 crc kubenswrapper[4853]: I0127 18:58:45.694410 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2jwmz\" (UniqueName: \"kubernetes.io/projected/98d689c7-0b2e-46b3-95f7-5c43aafac340-kube-api-access-2jwmz\") pod \"ovn-controller-metrics-99cll\" (UID: \"98d689c7-0b2e-46b3-95f7-5c43aafac340\") " pod="openstack/ovn-controller-metrics-99cll" Jan 27 18:58:45 crc kubenswrapper[4853]: I0127 18:58:45.701019 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7vqq9\" (UniqueName: \"kubernetes.io/projected/e8e56ba6-8066-4618-9792-23f022be8786-kube-api-access-7vqq9\") pod \"dnsmasq-dns-6c89d5d749-hp9jm\" (UID: \"e8e56ba6-8066-4618-9792-23f022be8786\") " pod="openstack/dnsmasq-dns-6c89d5d749-hp9jm" Jan 27 18:58:45 crc kubenswrapper[4853]: I0127 18:58:45.727147 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7cb5889db5-5s7fk"] Jan 27 18:58:45 crc kubenswrapper[4853]: I0127 18:58:45.742947 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6c89d5d749-hp9jm" Jan 27 18:58:45 crc kubenswrapper[4853]: I0127 18:58:45.767679 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-698758b865-6t7wq"] Jan 27 18:58:45 crc kubenswrapper[4853]: I0127 18:58:45.769357 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-6t7wq" Jan 27 18:58:45 crc kubenswrapper[4853]: I0127 18:58:45.771713 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb" Jan 27 18:58:45 crc kubenswrapper[4853]: I0127 18:58:45.782433 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-metrics-99cll" Jan 27 18:58:45 crc kubenswrapper[4853]: I0127 18:58:45.788802 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-698758b865-6t7wq"] Jan 27 18:58:45 crc kubenswrapper[4853]: I0127 18:58:45.878042 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3ba5b341-34ed-484d-ae1d-fe08f998eac4-config\") pod \"dnsmasq-dns-698758b865-6t7wq\" (UID: \"3ba5b341-34ed-484d-ae1d-fe08f998eac4\") " pod="openstack/dnsmasq-dns-698758b865-6t7wq" Jan 27 18:58:45 crc kubenswrapper[4853]: I0127 18:58:45.878235 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s4rkx\" (UniqueName: \"kubernetes.io/projected/3ba5b341-34ed-484d-ae1d-fe08f998eac4-kube-api-access-s4rkx\") pod \"dnsmasq-dns-698758b865-6t7wq\" (UID: \"3ba5b341-34ed-484d-ae1d-fe08f998eac4\") " pod="openstack/dnsmasq-dns-698758b865-6t7wq" Jan 27 18:58:45 crc kubenswrapper[4853]: I0127 18:58:45.878280 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3ba5b341-34ed-484d-ae1d-fe08f998eac4-ovsdbserver-nb\") pod \"dnsmasq-dns-698758b865-6t7wq\" (UID: \"3ba5b341-34ed-484d-ae1d-fe08f998eac4\") " pod="openstack/dnsmasq-dns-698758b865-6t7wq" Jan 27 18:58:45 crc kubenswrapper[4853]: I0127 18:58:45.878302 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3ba5b341-34ed-484d-ae1d-fe08f998eac4-ovsdbserver-sb\") pod \"dnsmasq-dns-698758b865-6t7wq\" (UID: \"3ba5b341-34ed-484d-ae1d-fe08f998eac4\") " pod="openstack/dnsmasq-dns-698758b865-6t7wq" Jan 27 18:58:45 crc kubenswrapper[4853]: I0127 18:58:45.878811 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3ba5b341-34ed-484d-ae1d-fe08f998eac4-dns-svc\") pod \"dnsmasq-dns-698758b865-6t7wq\" (UID: \"3ba5b341-34ed-484d-ae1d-fe08f998eac4\") " pod="openstack/dnsmasq-dns-698758b865-6t7wq" Jan 27 18:58:45 crc kubenswrapper[4853]: I0127 18:58:45.980476 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3ba5b341-34ed-484d-ae1d-fe08f998eac4-dns-svc\") pod \"dnsmasq-dns-698758b865-6t7wq\" (UID: \"3ba5b341-34ed-484d-ae1d-fe08f998eac4\") " pod="openstack/dnsmasq-dns-698758b865-6t7wq" Jan 27 18:58:45 crc kubenswrapper[4853]: I0127 18:58:45.980548 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3ba5b341-34ed-484d-ae1d-fe08f998eac4-config\") pod \"dnsmasq-dns-698758b865-6t7wq\" (UID: \"3ba5b341-34ed-484d-ae1d-fe08f998eac4\") " pod="openstack/dnsmasq-dns-698758b865-6t7wq" Jan 27 18:58:45 crc kubenswrapper[4853]: I0127 18:58:45.980605 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s4rkx\" (UniqueName: \"kubernetes.io/projected/3ba5b341-34ed-484d-ae1d-fe08f998eac4-kube-api-access-s4rkx\") pod \"dnsmasq-dns-698758b865-6t7wq\" (UID: \"3ba5b341-34ed-484d-ae1d-fe08f998eac4\") " pod="openstack/dnsmasq-dns-698758b865-6t7wq" Jan 27 18:58:45 crc kubenswrapper[4853]: I0127 18:58:45.980634 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3ba5b341-34ed-484d-ae1d-fe08f998eac4-ovsdbserver-nb\") pod \"dnsmasq-dns-698758b865-6t7wq\" (UID: \"3ba5b341-34ed-484d-ae1d-fe08f998eac4\") " pod="openstack/dnsmasq-dns-698758b865-6t7wq" Jan 27 18:58:45 crc kubenswrapper[4853]: I0127 18:58:45.980652 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3ba5b341-34ed-484d-ae1d-fe08f998eac4-ovsdbserver-sb\") pod \"dnsmasq-dns-698758b865-6t7wq\" (UID: \"3ba5b341-34ed-484d-ae1d-fe08f998eac4\") " pod="openstack/dnsmasq-dns-698758b865-6t7wq" Jan 27 18:58:45 crc kubenswrapper[4853]: I0127 18:58:45.981522 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3ba5b341-34ed-484d-ae1d-fe08f998eac4-ovsdbserver-nb\") pod \"dnsmasq-dns-698758b865-6t7wq\" (UID: \"3ba5b341-34ed-484d-ae1d-fe08f998eac4\") " pod="openstack/dnsmasq-dns-698758b865-6t7wq" Jan 27 18:58:45 crc kubenswrapper[4853]: I0127 18:58:45.981520 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3ba5b341-34ed-484d-ae1d-fe08f998eac4-dns-svc\") pod \"dnsmasq-dns-698758b865-6t7wq\" (UID: \"3ba5b341-34ed-484d-ae1d-fe08f998eac4\") " pod="openstack/dnsmasq-dns-698758b865-6t7wq" Jan 27 18:58:45 crc kubenswrapper[4853]: I0127 18:58:45.981588 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3ba5b341-34ed-484d-ae1d-fe08f998eac4-ovsdbserver-sb\") pod \"dnsmasq-dns-698758b865-6t7wq\" (UID: \"3ba5b341-34ed-484d-ae1d-fe08f998eac4\") " pod="openstack/dnsmasq-dns-698758b865-6t7wq" Jan 27 18:58:45 crc kubenswrapper[4853]: I0127 18:58:45.982083 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3ba5b341-34ed-484d-ae1d-fe08f998eac4-config\") pod \"dnsmasq-dns-698758b865-6t7wq\" (UID: \"3ba5b341-34ed-484d-ae1d-fe08f998eac4\") " pod="openstack/dnsmasq-dns-698758b865-6t7wq" Jan 27 18:58:46 crc kubenswrapper[4853]: I0127 18:58:46.002303 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s4rkx\" (UniqueName: \"kubernetes.io/projected/3ba5b341-34ed-484d-ae1d-fe08f998eac4-kube-api-access-s4rkx\") pod \"dnsmasq-dns-698758b865-6t7wq\" (UID: \"3ba5b341-34ed-484d-ae1d-fe08f998eac4\") " pod="openstack/dnsmasq-dns-698758b865-6t7wq" Jan 27 18:58:46 crc kubenswrapper[4853]: I0127 18:58:46.044909 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-0" Jan 27 18:58:46 crc kubenswrapper[4853]: I0127 18:58:46.045381 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-666b6646f7-z9c58" podUID="5318f74c-0368-48c1-be29-dbb63a36ba18" containerName="dnsmasq-dns" containerID="cri-o://52d2b2319cb6c03ab2daa5920966a7b81b666b8980f03012c2bc07dc72929939" gracePeriod=10 Jan 27 18:58:46 crc kubenswrapper[4853]: I0127 18:58:46.085343 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0" Jan 27 18:58:46 crc kubenswrapper[4853]: I0127 18:58:46.091572 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-6t7wq" Jan 27 18:58:46 crc kubenswrapper[4853]: I0127 18:58:46.475403 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-northd-0"] Jan 27 18:58:46 crc kubenswrapper[4853]: I0127 18:58:46.492460 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Jan 27 18:58:46 crc kubenswrapper[4853]: I0127 18:58:46.492616 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Jan 27 18:58:46 crc kubenswrapper[4853]: I0127 18:58:46.495730 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config" Jan 27 18:58:46 crc kubenswrapper[4853]: I0127 18:58:46.496299 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovnnorthd-ovndbs" Jan 27 18:58:46 crc kubenswrapper[4853]: I0127 18:58:46.496635 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts" Jan 27 18:58:46 crc kubenswrapper[4853]: I0127 18:58:46.496732 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-6zph7" Jan 27 18:58:46 crc kubenswrapper[4853]: I0127 18:58:46.601490 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/24f3c135-8664-4bbd-87bf-dd93c3595195-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"24f3c135-8664-4bbd-87bf-dd93c3595195\") " pod="openstack/ovn-northd-0" Jan 27 18:58:46 crc kubenswrapper[4853]: I0127 18:58:46.601729 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/24f3c135-8664-4bbd-87bf-dd93c3595195-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"24f3c135-8664-4bbd-87bf-dd93c3595195\") " pod="openstack/ovn-northd-0" Jan 27 18:58:46 crc kubenswrapper[4853]: I0127 18:58:46.601881 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/24f3c135-8664-4bbd-87bf-dd93c3595195-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"24f3c135-8664-4bbd-87bf-dd93c3595195\") " pod="openstack/ovn-northd-0" Jan 27 18:58:46 crc kubenswrapper[4853]: I0127 18:58:46.601989 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lz72c\" (UniqueName: \"kubernetes.io/projected/24f3c135-8664-4bbd-87bf-dd93c3595195-kube-api-access-lz72c\") pod \"ovn-northd-0\" (UID: \"24f3c135-8664-4bbd-87bf-dd93c3595195\") " pod="openstack/ovn-northd-0" Jan 27 18:58:46 crc kubenswrapper[4853]: I0127 18:58:46.602063 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/24f3c135-8664-4bbd-87bf-dd93c3595195-scripts\") pod \"ovn-northd-0\" (UID: \"24f3c135-8664-4bbd-87bf-dd93c3595195\") " pod="openstack/ovn-northd-0" Jan 27 18:58:46 crc kubenswrapper[4853]: I0127 18:58:46.602200 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/24f3c135-8664-4bbd-87bf-dd93c3595195-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"24f3c135-8664-4bbd-87bf-dd93c3595195\") " pod="openstack/ovn-northd-0" Jan 27 18:58:46 crc kubenswrapper[4853]: I0127 18:58:46.602317 4853 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/24f3c135-8664-4bbd-87bf-dd93c3595195-config\") pod \"ovn-northd-0\" (UID: \"24f3c135-8664-4bbd-87bf-dd93c3595195\") " pod="openstack/ovn-northd-0" Jan 27 18:58:46 crc kubenswrapper[4853]: I0127 18:58:46.704210 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/24f3c135-8664-4bbd-87bf-dd93c3595195-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"24f3c135-8664-4bbd-87bf-dd93c3595195\") " pod="openstack/ovn-northd-0" Jan 27 18:58:46 crc kubenswrapper[4853]: I0127 18:58:46.705003 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lz72c\" (UniqueName: \"kubernetes.io/projected/24f3c135-8664-4bbd-87bf-dd93c3595195-kube-api-access-lz72c\") pod \"ovn-northd-0\" (UID: \"24f3c135-8664-4bbd-87bf-dd93c3595195\") " pod="openstack/ovn-northd-0" Jan 27 18:58:46 crc kubenswrapper[4853]: I0127 18:58:46.705794 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/24f3c135-8664-4bbd-87bf-dd93c3595195-scripts\") pod \"ovn-northd-0\" (UID: \"24f3c135-8664-4bbd-87bf-dd93c3595195\") " pod="openstack/ovn-northd-0" Jan 27 18:58:46 crc kubenswrapper[4853]: I0127 18:58:46.707164 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/24f3c135-8664-4bbd-87bf-dd93c3595195-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"24f3c135-8664-4bbd-87bf-dd93c3595195\") " pod="openstack/ovn-northd-0" Jan 27 18:58:46 crc kubenswrapper[4853]: I0127 18:58:46.708865 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/24f3c135-8664-4bbd-87bf-dd93c3595195-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"24f3c135-8664-4bbd-87bf-dd93c3595195\") " pod="openstack/ovn-northd-0" Jan 27 18:58:46 crc kubenswrapper[4853]: I0127 18:58:46.709026 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/24f3c135-8664-4bbd-87bf-dd93c3595195-config\") pod \"ovn-northd-0\" (UID: \"24f3c135-8664-4bbd-87bf-dd93c3595195\") " pod="openstack/ovn-northd-0" Jan 27 18:58:46 crc kubenswrapper[4853]: I0127 18:58:46.709239 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/24f3c135-8664-4bbd-87bf-dd93c3595195-scripts\") pod \"ovn-northd-0\" (UID: \"24f3c135-8664-4bbd-87bf-dd93c3595195\") " pod="openstack/ovn-northd-0" Jan 27 18:58:46 crc kubenswrapper[4853]: I0127 18:58:46.709254 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/24f3c135-8664-4bbd-87bf-dd93c3595195-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"24f3c135-8664-4bbd-87bf-dd93c3595195\") " pod="openstack/ovn-northd-0" Jan 27 18:58:46 crc kubenswrapper[4853]: I0127 18:58:46.709546 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/24f3c135-8664-4bbd-87bf-dd93c3595195-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"24f3c135-8664-4bbd-87bf-dd93c3595195\") " pod="openstack/ovn-northd-0" Jan 27 18:58:46 crc kubenswrapper[4853]: I0127 18:58:46.710023 4853 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/24f3c135-8664-4bbd-87bf-dd93c3595195-config\") pod \"ovn-northd-0\" (UID: \"24f3c135-8664-4bbd-87bf-dd93c3595195\") " pod="openstack/ovn-northd-0" Jan 27 18:58:46 crc kubenswrapper[4853]: I0127 18:58:46.710687 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/24f3c135-8664-4bbd-87bf-dd93c3595195-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"24f3c135-8664-4bbd-87bf-dd93c3595195\") " pod="openstack/ovn-northd-0" Jan 27 18:58:46 crc kubenswrapper[4853]: I0127 18:58:46.716009 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/24f3c135-8664-4bbd-87bf-dd93c3595195-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"24f3c135-8664-4bbd-87bf-dd93c3595195\") " pod="openstack/ovn-northd-0" Jan 27 18:58:46 crc kubenswrapper[4853]: I0127 18:58:46.716050 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/24f3c135-8664-4bbd-87bf-dd93c3595195-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"24f3c135-8664-4bbd-87bf-dd93c3595195\") " pod="openstack/ovn-northd-0" Jan 27 18:58:46 crc kubenswrapper[4853]: I0127 18:58:46.722999 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lz72c\" (UniqueName: \"kubernetes.io/projected/24f3c135-8664-4bbd-87bf-dd93c3595195-kube-api-access-lz72c\") pod \"ovn-northd-0\" (UID: \"24f3c135-8664-4bbd-87bf-dd93c3595195\") " pod="openstack/ovn-northd-0" Jan 27 18:58:46 crc kubenswrapper[4853]: I0127 18:58:46.819161 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Jan 27 18:58:46 crc kubenswrapper[4853]: I0127 18:58:46.972059 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-create-rcgrf"] Jan 27 18:58:46 crc kubenswrapper[4853]: I0127 18:58:46.973111 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-rcgrf" Jan 27 18:58:46 crc kubenswrapper[4853]: I0127 18:58:46.982716 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-rcgrf"] Jan 27 18:58:47 crc kubenswrapper[4853]: I0127 18:58:47.061528 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-c6b2-account-create-update-56lhn" event={"ID":"6e2ce04c-5d13-464f-9018-29c34c1b5d35","Type":"ContainerDied","Data":"ee998744e3148ab056f8909ad80e2029bb2b8151be6ce8a9489fba9428a6e3eb"} Jan 27 18:58:47 crc kubenswrapper[4853]: I0127 18:58:47.061574 4853 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ee998744e3148ab056f8909ad80e2029bb2b8151be6ce8a9489fba9428a6e3eb" Jan 27 18:58:47 crc kubenswrapper[4853]: I0127 18:58:47.064621 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-chjgp" event={"ID":"3bd47d73-75f6-4d7b-92be-e6efc1a44297","Type":"ContainerDied","Data":"7b6e207dbbe3ffd6031715ce930ecdc2a84f325eb0c3b3b90dbf6b33e7d0a2b6"} Jan 27 18:58:47 crc kubenswrapper[4853]: I0127 18:58:47.064662 4853 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7b6e207dbbe3ffd6031715ce930ecdc2a84f325eb0c3b3b90dbf6b33e7d0a2b6" Jan 27 18:58:47 crc kubenswrapper[4853]: I0127 18:58:47.074211 4853 generic.go:334] "Generic (PLEG): container finished" podID="5318f74c-0368-48c1-be29-dbb63a36ba18" containerID="52d2b2319cb6c03ab2daa5920966a7b81b666b8980f03012c2bc07dc72929939" exitCode=0 Jan 27 18:58:47 crc kubenswrapper[4853]: I0127 18:58:47.074502 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7cb5889db5-5s7fk" podUID="8e4aeda9-50a7-4f90-b69b-1f02a34e5f89" containerName="dnsmasq-dns" containerID="cri-o://8d61276950143c86cbb95cf5d6c72c4f16799951a340ec49400797887b8f50f3" gracePeriod=10 Jan 27 18:58:47 crc kubenswrapper[4853]: I0127 18:58:47.074840 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-z9c58" event={"ID":"5318f74c-0368-48c1-be29-dbb63a36ba18","Type":"ContainerDied","Data":"52d2b2319cb6c03ab2daa5920966a7b81b666b8980f03012c2bc07dc72929939"} Jan 27 18:58:47 crc kubenswrapper[4853]: I0127 18:58:47.093112 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-4881-account-create-update-vqvqb"] Jan 27 18:58:47 crc kubenswrapper[4853]: I0127 18:58:47.094488 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-4881-account-create-update-vqvqb" Jan 27 18:58:47 crc kubenswrapper[4853]: I0127 18:58:47.097681 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret" Jan 27 18:58:47 crc kubenswrapper[4853]: I0127 18:58:47.109559 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-4881-account-create-update-vqvqb"] Jan 27 18:58:47 crc kubenswrapper[4853]: I0127 18:58:47.117483 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9z6pr\" (UniqueName: \"kubernetes.io/projected/a9358d10-5bdb-4f99-96d1-907990452ad6-kube-api-access-9z6pr\") pod \"keystone-db-create-rcgrf\" (UID: \"a9358d10-5bdb-4f99-96d1-907990452ad6\") " pod="openstack/keystone-db-create-rcgrf" Jan 27 18:58:47 crc kubenswrapper[4853]: I0127 18:58:47.117553 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a9358d10-5bdb-4f99-96d1-907990452ad6-operator-scripts\") pod \"keystone-db-create-rcgrf\" (UID: \"a9358d10-5bdb-4f99-96d1-907990452ad6\") " pod="openstack/keystone-db-create-rcgrf" Jan 27 18:58:47 crc kubenswrapper[4853]: I0127 18:58:47.217501 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-c6b2-account-create-update-56lhn" Jan 27 18:58:47 crc kubenswrapper[4853]: I0127 18:58:47.219375 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9z6pr\" (UniqueName: \"kubernetes.io/projected/a9358d10-5bdb-4f99-96d1-907990452ad6-kube-api-access-9z6pr\") pod \"keystone-db-create-rcgrf\" (UID: \"a9358d10-5bdb-4f99-96d1-907990452ad6\") " pod="openstack/keystone-db-create-rcgrf" Jan 27 18:58:47 crc kubenswrapper[4853]: I0127 18:58:47.219427 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fb347c8c-cafd-4b44-9862-d69103d33fb7-operator-scripts\") pod \"keystone-4881-account-create-update-vqvqb\" (UID: \"fb347c8c-cafd-4b44-9862-d69103d33fb7\") " pod="openstack/keystone-4881-account-create-update-vqvqb" Jan 27 18:58:47 crc kubenswrapper[4853]: I0127 18:58:47.219463 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a9358d10-5bdb-4f99-96d1-907990452ad6-operator-scripts\") pod \"keystone-db-create-rcgrf\" (UID: \"a9358d10-5bdb-4f99-96d1-907990452ad6\") " pod="openstack/keystone-db-create-rcgrf" Jan 27 18:58:47 crc kubenswrapper[4853]: I0127 18:58:47.219590 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t4rmr\" (UniqueName: \"kubernetes.io/projected/fb347c8c-cafd-4b44-9862-d69103d33fb7-kube-api-access-t4rmr\") pod \"keystone-4881-account-create-update-vqvqb\" (UID: \"fb347c8c-cafd-4b44-9862-d69103d33fb7\") " pod="openstack/keystone-4881-account-create-update-vqvqb" Jan 27 18:58:47 crc kubenswrapper[4853]: I0127 18:58:47.224271 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a9358d10-5bdb-4f99-96d1-907990452ad6-operator-scripts\") pod \"keystone-db-create-rcgrf\" (UID: \"a9358d10-5bdb-4f99-96d1-907990452ad6\") " pod="openstack/keystone-db-create-rcgrf" Jan 27 18:58:47 crc kubenswrapper[4853]: I0127 18:58:47.246872 4853 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9z6pr\" (UniqueName: \"kubernetes.io/projected/a9358d10-5bdb-4f99-96d1-907990452ad6-kube-api-access-9z6pr\") pod \"keystone-db-create-rcgrf\" (UID: \"a9358d10-5bdb-4f99-96d1-907990452ad6\") " pod="openstack/keystone-db-create-rcgrf" Jan 27 18:58:47 crc kubenswrapper[4853]: I0127 18:58:47.247249 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-chjgp" Jan 27 18:58:47 crc kubenswrapper[4853]: I0127 18:58:47.293918 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-rcgrf" Jan 27 18:58:47 crc kubenswrapper[4853]: I0127 18:58:47.322313 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r4tn7\" (UniqueName: \"kubernetes.io/projected/6e2ce04c-5d13-464f-9018-29c34c1b5d35-kube-api-access-r4tn7\") pod \"6e2ce04c-5d13-464f-9018-29c34c1b5d35\" (UID: \"6e2ce04c-5d13-464f-9018-29c34c1b5d35\") " Jan 27 18:58:47 crc kubenswrapper[4853]: I0127 18:58:47.322420 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3bd47d73-75f6-4d7b-92be-e6efc1a44297-operator-scripts\") pod \"3bd47d73-75f6-4d7b-92be-e6efc1a44297\" (UID: \"3bd47d73-75f6-4d7b-92be-e6efc1a44297\") " Jan 27 18:58:47 crc kubenswrapper[4853]: I0127 18:58:47.322463 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6e2ce04c-5d13-464f-9018-29c34c1b5d35-operator-scripts\") pod \"6e2ce04c-5d13-464f-9018-29c34c1b5d35\" (UID: \"6e2ce04c-5d13-464f-9018-29c34c1b5d35\") " Jan 27 18:58:47 crc kubenswrapper[4853]: I0127 18:58:47.322489 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zrm9c\" (UniqueName: \"kubernetes.io/projected/3bd47d73-75f6-4d7b-92be-e6efc1a44297-kube-api-access-zrm9c\") pod \"3bd47d73-75f6-4d7b-92be-e6efc1a44297\" (UID: \"3bd47d73-75f6-4d7b-92be-e6efc1a44297\") " Jan 27 18:58:47 crc kubenswrapper[4853]: I0127 18:58:47.322818 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t4rmr\" (UniqueName: \"kubernetes.io/projected/fb347c8c-cafd-4b44-9862-d69103d33fb7-kube-api-access-t4rmr\") pod \"keystone-4881-account-create-update-vqvqb\" (UID: \"fb347c8c-cafd-4b44-9862-d69103d33fb7\") " pod="openstack/keystone-4881-account-create-update-vqvqb" Jan 27 18:58:47 crc kubenswrapper[4853]: I0127 18:58:47.322954 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fb347c8c-cafd-4b44-9862-d69103d33fb7-operator-scripts\") pod \"keystone-4881-account-create-update-vqvqb\" (UID: \"fb347c8c-cafd-4b44-9862-d69103d33fb7\") " pod="openstack/keystone-4881-account-create-update-vqvqb" Jan 27 18:58:47 crc kubenswrapper[4853]: I0127 18:58:47.323781 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fb347c8c-cafd-4b44-9862-d69103d33fb7-operator-scripts\") pod \"keystone-4881-account-create-update-vqvqb\" (UID: \"fb347c8c-cafd-4b44-9862-d69103d33fb7\") " pod="openstack/keystone-4881-account-create-update-vqvqb" Jan 27 18:58:47 crc kubenswrapper[4853]: I0127 18:58:47.324378 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/6e2ce04c-5d13-464f-9018-29c34c1b5d35-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "6e2ce04c-5d13-464f-9018-29c34c1b5d35" (UID: "6e2ce04c-5d13-464f-9018-29c34c1b5d35"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:58:47 crc kubenswrapper[4853]: I0127 18:58:47.325156 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3bd47d73-75f6-4d7b-92be-e6efc1a44297-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "3bd47d73-75f6-4d7b-92be-e6efc1a44297" (UID: "3bd47d73-75f6-4d7b-92be-e6efc1a44297"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:58:47 crc kubenswrapper[4853]: I0127 18:58:47.333102 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6e2ce04c-5d13-464f-9018-29c34c1b5d35-kube-api-access-r4tn7" (OuterVolumeSpecName: "kube-api-access-r4tn7") pod "6e2ce04c-5d13-464f-9018-29c34c1b5d35" (UID: "6e2ce04c-5d13-464f-9018-29c34c1b5d35"). InnerVolumeSpecName "kube-api-access-r4tn7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:58:47 crc kubenswrapper[4853]: I0127 18:58:47.345410 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3bd47d73-75f6-4d7b-92be-e6efc1a44297-kube-api-access-zrm9c" (OuterVolumeSpecName: "kube-api-access-zrm9c") pod "3bd47d73-75f6-4d7b-92be-e6efc1a44297" (UID: "3bd47d73-75f6-4d7b-92be-e6efc1a44297"). InnerVolumeSpecName "kube-api-access-zrm9c". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:58:47 crc kubenswrapper[4853]: I0127 18:58:47.353843 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t4rmr\" (UniqueName: \"kubernetes.io/projected/fb347c8c-cafd-4b44-9862-d69103d33fb7-kube-api-access-t4rmr\") pod \"keystone-4881-account-create-update-vqvqb\" (UID: \"fb347c8c-cafd-4b44-9862-d69103d33fb7\") " pod="openstack/keystone-4881-account-create-update-vqvqb" Jan 27 18:58:47 crc kubenswrapper[4853]: I0127 18:58:47.399229 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-create-t8t9z"] Jan 27 18:58:47 crc kubenswrapper[4853]: E0127 18:58:47.399673 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3bd47d73-75f6-4d7b-92be-e6efc1a44297" containerName="mariadb-database-create" Jan 27 18:58:47 crc kubenswrapper[4853]: I0127 18:58:47.399693 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="3bd47d73-75f6-4d7b-92be-e6efc1a44297" containerName="mariadb-database-create" Jan 27 18:58:47 crc kubenswrapper[4853]: E0127 18:58:47.399709 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6e2ce04c-5d13-464f-9018-29c34c1b5d35" containerName="mariadb-account-create-update" Jan 27 18:58:47 crc kubenswrapper[4853]: I0127 18:58:47.399718 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="6e2ce04c-5d13-464f-9018-29c34c1b5d35" containerName="mariadb-account-create-update" Jan 27 18:58:47 crc kubenswrapper[4853]: I0127 18:58:47.399948 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="6e2ce04c-5d13-464f-9018-29c34c1b5d35" containerName="mariadb-account-create-update" Jan 27 18:58:47 crc kubenswrapper[4853]: I0127 18:58:47.399970 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="3bd47d73-75f6-4d7b-92be-e6efc1a44297" containerName="mariadb-database-create" Jan 27 18:58:47 crc 
kubenswrapper[4853]: I0127 18:58:47.400659 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-t8t9z" Jan 27 18:58:47 crc kubenswrapper[4853]: I0127 18:58:47.411223 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-d00f-account-create-update-rrkdb"] Jan 27 18:58:47 crc kubenswrapper[4853]: I0127 18:58:47.412978 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-d00f-account-create-update-rrkdb" Jan 27 18:58:47 crc kubenswrapper[4853]: I0127 18:58:47.416565 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret" Jan 27 18:58:47 crc kubenswrapper[4853]: I0127 18:58:47.424347 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-t8t9z"] Jan 27 18:58:47 crc kubenswrapper[4853]: I0127 18:58:47.424928 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r4tn7\" (UniqueName: \"kubernetes.io/projected/6e2ce04c-5d13-464f-9018-29c34c1b5d35-kube-api-access-r4tn7\") on node \"crc\" DevicePath \"\"" Jan 27 18:58:47 crc kubenswrapper[4853]: I0127 18:58:47.424949 4853 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3bd47d73-75f6-4d7b-92be-e6efc1a44297-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 18:58:47 crc kubenswrapper[4853]: I0127 18:58:47.424960 4853 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6e2ce04c-5d13-464f-9018-29c34c1b5d35-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 18:58:47 crc kubenswrapper[4853]: I0127 18:58:47.424972 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zrm9c\" (UniqueName: \"kubernetes.io/projected/3bd47d73-75f6-4d7b-92be-e6efc1a44297-kube-api-access-zrm9c\") on node \"crc\" DevicePath \"\"" Jan 27 18:58:47 crc kubenswrapper[4853]: I0127 18:58:47.442619 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-d00f-account-create-update-rrkdb"] Jan 27 18:58:47 crc kubenswrapper[4853]: I0127 18:58:47.494901 4853 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-z9c58" Jan 27 18:58:47 crc kubenswrapper[4853]: I0127 18:58:47.526454 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ktwjq\" (UniqueName: \"kubernetes.io/projected/829117d5-2c78-4874-bea6-5d66f13b1f39-kube-api-access-ktwjq\") pod \"placement-d00f-account-create-update-rrkdb\" (UID: \"829117d5-2c78-4874-bea6-5d66f13b1f39\") " pod="openstack/placement-d00f-account-create-update-rrkdb" Jan 27 18:58:47 crc kubenswrapper[4853]: I0127 18:58:47.526493 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xvjn5\" (UniqueName: \"kubernetes.io/projected/c3c82666-fbb7-47cd-9aa8-51fc2f3196cb-kube-api-access-xvjn5\") pod \"placement-db-create-t8t9z\" (UID: \"c3c82666-fbb7-47cd-9aa8-51fc2f3196cb\") " pod="openstack/placement-db-create-t8t9z" Jan 27 18:58:47 crc kubenswrapper[4853]: I0127 18:58:47.526539 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c3c82666-fbb7-47cd-9aa8-51fc2f3196cb-operator-scripts\") pod \"placement-db-create-t8t9z\" (UID: \"c3c82666-fbb7-47cd-9aa8-51fc2f3196cb\") " pod="openstack/placement-db-create-t8t9z" Jan 27 18:58:47 crc kubenswrapper[4853]: I0127 18:58:47.526654 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/829117d5-2c78-4874-bea6-5d66f13b1f39-operator-scripts\") pod \"placement-d00f-account-create-update-rrkdb\" (UID: \"829117d5-2c78-4874-bea6-5d66f13b1f39\") " pod="openstack/placement-d00f-account-create-update-rrkdb" Jan 27 18:58:47 crc kubenswrapper[4853]: I0127 18:58:47.547657 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-h84g7"] Jan 27 18:58:47 crc kubenswrapper[4853]: I0127 18:58:47.547920 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-h84g7" podUID="d714f652-b46e-4843-89a9-0503e169cc42" containerName="registry-server" containerID="cri-o://e4cdc1f572511f1fcc43e48c70199665df037cd2d27b84cddded5e5f4290ab5b" gracePeriod=2 Jan 27 18:58:47 crc kubenswrapper[4853]: I0127 18:58:47.566601 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-4881-account-create-update-vqvqb" Jan 27 18:58:47 crc kubenswrapper[4853]: I0127 18:58:47.627819 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5318f74c-0368-48c1-be29-dbb63a36ba18-config\") pod \"5318f74c-0368-48c1-be29-dbb63a36ba18\" (UID: \"5318f74c-0368-48c1-be29-dbb63a36ba18\") " Jan 27 18:58:47 crc kubenswrapper[4853]: I0127 18:58:47.628233 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rrfs8\" (UniqueName: \"kubernetes.io/projected/5318f74c-0368-48c1-be29-dbb63a36ba18-kube-api-access-rrfs8\") pod \"5318f74c-0368-48c1-be29-dbb63a36ba18\" (UID: \"5318f74c-0368-48c1-be29-dbb63a36ba18\") " Jan 27 18:58:47 crc kubenswrapper[4853]: I0127 18:58:47.628400 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5318f74c-0368-48c1-be29-dbb63a36ba18-dns-svc\") pod \"5318f74c-0368-48c1-be29-dbb63a36ba18\" (UID: \"5318f74c-0368-48c1-be29-dbb63a36ba18\") " Jan 27 18:58:47 crc kubenswrapper[4853]: I0127 18:58:47.628724 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/829117d5-2c78-4874-bea6-5d66f13b1f39-operator-scripts\") pod \"placement-d00f-account-create-update-rrkdb\" (UID: \"829117d5-2c78-4874-bea6-5d66f13b1f39\") " pod="openstack/placement-d00f-account-create-update-rrkdb" Jan 27 18:58:47 crc kubenswrapper[4853]: I0127 18:58:47.628811 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ktwjq\" (UniqueName: \"kubernetes.io/projected/829117d5-2c78-4874-bea6-5d66f13b1f39-kube-api-access-ktwjq\") pod \"placement-d00f-account-create-update-rrkdb\" (UID: \"829117d5-2c78-4874-bea6-5d66f13b1f39\") " pod="openstack/placement-d00f-account-create-update-rrkdb" Jan 27 18:58:47 crc kubenswrapper[4853]: I0127 18:58:47.628835 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xvjn5\" (UniqueName: \"kubernetes.io/projected/c3c82666-fbb7-47cd-9aa8-51fc2f3196cb-kube-api-access-xvjn5\") pod \"placement-db-create-t8t9z\" (UID: \"c3c82666-fbb7-47cd-9aa8-51fc2f3196cb\") " pod="openstack/placement-db-create-t8t9z" Jan 27 18:58:47 crc kubenswrapper[4853]: I0127 18:58:47.628875 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c3c82666-fbb7-47cd-9aa8-51fc2f3196cb-operator-scripts\") pod \"placement-db-create-t8t9z\" (UID: \"c3c82666-fbb7-47cd-9aa8-51fc2f3196cb\") " pod="openstack/placement-db-create-t8t9z" Jan 27 18:58:47 crc kubenswrapper[4853]: I0127 18:58:47.630070 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c3c82666-fbb7-47cd-9aa8-51fc2f3196cb-operator-scripts\") pod \"placement-db-create-t8t9z\" (UID: \"c3c82666-fbb7-47cd-9aa8-51fc2f3196cb\") " pod="openstack/placement-db-create-t8t9z" Jan 27 18:58:47 crc kubenswrapper[4853]: I0127 18:58:47.635311 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/829117d5-2c78-4874-bea6-5d66f13b1f39-operator-scripts\") pod \"placement-d00f-account-create-update-rrkdb\" (UID: \"829117d5-2c78-4874-bea6-5d66f13b1f39\") " 
pod="openstack/placement-d00f-account-create-update-rrkdb" Jan 27 18:58:47 crc kubenswrapper[4853]: I0127 18:58:47.647005 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5318f74c-0368-48c1-be29-dbb63a36ba18-kube-api-access-rrfs8" (OuterVolumeSpecName: "kube-api-access-rrfs8") pod "5318f74c-0368-48c1-be29-dbb63a36ba18" (UID: "5318f74c-0368-48c1-be29-dbb63a36ba18"). InnerVolumeSpecName "kube-api-access-rrfs8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:58:47 crc kubenswrapper[4853]: I0127 18:58:47.661109 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ktwjq\" (UniqueName: \"kubernetes.io/projected/829117d5-2c78-4874-bea6-5d66f13b1f39-kube-api-access-ktwjq\") pod \"placement-d00f-account-create-update-rrkdb\" (UID: \"829117d5-2c78-4874-bea6-5d66f13b1f39\") " pod="openstack/placement-d00f-account-create-update-rrkdb" Jan 27 18:58:47 crc kubenswrapper[4853]: I0127 18:58:47.661758 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xvjn5\" (UniqueName: \"kubernetes.io/projected/c3c82666-fbb7-47cd-9aa8-51fc2f3196cb-kube-api-access-xvjn5\") pod \"placement-db-create-t8t9z\" (UID: \"c3c82666-fbb7-47cd-9aa8-51fc2f3196cb\") " pod="openstack/placement-db-create-t8t9z" Jan 27 18:58:47 crc kubenswrapper[4853]: I0127 18:58:47.688308 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5318f74c-0368-48c1-be29-dbb63a36ba18-config" (OuterVolumeSpecName: "config") pod "5318f74c-0368-48c1-be29-dbb63a36ba18" (UID: "5318f74c-0368-48c1-be29-dbb63a36ba18"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:58:47 crc kubenswrapper[4853]: I0127 18:58:47.696661 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5318f74c-0368-48c1-be29-dbb63a36ba18-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "5318f74c-0368-48c1-be29-dbb63a36ba18" (UID: "5318f74c-0368-48c1-be29-dbb63a36ba18"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:58:47 crc kubenswrapper[4853]: I0127 18:58:47.730506 4853 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5318f74c-0368-48c1-be29-dbb63a36ba18-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 27 18:58:47 crc kubenswrapper[4853]: I0127 18:58:47.730537 4853 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5318f74c-0368-48c1-be29-dbb63a36ba18-config\") on node \"crc\" DevicePath \"\"" Jan 27 18:58:47 crc kubenswrapper[4853]: I0127 18:58:47.730548 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rrfs8\" (UniqueName: \"kubernetes.io/projected/5318f74c-0368-48c1-be29-dbb63a36ba18-kube-api-access-rrfs8\") on node \"crc\" DevicePath \"\"" Jan 27 18:58:47 crc kubenswrapper[4853]: I0127 18:58:47.833884 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-t8t9z" Jan 27 18:58:47 crc kubenswrapper[4853]: I0127 18:58:47.861928 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-d00f-account-create-update-rrkdb" Jan 27 18:58:47 crc kubenswrapper[4853]: I0127 18:58:47.898450 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-99cll"] Jan 27 18:58:47 crc kubenswrapper[4853]: I0127 18:58:47.921353 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-698758b865-6t7wq"] Jan 27 18:58:47 crc kubenswrapper[4853]: W0127 18:58:47.925443 4853 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod98d689c7_0b2e_46b3_95f7_5c43aafac340.slice/crio-6ded5c471bfde386dd0a10982cb089d89e2075c5fba9802c604bd7bd0136e5c2 WatchSource:0}: Error finding container 6ded5c471bfde386dd0a10982cb089d89e2075c5fba9802c604bd7bd0136e5c2: Status 404 returned error can't find the container with id 6ded5c471bfde386dd0a10982cb089d89e2075c5fba9802c604bd7bd0136e5c2 Jan 27 18:58:47 crc kubenswrapper[4853]: I0127 18:58:47.929443 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Jan 27 18:58:47 crc kubenswrapper[4853]: I0127 18:58:47.935437 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6c89d5d749-hp9jm"] Jan 27 18:58:47 crc kubenswrapper[4853]: W0127 18:58:47.938477 4853 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3ba5b341_34ed_484d_ae1d_fe08f998eac4.slice/crio-5cf7d85bedbb4624075dac8335097026e432efa55af408f280cc412774e8bfdd WatchSource:0}: Error finding container 5cf7d85bedbb4624075dac8335097026e432efa55af408f280cc412774e8bfdd: Status 404 returned error can't find the container with id 5cf7d85bedbb4624075dac8335097026e432efa55af408f280cc412774e8bfdd Jan 27 18:58:47 crc kubenswrapper[4853]: W0127 18:58:47.944233 4853 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod24f3c135_8664_4bbd_87bf_dd93c3595195.slice/crio-c6a36091e5f7ba149cb8d1097bf5c2ece2ad48edb546bd6f17ae986627e75e4e WatchSource:0}: Error finding container c6a36091e5f7ba149cb8d1097bf5c2ece2ad48edb546bd6f17ae986627e75e4e: Status 404 returned error can't find the container with id c6a36091e5f7ba149cb8d1097bf5c2ece2ad48edb546bd6f17ae986627e75e4e Jan 27 18:58:48 crc kubenswrapper[4853]: I0127 18:58:48.095165 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-z9c58" event={"ID":"5318f74c-0368-48c1-be29-dbb63a36ba18","Type":"ContainerDied","Data":"76253bbb5515da374e92ca2ccb9064f07e816c1ab5f5cef30c03401452db7045"} Jan 27 18:58:48 crc kubenswrapper[4853]: I0127 18:58:48.099396 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"24f3c135-8664-4bbd-87bf-dd93c3595195","Type":"ContainerStarted","Data":"c6a36091e5f7ba149cb8d1097bf5c2ece2ad48edb546bd6f17ae986627e75e4e"} Jan 27 18:58:48 crc kubenswrapper[4853]: I0127 18:58:48.099481 4853 scope.go:117] "RemoveContainer" containerID="52d2b2319cb6c03ab2daa5920966a7b81b666b8980f03012c2bc07dc72929939" Jan 27 18:58:48 crc kubenswrapper[4853]: I0127 18:58:48.095196 4853 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-z9c58" Jan 27 18:58:48 crc kubenswrapper[4853]: I0127 18:58:48.111393 4853 generic.go:334] "Generic (PLEG): container finished" podID="d714f652-b46e-4843-89a9-0503e169cc42" containerID="e4cdc1f572511f1fcc43e48c70199665df037cd2d27b84cddded5e5f4290ab5b" exitCode=0 Jan 27 18:58:48 crc kubenswrapper[4853]: I0127 18:58:48.111636 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-h84g7" event={"ID":"d714f652-b46e-4843-89a9-0503e169cc42","Type":"ContainerDied","Data":"e4cdc1f572511f1fcc43e48c70199665df037cd2d27b84cddded5e5f4290ab5b"} Jan 27 18:58:48 crc kubenswrapper[4853]: I0127 18:58:48.145443 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6c89d5d749-hp9jm" event={"ID":"e8e56ba6-8066-4618-9792-23f022be8786","Type":"ContainerStarted","Data":"7e92dce77616febc64c3ffc95b0535ddfc72990b067c224f7539f9332436c51a"} Jan 27 18:58:48 crc kubenswrapper[4853]: I0127 18:58:48.145488 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-pfpph" event={"ID":"119564cc-719b-4691-91d5-672513ed9acf","Type":"ContainerStarted","Data":"f90369aa9465f3dc72afec91131ea82e980e63fdb1c9b54e4def008bf01562fa"} Jan 27 18:58:48 crc kubenswrapper[4853]: I0127 18:58:48.153394 4853 generic.go:334] "Generic (PLEG): container finished" podID="8e4aeda9-50a7-4f90-b69b-1f02a34e5f89" containerID="8d61276950143c86cbb95cf5d6c72c4f16799951a340ec49400797887b8f50f3" exitCode=0 Jan 27 18:58:48 crc kubenswrapper[4853]: I0127 18:58:48.153538 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7cb5889db5-5s7fk" event={"ID":"8e4aeda9-50a7-4f90-b69b-1f02a34e5f89","Type":"ContainerDied","Data":"8d61276950143c86cbb95cf5d6c72c4f16799951a340ec49400797887b8f50f3"} Jan 27 18:58:48 crc kubenswrapper[4853]: I0127 18:58:48.168667 4853 scope.go:117] "RemoveContainer" containerID="9f57f2eb0b60e29cd18a88f21c11dbdafd79b324b6aa786869d899dc38baff84" Jan 27 18:58:48 crc kubenswrapper[4853]: I0127 18:58:48.175525 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7cb5889db5-5s7fk" Jan 27 18:58:48 crc kubenswrapper[4853]: I0127 18:58:48.176894 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-6t7wq" event={"ID":"3ba5b341-34ed-484d-ae1d-fe08f998eac4","Type":"ContainerStarted","Data":"5cf7d85bedbb4624075dac8335097026e432efa55af408f280cc412774e8bfdd"} Jan 27 18:58:48 crc kubenswrapper[4853]: I0127 18:58:48.180482 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-c6b2-account-create-update-56lhn" Jan 27 18:58:48 crc kubenswrapper[4853]: I0127 18:58:48.181553 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-99cll" event={"ID":"98d689c7-0b2e-46b3-95f7-5c43aafac340","Type":"ContainerStarted","Data":"6ded5c471bfde386dd0a10982cb089d89e2075c5fba9802c604bd7bd0136e5c2"} Jan 27 18:58:48 crc kubenswrapper[4853]: I0127 18:58:48.181957 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-chjgp" Jan 27 18:58:48 crc kubenswrapper[4853]: I0127 18:58:48.225432 4853 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-h84g7" Jan 27 18:58:48 crc kubenswrapper[4853]: I0127 18:58:48.247709 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8e4aeda9-50a7-4f90-b69b-1f02a34e5f89-dns-svc\") pod \"8e4aeda9-50a7-4f90-b69b-1f02a34e5f89\" (UID: \"8e4aeda9-50a7-4f90-b69b-1f02a34e5f89\") " Jan 27 18:58:48 crc kubenswrapper[4853]: I0127 18:58:48.247865 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8e4aeda9-50a7-4f90-b69b-1f02a34e5f89-config\") pod \"8e4aeda9-50a7-4f90-b69b-1f02a34e5f89\" (UID: \"8e4aeda9-50a7-4f90-b69b-1f02a34e5f89\") " Jan 27 18:58:48 crc kubenswrapper[4853]: I0127 18:58:48.247896 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2pz86\" (UniqueName: \"kubernetes.io/projected/8e4aeda9-50a7-4f90-b69b-1f02a34e5f89-kube-api-access-2pz86\") pod \"8e4aeda9-50a7-4f90-b69b-1f02a34e5f89\" (UID: \"8e4aeda9-50a7-4f90-b69b-1f02a34e5f89\") " Jan 27 18:58:48 crc kubenswrapper[4853]: I0127 18:58:48.261564 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8e4aeda9-50a7-4f90-b69b-1f02a34e5f89-kube-api-access-2pz86" (OuterVolumeSpecName: "kube-api-access-2pz86") pod "8e4aeda9-50a7-4f90-b69b-1f02a34e5f89" (UID: "8e4aeda9-50a7-4f90-b69b-1f02a34e5f89"). InnerVolumeSpecName "kube-api-access-2pz86". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:58:48 crc kubenswrapper[4853]: I0127 18:58:48.326092 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8e4aeda9-50a7-4f90-b69b-1f02a34e5f89-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "8e4aeda9-50a7-4f90-b69b-1f02a34e5f89" (UID: "8e4aeda9-50a7-4f90-b69b-1f02a34e5f89"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:58:48 crc kubenswrapper[4853]: I0127 18:58:48.326539 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8e4aeda9-50a7-4f90-b69b-1f02a34e5f89-config" (OuterVolumeSpecName: "config") pod "8e4aeda9-50a7-4f90-b69b-1f02a34e5f89" (UID: "8e4aeda9-50a7-4f90-b69b-1f02a34e5f89"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:58:48 crc kubenswrapper[4853]: W0127 18:58:48.339706 4853 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7c317124_dbd6_4397_a5a0_3cb4d48cfa0d.slice/crio-d85ac64eb8005c7a258e25b1a06eb7c4a957099c511634a87a337df4357333a3 WatchSource:0}: Error finding container d85ac64eb8005c7a258e25b1a06eb7c4a957099c511634a87a337df4357333a3: Status 404 returned error can't find the container with id d85ac64eb8005c7a258e25b1a06eb7c4a957099c511634a87a337df4357333a3 Jan 27 18:58:48 crc kubenswrapper[4853]: I0127 18:58:48.349741 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d714f652-b46e-4843-89a9-0503e169cc42-catalog-content\") pod \"d714f652-b46e-4843-89a9-0503e169cc42\" (UID: \"d714f652-b46e-4843-89a9-0503e169cc42\") " Jan 27 18:58:48 crc kubenswrapper[4853]: I0127 18:58:48.349803 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d714f652-b46e-4843-89a9-0503e169cc42-utilities\") pod \"d714f652-b46e-4843-89a9-0503e169cc42\" (UID: \"d714f652-b46e-4843-89a9-0503e169cc42\") " Jan 27 18:58:48 crc kubenswrapper[4853]: I0127 18:58:48.349845 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6t2dk\" (UniqueName: \"kubernetes.io/projected/d714f652-b46e-4843-89a9-0503e169cc42-kube-api-access-6t2dk\") pod \"d714f652-b46e-4843-89a9-0503e169cc42\" (UID: \"d714f652-b46e-4843-89a9-0503e169cc42\") " Jan 27 18:58:48 crc kubenswrapper[4853]: I0127 18:58:48.350195 4853 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8e4aeda9-50a7-4f90-b69b-1f02a34e5f89-config\") on node \"crc\" DevicePath \"\"" Jan 27 18:58:48 crc kubenswrapper[4853]: I0127 18:58:48.354406 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2pz86\" (UniqueName: \"kubernetes.io/projected/8e4aeda9-50a7-4f90-b69b-1f02a34e5f89-kube-api-access-2pz86\") on node \"crc\" DevicePath \"\"" Jan 27 18:58:48 crc kubenswrapper[4853]: I0127 18:58:48.354438 4853 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8e4aeda9-50a7-4f90-b69b-1f02a34e5f89-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 27 18:58:48 crc kubenswrapper[4853]: I0127 18:58:48.355383 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d714f652-b46e-4843-89a9-0503e169cc42-utilities" (OuterVolumeSpecName: "utilities") pod "d714f652-b46e-4843-89a9-0503e169cc42" (UID: "d714f652-b46e-4843-89a9-0503e169cc42"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 18:58:48 crc kubenswrapper[4853]: I0127 18:58:48.364201 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-ring-rebalance-pfpph" podStartSLOduration=2.560134702 podStartE2EDuration="7.364181828s" podCreationTimestamp="2026-01-27 18:58:41 +0000 UTC" firstStartedPulling="2026-01-27 18:58:42.37871487 +0000 UTC m=+964.841257753" lastFinishedPulling="2026-01-27 18:58:47.182761996 +0000 UTC m=+969.645304879" observedRunningTime="2026-01-27 18:58:48.280429353 +0000 UTC m=+970.742972236" watchObservedRunningTime="2026-01-27 18:58:48.364181828 +0000 UTC m=+970.826724711" Jan 27 18:58:48 crc kubenswrapper[4853]: I0127 18:58:48.377984 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-z9c58"] Jan 27 18:58:48 crc kubenswrapper[4853]: I0127 18:58:48.379552 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d714f652-b46e-4843-89a9-0503e169cc42-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d714f652-b46e-4843-89a9-0503e169cc42" (UID: "d714f652-b46e-4843-89a9-0503e169cc42"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 18:58:48 crc kubenswrapper[4853]: I0127 18:58:48.384418 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-z9c58"] Jan 27 18:58:48 crc kubenswrapper[4853]: I0127 18:58:48.413536 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-gs4p8"] Jan 27 18:58:48 crc kubenswrapper[4853]: I0127 18:58:48.428966 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-rcgrf"] Jan 27 18:58:48 crc kubenswrapper[4853]: I0127 18:58:48.455998 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/b1859766-1c8c-471c-bae5-4ae46086e8a5-etc-swift\") pod \"swift-storage-0\" (UID: \"b1859766-1c8c-471c-bae5-4ae46086e8a5\") " pod="openstack/swift-storage-0" Jan 27 18:58:48 crc kubenswrapper[4853]: I0127 18:58:48.456227 4853 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d714f652-b46e-4843-89a9-0503e169cc42-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 18:58:48 crc kubenswrapper[4853]: I0127 18:58:48.456241 4853 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d714f652-b46e-4843-89a9-0503e169cc42-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 18:58:48 crc kubenswrapper[4853]: E0127 18:58:48.456272 4853 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 27 18:58:48 crc kubenswrapper[4853]: E0127 18:58:48.456308 4853 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 27 18:58:48 crc kubenswrapper[4853]: E0127 18:58:48.456374 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/b1859766-1c8c-471c-bae5-4ae46086e8a5-etc-swift podName:b1859766-1c8c-471c-bae5-4ae46086e8a5 nodeName:}" failed. No retries permitted until 2026-01-27 18:58:56.45635071 +0000 UTC m=+978.918893673 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/b1859766-1c8c-471c-bae5-4ae46086e8a5-etc-swift") pod "swift-storage-0" (UID: "b1859766-1c8c-471c-bae5-4ae46086e8a5") : configmap "swift-ring-files" not found Jan 27 18:58:48 crc kubenswrapper[4853]: I0127 18:58:48.678046 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d714f652-b46e-4843-89a9-0503e169cc42-kube-api-access-6t2dk" (OuterVolumeSpecName: "kube-api-access-6t2dk") pod "d714f652-b46e-4843-89a9-0503e169cc42" (UID: "d714f652-b46e-4843-89a9-0503e169cc42"). InnerVolumeSpecName "kube-api-access-6t2dk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:58:48 crc kubenswrapper[4853]: I0127 18:58:48.729920 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-4881-account-create-update-vqvqb"] Jan 27 18:58:48 crc kubenswrapper[4853]: W0127 18:58:48.734630 4853 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfb347c8c_cafd_4b44_9862_d69103d33fb7.slice/crio-808cf825bd2627a34909338089d4d1cca3f98ffa534ba39ca8640007f5d112e4 WatchSource:0}: Error finding container 808cf825bd2627a34909338089d4d1cca3f98ffa534ba39ca8640007f5d112e4: Status 404 returned error can't find the container with id 808cf825bd2627a34909338089d4d1cca3f98ffa534ba39ca8640007f5d112e4 Jan 27 18:58:48 crc kubenswrapper[4853]: I0127 18:58:48.745778 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-t8t9z"] Jan 27 18:58:48 crc kubenswrapper[4853]: I0127 18:58:48.752392 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-d00f-account-create-update-rrkdb"] Jan 27 18:58:48 crc kubenswrapper[4853]: W0127 18:58:48.752840 4853 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc3c82666_fbb7_47cd_9aa8_51fc2f3196cb.slice/crio-a18a33e8a516b0de960e4dbe65c2b315992011e5b3ba8d30b94864fa5458efca WatchSource:0}: Error finding container a18a33e8a516b0de960e4dbe65c2b315992011e5b3ba8d30b94864fa5458efca: Status 404 returned error can't find the container with id a18a33e8a516b0de960e4dbe65c2b315992011e5b3ba8d30b94864fa5458efca Jan 27 18:58:48 crc kubenswrapper[4853]: I0127 18:58:48.761242 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6t2dk\" (UniqueName: \"kubernetes.io/projected/d714f652-b46e-4843-89a9-0503e169cc42-kube-api-access-6t2dk\") on node \"crc\" DevicePath \"\"" Jan 27 18:58:49 crc kubenswrapper[4853]: I0127 18:58:49.198263 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-t8t9z" event={"ID":"c3c82666-fbb7-47cd-9aa8-51fc2f3196cb","Type":"ContainerStarted","Data":"a18a33e8a516b0de960e4dbe65c2b315992011e5b3ba8d30b94864fa5458efca"} Jan 27 18:58:49 crc kubenswrapper[4853]: I0127 18:58:49.201262 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-d00f-account-create-update-rrkdb" event={"ID":"829117d5-2c78-4874-bea6-5d66f13b1f39","Type":"ContainerStarted","Data":"16c3663a978e5477c1fc0d929c63a993c9707b0c6ff1d9fa7ecb30fbb2a733dd"} Jan 27 18:58:49 crc kubenswrapper[4853]: I0127 18:58:49.201292 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-d00f-account-create-update-rrkdb" 
event={"ID":"829117d5-2c78-4874-bea6-5d66f13b1f39","Type":"ContainerStarted","Data":"c8a2a50c639f563b2f76bb477c95f8c095cbf80f09b332d3f5cd115962f33b70"} Jan 27 18:58:49 crc kubenswrapper[4853]: I0127 18:58:49.205467 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7cb5889db5-5s7fk" event={"ID":"8e4aeda9-50a7-4f90-b69b-1f02a34e5f89","Type":"ContainerDied","Data":"9f25af11a73c8e8f742b5b4f72bf65d99b7fe9d71287d006e5a02a4629ac9773"} Jan 27 18:58:49 crc kubenswrapper[4853]: I0127 18:58:49.205489 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7cb5889db5-5s7fk" Jan 27 18:58:49 crc kubenswrapper[4853]: I0127 18:58:49.205528 4853 scope.go:117] "RemoveContainer" containerID="8d61276950143c86cbb95cf5d6c72c4f16799951a340ec49400797887b8f50f3" Jan 27 18:58:49 crc kubenswrapper[4853]: I0127 18:58:49.210747 4853 generic.go:334] "Generic (PLEG): container finished" podID="3ba5b341-34ed-484d-ae1d-fe08f998eac4" containerID="e9da18a2d3905bdae2a58a40e150d319a91d279c41a01e577d322df9cc99d59d" exitCode=0 Jan 27 18:58:49 crc kubenswrapper[4853]: I0127 18:58:49.211067 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-6t7wq" event={"ID":"3ba5b341-34ed-484d-ae1d-fe08f998eac4","Type":"ContainerDied","Data":"e9da18a2d3905bdae2a58a40e150d319a91d279c41a01e577d322df9cc99d59d"} Jan 27 18:58:49 crc kubenswrapper[4853]: I0127 18:58:49.213847 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-99cll" event={"ID":"98d689c7-0b2e-46b3-95f7-5c43aafac340","Type":"ContainerStarted","Data":"4c732ea6557ed81ed449a5d4c50c739ec4b76d3b5b855ae86aa4a85f3509bb8e"} Jan 27 18:58:49 crc kubenswrapper[4853]: I0127 18:58:49.232507 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-rcgrf" event={"ID":"a9358d10-5bdb-4f99-96d1-907990452ad6","Type":"ContainerStarted","Data":"f5d8c9f5ef1ab9dba0ebbc08a6ce55683e77f8a727ceaf1c22f32497f98a9c62"} Jan 27 18:58:49 crc kubenswrapper[4853]: I0127 18:58:49.232555 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-rcgrf" event={"ID":"a9358d10-5bdb-4f99-96d1-907990452ad6","Type":"ContainerStarted","Data":"8652bd82aace3b592908684374ce3256b2733502d4c033c30db5c6ef1cd657e5"} Jan 27 18:58:49 crc kubenswrapper[4853]: I0127 18:58:49.237572 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-4881-account-create-update-vqvqb" event={"ID":"fb347c8c-cafd-4b44-9862-d69103d33fb7","Type":"ContainerStarted","Data":"808cf825bd2627a34909338089d4d1cca3f98ffa534ba39ca8640007f5d112e4"} Jan 27 18:58:49 crc kubenswrapper[4853]: I0127 18:58:49.239634 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-gs4p8" event={"ID":"7c317124-dbd6-4397-a5a0-3cb4d48cfa0d","Type":"ContainerStarted","Data":"4c6da6e067f7205a7c767e08ae62eef284372400b9513006b3eb8e7dea609f41"} Jan 27 18:58:49 crc kubenswrapper[4853]: I0127 18:58:49.239758 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-gs4p8" event={"ID":"7c317124-dbd6-4397-a5a0-3cb4d48cfa0d","Type":"ContainerStarted","Data":"d85ac64eb8005c7a258e25b1a06eb7c4a957099c511634a87a337df4357333a3"} Jan 27 18:58:49 crc kubenswrapper[4853]: I0127 18:58:49.245692 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-metrics-99cll" podStartSLOduration=4.245647555 podStartE2EDuration="4.245647555s" 
podCreationTimestamp="2026-01-27 18:58:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:58:49.244569874 +0000 UTC m=+971.707112757" watchObservedRunningTime="2026-01-27 18:58:49.245647555 +0000 UTC m=+971.708190448" Jan 27 18:58:49 crc kubenswrapper[4853]: I0127 18:58:49.249962 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-d00f-account-create-update-rrkdb" podStartSLOduration=2.249940437 podStartE2EDuration="2.249940437s" podCreationTimestamp="2026-01-27 18:58:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:58:49.219253227 +0000 UTC m=+971.681796130" watchObservedRunningTime="2026-01-27 18:58:49.249940437 +0000 UTC m=+971.712483320" Jan 27 18:58:49 crc kubenswrapper[4853]: I0127 18:58:49.267854 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-h84g7" event={"ID":"d714f652-b46e-4843-89a9-0503e169cc42","Type":"ContainerDied","Data":"46312511e4959c7c6952973789432fa079267af62e03dda97e4cedc6191daa52"} Jan 27 18:58:49 crc kubenswrapper[4853]: I0127 18:58:49.267973 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-h84g7" Jan 27 18:58:49 crc kubenswrapper[4853]: I0127 18:58:49.292784 4853 generic.go:334] "Generic (PLEG): container finished" podID="e8e56ba6-8066-4618-9792-23f022be8786" containerID="76121260ad37f6a5e585a7e97d7a8b5f125d8f552a24ae6e002041c339d181fd" exitCode=0 Jan 27 18:58:49 crc kubenswrapper[4853]: I0127 18:58:49.294054 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6c89d5d749-hp9jm" event={"ID":"e8e56ba6-8066-4618-9792-23f022be8786","Type":"ContainerDied","Data":"76121260ad37f6a5e585a7e97d7a8b5f125d8f552a24ae6e002041c339d181fd"} Jan 27 18:58:49 crc kubenswrapper[4853]: I0127 18:58:49.312166 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7cb5889db5-5s7fk"] Jan 27 18:58:49 crc kubenswrapper[4853]: I0127 18:58:49.326103 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7cb5889db5-5s7fk"] Jan 27 18:58:49 crc kubenswrapper[4853]: I0127 18:58:49.371153 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-create-rcgrf" podStartSLOduration=3.371110992 podStartE2EDuration="3.371110992s" podCreationTimestamp="2026-01-27 18:58:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:58:49.316581396 +0000 UTC m=+971.779124279" watchObservedRunningTime="2026-01-27 18:58:49.371110992 +0000 UTC m=+971.833653875" Jan 27 18:58:49 crc kubenswrapper[4853]: I0127 18:58:49.398523 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/root-account-create-update-gs4p8" podStartSLOduration=5.398503028 podStartE2EDuration="5.398503028s" podCreationTimestamp="2026-01-27 18:58:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:58:49.350992021 +0000 UTC m=+971.813534894" watchObservedRunningTime="2026-01-27 18:58:49.398503028 +0000 UTC m=+971.861045931" Jan 27 18:58:49 crc kubenswrapper[4853]: I0127 18:58:49.408583 4853 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-4881-account-create-update-vqvqb" podStartSLOduration=2.408564593 podStartE2EDuration="2.408564593s" podCreationTimestamp="2026-01-27 18:58:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:58:49.374845048 +0000 UTC m=+971.837387931" watchObservedRunningTime="2026-01-27 18:58:49.408564593 +0000 UTC m=+971.871107476" Jan 27 18:58:49 crc kubenswrapper[4853]: I0127 18:58:49.435357 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-h84g7"] Jan 27 18:58:49 crc kubenswrapper[4853]: I0127 18:58:49.444304 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-h84g7"] Jan 27 18:58:49 crc kubenswrapper[4853]: I0127 18:58:49.543086 4853 scope.go:117] "RemoveContainer" containerID="ccfd0d8370f201b0ee1aac81d3faff1fdea161c380ab5e38f1d132b0eb44915f" Jan 27 18:58:49 crc kubenswrapper[4853]: I0127 18:58:49.884937 4853 scope.go:117] "RemoveContainer" containerID="e4cdc1f572511f1fcc43e48c70199665df037cd2d27b84cddded5e5f4290ab5b" Jan 27 18:58:49 crc kubenswrapper[4853]: I0127 18:58:49.925443 4853 scope.go:117] "RemoveContainer" containerID="c431c883bc871bd956051bb74438b9dd533e67d2aa71b4456f89aada3b724ab4" Jan 27 18:58:49 crc kubenswrapper[4853]: I0127 18:58:49.951529 4853 scope.go:117] "RemoveContainer" containerID="c005e59a1cd54ad5321354601b24185eacff748d530059e0477027d6cfb9c31d" Jan 27 18:58:50 crc kubenswrapper[4853]: I0127 18:58:50.122768 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5318f74c-0368-48c1-be29-dbb63a36ba18" path="/var/lib/kubelet/pods/5318f74c-0368-48c1-be29-dbb63a36ba18/volumes" Jan 27 18:58:50 crc kubenswrapper[4853]: I0127 18:58:50.123418 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8e4aeda9-50a7-4f90-b69b-1f02a34e5f89" path="/var/lib/kubelet/pods/8e4aeda9-50a7-4f90-b69b-1f02a34e5f89/volumes" Jan 27 18:58:50 crc kubenswrapper[4853]: I0127 18:58:50.124041 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d714f652-b46e-4843-89a9-0503e169cc42" path="/var/lib/kubelet/pods/d714f652-b46e-4843-89a9-0503e169cc42/volumes" Jan 27 18:58:50 crc kubenswrapper[4853]: I0127 18:58:50.301785 4853 generic.go:334] "Generic (PLEG): container finished" podID="a9358d10-5bdb-4f99-96d1-907990452ad6" containerID="f5d8c9f5ef1ab9dba0ebbc08a6ce55683e77f8a727ceaf1c22f32497f98a9c62" exitCode=0 Jan 27 18:58:50 crc kubenswrapper[4853]: I0127 18:58:50.301844 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-rcgrf" event={"ID":"a9358d10-5bdb-4f99-96d1-907990452ad6","Type":"ContainerDied","Data":"f5d8c9f5ef1ab9dba0ebbc08a6ce55683e77f8a727ceaf1c22f32497f98a9c62"} Jan 27 18:58:50 crc kubenswrapper[4853]: I0127 18:58:50.304637 4853 generic.go:334] "Generic (PLEG): container finished" podID="c3c82666-fbb7-47cd-9aa8-51fc2f3196cb" containerID="89a451e237295d0aa9c4883b984f339bb349533a253e48a3437f57c72447cfe1" exitCode=0 Jan 27 18:58:50 crc kubenswrapper[4853]: I0127 18:58:50.304731 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-t8t9z" event={"ID":"c3c82666-fbb7-47cd-9aa8-51fc2f3196cb","Type":"ContainerDied","Data":"89a451e237295d0aa9c4883b984f339bb349533a253e48a3437f57c72447cfe1"} Jan 27 18:58:50 crc kubenswrapper[4853]: I0127 18:58:50.310431 4853 generic.go:334] "Generic (PLEG): 
container finished" podID="829117d5-2c78-4874-bea6-5d66f13b1f39" containerID="16c3663a978e5477c1fc0d929c63a993c9707b0c6ff1d9fa7ecb30fbb2a733dd" exitCode=0 Jan 27 18:58:50 crc kubenswrapper[4853]: I0127 18:58:50.310551 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-d00f-account-create-update-rrkdb" event={"ID":"829117d5-2c78-4874-bea6-5d66f13b1f39","Type":"ContainerDied","Data":"16c3663a978e5477c1fc0d929c63a993c9707b0c6ff1d9fa7ecb30fbb2a733dd"} Jan 27 18:58:50 crc kubenswrapper[4853]: I0127 18:58:50.312722 4853 generic.go:334] "Generic (PLEG): container finished" podID="fb347c8c-cafd-4b44-9862-d69103d33fb7" containerID="c4b899717fd995b71b389851a51450ee1094f74a599172e789a19798e9ceb25c" exitCode=0 Jan 27 18:58:50 crc kubenswrapper[4853]: I0127 18:58:50.312817 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-4881-account-create-update-vqvqb" event={"ID":"fb347c8c-cafd-4b44-9862-d69103d33fb7","Type":"ContainerDied","Data":"c4b899717fd995b71b389851a51450ee1094f74a599172e789a19798e9ceb25c"} Jan 27 18:58:50 crc kubenswrapper[4853]: I0127 18:58:50.318188 4853 generic.go:334] "Generic (PLEG): container finished" podID="7c317124-dbd6-4397-a5a0-3cb4d48cfa0d" containerID="4c6da6e067f7205a7c767e08ae62eef284372400b9513006b3eb8e7dea609f41" exitCode=0 Jan 27 18:58:50 crc kubenswrapper[4853]: I0127 18:58:50.318280 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-gs4p8" event={"ID":"7c317124-dbd6-4397-a5a0-3cb4d48cfa0d","Type":"ContainerDied","Data":"4c6da6e067f7205a7c767e08ae62eef284372400b9513006b3eb8e7dea609f41"} Jan 27 18:58:50 crc kubenswrapper[4853]: I0127 18:58:50.320195 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"24f3c135-8664-4bbd-87bf-dd93c3595195","Type":"ContainerStarted","Data":"026a8753709831dab7d97d35dfa04a4dc32882531d0c746ee4b4d3f48d82aa4f"} Jan 27 18:58:50 crc kubenswrapper[4853]: I0127 18:58:50.320242 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"24f3c135-8664-4bbd-87bf-dd93c3595195","Type":"ContainerStarted","Data":"2f468b4a99b1ed22f659020065f6242d5c0d7c4baa742bc4e1d3b28b4c5d12c1"} Jan 27 18:58:50 crc kubenswrapper[4853]: I0127 18:58:50.320611 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0" Jan 27 18:58:50 crc kubenswrapper[4853]: I0127 18:58:50.323422 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6c89d5d749-hp9jm" event={"ID":"e8e56ba6-8066-4618-9792-23f022be8786","Type":"ContainerStarted","Data":"1e01c33121d72a7fb09564ac6c87045979f8a5cb09944c0a2d715dec175947a6"} Jan 27 18:58:50 crc kubenswrapper[4853]: I0127 18:58:50.324921 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-6t7wq" event={"ID":"3ba5b341-34ed-484d-ae1d-fe08f998eac4","Type":"ContainerStarted","Data":"781a77592d5a87c22d418c18ca082f3fc37812941ad2e02f82d6da689b1529c9"} Jan 27 18:58:50 crc kubenswrapper[4853]: I0127 18:58:50.325145 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-698758b865-6t7wq" Jan 27 18:58:50 crc kubenswrapper[4853]: I0127 18:58:50.373739 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-698758b865-6t7wq" podStartSLOduration=5.373720544 podStartE2EDuration="5.373720544s" podCreationTimestamp="2026-01-27 18:58:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:58:50.372021576 +0000 UTC m=+972.834564459" watchObservedRunningTime="2026-01-27 18:58:50.373720544 +0000 UTC m=+972.836263427" Jan 27 18:58:50 crc kubenswrapper[4853]: I0127 18:58:50.405149 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-northd-0" podStartSLOduration=2.762631693 podStartE2EDuration="4.405096094s" podCreationTimestamp="2026-01-27 18:58:46 +0000 UTC" firstStartedPulling="2026-01-27 18:58:47.953781373 +0000 UTC m=+970.416324256" lastFinishedPulling="2026-01-27 18:58:49.596245774 +0000 UTC m=+972.058788657" observedRunningTime="2026-01-27 18:58:50.400073061 +0000 UTC m=+972.862615944" watchObservedRunningTime="2026-01-27 18:58:50.405096094 +0000 UTC m=+972.867638967" Jan 27 18:58:50 crc kubenswrapper[4853]: I0127 18:58:50.743568 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6c89d5d749-hp9jm" Jan 27 18:58:51 crc kubenswrapper[4853]: I0127 18:58:51.768676 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-gs4p8" Jan 27 18:58:51 crc kubenswrapper[4853]: I0127 18:58:51.790702 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6c89d5d749-hp9jm" podStartSLOduration=6.790684593 podStartE2EDuration="6.790684593s" podCreationTimestamp="2026-01-27 18:58:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:58:50.418542615 +0000 UTC m=+972.881085488" watchObservedRunningTime="2026-01-27 18:58:51.790684593 +0000 UTC m=+974.253227476" Jan 27 18:58:51 crc kubenswrapper[4853]: I0127 18:58:51.816952 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7c317124-dbd6-4397-a5a0-3cb4d48cfa0d-operator-scripts\") pod \"7c317124-dbd6-4397-a5a0-3cb4d48cfa0d\" (UID: \"7c317124-dbd6-4397-a5a0-3cb4d48cfa0d\") " Jan 27 18:58:51 crc kubenswrapper[4853]: I0127 18:58:51.817104 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cp84b\" (UniqueName: \"kubernetes.io/projected/7c317124-dbd6-4397-a5a0-3cb4d48cfa0d-kube-api-access-cp84b\") pod \"7c317124-dbd6-4397-a5a0-3cb4d48cfa0d\" (UID: \"7c317124-dbd6-4397-a5a0-3cb4d48cfa0d\") " Jan 27 18:58:51 crc kubenswrapper[4853]: I0127 18:58:51.817850 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7c317124-dbd6-4397-a5a0-3cb4d48cfa0d-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "7c317124-dbd6-4397-a5a0-3cb4d48cfa0d" (UID: "7c317124-dbd6-4397-a5a0-3cb4d48cfa0d"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:58:51 crc kubenswrapper[4853]: I0127 18:58:51.837468 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7c317124-dbd6-4397-a5a0-3cb4d48cfa0d-kube-api-access-cp84b" (OuterVolumeSpecName: "kube-api-access-cp84b") pod "7c317124-dbd6-4397-a5a0-3cb4d48cfa0d" (UID: "7c317124-dbd6-4397-a5a0-3cb4d48cfa0d"). InnerVolumeSpecName "kube-api-access-cp84b". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:58:51 crc kubenswrapper[4853]: I0127 18:58:51.921027 4853 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7c317124-dbd6-4397-a5a0-3cb4d48cfa0d-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 18:58:51 crc kubenswrapper[4853]: I0127 18:58:51.921058 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cp84b\" (UniqueName: \"kubernetes.io/projected/7c317124-dbd6-4397-a5a0-3cb4d48cfa0d-kube-api-access-cp84b\") on node \"crc\" DevicePath \"\"" Jan 27 18:58:51 crc kubenswrapper[4853]: I0127 18:58:51.991733 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-4881-account-create-update-vqvqb" Jan 27 18:58:51 crc kubenswrapper[4853]: I0127 18:58:51.998376 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-t8t9z" Jan 27 18:58:52 crc kubenswrapper[4853]: I0127 18:58:52.013334 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-rcgrf" Jan 27 18:58:52 crc kubenswrapper[4853]: I0127 18:58:52.022062 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t4rmr\" (UniqueName: \"kubernetes.io/projected/fb347c8c-cafd-4b44-9862-d69103d33fb7-kube-api-access-t4rmr\") pod \"fb347c8c-cafd-4b44-9862-d69103d33fb7\" (UID: \"fb347c8c-cafd-4b44-9862-d69103d33fb7\") " Jan 27 18:58:52 crc kubenswrapper[4853]: I0127 18:58:52.022302 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fb347c8c-cafd-4b44-9862-d69103d33fb7-operator-scripts\") pod \"fb347c8c-cafd-4b44-9862-d69103d33fb7\" (UID: \"fb347c8c-cafd-4b44-9862-d69103d33fb7\") " Jan 27 18:58:52 crc kubenswrapper[4853]: I0127 18:58:52.023309 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fb347c8c-cafd-4b44-9862-d69103d33fb7-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "fb347c8c-cafd-4b44-9862-d69103d33fb7" (UID: "fb347c8c-cafd-4b44-9862-d69103d33fb7"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:58:52 crc kubenswrapper[4853]: I0127 18:58:52.065477 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fb347c8c-cafd-4b44-9862-d69103d33fb7-kube-api-access-t4rmr" (OuterVolumeSpecName: "kube-api-access-t4rmr") pod "fb347c8c-cafd-4b44-9862-d69103d33fb7" (UID: "fb347c8c-cafd-4b44-9862-d69103d33fb7"). InnerVolumeSpecName "kube-api-access-t4rmr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:58:52 crc kubenswrapper[4853]: I0127 18:58:52.082325 4853 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-d00f-account-create-update-rrkdb" Jan 27 18:58:52 crc kubenswrapper[4853]: I0127 18:58:52.123856 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ktwjq\" (UniqueName: \"kubernetes.io/projected/829117d5-2c78-4874-bea6-5d66f13b1f39-kube-api-access-ktwjq\") pod \"829117d5-2c78-4874-bea6-5d66f13b1f39\" (UID: \"829117d5-2c78-4874-bea6-5d66f13b1f39\") " Jan 27 18:58:52 crc kubenswrapper[4853]: I0127 18:58:52.124276 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xvjn5\" (UniqueName: \"kubernetes.io/projected/c3c82666-fbb7-47cd-9aa8-51fc2f3196cb-kube-api-access-xvjn5\") pod \"c3c82666-fbb7-47cd-9aa8-51fc2f3196cb\" (UID: \"c3c82666-fbb7-47cd-9aa8-51fc2f3196cb\") " Jan 27 18:58:52 crc kubenswrapper[4853]: I0127 18:58:52.124510 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/829117d5-2c78-4874-bea6-5d66f13b1f39-operator-scripts\") pod \"829117d5-2c78-4874-bea6-5d66f13b1f39\" (UID: \"829117d5-2c78-4874-bea6-5d66f13b1f39\") " Jan 27 18:58:52 crc kubenswrapper[4853]: I0127 18:58:52.124653 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a9358d10-5bdb-4f99-96d1-907990452ad6-operator-scripts\") pod \"a9358d10-5bdb-4f99-96d1-907990452ad6\" (UID: \"a9358d10-5bdb-4f99-96d1-907990452ad6\") " Jan 27 18:58:52 crc kubenswrapper[4853]: I0127 18:58:52.124755 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9z6pr\" (UniqueName: \"kubernetes.io/projected/a9358d10-5bdb-4f99-96d1-907990452ad6-kube-api-access-9z6pr\") pod \"a9358d10-5bdb-4f99-96d1-907990452ad6\" (UID: \"a9358d10-5bdb-4f99-96d1-907990452ad6\") " Jan 27 18:58:52 crc kubenswrapper[4853]: I0127 18:58:52.124875 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c3c82666-fbb7-47cd-9aa8-51fc2f3196cb-operator-scripts\") pod \"c3c82666-fbb7-47cd-9aa8-51fc2f3196cb\" (UID: \"c3c82666-fbb7-47cd-9aa8-51fc2f3196cb\") " Jan 27 18:58:52 crc kubenswrapper[4853]: I0127 18:58:52.125536 4853 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fb347c8c-cafd-4b44-9862-d69103d33fb7-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 18:58:52 crc kubenswrapper[4853]: I0127 18:58:52.125634 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t4rmr\" (UniqueName: \"kubernetes.io/projected/fb347c8c-cafd-4b44-9862-d69103d33fb7-kube-api-access-t4rmr\") on node \"crc\" DevicePath \"\"" Jan 27 18:58:52 crc kubenswrapper[4853]: I0127 18:58:52.125546 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/829117d5-2c78-4874-bea6-5d66f13b1f39-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "829117d5-2c78-4874-bea6-5d66f13b1f39" (UID: "829117d5-2c78-4874-bea6-5d66f13b1f39"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:58:52 crc kubenswrapper[4853]: I0127 18:58:52.125986 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a9358d10-5bdb-4f99-96d1-907990452ad6-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "a9358d10-5bdb-4f99-96d1-907990452ad6" (UID: "a9358d10-5bdb-4f99-96d1-907990452ad6"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:58:52 crc kubenswrapper[4853]: I0127 18:58:52.126998 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c3c82666-fbb7-47cd-9aa8-51fc2f3196cb-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "c3c82666-fbb7-47cd-9aa8-51fc2f3196cb" (UID: "c3c82666-fbb7-47cd-9aa8-51fc2f3196cb"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:58:52 crc kubenswrapper[4853]: I0127 18:58:52.127415 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/829117d5-2c78-4874-bea6-5d66f13b1f39-kube-api-access-ktwjq" (OuterVolumeSpecName: "kube-api-access-ktwjq") pod "829117d5-2c78-4874-bea6-5d66f13b1f39" (UID: "829117d5-2c78-4874-bea6-5d66f13b1f39"). InnerVolumeSpecName "kube-api-access-ktwjq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:58:52 crc kubenswrapper[4853]: I0127 18:58:52.127479 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c3c82666-fbb7-47cd-9aa8-51fc2f3196cb-kube-api-access-xvjn5" (OuterVolumeSpecName: "kube-api-access-xvjn5") pod "c3c82666-fbb7-47cd-9aa8-51fc2f3196cb" (UID: "c3c82666-fbb7-47cd-9aa8-51fc2f3196cb"). InnerVolumeSpecName "kube-api-access-xvjn5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:58:52 crc kubenswrapper[4853]: I0127 18:58:52.139468 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a9358d10-5bdb-4f99-96d1-907990452ad6-kube-api-access-9z6pr" (OuterVolumeSpecName: "kube-api-access-9z6pr") pod "a9358d10-5bdb-4f99-96d1-907990452ad6" (UID: "a9358d10-5bdb-4f99-96d1-907990452ad6"). InnerVolumeSpecName "kube-api-access-9z6pr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:58:52 crc kubenswrapper[4853]: I0127 18:58:52.227892 4853 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/829117d5-2c78-4874-bea6-5d66f13b1f39-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 18:58:52 crc kubenswrapper[4853]: I0127 18:58:52.227949 4853 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a9358d10-5bdb-4f99-96d1-907990452ad6-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 18:58:52 crc kubenswrapper[4853]: I0127 18:58:52.227961 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9z6pr\" (UniqueName: \"kubernetes.io/projected/a9358d10-5bdb-4f99-96d1-907990452ad6-kube-api-access-9z6pr\") on node \"crc\" DevicePath \"\"" Jan 27 18:58:52 crc kubenswrapper[4853]: I0127 18:58:52.227970 4853 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c3c82666-fbb7-47cd-9aa8-51fc2f3196cb-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 18:58:52 crc kubenswrapper[4853]: I0127 18:58:52.227979 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ktwjq\" (UniqueName: \"kubernetes.io/projected/829117d5-2c78-4874-bea6-5d66f13b1f39-kube-api-access-ktwjq\") on node \"crc\" DevicePath \"\"" Jan 27 18:58:52 crc kubenswrapper[4853]: I0127 18:58:52.227988 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xvjn5\" (UniqueName: \"kubernetes.io/projected/c3c82666-fbb7-47cd-9aa8-51fc2f3196cb-kube-api-access-xvjn5\") on node \"crc\" DevicePath \"\"" Jan 27 18:58:52 crc kubenswrapper[4853]: I0127 18:58:52.342300 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-t8t9z" event={"ID":"c3c82666-fbb7-47cd-9aa8-51fc2f3196cb","Type":"ContainerDied","Data":"a18a33e8a516b0de960e4dbe65c2b315992011e5b3ba8d30b94864fa5458efca"} Jan 27 18:58:52 crc kubenswrapper[4853]: I0127 18:58:52.342347 4853 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a18a33e8a516b0de960e4dbe65c2b315992011e5b3ba8d30b94864fa5458efca" Jan 27 18:58:52 crc kubenswrapper[4853]: I0127 18:58:52.342307 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-t8t9z" Jan 27 18:58:52 crc kubenswrapper[4853]: I0127 18:58:52.344480 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-d00f-account-create-update-rrkdb" event={"ID":"829117d5-2c78-4874-bea6-5d66f13b1f39","Type":"ContainerDied","Data":"c8a2a50c639f563b2f76bb477c95f8c095cbf80f09b332d3f5cd115962f33b70"} Jan 27 18:58:52 crc kubenswrapper[4853]: I0127 18:58:52.344550 4853 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c8a2a50c639f563b2f76bb477c95f8c095cbf80f09b332d3f5cd115962f33b70" Jan 27 18:58:52 crc kubenswrapper[4853]: I0127 18:58:52.344637 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-d00f-account-create-update-rrkdb" Jan 27 18:58:52 crc kubenswrapper[4853]: I0127 18:58:52.348455 4853 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-4881-account-create-update-vqvqb" Jan 27 18:58:52 crc kubenswrapper[4853]: I0127 18:58:52.348452 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-4881-account-create-update-vqvqb" event={"ID":"fb347c8c-cafd-4b44-9862-d69103d33fb7","Type":"ContainerDied","Data":"808cf825bd2627a34909338089d4d1cca3f98ffa534ba39ca8640007f5d112e4"} Jan 27 18:58:52 crc kubenswrapper[4853]: I0127 18:58:52.348623 4853 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="808cf825bd2627a34909338089d4d1cca3f98ffa534ba39ca8640007f5d112e4" Jan 27 18:58:52 crc kubenswrapper[4853]: I0127 18:58:52.351163 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-gs4p8" event={"ID":"7c317124-dbd6-4397-a5a0-3cb4d48cfa0d","Type":"ContainerDied","Data":"d85ac64eb8005c7a258e25b1a06eb7c4a957099c511634a87a337df4357333a3"} Jan 27 18:58:52 crc kubenswrapper[4853]: I0127 18:58:52.351207 4853 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d85ac64eb8005c7a258e25b1a06eb7c4a957099c511634a87a337df4357333a3" Jan 27 18:58:52 crc kubenswrapper[4853]: I0127 18:58:52.351259 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-gs4p8" Jan 27 18:58:52 crc kubenswrapper[4853]: I0127 18:58:52.354663 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-rcgrf" Jan 27 18:58:52 crc kubenswrapper[4853]: I0127 18:58:52.365145 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-rcgrf" event={"ID":"a9358d10-5bdb-4f99-96d1-907990452ad6","Type":"ContainerDied","Data":"8652bd82aace3b592908684374ce3256b2733502d4c033c30db5c6ef1cd657e5"} Jan 27 18:58:52 crc kubenswrapper[4853]: I0127 18:58:52.365198 4853 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8652bd82aace3b592908684374ce3256b2733502d4c033c30db5c6ef1cd657e5" Jan 27 18:58:52 crc kubenswrapper[4853]: I0127 18:58:52.822699 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-sync-bpr58"] Jan 27 18:58:52 crc kubenswrapper[4853]: E0127 18:58:52.823027 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="829117d5-2c78-4874-bea6-5d66f13b1f39" containerName="mariadb-account-create-update" Jan 27 18:58:52 crc kubenswrapper[4853]: I0127 18:58:52.823040 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="829117d5-2c78-4874-bea6-5d66f13b1f39" containerName="mariadb-account-create-update" Jan 27 18:58:52 crc kubenswrapper[4853]: E0127 18:58:52.823051 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7c317124-dbd6-4397-a5a0-3cb4d48cfa0d" containerName="mariadb-account-create-update" Jan 27 18:58:52 crc kubenswrapper[4853]: I0127 18:58:52.823057 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="7c317124-dbd6-4397-a5a0-3cb4d48cfa0d" containerName="mariadb-account-create-update" Jan 27 18:58:52 crc kubenswrapper[4853]: E0127 18:58:52.823072 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e4aeda9-50a7-4f90-b69b-1f02a34e5f89" containerName="init" Jan 27 18:58:52 crc kubenswrapper[4853]: I0127 18:58:52.823078 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e4aeda9-50a7-4f90-b69b-1f02a34e5f89" containerName="init" Jan 27 18:58:52 crc kubenswrapper[4853]: E0127 18:58:52.823091 4853 cpu_manager.go:410] "RemoveStaleState: 
removing container" podUID="5318f74c-0368-48c1-be29-dbb63a36ba18" containerName="init" Jan 27 18:58:52 crc kubenswrapper[4853]: I0127 18:58:52.823096 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="5318f74c-0368-48c1-be29-dbb63a36ba18" containerName="init" Jan 27 18:58:52 crc kubenswrapper[4853]: E0127 18:58:52.823105 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e4aeda9-50a7-4f90-b69b-1f02a34e5f89" containerName="dnsmasq-dns" Jan 27 18:58:52 crc kubenswrapper[4853]: I0127 18:58:52.823111 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e4aeda9-50a7-4f90-b69b-1f02a34e5f89" containerName="dnsmasq-dns" Jan 27 18:58:52 crc kubenswrapper[4853]: E0127 18:58:52.823141 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5318f74c-0368-48c1-be29-dbb63a36ba18" containerName="dnsmasq-dns" Jan 27 18:58:52 crc kubenswrapper[4853]: I0127 18:58:52.823148 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="5318f74c-0368-48c1-be29-dbb63a36ba18" containerName="dnsmasq-dns" Jan 27 18:58:52 crc kubenswrapper[4853]: E0127 18:58:52.823158 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d714f652-b46e-4843-89a9-0503e169cc42" containerName="extract-utilities" Jan 27 18:58:52 crc kubenswrapper[4853]: I0127 18:58:52.823164 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="d714f652-b46e-4843-89a9-0503e169cc42" containerName="extract-utilities" Jan 27 18:58:52 crc kubenswrapper[4853]: E0127 18:58:52.823172 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d714f652-b46e-4843-89a9-0503e169cc42" containerName="extract-content" Jan 27 18:58:52 crc kubenswrapper[4853]: I0127 18:58:52.823177 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="d714f652-b46e-4843-89a9-0503e169cc42" containerName="extract-content" Jan 27 18:58:52 crc kubenswrapper[4853]: E0127 18:58:52.823184 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a9358d10-5bdb-4f99-96d1-907990452ad6" containerName="mariadb-database-create" Jan 27 18:58:52 crc kubenswrapper[4853]: I0127 18:58:52.823189 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="a9358d10-5bdb-4f99-96d1-907990452ad6" containerName="mariadb-database-create" Jan 27 18:58:52 crc kubenswrapper[4853]: E0127 18:58:52.823196 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fb347c8c-cafd-4b44-9862-d69103d33fb7" containerName="mariadb-account-create-update" Jan 27 18:58:52 crc kubenswrapper[4853]: I0127 18:58:52.823201 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="fb347c8c-cafd-4b44-9862-d69103d33fb7" containerName="mariadb-account-create-update" Jan 27 18:58:52 crc kubenswrapper[4853]: E0127 18:58:52.823209 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c3c82666-fbb7-47cd-9aa8-51fc2f3196cb" containerName="mariadb-database-create" Jan 27 18:58:52 crc kubenswrapper[4853]: I0127 18:58:52.823216 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="c3c82666-fbb7-47cd-9aa8-51fc2f3196cb" containerName="mariadb-database-create" Jan 27 18:58:52 crc kubenswrapper[4853]: E0127 18:58:52.823225 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d714f652-b46e-4843-89a9-0503e169cc42" containerName="registry-server" Jan 27 18:58:52 crc kubenswrapper[4853]: I0127 18:58:52.823231 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="d714f652-b46e-4843-89a9-0503e169cc42" containerName="registry-server" Jan 27 18:58:52 crc kubenswrapper[4853]: I0127 18:58:52.823381 
4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="5318f74c-0368-48c1-be29-dbb63a36ba18" containerName="dnsmasq-dns" Jan 27 18:58:52 crc kubenswrapper[4853]: I0127 18:58:52.823393 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="7c317124-dbd6-4397-a5a0-3cb4d48cfa0d" containerName="mariadb-account-create-update" Jan 27 18:58:52 crc kubenswrapper[4853]: I0127 18:58:52.823402 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="a9358d10-5bdb-4f99-96d1-907990452ad6" containerName="mariadb-database-create" Jan 27 18:58:52 crc kubenswrapper[4853]: I0127 18:58:52.823408 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="c3c82666-fbb7-47cd-9aa8-51fc2f3196cb" containerName="mariadb-database-create" Jan 27 18:58:52 crc kubenswrapper[4853]: I0127 18:58:52.823419 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="fb347c8c-cafd-4b44-9862-d69103d33fb7" containerName="mariadb-account-create-update" Jan 27 18:58:52 crc kubenswrapper[4853]: I0127 18:58:52.823428 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="8e4aeda9-50a7-4f90-b69b-1f02a34e5f89" containerName="dnsmasq-dns" Jan 27 18:58:52 crc kubenswrapper[4853]: I0127 18:58:52.823438 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="829117d5-2c78-4874-bea6-5d66f13b1f39" containerName="mariadb-account-create-update" Jan 27 18:58:52 crc kubenswrapper[4853]: I0127 18:58:52.823447 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="d714f652-b46e-4843-89a9-0503e169cc42" containerName="registry-server" Jan 27 18:58:52 crc kubenswrapper[4853]: I0127 18:58:52.823941 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-bpr58" Jan 27 18:58:52 crc kubenswrapper[4853]: I0127 18:58:52.827807 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-config-data" Jan 27 18:58:52 crc kubenswrapper[4853]: I0127 18:58:52.828367 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-84jgd" Jan 27 18:58:52 crc kubenswrapper[4853]: I0127 18:58:52.848965 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-bpr58"] Jan 27 18:58:52 crc kubenswrapper[4853]: I0127 18:58:52.950974 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wj9lm\" (UniqueName: \"kubernetes.io/projected/1715d18d-b411-407d-9b52-d7b0bbd850f4-kube-api-access-wj9lm\") pod \"glance-db-sync-bpr58\" (UID: \"1715d18d-b411-407d-9b52-d7b0bbd850f4\") " pod="openstack/glance-db-sync-bpr58" Jan 27 18:58:52 crc kubenswrapper[4853]: I0127 18:58:52.951519 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/1715d18d-b411-407d-9b52-d7b0bbd850f4-db-sync-config-data\") pod \"glance-db-sync-bpr58\" (UID: \"1715d18d-b411-407d-9b52-d7b0bbd850f4\") " pod="openstack/glance-db-sync-bpr58" Jan 27 18:58:52 crc kubenswrapper[4853]: I0127 18:58:52.951738 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1715d18d-b411-407d-9b52-d7b0bbd850f4-config-data\") pod \"glance-db-sync-bpr58\" (UID: \"1715d18d-b411-407d-9b52-d7b0bbd850f4\") " pod="openstack/glance-db-sync-bpr58" Jan 27 18:58:52 crc kubenswrapper[4853]: I0127 18:58:52.951791 4853 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1715d18d-b411-407d-9b52-d7b0bbd850f4-combined-ca-bundle\") pod \"glance-db-sync-bpr58\" (UID: \"1715d18d-b411-407d-9b52-d7b0bbd850f4\") " pod="openstack/glance-db-sync-bpr58" Jan 27 18:58:53 crc kubenswrapper[4853]: I0127 18:58:53.053095 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1715d18d-b411-407d-9b52-d7b0bbd850f4-config-data\") pod \"glance-db-sync-bpr58\" (UID: \"1715d18d-b411-407d-9b52-d7b0bbd850f4\") " pod="openstack/glance-db-sync-bpr58" Jan 27 18:58:53 crc kubenswrapper[4853]: I0127 18:58:53.053166 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1715d18d-b411-407d-9b52-d7b0bbd850f4-combined-ca-bundle\") pod \"glance-db-sync-bpr58\" (UID: \"1715d18d-b411-407d-9b52-d7b0bbd850f4\") " pod="openstack/glance-db-sync-bpr58" Jan 27 18:58:53 crc kubenswrapper[4853]: I0127 18:58:53.053217 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wj9lm\" (UniqueName: \"kubernetes.io/projected/1715d18d-b411-407d-9b52-d7b0bbd850f4-kube-api-access-wj9lm\") pod \"glance-db-sync-bpr58\" (UID: \"1715d18d-b411-407d-9b52-d7b0bbd850f4\") " pod="openstack/glance-db-sync-bpr58" Jan 27 18:58:53 crc kubenswrapper[4853]: I0127 18:58:53.053259 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/1715d18d-b411-407d-9b52-d7b0bbd850f4-db-sync-config-data\") pod \"glance-db-sync-bpr58\" (UID: \"1715d18d-b411-407d-9b52-d7b0bbd850f4\") " pod="openstack/glance-db-sync-bpr58" Jan 27 18:58:53 crc kubenswrapper[4853]: I0127 18:58:53.059358 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/1715d18d-b411-407d-9b52-d7b0bbd850f4-db-sync-config-data\") pod \"glance-db-sync-bpr58\" (UID: \"1715d18d-b411-407d-9b52-d7b0bbd850f4\") " pod="openstack/glance-db-sync-bpr58" Jan 27 18:58:53 crc kubenswrapper[4853]: I0127 18:58:53.059396 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1715d18d-b411-407d-9b52-d7b0bbd850f4-combined-ca-bundle\") pod \"glance-db-sync-bpr58\" (UID: \"1715d18d-b411-407d-9b52-d7b0bbd850f4\") " pod="openstack/glance-db-sync-bpr58" Jan 27 18:58:53 crc kubenswrapper[4853]: I0127 18:58:53.061140 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1715d18d-b411-407d-9b52-d7b0bbd850f4-config-data\") pod \"glance-db-sync-bpr58\" (UID: \"1715d18d-b411-407d-9b52-d7b0bbd850f4\") " pod="openstack/glance-db-sync-bpr58" Jan 27 18:58:53 crc kubenswrapper[4853]: I0127 18:58:53.084903 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wj9lm\" (UniqueName: \"kubernetes.io/projected/1715d18d-b411-407d-9b52-d7b0bbd850f4-kube-api-access-wj9lm\") pod \"glance-db-sync-bpr58\" (UID: \"1715d18d-b411-407d-9b52-d7b0bbd850f4\") " pod="openstack/glance-db-sync-bpr58" Jan 27 18:58:53 crc kubenswrapper[4853]: I0127 18:58:53.151157 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-bpr58" Jan 27 18:58:53 crc kubenswrapper[4853]: I0127 18:58:53.755639 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-bpr58"] Jan 27 18:58:53 crc kubenswrapper[4853]: W0127 18:58:53.766075 4853 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1715d18d_b411_407d_9b52_d7b0bbd850f4.slice/crio-c1b25e6aab80600ae80247d3d9ce7aa9357ad0d50128e5dc0a1740b47689a7f5 WatchSource:0}: Error finding container c1b25e6aab80600ae80247d3d9ce7aa9357ad0d50128e5dc0a1740b47689a7f5: Status 404 returned error can't find the container with id c1b25e6aab80600ae80247d3d9ce7aa9357ad0d50128e5dc0a1740b47689a7f5 Jan 27 18:58:54 crc kubenswrapper[4853]: I0127 18:58:54.371985 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-bpr58" event={"ID":"1715d18d-b411-407d-9b52-d7b0bbd850f4","Type":"ContainerStarted","Data":"c1b25e6aab80600ae80247d3d9ce7aa9357ad0d50128e5dc0a1740b47689a7f5"} Jan 27 18:58:55 crc kubenswrapper[4853]: I0127 18:58:55.555250 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-gs4p8"] Jan 27 18:58:55 crc kubenswrapper[4853]: I0127 18:58:55.562982 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-gs4p8"] Jan 27 18:58:55 crc kubenswrapper[4853]: I0127 18:58:55.744323 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6c89d5d749-hp9jm" Jan 27 18:58:56 crc kubenswrapper[4853]: I0127 18:58:56.093757 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-698758b865-6t7wq" Jan 27 18:58:56 crc kubenswrapper[4853]: I0127 18:58:56.128210 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7c317124-dbd6-4397-a5a0-3cb4d48cfa0d" path="/var/lib/kubelet/pods/7c317124-dbd6-4397-a5a0-3cb4d48cfa0d/volumes" Jan 27 18:58:56 crc kubenswrapper[4853]: I0127 18:58:56.161863 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6c89d5d749-hp9jm"] Jan 27 18:58:56 crc kubenswrapper[4853]: I0127 18:58:56.396879 4853 generic.go:334] "Generic (PLEG): container finished" podID="119564cc-719b-4691-91d5-672513ed9acf" containerID="f90369aa9465f3dc72afec91131ea82e980e63fdb1c9b54e4def008bf01562fa" exitCode=0 Jan 27 18:58:56 crc kubenswrapper[4853]: I0127 18:58:56.396941 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-pfpph" event={"ID":"119564cc-719b-4691-91d5-672513ed9acf","Type":"ContainerDied","Data":"f90369aa9465f3dc72afec91131ea82e980e63fdb1c9b54e4def008bf01562fa"} Jan 27 18:58:56 crc kubenswrapper[4853]: I0127 18:58:56.397222 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6c89d5d749-hp9jm" podUID="e8e56ba6-8066-4618-9792-23f022be8786" containerName="dnsmasq-dns" containerID="cri-o://1e01c33121d72a7fb09564ac6c87045979f8a5cb09944c0a2d715dec175947a6" gracePeriod=10 Jan 27 18:58:56 crc kubenswrapper[4853]: I0127 18:58:56.536364 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/b1859766-1c8c-471c-bae5-4ae46086e8a5-etc-swift\") pod \"swift-storage-0\" (UID: \"b1859766-1c8c-471c-bae5-4ae46086e8a5\") " pod="openstack/swift-storage-0" Jan 27 18:58:56 crc kubenswrapper[4853]: I0127 18:58:56.544718 4853 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/b1859766-1c8c-471c-bae5-4ae46086e8a5-etc-swift\") pod \"swift-storage-0\" (UID: \"b1859766-1c8c-471c-bae5-4ae46086e8a5\") " pod="openstack/swift-storage-0" Jan 27 18:58:56 crc kubenswrapper[4853]: I0127 18:58:56.600632 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0" Jan 27 18:58:56 crc kubenswrapper[4853]: I0127 18:58:56.839191 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6c89d5d749-hp9jm" Jan 27 18:58:56 crc kubenswrapper[4853]: I0127 18:58:56.942523 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e8e56ba6-8066-4618-9792-23f022be8786-dns-svc\") pod \"e8e56ba6-8066-4618-9792-23f022be8786\" (UID: \"e8e56ba6-8066-4618-9792-23f022be8786\") " Jan 27 18:58:56 crc kubenswrapper[4853]: I0127 18:58:56.942664 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e8e56ba6-8066-4618-9792-23f022be8786-config\") pod \"e8e56ba6-8066-4618-9792-23f022be8786\" (UID: \"e8e56ba6-8066-4618-9792-23f022be8786\") " Jan 27 18:58:56 crc kubenswrapper[4853]: I0127 18:58:56.942819 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e8e56ba6-8066-4618-9792-23f022be8786-ovsdbserver-sb\") pod \"e8e56ba6-8066-4618-9792-23f022be8786\" (UID: \"e8e56ba6-8066-4618-9792-23f022be8786\") " Jan 27 18:58:56 crc kubenswrapper[4853]: I0127 18:58:56.942855 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7vqq9\" (UniqueName: \"kubernetes.io/projected/e8e56ba6-8066-4618-9792-23f022be8786-kube-api-access-7vqq9\") pod \"e8e56ba6-8066-4618-9792-23f022be8786\" (UID: \"e8e56ba6-8066-4618-9792-23f022be8786\") " Jan 27 18:58:56 crc kubenswrapper[4853]: I0127 18:58:56.948191 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e8e56ba6-8066-4618-9792-23f022be8786-kube-api-access-7vqq9" (OuterVolumeSpecName: "kube-api-access-7vqq9") pod "e8e56ba6-8066-4618-9792-23f022be8786" (UID: "e8e56ba6-8066-4618-9792-23f022be8786"). InnerVolumeSpecName "kube-api-access-7vqq9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:58:56 crc kubenswrapper[4853]: I0127 18:58:56.981920 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e8e56ba6-8066-4618-9792-23f022be8786-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "e8e56ba6-8066-4618-9792-23f022be8786" (UID: "e8e56ba6-8066-4618-9792-23f022be8786"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:58:56 crc kubenswrapper[4853]: I0127 18:58:56.984233 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e8e56ba6-8066-4618-9792-23f022be8786-config" (OuterVolumeSpecName: "config") pod "e8e56ba6-8066-4618-9792-23f022be8786" (UID: "e8e56ba6-8066-4618-9792-23f022be8786"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:58:56 crc kubenswrapper[4853]: I0127 18:58:56.985182 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e8e56ba6-8066-4618-9792-23f022be8786-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "e8e56ba6-8066-4618-9792-23f022be8786" (UID: "e8e56ba6-8066-4618-9792-23f022be8786"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:58:57 crc kubenswrapper[4853]: I0127 18:58:57.045054 4853 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e8e56ba6-8066-4618-9792-23f022be8786-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 27 18:58:57 crc kubenswrapper[4853]: I0127 18:58:57.045095 4853 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e8e56ba6-8066-4618-9792-23f022be8786-config\") on node \"crc\" DevicePath \"\"" Jan 27 18:58:57 crc kubenswrapper[4853]: I0127 18:58:57.045108 4853 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e8e56ba6-8066-4618-9792-23f022be8786-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 27 18:58:57 crc kubenswrapper[4853]: I0127 18:58:57.045172 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7vqq9\" (UniqueName: \"kubernetes.io/projected/e8e56ba6-8066-4618-9792-23f022be8786-kube-api-access-7vqq9\") on node \"crc\" DevicePath \"\"" Jan 27 18:58:57 crc kubenswrapper[4853]: I0127 18:58:57.407203 4853 generic.go:334] "Generic (PLEG): container finished" podID="e8e56ba6-8066-4618-9792-23f022be8786" containerID="1e01c33121d72a7fb09564ac6c87045979f8a5cb09944c0a2d715dec175947a6" exitCode=0 Jan 27 18:58:57 crc kubenswrapper[4853]: I0127 18:58:57.407445 4853 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6c89d5d749-hp9jm" Jan 27 18:58:57 crc kubenswrapper[4853]: I0127 18:58:57.408995 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6c89d5d749-hp9jm" event={"ID":"e8e56ba6-8066-4618-9792-23f022be8786","Type":"ContainerDied","Data":"1e01c33121d72a7fb09564ac6c87045979f8a5cb09944c0a2d715dec175947a6"} Jan 27 18:58:57 crc kubenswrapper[4853]: I0127 18:58:57.409063 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6c89d5d749-hp9jm" event={"ID":"e8e56ba6-8066-4618-9792-23f022be8786","Type":"ContainerDied","Data":"7e92dce77616febc64c3ffc95b0535ddfc72990b067c224f7539f9332436c51a"} Jan 27 18:58:57 crc kubenswrapper[4853]: I0127 18:58:57.409086 4853 scope.go:117] "RemoveContainer" containerID="1e01c33121d72a7fb09564ac6c87045979f8a5cb09944c0a2d715dec175947a6" Jan 27 18:58:57 crc kubenswrapper[4853]: I0127 18:58:57.443385 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6c89d5d749-hp9jm"] Jan 27 18:58:57 crc kubenswrapper[4853]: I0127 18:58:57.446333 4853 scope.go:117] "RemoveContainer" containerID="76121260ad37f6a5e585a7e97d7a8b5f125d8f552a24ae6e002041c339d181fd" Jan 27 18:58:57 crc kubenswrapper[4853]: I0127 18:58:57.461667 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6c89d5d749-hp9jm"] Jan 27 18:58:57 crc kubenswrapper[4853]: I0127 18:58:57.497814 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Jan 27 18:58:57 crc kubenswrapper[4853]: I0127 18:58:57.500341 4853 scope.go:117] "RemoveContainer" containerID="1e01c33121d72a7fb09564ac6c87045979f8a5cb09944c0a2d715dec175947a6" Jan 27 18:58:57 crc kubenswrapper[4853]: E0127 18:58:57.500938 4853 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1e01c33121d72a7fb09564ac6c87045979f8a5cb09944c0a2d715dec175947a6\": container with ID starting with 1e01c33121d72a7fb09564ac6c87045979f8a5cb09944c0a2d715dec175947a6 not found: ID does not exist" containerID="1e01c33121d72a7fb09564ac6c87045979f8a5cb09944c0a2d715dec175947a6" Jan 27 18:58:57 crc kubenswrapper[4853]: I0127 18:58:57.500972 4853 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1e01c33121d72a7fb09564ac6c87045979f8a5cb09944c0a2d715dec175947a6"} err="failed to get container status \"1e01c33121d72a7fb09564ac6c87045979f8a5cb09944c0a2d715dec175947a6\": rpc error: code = NotFound desc = could not find container \"1e01c33121d72a7fb09564ac6c87045979f8a5cb09944c0a2d715dec175947a6\": container with ID starting with 1e01c33121d72a7fb09564ac6c87045979f8a5cb09944c0a2d715dec175947a6 not found: ID does not exist" Jan 27 18:58:57 crc kubenswrapper[4853]: I0127 18:58:57.501004 4853 scope.go:117] "RemoveContainer" containerID="76121260ad37f6a5e585a7e97d7a8b5f125d8f552a24ae6e002041c339d181fd" Jan 27 18:58:57 crc kubenswrapper[4853]: E0127 18:58:57.501979 4853 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"76121260ad37f6a5e585a7e97d7a8b5f125d8f552a24ae6e002041c339d181fd\": container with ID starting with 76121260ad37f6a5e585a7e97d7a8b5f125d8f552a24ae6e002041c339d181fd not found: ID does not exist" containerID="76121260ad37f6a5e585a7e97d7a8b5f125d8f552a24ae6e002041c339d181fd" Jan 27 18:58:57 crc kubenswrapper[4853]: I0127 18:58:57.502018 4853 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"76121260ad37f6a5e585a7e97d7a8b5f125d8f552a24ae6e002041c339d181fd"} err="failed to get container status \"76121260ad37f6a5e585a7e97d7a8b5f125d8f552a24ae6e002041c339d181fd\": rpc error: code = NotFound desc = could not find container \"76121260ad37f6a5e585a7e97d7a8b5f125d8f552a24ae6e002041c339d181fd\": container with ID starting with 76121260ad37f6a5e585a7e97d7a8b5f125d8f552a24ae6e002041c339d181fd not found: ID does not exist" Jan 27 18:58:57 crc kubenswrapper[4853]: I0127 18:58:57.833464 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-pfpph" Jan 27 18:58:57 crc kubenswrapper[4853]: I0127 18:58:57.964619 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g4x47\" (UniqueName: \"kubernetes.io/projected/119564cc-719b-4691-91d5-672513ed9acf-kube-api-access-g4x47\") pod \"119564cc-719b-4691-91d5-672513ed9acf\" (UID: \"119564cc-719b-4691-91d5-672513ed9acf\") " Jan 27 18:58:57 crc kubenswrapper[4853]: I0127 18:58:57.964684 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/119564cc-719b-4691-91d5-672513ed9acf-combined-ca-bundle\") pod \"119564cc-719b-4691-91d5-672513ed9acf\" (UID: \"119564cc-719b-4691-91d5-672513ed9acf\") " Jan 27 18:58:57 crc kubenswrapper[4853]: I0127 18:58:57.964747 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/119564cc-719b-4691-91d5-672513ed9acf-ring-data-devices\") pod \"119564cc-719b-4691-91d5-672513ed9acf\" (UID: \"119564cc-719b-4691-91d5-672513ed9acf\") " Jan 27 18:58:57 crc kubenswrapper[4853]: I0127 18:58:57.964767 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/119564cc-719b-4691-91d5-672513ed9acf-scripts\") pod \"119564cc-719b-4691-91d5-672513ed9acf\" (UID: \"119564cc-719b-4691-91d5-672513ed9acf\") " Jan 27 18:58:57 crc kubenswrapper[4853]: I0127 18:58:57.964781 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/119564cc-719b-4691-91d5-672513ed9acf-dispersionconf\") pod \"119564cc-719b-4691-91d5-672513ed9acf\" (UID: \"119564cc-719b-4691-91d5-672513ed9acf\") " Jan 27 18:58:57 crc kubenswrapper[4853]: I0127 18:58:57.964812 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/119564cc-719b-4691-91d5-672513ed9acf-swiftconf\") pod \"119564cc-719b-4691-91d5-672513ed9acf\" (UID: \"119564cc-719b-4691-91d5-672513ed9acf\") " Jan 27 18:58:57 crc kubenswrapper[4853]: I0127 18:58:57.965259 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/119564cc-719b-4691-91d5-672513ed9acf-etc-swift\") pod \"119564cc-719b-4691-91d5-672513ed9acf\" (UID: \"119564cc-719b-4691-91d5-672513ed9acf\") " Jan 27 18:58:57 crc kubenswrapper[4853]: I0127 18:58:57.965607 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/119564cc-719b-4691-91d5-672513ed9acf-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "119564cc-719b-4691-91d5-672513ed9acf" (UID: "119564cc-719b-4691-91d5-672513ed9acf"). InnerVolumeSpecName "ring-data-devices". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:58:57 crc kubenswrapper[4853]: I0127 18:58:57.965939 4853 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/119564cc-719b-4691-91d5-672513ed9acf-ring-data-devices\") on node \"crc\" DevicePath \"\"" Jan 27 18:58:57 crc kubenswrapper[4853]: I0127 18:58:57.966331 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/119564cc-719b-4691-91d5-672513ed9acf-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "119564cc-719b-4691-91d5-672513ed9acf" (UID: "119564cc-719b-4691-91d5-672513ed9acf"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 18:58:57 crc kubenswrapper[4853]: I0127 18:58:57.982353 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/119564cc-719b-4691-91d5-672513ed9acf-kube-api-access-g4x47" (OuterVolumeSpecName: "kube-api-access-g4x47") pod "119564cc-719b-4691-91d5-672513ed9acf" (UID: "119564cc-719b-4691-91d5-672513ed9acf"). InnerVolumeSpecName "kube-api-access-g4x47". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:58:57 crc kubenswrapper[4853]: I0127 18:58:57.988420 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/119564cc-719b-4691-91d5-672513ed9acf-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "119564cc-719b-4691-91d5-672513ed9acf" (UID: "119564cc-719b-4691-91d5-672513ed9acf"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:58:57 crc kubenswrapper[4853]: I0127 18:58:57.989625 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/119564cc-719b-4691-91d5-672513ed9acf-scripts" (OuterVolumeSpecName: "scripts") pod "119564cc-719b-4691-91d5-672513ed9acf" (UID: "119564cc-719b-4691-91d5-672513ed9acf"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:58:57 crc kubenswrapper[4853]: I0127 18:58:57.992811 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/119564cc-719b-4691-91d5-672513ed9acf-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "119564cc-719b-4691-91d5-672513ed9acf" (UID: "119564cc-719b-4691-91d5-672513ed9acf"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:58:57 crc kubenswrapper[4853]: I0127 18:58:57.999637 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/119564cc-719b-4691-91d5-672513ed9acf-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "119564cc-719b-4691-91d5-672513ed9acf" (UID: "119564cc-719b-4691-91d5-672513ed9acf"). InnerVolumeSpecName "swiftconf". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:58:58 crc kubenswrapper[4853]: I0127 18:58:58.066965 4853 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/119564cc-719b-4691-91d5-672513ed9acf-etc-swift\") on node \"crc\" DevicePath \"\"" Jan 27 18:58:58 crc kubenswrapper[4853]: I0127 18:58:58.066994 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g4x47\" (UniqueName: \"kubernetes.io/projected/119564cc-719b-4691-91d5-672513ed9acf-kube-api-access-g4x47\") on node \"crc\" DevicePath \"\"" Jan 27 18:58:58 crc kubenswrapper[4853]: I0127 18:58:58.067004 4853 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/119564cc-719b-4691-91d5-672513ed9acf-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 18:58:58 crc kubenswrapper[4853]: I0127 18:58:58.067012 4853 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/119564cc-719b-4691-91d5-672513ed9acf-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 18:58:58 crc kubenswrapper[4853]: I0127 18:58:58.067020 4853 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/119564cc-719b-4691-91d5-672513ed9acf-dispersionconf\") on node \"crc\" DevicePath \"\"" Jan 27 18:58:58 crc kubenswrapper[4853]: I0127 18:58:58.067027 4853 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/119564cc-719b-4691-91d5-672513ed9acf-swiftconf\") on node \"crc\" DevicePath \"\"" Jan 27 18:58:58 crc kubenswrapper[4853]: I0127 18:58:58.132927 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e8e56ba6-8066-4618-9792-23f022be8786" path="/var/lib/kubelet/pods/e8e56ba6-8066-4618-9792-23f022be8786/volumes" Jan 27 18:58:58 crc kubenswrapper[4853]: I0127 18:58:58.417381 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"b1859766-1c8c-471c-bae5-4ae46086e8a5","Type":"ContainerStarted","Data":"7f552522128f607f6c0aa30b581c0bfaa0df3c2cd08d2ba29f7d6df5a939a912"} Jan 27 18:58:58 crc kubenswrapper[4853]: I0127 18:58:58.421330 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-pfpph" event={"ID":"119564cc-719b-4691-91d5-672513ed9acf","Type":"ContainerDied","Data":"3c268dcbf64f8d579dfcdd8b0f80e4545ecc8fa68833d64aa21709a2b7b522c3"} Jan 27 18:58:58 crc kubenswrapper[4853]: I0127 18:58:58.421383 4853 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3c268dcbf64f8d579dfcdd8b0f80e4545ecc8fa68833d64aa21709a2b7b522c3" Jan 27 18:58:58 crc kubenswrapper[4853]: I0127 18:58:58.421479 4853 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-pfpph" Jan 27 18:58:59 crc kubenswrapper[4853]: I0127 18:58:59.435347 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"b1859766-1c8c-471c-bae5-4ae46086e8a5","Type":"ContainerStarted","Data":"2a1014c439435322ead61691941ed1def2c890ea5ddff0c5350c169930a7b272"} Jan 27 18:58:59 crc kubenswrapper[4853]: I0127 18:58:59.435697 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"b1859766-1c8c-471c-bae5-4ae46086e8a5","Type":"ContainerStarted","Data":"3002a8002c19e6085539e11033aaa8d151782d1b374c7f38436360bc8c3a3c7b"} Jan 27 18:58:59 crc kubenswrapper[4853]: I0127 18:58:59.435708 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"b1859766-1c8c-471c-bae5-4ae46086e8a5","Type":"ContainerStarted","Data":"884cf8c8801b3babcbad83130698dd9a98b25eb778955dde001cdac07e9cb6b4"} Jan 27 18:59:00 crc kubenswrapper[4853]: I0127 18:59:00.563058 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-djxcp"] Jan 27 18:59:00 crc kubenswrapper[4853]: E0127 18:59:00.563945 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e8e56ba6-8066-4618-9792-23f022be8786" containerName="init" Jan 27 18:59:00 crc kubenswrapper[4853]: I0127 18:59:00.563961 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="e8e56ba6-8066-4618-9792-23f022be8786" containerName="init" Jan 27 18:59:00 crc kubenswrapper[4853]: E0127 18:59:00.564003 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e8e56ba6-8066-4618-9792-23f022be8786" containerName="dnsmasq-dns" Jan 27 18:59:00 crc kubenswrapper[4853]: I0127 18:59:00.564009 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="e8e56ba6-8066-4618-9792-23f022be8786" containerName="dnsmasq-dns" Jan 27 18:59:00 crc kubenswrapper[4853]: E0127 18:59:00.564033 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="119564cc-719b-4691-91d5-672513ed9acf" containerName="swift-ring-rebalance" Jan 27 18:59:00 crc kubenswrapper[4853]: I0127 18:59:00.564061 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="119564cc-719b-4691-91d5-672513ed9acf" containerName="swift-ring-rebalance" Jan 27 18:59:00 crc kubenswrapper[4853]: I0127 18:59:00.564275 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="e8e56ba6-8066-4618-9792-23f022be8786" containerName="dnsmasq-dns" Jan 27 18:59:00 crc kubenswrapper[4853]: I0127 18:59:00.564319 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="119564cc-719b-4691-91d5-672513ed9acf" containerName="swift-ring-rebalance" Jan 27 18:59:00 crc kubenswrapper[4853]: I0127 18:59:00.565172 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-djxcp" Jan 27 18:59:00 crc kubenswrapper[4853]: I0127 18:59:00.567464 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-mariadb-root-db-secret" Jan 27 18:59:00 crc kubenswrapper[4853]: I0127 18:59:00.575731 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-djxcp"] Jan 27 18:59:00 crc kubenswrapper[4853]: I0127 18:59:00.612218 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bb1fd930-d712-4e54-be4d-2a30c3c7436d-operator-scripts\") pod \"root-account-create-update-djxcp\" (UID: \"bb1fd930-d712-4e54-be4d-2a30c3c7436d\") " pod="openstack/root-account-create-update-djxcp" Jan 27 18:59:00 crc kubenswrapper[4853]: I0127 18:59:00.612285 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dx5x8\" (UniqueName: \"kubernetes.io/projected/bb1fd930-d712-4e54-be4d-2a30c3c7436d-kube-api-access-dx5x8\") pod \"root-account-create-update-djxcp\" (UID: \"bb1fd930-d712-4e54-be4d-2a30c3c7436d\") " pod="openstack/root-account-create-update-djxcp" Jan 27 18:59:00 crc kubenswrapper[4853]: I0127 18:59:00.720988 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bb1fd930-d712-4e54-be4d-2a30c3c7436d-operator-scripts\") pod \"root-account-create-update-djxcp\" (UID: \"bb1fd930-d712-4e54-be4d-2a30c3c7436d\") " pod="openstack/root-account-create-update-djxcp" Jan 27 18:59:00 crc kubenswrapper[4853]: I0127 18:59:00.721330 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dx5x8\" (UniqueName: \"kubernetes.io/projected/bb1fd930-d712-4e54-be4d-2a30c3c7436d-kube-api-access-dx5x8\") pod \"root-account-create-update-djxcp\" (UID: \"bb1fd930-d712-4e54-be4d-2a30c3c7436d\") " pod="openstack/root-account-create-update-djxcp" Jan 27 18:59:00 crc kubenswrapper[4853]: I0127 18:59:00.721850 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bb1fd930-d712-4e54-be4d-2a30c3c7436d-operator-scripts\") pod \"root-account-create-update-djxcp\" (UID: \"bb1fd930-d712-4e54-be4d-2a30c3c7436d\") " pod="openstack/root-account-create-update-djxcp" Jan 27 18:59:00 crc kubenswrapper[4853]: I0127 18:59:00.741300 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dx5x8\" (UniqueName: \"kubernetes.io/projected/bb1fd930-d712-4e54-be4d-2a30c3c7436d-kube-api-access-dx5x8\") pod \"root-account-create-update-djxcp\" (UID: \"bb1fd930-d712-4e54-be4d-2a30c3c7436d\") " pod="openstack/root-account-create-update-djxcp" Jan 27 18:59:00 crc kubenswrapper[4853]: I0127 18:59:00.936697 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-djxcp" Jan 27 18:59:02 crc kubenswrapper[4853]: I0127 18:59:02.620555 4853 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-xkd2q" podUID="4d52eb59-75a5-4074-8bfb-c9dab8b0c97f" containerName="ovn-controller" probeResult="failure" output=< Jan 27 18:59:02 crc kubenswrapper[4853]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Jan 27 18:59:02 crc kubenswrapper[4853]: > Jan 27 18:59:02 crc kubenswrapper[4853]: I0127 18:59:02.682421 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-qgd5v" Jan 27 18:59:02 crc kubenswrapper[4853]: I0127 18:59:02.708361 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-qgd5v" Jan 27 18:59:02 crc kubenswrapper[4853]: I0127 18:59:02.901424 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-xkd2q-config-czv8w"] Jan 27 18:59:02 crc kubenswrapper[4853]: I0127 18:59:02.903362 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-xkd2q-config-czv8w" Jan 27 18:59:02 crc kubenswrapper[4853]: I0127 18:59:02.906089 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Jan 27 18:59:02 crc kubenswrapper[4853]: I0127 18:59:02.912422 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-xkd2q-config-czv8w"] Jan 27 18:59:02 crc kubenswrapper[4853]: I0127 18:59:02.975991 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/2e0937f5-e0c4-4067-b80c-2e4188beb2a9-var-run\") pod \"ovn-controller-xkd2q-config-czv8w\" (UID: \"2e0937f5-e0c4-4067-b80c-2e4188beb2a9\") " pod="openstack/ovn-controller-xkd2q-config-czv8w" Jan 27 18:59:02 crc kubenswrapper[4853]: I0127 18:59:02.976242 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-csf57\" (UniqueName: \"kubernetes.io/projected/2e0937f5-e0c4-4067-b80c-2e4188beb2a9-kube-api-access-csf57\") pod \"ovn-controller-xkd2q-config-czv8w\" (UID: \"2e0937f5-e0c4-4067-b80c-2e4188beb2a9\") " pod="openstack/ovn-controller-xkd2q-config-czv8w" Jan 27 18:59:02 crc kubenswrapper[4853]: I0127 18:59:02.976363 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/2e0937f5-e0c4-4067-b80c-2e4188beb2a9-var-log-ovn\") pod \"ovn-controller-xkd2q-config-czv8w\" (UID: \"2e0937f5-e0c4-4067-b80c-2e4188beb2a9\") " pod="openstack/ovn-controller-xkd2q-config-czv8w" Jan 27 18:59:02 crc kubenswrapper[4853]: I0127 18:59:02.976498 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/2e0937f5-e0c4-4067-b80c-2e4188beb2a9-var-run-ovn\") pod \"ovn-controller-xkd2q-config-czv8w\" (UID: \"2e0937f5-e0c4-4067-b80c-2e4188beb2a9\") " pod="openstack/ovn-controller-xkd2q-config-czv8w" Jan 27 18:59:02 crc kubenswrapper[4853]: I0127 18:59:02.976622 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2e0937f5-e0c4-4067-b80c-2e4188beb2a9-scripts\") pod \"ovn-controller-xkd2q-config-czv8w\" (UID: 
\"2e0937f5-e0c4-4067-b80c-2e4188beb2a9\") " pod="openstack/ovn-controller-xkd2q-config-czv8w" Jan 27 18:59:02 crc kubenswrapper[4853]: I0127 18:59:02.976674 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/2e0937f5-e0c4-4067-b80c-2e4188beb2a9-additional-scripts\") pod \"ovn-controller-xkd2q-config-czv8w\" (UID: \"2e0937f5-e0c4-4067-b80c-2e4188beb2a9\") " pod="openstack/ovn-controller-xkd2q-config-czv8w" Jan 27 18:59:03 crc kubenswrapper[4853]: I0127 18:59:03.077992 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/2e0937f5-e0c4-4067-b80c-2e4188beb2a9-var-log-ovn\") pod \"ovn-controller-xkd2q-config-czv8w\" (UID: \"2e0937f5-e0c4-4067-b80c-2e4188beb2a9\") " pod="openstack/ovn-controller-xkd2q-config-czv8w" Jan 27 18:59:03 crc kubenswrapper[4853]: I0127 18:59:03.078061 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/2e0937f5-e0c4-4067-b80c-2e4188beb2a9-var-run-ovn\") pod \"ovn-controller-xkd2q-config-czv8w\" (UID: \"2e0937f5-e0c4-4067-b80c-2e4188beb2a9\") " pod="openstack/ovn-controller-xkd2q-config-czv8w" Jan 27 18:59:03 crc kubenswrapper[4853]: I0127 18:59:03.078109 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2e0937f5-e0c4-4067-b80c-2e4188beb2a9-scripts\") pod \"ovn-controller-xkd2q-config-czv8w\" (UID: \"2e0937f5-e0c4-4067-b80c-2e4188beb2a9\") " pod="openstack/ovn-controller-xkd2q-config-czv8w" Jan 27 18:59:03 crc kubenswrapper[4853]: I0127 18:59:03.078184 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/2e0937f5-e0c4-4067-b80c-2e4188beb2a9-additional-scripts\") pod \"ovn-controller-xkd2q-config-czv8w\" (UID: \"2e0937f5-e0c4-4067-b80c-2e4188beb2a9\") " pod="openstack/ovn-controller-xkd2q-config-czv8w" Jan 27 18:59:03 crc kubenswrapper[4853]: I0127 18:59:03.078210 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/2e0937f5-e0c4-4067-b80c-2e4188beb2a9-var-run\") pod \"ovn-controller-xkd2q-config-czv8w\" (UID: \"2e0937f5-e0c4-4067-b80c-2e4188beb2a9\") " pod="openstack/ovn-controller-xkd2q-config-czv8w" Jan 27 18:59:03 crc kubenswrapper[4853]: I0127 18:59:03.078250 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-csf57\" (UniqueName: \"kubernetes.io/projected/2e0937f5-e0c4-4067-b80c-2e4188beb2a9-kube-api-access-csf57\") pod \"ovn-controller-xkd2q-config-czv8w\" (UID: \"2e0937f5-e0c4-4067-b80c-2e4188beb2a9\") " pod="openstack/ovn-controller-xkd2q-config-czv8w" Jan 27 18:59:03 crc kubenswrapper[4853]: I0127 18:59:03.078471 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/2e0937f5-e0c4-4067-b80c-2e4188beb2a9-var-log-ovn\") pod \"ovn-controller-xkd2q-config-czv8w\" (UID: \"2e0937f5-e0c4-4067-b80c-2e4188beb2a9\") " pod="openstack/ovn-controller-xkd2q-config-czv8w" Jan 27 18:59:03 crc kubenswrapper[4853]: I0127 18:59:03.078662 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/2e0937f5-e0c4-4067-b80c-2e4188beb2a9-var-run-ovn\") pod 
\"ovn-controller-xkd2q-config-czv8w\" (UID: \"2e0937f5-e0c4-4067-b80c-2e4188beb2a9\") " pod="openstack/ovn-controller-xkd2q-config-czv8w" Jan 27 18:59:03 crc kubenswrapper[4853]: I0127 18:59:03.079754 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/2e0937f5-e0c4-4067-b80c-2e4188beb2a9-additional-scripts\") pod \"ovn-controller-xkd2q-config-czv8w\" (UID: \"2e0937f5-e0c4-4067-b80c-2e4188beb2a9\") " pod="openstack/ovn-controller-xkd2q-config-czv8w" Jan 27 18:59:03 crc kubenswrapper[4853]: I0127 18:59:03.080229 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/2e0937f5-e0c4-4067-b80c-2e4188beb2a9-var-run\") pod \"ovn-controller-xkd2q-config-czv8w\" (UID: \"2e0937f5-e0c4-4067-b80c-2e4188beb2a9\") " pod="openstack/ovn-controller-xkd2q-config-czv8w" Jan 27 18:59:03 crc kubenswrapper[4853]: I0127 18:59:03.080872 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2e0937f5-e0c4-4067-b80c-2e4188beb2a9-scripts\") pod \"ovn-controller-xkd2q-config-czv8w\" (UID: \"2e0937f5-e0c4-4067-b80c-2e4188beb2a9\") " pod="openstack/ovn-controller-xkd2q-config-czv8w" Jan 27 18:59:03 crc kubenswrapper[4853]: I0127 18:59:03.095725 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-csf57\" (UniqueName: \"kubernetes.io/projected/2e0937f5-e0c4-4067-b80c-2e4188beb2a9-kube-api-access-csf57\") pod \"ovn-controller-xkd2q-config-czv8w\" (UID: \"2e0937f5-e0c4-4067-b80c-2e4188beb2a9\") " pod="openstack/ovn-controller-xkd2q-config-czv8w" Jan 27 18:59:03 crc kubenswrapper[4853]: I0127 18:59:03.238800 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-xkd2q-config-czv8w" Jan 27 18:59:03 crc kubenswrapper[4853]: I0127 18:59:03.480926 4853 generic.go:334] "Generic (PLEG): container finished" podID="2f56570a-76ed-4182-b147-6288fa56d729" containerID="86b479d5fa65aa088a46498e1a0e6a8485fbcdd68b9fc36a9d1790fcd627a629" exitCode=0 Jan 27 18:59:03 crc kubenswrapper[4853]: I0127 18:59:03.480979 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"2f56570a-76ed-4182-b147-6288fa56d729","Type":"ContainerDied","Data":"86b479d5fa65aa088a46498e1a0e6a8485fbcdd68b9fc36a9d1790fcd627a629"} Jan 27 18:59:06 crc kubenswrapper[4853]: I0127 18:59:06.887980 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0" Jan 27 18:59:07 crc kubenswrapper[4853]: I0127 18:59:07.633870 4853 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-xkd2q" podUID="4d52eb59-75a5-4074-8bfb-c9dab8b0c97f" containerName="ovn-controller" probeResult="failure" output=< Jan 27 18:59:07 crc kubenswrapper[4853]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Jan 27 18:59:07 crc kubenswrapper[4853]: > Jan 27 18:59:08 crc kubenswrapper[4853]: I0127 18:59:08.523420 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"2f56570a-76ed-4182-b147-6288fa56d729","Type":"ContainerStarted","Data":"2be1acb70ae90dbe469bc2eed2b8a6a575e3dc6e9f3900754b00c5cf61322054"} Jan 27 18:59:08 crc kubenswrapper[4853]: I0127 18:59:08.526447 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"b1859766-1c8c-471c-bae5-4ae46086e8a5","Type":"ContainerStarted","Data":"0eec3d6afc040e0f25627c6d59bc4597c34ba618f3b9cd6904b039aebc0d3168"} Jan 27 18:59:08 crc kubenswrapper[4853]: I0127 18:59:08.559010 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=41.091099153 podStartE2EDuration="1m16.558991263s" podCreationTimestamp="2026-01-27 18:57:52 +0000 UTC" firstStartedPulling="2026-01-27 18:57:54.526480206 +0000 UTC m=+916.989023089" lastFinishedPulling="2026-01-27 18:58:29.994372316 +0000 UTC m=+952.456915199" observedRunningTime="2026-01-27 18:59:08.554995019 +0000 UTC m=+991.017537902" watchObservedRunningTime="2026-01-27 18:59:08.558991263 +0000 UTC m=+991.021534146" Jan 27 18:59:08 crc kubenswrapper[4853]: I0127 18:59:08.588233 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-djxcp"] Jan 27 18:59:08 crc kubenswrapper[4853]: W0127 18:59:08.593294 4853 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbb1fd930_d712_4e54_be4d_2a30c3c7436d.slice/crio-cbc5f70cf343fadd9453724c85e683c6e711a08e494736e774897861f649a718 WatchSource:0}: Error finding container cbc5f70cf343fadd9453724c85e683c6e711a08e494736e774897861f649a718: Status 404 returned error can't find the container with id cbc5f70cf343fadd9453724c85e683c6e711a08e494736e774897861f649a718 Jan 27 18:59:08 crc kubenswrapper[4853]: I0127 18:59:08.648977 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-xkd2q-config-czv8w"] Jan 27 18:59:08 crc kubenswrapper[4853]: W0127 18:59:08.663585 4853 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2e0937f5_e0c4_4067_b80c_2e4188beb2a9.slice/crio-1142d40a7c2933706496c80ed60533f3eaf9cb8391d0c2c0be7f56743a2f8f62 WatchSource:0}: Error finding container 1142d40a7c2933706496c80ed60533f3eaf9cb8391d0c2c0be7f56743a2f8f62: Status 404 returned error can't find the container with id 1142d40a7c2933706496c80ed60533f3eaf9cb8391d0c2c0be7f56743a2f8f62 Jan 27 18:59:09 crc kubenswrapper[4853]: I0127 18:59:09.542749 4853 generic.go:334] "Generic (PLEG): container finished" podID="2e0937f5-e0c4-4067-b80c-2e4188beb2a9" containerID="481fa2bce2050b36791d91e8cd48702c369a31e0ccd7feed47ff91320ce00792" exitCode=0 Jan 27 18:59:09 crc kubenswrapper[4853]: I0127 18:59:09.542945 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-xkd2q-config-czv8w" event={"ID":"2e0937f5-e0c4-4067-b80c-2e4188beb2a9","Type":"ContainerDied","Data":"481fa2bce2050b36791d91e8cd48702c369a31e0ccd7feed47ff91320ce00792"} Jan 27 18:59:09 crc kubenswrapper[4853]: I0127 18:59:09.543172 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-xkd2q-config-czv8w" event={"ID":"2e0937f5-e0c4-4067-b80c-2e4188beb2a9","Type":"ContainerStarted","Data":"1142d40a7c2933706496c80ed60533f3eaf9cb8391d0c2c0be7f56743a2f8f62"} Jan 27 18:59:09 crc kubenswrapper[4853]: I0127 18:59:09.545860 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-bpr58" event={"ID":"1715d18d-b411-407d-9b52-d7b0bbd850f4","Type":"ContainerStarted","Data":"d0e7ae2912174648fc97eb8d5d5d8b251b773709670a0698a55f108ee079f000"} Jan 27 18:59:09 crc kubenswrapper[4853]: I0127 18:59:09.558804 4853 generic.go:334] "Generic (PLEG): container finished" podID="bb1fd930-d712-4e54-be4d-2a30c3c7436d" containerID="053d14de0c4cdbebf2f1ff2859a14c8ff7183e2a07ace1835f0e55a6aa02ec1a" exitCode=0 Jan 27 18:59:09 crc kubenswrapper[4853]: I0127 18:59:09.558863 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-djxcp" event={"ID":"bb1fd930-d712-4e54-be4d-2a30c3c7436d","Type":"ContainerDied","Data":"053d14de0c4cdbebf2f1ff2859a14c8ff7183e2a07ace1835f0e55a6aa02ec1a"} Jan 27 18:59:09 crc kubenswrapper[4853]: I0127 18:59:09.558896 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-djxcp" event={"ID":"bb1fd930-d712-4e54-be4d-2a30c3c7436d","Type":"ContainerStarted","Data":"cbc5f70cf343fadd9453724c85e683c6e711a08e494736e774897861f649a718"} Jan 27 18:59:09 crc kubenswrapper[4853]: I0127 18:59:09.619070 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-sync-bpr58" podStartSLOduration=3.291404573 podStartE2EDuration="17.619041704s" podCreationTimestamp="2026-01-27 18:58:52 +0000 UTC" firstStartedPulling="2026-01-27 18:58:53.778038211 +0000 UTC m=+976.240581084" lastFinishedPulling="2026-01-27 18:59:08.105675332 +0000 UTC m=+990.568218215" observedRunningTime="2026-01-27 18:59:09.603890284 +0000 UTC m=+992.066433177" watchObservedRunningTime="2026-01-27 18:59:09.619041704 +0000 UTC m=+992.081584587" Jan 27 18:59:10 crc kubenswrapper[4853]: I0127 18:59:10.610746 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"b1859766-1c8c-471c-bae5-4ae46086e8a5","Type":"ContainerStarted","Data":"815daad8fd6d376a4eefa4d41067359c528330de8ba2df8c3699cc5177b1c79c"} Jan 27 18:59:10 crc kubenswrapper[4853]: I0127 18:59:10.611432 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/swift-storage-0" event={"ID":"b1859766-1c8c-471c-bae5-4ae46086e8a5","Type":"ContainerStarted","Data":"1d76f3c9e6c66b735b465a255ff41e46d84578da871ed6e453ab6095111f8b73"} Jan 27 18:59:10 crc kubenswrapper[4853]: I0127 18:59:10.611443 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"b1859766-1c8c-471c-bae5-4ae46086e8a5","Type":"ContainerStarted","Data":"e0aac6b4537435dec8ac4183e7263fccf5c5c284bab1e8d1a0bcbf2fb422fa47"} Jan 27 18:59:11 crc kubenswrapper[4853]: I0127 18:59:11.079581 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-djxcp" Jan 27 18:59:11 crc kubenswrapper[4853]: I0127 18:59:11.160985 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dx5x8\" (UniqueName: \"kubernetes.io/projected/bb1fd930-d712-4e54-be4d-2a30c3c7436d-kube-api-access-dx5x8\") pod \"bb1fd930-d712-4e54-be4d-2a30c3c7436d\" (UID: \"bb1fd930-d712-4e54-be4d-2a30c3c7436d\") " Jan 27 18:59:11 crc kubenswrapper[4853]: I0127 18:59:11.161172 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bb1fd930-d712-4e54-be4d-2a30c3c7436d-operator-scripts\") pod \"bb1fd930-d712-4e54-be4d-2a30c3c7436d\" (UID: \"bb1fd930-d712-4e54-be4d-2a30c3c7436d\") " Jan 27 18:59:11 crc kubenswrapper[4853]: I0127 18:59:11.161890 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bb1fd930-d712-4e54-be4d-2a30c3c7436d-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "bb1fd930-d712-4e54-be4d-2a30c3c7436d" (UID: "bb1fd930-d712-4e54-be4d-2a30c3c7436d"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:59:11 crc kubenswrapper[4853]: I0127 18:59:11.169467 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bb1fd930-d712-4e54-be4d-2a30c3c7436d-kube-api-access-dx5x8" (OuterVolumeSpecName: "kube-api-access-dx5x8") pod "bb1fd930-d712-4e54-be4d-2a30c3c7436d" (UID: "bb1fd930-d712-4e54-be4d-2a30c3c7436d"). InnerVolumeSpecName "kube-api-access-dx5x8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:59:11 crc kubenswrapper[4853]: I0127 18:59:11.220515 4853 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-xkd2q-config-czv8w" Jan 27 18:59:11 crc kubenswrapper[4853]: I0127 18:59:11.263330 4853 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bb1fd930-d712-4e54-be4d-2a30c3c7436d-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 18:59:11 crc kubenswrapper[4853]: I0127 18:59:11.263364 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dx5x8\" (UniqueName: \"kubernetes.io/projected/bb1fd930-d712-4e54-be4d-2a30c3c7436d-kube-api-access-dx5x8\") on node \"crc\" DevicePath \"\"" Jan 27 18:59:11 crc kubenswrapper[4853]: I0127 18:59:11.364324 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-csf57\" (UniqueName: \"kubernetes.io/projected/2e0937f5-e0c4-4067-b80c-2e4188beb2a9-kube-api-access-csf57\") pod \"2e0937f5-e0c4-4067-b80c-2e4188beb2a9\" (UID: \"2e0937f5-e0c4-4067-b80c-2e4188beb2a9\") " Jan 27 18:59:11 crc kubenswrapper[4853]: I0127 18:59:11.364900 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2e0937f5-e0c4-4067-b80c-2e4188beb2a9-scripts\") pod \"2e0937f5-e0c4-4067-b80c-2e4188beb2a9\" (UID: \"2e0937f5-e0c4-4067-b80c-2e4188beb2a9\") " Jan 27 18:59:11 crc kubenswrapper[4853]: I0127 18:59:11.364943 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/2e0937f5-e0c4-4067-b80c-2e4188beb2a9-var-log-ovn\") pod \"2e0937f5-e0c4-4067-b80c-2e4188beb2a9\" (UID: \"2e0937f5-e0c4-4067-b80c-2e4188beb2a9\") " Jan 27 18:59:11 crc kubenswrapper[4853]: I0127 18:59:11.365037 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/2e0937f5-e0c4-4067-b80c-2e4188beb2a9-var-run-ovn\") pod \"2e0937f5-e0c4-4067-b80c-2e4188beb2a9\" (UID: \"2e0937f5-e0c4-4067-b80c-2e4188beb2a9\") " Jan 27 18:59:11 crc kubenswrapper[4853]: I0127 18:59:11.365072 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2e0937f5-e0c4-4067-b80c-2e4188beb2a9-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "2e0937f5-e0c4-4067-b80c-2e4188beb2a9" (UID: "2e0937f5-e0c4-4067-b80c-2e4188beb2a9"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 18:59:11 crc kubenswrapper[4853]: I0127 18:59:11.365092 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/2e0937f5-e0c4-4067-b80c-2e4188beb2a9-additional-scripts\") pod \"2e0937f5-e0c4-4067-b80c-2e4188beb2a9\" (UID: \"2e0937f5-e0c4-4067-b80c-2e4188beb2a9\") " Jan 27 18:59:11 crc kubenswrapper[4853]: I0127 18:59:11.365147 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2e0937f5-e0c4-4067-b80c-2e4188beb2a9-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "2e0937f5-e0c4-4067-b80c-2e4188beb2a9" (UID: "2e0937f5-e0c4-4067-b80c-2e4188beb2a9"). InnerVolumeSpecName "var-run-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 18:59:11 crc kubenswrapper[4853]: I0127 18:59:11.365251 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/2e0937f5-e0c4-4067-b80c-2e4188beb2a9-var-run\") pod \"2e0937f5-e0c4-4067-b80c-2e4188beb2a9\" (UID: \"2e0937f5-e0c4-4067-b80c-2e4188beb2a9\") " Jan 27 18:59:11 crc kubenswrapper[4853]: I0127 18:59:11.365350 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2e0937f5-e0c4-4067-b80c-2e4188beb2a9-var-run" (OuterVolumeSpecName: "var-run") pod "2e0937f5-e0c4-4067-b80c-2e4188beb2a9" (UID: "2e0937f5-e0c4-4067-b80c-2e4188beb2a9"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 18:59:11 crc kubenswrapper[4853]: I0127 18:59:11.365709 4853 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/2e0937f5-e0c4-4067-b80c-2e4188beb2a9-var-run\") on node \"crc\" DevicePath \"\"" Jan 27 18:59:11 crc kubenswrapper[4853]: I0127 18:59:11.365735 4853 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/2e0937f5-e0c4-4067-b80c-2e4188beb2a9-var-log-ovn\") on node \"crc\" DevicePath \"\"" Jan 27 18:59:11 crc kubenswrapper[4853]: I0127 18:59:11.365747 4853 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/2e0937f5-e0c4-4067-b80c-2e4188beb2a9-var-run-ovn\") on node \"crc\" DevicePath \"\"" Jan 27 18:59:11 crc kubenswrapper[4853]: I0127 18:59:11.365960 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2e0937f5-e0c4-4067-b80c-2e4188beb2a9-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "2e0937f5-e0c4-4067-b80c-2e4188beb2a9" (UID: "2e0937f5-e0c4-4067-b80c-2e4188beb2a9"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:59:11 crc kubenswrapper[4853]: I0127 18:59:11.366246 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2e0937f5-e0c4-4067-b80c-2e4188beb2a9-scripts" (OuterVolumeSpecName: "scripts") pod "2e0937f5-e0c4-4067-b80c-2e4188beb2a9" (UID: "2e0937f5-e0c4-4067-b80c-2e4188beb2a9"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:59:11 crc kubenswrapper[4853]: I0127 18:59:11.367916 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2e0937f5-e0c4-4067-b80c-2e4188beb2a9-kube-api-access-csf57" (OuterVolumeSpecName: "kube-api-access-csf57") pod "2e0937f5-e0c4-4067-b80c-2e4188beb2a9" (UID: "2e0937f5-e0c4-4067-b80c-2e4188beb2a9"). InnerVolumeSpecName "kube-api-access-csf57". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:59:11 crc kubenswrapper[4853]: I0127 18:59:11.468015 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-csf57\" (UniqueName: \"kubernetes.io/projected/2e0937f5-e0c4-4067-b80c-2e4188beb2a9-kube-api-access-csf57\") on node \"crc\" DevicePath \"\"" Jan 27 18:59:11 crc kubenswrapper[4853]: I0127 18:59:11.468065 4853 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2e0937f5-e0c4-4067-b80c-2e4188beb2a9-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 18:59:11 crc kubenswrapper[4853]: I0127 18:59:11.468078 4853 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/2e0937f5-e0c4-4067-b80c-2e4188beb2a9-additional-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 18:59:11 crc kubenswrapper[4853]: I0127 18:59:11.625365 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"b1859766-1c8c-471c-bae5-4ae46086e8a5","Type":"ContainerStarted","Data":"0bd6273c8f66f31124722fa4e1f88803c3d5202b940f832b4796cd4b5e38943e"} Jan 27 18:59:11 crc kubenswrapper[4853]: I0127 18:59:11.629024 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-djxcp" Jan 27 18:59:11 crc kubenswrapper[4853]: I0127 18:59:11.629070 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-djxcp" event={"ID":"bb1fd930-d712-4e54-be4d-2a30c3c7436d","Type":"ContainerDied","Data":"cbc5f70cf343fadd9453724c85e683c6e711a08e494736e774897861f649a718"} Jan 27 18:59:11 crc kubenswrapper[4853]: I0127 18:59:11.629351 4853 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cbc5f70cf343fadd9453724c85e683c6e711a08e494736e774897861f649a718" Jan 27 18:59:11 crc kubenswrapper[4853]: I0127 18:59:11.630863 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-xkd2q-config-czv8w" event={"ID":"2e0937f5-e0c4-4067-b80c-2e4188beb2a9","Type":"ContainerDied","Data":"1142d40a7c2933706496c80ed60533f3eaf9cb8391d0c2c0be7f56743a2f8f62"} Jan 27 18:59:11 crc kubenswrapper[4853]: I0127 18:59:11.630908 4853 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1142d40a7c2933706496c80ed60533f3eaf9cb8391d0c2c0be7f56743a2f8f62" Jan 27 18:59:11 crc kubenswrapper[4853]: I0127 18:59:11.630943 4853 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-xkd2q-config-czv8w" Jan 27 18:59:12 crc kubenswrapper[4853]: I0127 18:59:12.338182 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-xkd2q-config-czv8w"] Jan 27 18:59:12 crc kubenswrapper[4853]: I0127 18:59:12.351888 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-xkd2q-config-czv8w"] Jan 27 18:59:12 crc kubenswrapper[4853]: I0127 18:59:12.488068 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-xkd2q-config-sl7sj"] Jan 27 18:59:12 crc kubenswrapper[4853]: E0127 18:59:12.488399 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bb1fd930-d712-4e54-be4d-2a30c3c7436d" containerName="mariadb-account-create-update" Jan 27 18:59:12 crc kubenswrapper[4853]: I0127 18:59:12.488416 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb1fd930-d712-4e54-be4d-2a30c3c7436d" containerName="mariadb-account-create-update" Jan 27 18:59:12 crc kubenswrapper[4853]: E0127 18:59:12.488430 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2e0937f5-e0c4-4067-b80c-2e4188beb2a9" containerName="ovn-config" Jan 27 18:59:12 crc kubenswrapper[4853]: I0127 18:59:12.488436 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="2e0937f5-e0c4-4067-b80c-2e4188beb2a9" containerName="ovn-config" Jan 27 18:59:12 crc kubenswrapper[4853]: I0127 18:59:12.488608 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="2e0937f5-e0c4-4067-b80c-2e4188beb2a9" containerName="ovn-config" Jan 27 18:59:12 crc kubenswrapper[4853]: I0127 18:59:12.488625 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="bb1fd930-d712-4e54-be4d-2a30c3c7436d" containerName="mariadb-account-create-update" Jan 27 18:59:12 crc kubenswrapper[4853]: I0127 18:59:12.489163 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-xkd2q-config-sl7sj" Jan 27 18:59:12 crc kubenswrapper[4853]: I0127 18:59:12.491542 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Jan 27 18:59:12 crc kubenswrapper[4853]: I0127 18:59:12.501585 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-xkd2q-config-sl7sj"] Jan 27 18:59:12 crc kubenswrapper[4853]: I0127 18:59:12.588059 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cmbtl\" (UniqueName: \"kubernetes.io/projected/629cf11a-7331-4de3-bd0b-71519649f0b6-kube-api-access-cmbtl\") pod \"ovn-controller-xkd2q-config-sl7sj\" (UID: \"629cf11a-7331-4de3-bd0b-71519649f0b6\") " pod="openstack/ovn-controller-xkd2q-config-sl7sj" Jan 27 18:59:12 crc kubenswrapper[4853]: I0127 18:59:12.588162 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/629cf11a-7331-4de3-bd0b-71519649f0b6-var-run\") pod \"ovn-controller-xkd2q-config-sl7sj\" (UID: \"629cf11a-7331-4de3-bd0b-71519649f0b6\") " pod="openstack/ovn-controller-xkd2q-config-sl7sj" Jan 27 18:59:12 crc kubenswrapper[4853]: I0127 18:59:12.588236 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/629cf11a-7331-4de3-bd0b-71519649f0b6-scripts\") pod \"ovn-controller-xkd2q-config-sl7sj\" (UID: \"629cf11a-7331-4de3-bd0b-71519649f0b6\") " pod="openstack/ovn-controller-xkd2q-config-sl7sj" Jan 27 18:59:12 crc kubenswrapper[4853]: I0127 18:59:12.588581 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/629cf11a-7331-4de3-bd0b-71519649f0b6-var-run-ovn\") pod \"ovn-controller-xkd2q-config-sl7sj\" (UID: \"629cf11a-7331-4de3-bd0b-71519649f0b6\") " pod="openstack/ovn-controller-xkd2q-config-sl7sj" Jan 27 18:59:12 crc kubenswrapper[4853]: I0127 18:59:12.588681 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/629cf11a-7331-4de3-bd0b-71519649f0b6-additional-scripts\") pod \"ovn-controller-xkd2q-config-sl7sj\" (UID: \"629cf11a-7331-4de3-bd0b-71519649f0b6\") " pod="openstack/ovn-controller-xkd2q-config-sl7sj" Jan 27 18:59:12 crc kubenswrapper[4853]: I0127 18:59:12.588749 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/629cf11a-7331-4de3-bd0b-71519649f0b6-var-log-ovn\") pod \"ovn-controller-xkd2q-config-sl7sj\" (UID: \"629cf11a-7331-4de3-bd0b-71519649f0b6\") " pod="openstack/ovn-controller-xkd2q-config-sl7sj" Jan 27 18:59:12 crc kubenswrapper[4853]: I0127 18:59:12.628531 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-xkd2q" Jan 27 18:59:12 crc kubenswrapper[4853]: I0127 18:59:12.652248 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"b1859766-1c8c-471c-bae5-4ae46086e8a5","Type":"ContainerStarted","Data":"b88e8ec7504f591e870a08280fe81329efa505c6244f6568bdf764bb966acdfb"} Jan 27 18:59:12 crc kubenswrapper[4853]: I0127 18:59:12.652304 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" 
event={"ID":"b1859766-1c8c-471c-bae5-4ae46086e8a5","Type":"ContainerStarted","Data":"5bca1afc95a31f7a807cfa045b9b3002ed83bb38bccd5442c59e01a10154fc8b"} Jan 27 18:59:12 crc kubenswrapper[4853]: I0127 18:59:12.652313 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"b1859766-1c8c-471c-bae5-4ae46086e8a5","Type":"ContainerStarted","Data":"4e373df6254ea444e5443ebc872803c7e9c31e367084b02b1a164bc04b733884"} Jan 27 18:59:12 crc kubenswrapper[4853]: I0127 18:59:12.722177 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/629cf11a-7331-4de3-bd0b-71519649f0b6-var-run\") pod \"ovn-controller-xkd2q-config-sl7sj\" (UID: \"629cf11a-7331-4de3-bd0b-71519649f0b6\") " pod="openstack/ovn-controller-xkd2q-config-sl7sj" Jan 27 18:59:12 crc kubenswrapper[4853]: I0127 18:59:12.722253 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/629cf11a-7331-4de3-bd0b-71519649f0b6-scripts\") pod \"ovn-controller-xkd2q-config-sl7sj\" (UID: \"629cf11a-7331-4de3-bd0b-71519649f0b6\") " pod="openstack/ovn-controller-xkd2q-config-sl7sj" Jan 27 18:59:12 crc kubenswrapper[4853]: I0127 18:59:12.722303 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/629cf11a-7331-4de3-bd0b-71519649f0b6-var-run-ovn\") pod \"ovn-controller-xkd2q-config-sl7sj\" (UID: \"629cf11a-7331-4de3-bd0b-71519649f0b6\") " pod="openstack/ovn-controller-xkd2q-config-sl7sj" Jan 27 18:59:12 crc kubenswrapper[4853]: I0127 18:59:12.722351 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/629cf11a-7331-4de3-bd0b-71519649f0b6-additional-scripts\") pod \"ovn-controller-xkd2q-config-sl7sj\" (UID: \"629cf11a-7331-4de3-bd0b-71519649f0b6\") " pod="openstack/ovn-controller-xkd2q-config-sl7sj" Jan 27 18:59:12 crc kubenswrapper[4853]: I0127 18:59:12.722381 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/629cf11a-7331-4de3-bd0b-71519649f0b6-var-log-ovn\") pod \"ovn-controller-xkd2q-config-sl7sj\" (UID: \"629cf11a-7331-4de3-bd0b-71519649f0b6\") " pod="openstack/ovn-controller-xkd2q-config-sl7sj" Jan 27 18:59:12 crc kubenswrapper[4853]: I0127 18:59:12.722440 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cmbtl\" (UniqueName: \"kubernetes.io/projected/629cf11a-7331-4de3-bd0b-71519649f0b6-kube-api-access-cmbtl\") pod \"ovn-controller-xkd2q-config-sl7sj\" (UID: \"629cf11a-7331-4de3-bd0b-71519649f0b6\") " pod="openstack/ovn-controller-xkd2q-config-sl7sj" Jan 27 18:59:12 crc kubenswrapper[4853]: I0127 18:59:12.722991 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/629cf11a-7331-4de3-bd0b-71519649f0b6-var-run\") pod \"ovn-controller-xkd2q-config-sl7sj\" (UID: \"629cf11a-7331-4de3-bd0b-71519649f0b6\") " pod="openstack/ovn-controller-xkd2q-config-sl7sj" Jan 27 18:59:12 crc kubenswrapper[4853]: I0127 18:59:12.724376 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/629cf11a-7331-4de3-bd0b-71519649f0b6-var-run-ovn\") pod \"ovn-controller-xkd2q-config-sl7sj\" (UID: \"629cf11a-7331-4de3-bd0b-71519649f0b6\") " 
pod="openstack/ovn-controller-xkd2q-config-sl7sj" Jan 27 18:59:12 crc kubenswrapper[4853]: I0127 18:59:12.725004 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/629cf11a-7331-4de3-bd0b-71519649f0b6-var-log-ovn\") pod \"ovn-controller-xkd2q-config-sl7sj\" (UID: \"629cf11a-7331-4de3-bd0b-71519649f0b6\") " pod="openstack/ovn-controller-xkd2q-config-sl7sj" Jan 27 18:59:12 crc kubenswrapper[4853]: I0127 18:59:12.729364 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/629cf11a-7331-4de3-bd0b-71519649f0b6-additional-scripts\") pod \"ovn-controller-xkd2q-config-sl7sj\" (UID: \"629cf11a-7331-4de3-bd0b-71519649f0b6\") " pod="openstack/ovn-controller-xkd2q-config-sl7sj" Jan 27 18:59:12 crc kubenswrapper[4853]: I0127 18:59:12.737151 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/629cf11a-7331-4de3-bd0b-71519649f0b6-scripts\") pod \"ovn-controller-xkd2q-config-sl7sj\" (UID: \"629cf11a-7331-4de3-bd0b-71519649f0b6\") " pod="openstack/ovn-controller-xkd2q-config-sl7sj" Jan 27 18:59:12 crc kubenswrapper[4853]: I0127 18:59:12.744465 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cmbtl\" (UniqueName: \"kubernetes.io/projected/629cf11a-7331-4de3-bd0b-71519649f0b6-kube-api-access-cmbtl\") pod \"ovn-controller-xkd2q-config-sl7sj\" (UID: \"629cf11a-7331-4de3-bd0b-71519649f0b6\") " pod="openstack/ovn-controller-xkd2q-config-sl7sj" Jan 27 18:59:12 crc kubenswrapper[4853]: I0127 18:59:12.824319 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-xkd2q-config-sl7sj" Jan 27 18:59:13 crc kubenswrapper[4853]: I0127 18:59:13.402100 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-xkd2q-config-sl7sj"] Jan 27 18:59:13 crc kubenswrapper[4853]: I0127 18:59:13.659992 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-xkd2q-config-sl7sj" event={"ID":"629cf11a-7331-4de3-bd0b-71519649f0b6","Type":"ContainerStarted","Data":"b0fcc8a6c9ce2fbead85bee8dc44edc9ae9b1a8d29c87ba8feb564cc86dae707"} Jan 27 18:59:13 crc kubenswrapper[4853]: I0127 18:59:13.664685 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"b1859766-1c8c-471c-bae5-4ae46086e8a5","Type":"ContainerStarted","Data":"2a42f28d7b3556c7801b5b7467899c4c9a3f9722cad3613d37bd7fcdb6fec0fb"} Jan 27 18:59:13 crc kubenswrapper[4853]: I0127 18:59:13.664719 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"b1859766-1c8c-471c-bae5-4ae46086e8a5","Type":"ContainerStarted","Data":"a359f55ddf9ea0523deb319c318b7d2a8acd8534bf1bc2b8022df645e8a28bb6"} Jan 27 18:59:13 crc kubenswrapper[4853]: I0127 18:59:13.664728 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"b1859766-1c8c-471c-bae5-4ae46086e8a5","Type":"ContainerStarted","Data":"7ad9a4cb406c90c18f08137959b9096b69daef6b42808f5f7bc00a11bd1aecfb"} Jan 27 18:59:13 crc kubenswrapper[4853]: I0127 18:59:13.995063 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Jan 27 18:59:14 crc kubenswrapper[4853]: I0127 18:59:14.124509 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2e0937f5-e0c4-4067-b80c-2e4188beb2a9" 
path="/var/lib/kubelet/pods/2e0937f5-e0c4-4067-b80c-2e4188beb2a9/volumes" Jan 27 18:59:14 crc kubenswrapper[4853]: I0127 18:59:14.676612 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"b1859766-1c8c-471c-bae5-4ae46086e8a5","Type":"ContainerStarted","Data":"c573436b9b82550e47881dbe30e99d48278af77587857e7be087d8665639d91c"} Jan 27 18:59:14 crc kubenswrapper[4853]: I0127 18:59:14.678827 4853 generic.go:334] "Generic (PLEG): container finished" podID="629cf11a-7331-4de3-bd0b-71519649f0b6" containerID="bc103aaefc798f810fe57738521fc83593f035690bd59b57f63c73f3bbb18fff" exitCode=0 Jan 27 18:59:14 crc kubenswrapper[4853]: I0127 18:59:14.678865 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-xkd2q-config-sl7sj" event={"ID":"629cf11a-7331-4de3-bd0b-71519649f0b6","Type":"ContainerDied","Data":"bc103aaefc798f810fe57738521fc83593f035690bd59b57f63c73f3bbb18fff"} Jan 27 18:59:14 crc kubenswrapper[4853]: I0127 18:59:14.715877 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-storage-0" podStartSLOduration=21.324587984 podStartE2EDuration="35.715854529s" podCreationTimestamp="2026-01-27 18:58:39 +0000 UTC" firstStartedPulling="2026-01-27 18:58:57.51017832 +0000 UTC m=+979.972721203" lastFinishedPulling="2026-01-27 18:59:11.901444865 +0000 UTC m=+994.363987748" observedRunningTime="2026-01-27 18:59:14.712064861 +0000 UTC m=+997.174607744" watchObservedRunningTime="2026-01-27 18:59:14.715854529 +0000 UTC m=+997.178397422" Jan 27 18:59:14 crc kubenswrapper[4853]: I0127 18:59:14.985636 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-764c5664d7-4cnb5"] Jan 27 18:59:14 crc kubenswrapper[4853]: I0127 18:59:14.987649 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-764c5664d7-4cnb5" Jan 27 18:59:14 crc kubenswrapper[4853]: I0127 18:59:14.990189 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-swift-storage-0" Jan 27 18:59:15 crc kubenswrapper[4853]: I0127 18:59:15.032137 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-764c5664d7-4cnb5"] Jan 27 18:59:15 crc kubenswrapper[4853]: I0127 18:59:15.068606 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2pwzx\" (UniqueName: \"kubernetes.io/projected/11a54861-b4fe-494f-92cf-4144b7decdde-kube-api-access-2pwzx\") pod \"dnsmasq-dns-764c5664d7-4cnb5\" (UID: \"11a54861-b4fe-494f-92cf-4144b7decdde\") " pod="openstack/dnsmasq-dns-764c5664d7-4cnb5" Jan 27 18:59:15 crc kubenswrapper[4853]: I0127 18:59:15.068679 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/11a54861-b4fe-494f-92cf-4144b7decdde-ovsdbserver-sb\") pod \"dnsmasq-dns-764c5664d7-4cnb5\" (UID: \"11a54861-b4fe-494f-92cf-4144b7decdde\") " pod="openstack/dnsmasq-dns-764c5664d7-4cnb5" Jan 27 18:59:15 crc kubenswrapper[4853]: I0127 18:59:15.068758 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/11a54861-b4fe-494f-92cf-4144b7decdde-dns-swift-storage-0\") pod \"dnsmasq-dns-764c5664d7-4cnb5\" (UID: \"11a54861-b4fe-494f-92cf-4144b7decdde\") " pod="openstack/dnsmasq-dns-764c5664d7-4cnb5" Jan 27 18:59:15 crc kubenswrapper[4853]: I0127 18:59:15.068836 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/11a54861-b4fe-494f-92cf-4144b7decdde-dns-svc\") pod \"dnsmasq-dns-764c5664d7-4cnb5\" (UID: \"11a54861-b4fe-494f-92cf-4144b7decdde\") " pod="openstack/dnsmasq-dns-764c5664d7-4cnb5" Jan 27 18:59:15 crc kubenswrapper[4853]: I0127 18:59:15.068865 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/11a54861-b4fe-494f-92cf-4144b7decdde-ovsdbserver-nb\") pod \"dnsmasq-dns-764c5664d7-4cnb5\" (UID: \"11a54861-b4fe-494f-92cf-4144b7decdde\") " pod="openstack/dnsmasq-dns-764c5664d7-4cnb5" Jan 27 18:59:15 crc kubenswrapper[4853]: I0127 18:59:15.068978 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/11a54861-b4fe-494f-92cf-4144b7decdde-config\") pod \"dnsmasq-dns-764c5664d7-4cnb5\" (UID: \"11a54861-b4fe-494f-92cf-4144b7decdde\") " pod="openstack/dnsmasq-dns-764c5664d7-4cnb5" Jan 27 18:59:15 crc kubenswrapper[4853]: I0127 18:59:15.170328 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/11a54861-b4fe-494f-92cf-4144b7decdde-ovsdbserver-sb\") pod \"dnsmasq-dns-764c5664d7-4cnb5\" (UID: \"11a54861-b4fe-494f-92cf-4144b7decdde\") " pod="openstack/dnsmasq-dns-764c5664d7-4cnb5" Jan 27 18:59:15 crc kubenswrapper[4853]: I0127 18:59:15.170406 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/11a54861-b4fe-494f-92cf-4144b7decdde-dns-swift-storage-0\") pod \"dnsmasq-dns-764c5664d7-4cnb5\" (UID: 
\"11a54861-b4fe-494f-92cf-4144b7decdde\") " pod="openstack/dnsmasq-dns-764c5664d7-4cnb5" Jan 27 18:59:15 crc kubenswrapper[4853]: I0127 18:59:15.170433 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/11a54861-b4fe-494f-92cf-4144b7decdde-dns-svc\") pod \"dnsmasq-dns-764c5664d7-4cnb5\" (UID: \"11a54861-b4fe-494f-92cf-4144b7decdde\") " pod="openstack/dnsmasq-dns-764c5664d7-4cnb5" Jan 27 18:59:15 crc kubenswrapper[4853]: I0127 18:59:15.170454 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/11a54861-b4fe-494f-92cf-4144b7decdde-ovsdbserver-nb\") pod \"dnsmasq-dns-764c5664d7-4cnb5\" (UID: \"11a54861-b4fe-494f-92cf-4144b7decdde\") " pod="openstack/dnsmasq-dns-764c5664d7-4cnb5" Jan 27 18:59:15 crc kubenswrapper[4853]: I0127 18:59:15.170557 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/11a54861-b4fe-494f-92cf-4144b7decdde-config\") pod \"dnsmasq-dns-764c5664d7-4cnb5\" (UID: \"11a54861-b4fe-494f-92cf-4144b7decdde\") " pod="openstack/dnsmasq-dns-764c5664d7-4cnb5" Jan 27 18:59:15 crc kubenswrapper[4853]: I0127 18:59:15.170602 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2pwzx\" (UniqueName: \"kubernetes.io/projected/11a54861-b4fe-494f-92cf-4144b7decdde-kube-api-access-2pwzx\") pod \"dnsmasq-dns-764c5664d7-4cnb5\" (UID: \"11a54861-b4fe-494f-92cf-4144b7decdde\") " pod="openstack/dnsmasq-dns-764c5664d7-4cnb5" Jan 27 18:59:15 crc kubenswrapper[4853]: I0127 18:59:15.171776 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/11a54861-b4fe-494f-92cf-4144b7decdde-ovsdbserver-sb\") pod \"dnsmasq-dns-764c5664d7-4cnb5\" (UID: \"11a54861-b4fe-494f-92cf-4144b7decdde\") " pod="openstack/dnsmasq-dns-764c5664d7-4cnb5" Jan 27 18:59:15 crc kubenswrapper[4853]: I0127 18:59:15.172461 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/11a54861-b4fe-494f-92cf-4144b7decdde-dns-swift-storage-0\") pod \"dnsmasq-dns-764c5664d7-4cnb5\" (UID: \"11a54861-b4fe-494f-92cf-4144b7decdde\") " pod="openstack/dnsmasq-dns-764c5664d7-4cnb5" Jan 27 18:59:15 crc kubenswrapper[4853]: I0127 18:59:15.173025 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/11a54861-b4fe-494f-92cf-4144b7decdde-dns-svc\") pod \"dnsmasq-dns-764c5664d7-4cnb5\" (UID: \"11a54861-b4fe-494f-92cf-4144b7decdde\") " pod="openstack/dnsmasq-dns-764c5664d7-4cnb5" Jan 27 18:59:15 crc kubenswrapper[4853]: I0127 18:59:15.173607 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/11a54861-b4fe-494f-92cf-4144b7decdde-ovsdbserver-nb\") pod \"dnsmasq-dns-764c5664d7-4cnb5\" (UID: \"11a54861-b4fe-494f-92cf-4144b7decdde\") " pod="openstack/dnsmasq-dns-764c5664d7-4cnb5" Jan 27 18:59:15 crc kubenswrapper[4853]: I0127 18:59:15.174385 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/11a54861-b4fe-494f-92cf-4144b7decdde-config\") pod \"dnsmasq-dns-764c5664d7-4cnb5\" (UID: \"11a54861-b4fe-494f-92cf-4144b7decdde\") " pod="openstack/dnsmasq-dns-764c5664d7-4cnb5" Jan 27 18:59:15 crc kubenswrapper[4853]: 
I0127 18:59:15.203316 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2pwzx\" (UniqueName: \"kubernetes.io/projected/11a54861-b4fe-494f-92cf-4144b7decdde-kube-api-access-2pwzx\") pod \"dnsmasq-dns-764c5664d7-4cnb5\" (UID: \"11a54861-b4fe-494f-92cf-4144b7decdde\") " pod="openstack/dnsmasq-dns-764c5664d7-4cnb5" Jan 27 18:59:15 crc kubenswrapper[4853]: I0127 18:59:15.345786 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-764c5664d7-4cnb5" Jan 27 18:59:15 crc kubenswrapper[4853]: I0127 18:59:15.689647 4853 generic.go:334] "Generic (PLEG): container finished" podID="525d82bf-e147-429f-8915-365aa48be00b" containerID="c73bc95dd91720244c3ddaa31f873d1265c42efb60f1009c6891cbb6af55f779" exitCode=0 Jan 27 18:59:15 crc kubenswrapper[4853]: I0127 18:59:15.689727 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"525d82bf-e147-429f-8915-365aa48be00b","Type":"ContainerDied","Data":"c73bc95dd91720244c3ddaa31f873d1265c42efb60f1009c6891cbb6af55f779"} Jan 27 18:59:15 crc kubenswrapper[4853]: I0127 18:59:15.862782 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-764c5664d7-4cnb5"] Jan 27 18:59:16 crc kubenswrapper[4853]: I0127 18:59:16.104134 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-xkd2q-config-sl7sj" Jan 27 18:59:16 crc kubenswrapper[4853]: I0127 18:59:16.205008 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/629cf11a-7331-4de3-bd0b-71519649f0b6-var-run\") pod \"629cf11a-7331-4de3-bd0b-71519649f0b6\" (UID: \"629cf11a-7331-4de3-bd0b-71519649f0b6\") " Jan 27 18:59:16 crc kubenswrapper[4853]: I0127 18:59:16.205427 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/629cf11a-7331-4de3-bd0b-71519649f0b6-var-run-ovn\") pod \"629cf11a-7331-4de3-bd0b-71519649f0b6\" (UID: \"629cf11a-7331-4de3-bd0b-71519649f0b6\") " Jan 27 18:59:16 crc kubenswrapper[4853]: I0127 18:59:16.205576 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cmbtl\" (UniqueName: \"kubernetes.io/projected/629cf11a-7331-4de3-bd0b-71519649f0b6-kube-api-access-cmbtl\") pod \"629cf11a-7331-4de3-bd0b-71519649f0b6\" (UID: \"629cf11a-7331-4de3-bd0b-71519649f0b6\") " Jan 27 18:59:16 crc kubenswrapper[4853]: I0127 18:59:16.205657 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/629cf11a-7331-4de3-bd0b-71519649f0b6-scripts\") pod \"629cf11a-7331-4de3-bd0b-71519649f0b6\" (UID: \"629cf11a-7331-4de3-bd0b-71519649f0b6\") " Jan 27 18:59:16 crc kubenswrapper[4853]: I0127 18:59:16.205832 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/629cf11a-7331-4de3-bd0b-71519649f0b6-additional-scripts\") pod \"629cf11a-7331-4de3-bd0b-71519649f0b6\" (UID: \"629cf11a-7331-4de3-bd0b-71519649f0b6\") " Jan 27 18:59:16 crc kubenswrapper[4853]: I0127 18:59:16.205928 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/629cf11a-7331-4de3-bd0b-71519649f0b6-var-log-ovn\") pod \"629cf11a-7331-4de3-bd0b-71519649f0b6\" (UID: 
\"629cf11a-7331-4de3-bd0b-71519649f0b6\") " Jan 27 18:59:16 crc kubenswrapper[4853]: I0127 18:59:16.206323 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/629cf11a-7331-4de3-bd0b-71519649f0b6-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "629cf11a-7331-4de3-bd0b-71519649f0b6" (UID: "629cf11a-7331-4de3-bd0b-71519649f0b6"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 18:59:16 crc kubenswrapper[4853]: I0127 18:59:16.206516 4853 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/629cf11a-7331-4de3-bd0b-71519649f0b6-var-log-ovn\") on node \"crc\" DevicePath \"\"" Jan 27 18:59:16 crc kubenswrapper[4853]: I0127 18:59:16.207399 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/629cf11a-7331-4de3-bd0b-71519649f0b6-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "629cf11a-7331-4de3-bd0b-71519649f0b6" (UID: "629cf11a-7331-4de3-bd0b-71519649f0b6"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:59:16 crc kubenswrapper[4853]: I0127 18:59:16.207526 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/629cf11a-7331-4de3-bd0b-71519649f0b6-scripts" (OuterVolumeSpecName: "scripts") pod "629cf11a-7331-4de3-bd0b-71519649f0b6" (UID: "629cf11a-7331-4de3-bd0b-71519649f0b6"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:59:16 crc kubenswrapper[4853]: I0127 18:59:16.207553 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/629cf11a-7331-4de3-bd0b-71519649f0b6-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "629cf11a-7331-4de3-bd0b-71519649f0b6" (UID: "629cf11a-7331-4de3-bd0b-71519649f0b6"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 18:59:16 crc kubenswrapper[4853]: I0127 18:59:16.207583 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/629cf11a-7331-4de3-bd0b-71519649f0b6-var-run" (OuterVolumeSpecName: "var-run") pod "629cf11a-7331-4de3-bd0b-71519649f0b6" (UID: "629cf11a-7331-4de3-bd0b-71519649f0b6"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 18:59:16 crc kubenswrapper[4853]: I0127 18:59:16.217390 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/629cf11a-7331-4de3-bd0b-71519649f0b6-kube-api-access-cmbtl" (OuterVolumeSpecName: "kube-api-access-cmbtl") pod "629cf11a-7331-4de3-bd0b-71519649f0b6" (UID: "629cf11a-7331-4de3-bd0b-71519649f0b6"). InnerVolumeSpecName "kube-api-access-cmbtl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:59:16 crc kubenswrapper[4853]: I0127 18:59:16.308909 4853 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/629cf11a-7331-4de3-bd0b-71519649f0b6-var-run\") on node \"crc\" DevicePath \"\"" Jan 27 18:59:16 crc kubenswrapper[4853]: I0127 18:59:16.308964 4853 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/629cf11a-7331-4de3-bd0b-71519649f0b6-var-run-ovn\") on node \"crc\" DevicePath \"\"" Jan 27 18:59:16 crc kubenswrapper[4853]: I0127 18:59:16.308979 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cmbtl\" (UniqueName: \"kubernetes.io/projected/629cf11a-7331-4de3-bd0b-71519649f0b6-kube-api-access-cmbtl\") on node \"crc\" DevicePath \"\"" Jan 27 18:59:16 crc kubenswrapper[4853]: I0127 18:59:16.309018 4853 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/629cf11a-7331-4de3-bd0b-71519649f0b6-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 18:59:16 crc kubenswrapper[4853]: I0127 18:59:16.309030 4853 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/629cf11a-7331-4de3-bd0b-71519649f0b6-additional-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 18:59:16 crc kubenswrapper[4853]: I0127 18:59:16.699308 4853 generic.go:334] "Generic (PLEG): container finished" podID="11a54861-b4fe-494f-92cf-4144b7decdde" containerID="c21e9a0e83694ce373e250d179a45a2d563aa456d1ad938e4ecc12f4bcca5157" exitCode=0 Jan 27 18:59:16 crc kubenswrapper[4853]: I0127 18:59:16.699413 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-764c5664d7-4cnb5" event={"ID":"11a54861-b4fe-494f-92cf-4144b7decdde","Type":"ContainerDied","Data":"c21e9a0e83694ce373e250d179a45a2d563aa456d1ad938e4ecc12f4bcca5157"} Jan 27 18:59:16 crc kubenswrapper[4853]: I0127 18:59:16.699703 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-764c5664d7-4cnb5" event={"ID":"11a54861-b4fe-494f-92cf-4144b7decdde","Type":"ContainerStarted","Data":"10a3dc1c516d6d2d746e71b0d28bd6e880139a3e7316dc2defe545d350f6780b"} Jan 27 18:59:16 crc kubenswrapper[4853]: I0127 18:59:16.701625 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"525d82bf-e147-429f-8915-365aa48be00b","Type":"ContainerStarted","Data":"16308c1cf27c46d6fb370c69f46eeb14efd66c0cccd787f67627c36b9cabf42f"} Jan 27 18:59:16 crc kubenswrapper[4853]: I0127 18:59:16.701878 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Jan 27 18:59:16 crc kubenswrapper[4853]: I0127 18:59:16.703534 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-xkd2q-config-sl7sj" event={"ID":"629cf11a-7331-4de3-bd0b-71519649f0b6","Type":"ContainerDied","Data":"b0fcc8a6c9ce2fbead85bee8dc44edc9ae9b1a8d29c87ba8feb564cc86dae707"} Jan 27 18:59:16 crc kubenswrapper[4853]: I0127 18:59:16.703648 4853 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b0fcc8a6c9ce2fbead85bee8dc44edc9ae9b1a8d29c87ba8feb564cc86dae707" Jan 27 18:59:16 crc kubenswrapper[4853]: I0127 18:59:16.703790 4853 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-xkd2q-config-sl7sj" Jan 27 18:59:16 crc kubenswrapper[4853]: I0127 18:59:16.766456 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=-9223371952.08834 podStartE2EDuration="1m24.76643571s" podCreationTimestamp="2026-01-27 18:57:52 +0000 UTC" firstStartedPulling="2026-01-27 18:57:54.950453794 +0000 UTC m=+917.412996677" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:59:16.754880582 +0000 UTC m=+999.217423495" watchObservedRunningTime="2026-01-27 18:59:16.76643571 +0000 UTC m=+999.228978603" Jan 27 18:59:17 crc kubenswrapper[4853]: I0127 18:59:17.228005 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-xkd2q-config-sl7sj"] Jan 27 18:59:17 crc kubenswrapper[4853]: I0127 18:59:17.240665 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-xkd2q-config-sl7sj"] Jan 27 18:59:17 crc kubenswrapper[4853]: I0127 18:59:17.711392 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-764c5664d7-4cnb5" event={"ID":"11a54861-b4fe-494f-92cf-4144b7decdde","Type":"ContainerStarted","Data":"d47c4325301d6fa9f879d39577ae782d6928bebc066eea3a2d3bc24e104af167"} Jan 27 18:59:17 crc kubenswrapper[4853]: I0127 18:59:17.732243 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-764c5664d7-4cnb5" podStartSLOduration=3.732222147 podStartE2EDuration="3.732222147s" podCreationTimestamp="2026-01-27 18:59:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:59:17.731048144 +0000 UTC m=+1000.193591027" watchObservedRunningTime="2026-01-27 18:59:17.732222147 +0000 UTC m=+1000.194765030" Jan 27 18:59:18 crc kubenswrapper[4853]: I0127 18:59:18.121198 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="629cf11a-7331-4de3-bd0b-71519649f0b6" path="/var/lib/kubelet/pods/629cf11a-7331-4de3-bd0b-71519649f0b6/volumes" Jan 27 18:59:18 crc kubenswrapper[4853]: I0127 18:59:18.717533 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-764c5664d7-4cnb5" Jan 27 18:59:19 crc kubenswrapper[4853]: I0127 18:59:19.730306 4853 generic.go:334] "Generic (PLEG): container finished" podID="1715d18d-b411-407d-9b52-d7b0bbd850f4" containerID="d0e7ae2912174648fc97eb8d5d5d8b251b773709670a0698a55f108ee079f000" exitCode=0 Jan 27 18:59:19 crc kubenswrapper[4853]: I0127 18:59:19.730395 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-bpr58" event={"ID":"1715d18d-b411-407d-9b52-d7b0bbd850f4","Type":"ContainerDied","Data":"d0e7ae2912174648fc97eb8d5d5d8b251b773709670a0698a55f108ee079f000"} Jan 27 18:59:21 crc kubenswrapper[4853]: I0127 18:59:21.154200 4853 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-bpr58" Jan 27 18:59:21 crc kubenswrapper[4853]: I0127 18:59:21.290651 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wj9lm\" (UniqueName: \"kubernetes.io/projected/1715d18d-b411-407d-9b52-d7b0bbd850f4-kube-api-access-wj9lm\") pod \"1715d18d-b411-407d-9b52-d7b0bbd850f4\" (UID: \"1715d18d-b411-407d-9b52-d7b0bbd850f4\") " Jan 27 18:59:21 crc kubenswrapper[4853]: I0127 18:59:21.290740 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1715d18d-b411-407d-9b52-d7b0bbd850f4-config-data\") pod \"1715d18d-b411-407d-9b52-d7b0bbd850f4\" (UID: \"1715d18d-b411-407d-9b52-d7b0bbd850f4\") " Jan 27 18:59:21 crc kubenswrapper[4853]: I0127 18:59:21.290830 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1715d18d-b411-407d-9b52-d7b0bbd850f4-combined-ca-bundle\") pod \"1715d18d-b411-407d-9b52-d7b0bbd850f4\" (UID: \"1715d18d-b411-407d-9b52-d7b0bbd850f4\") " Jan 27 18:59:21 crc kubenswrapper[4853]: I0127 18:59:21.290871 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/1715d18d-b411-407d-9b52-d7b0bbd850f4-db-sync-config-data\") pod \"1715d18d-b411-407d-9b52-d7b0bbd850f4\" (UID: \"1715d18d-b411-407d-9b52-d7b0bbd850f4\") " Jan 27 18:59:21 crc kubenswrapper[4853]: I0127 18:59:21.296235 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1715d18d-b411-407d-9b52-d7b0bbd850f4-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "1715d18d-b411-407d-9b52-d7b0bbd850f4" (UID: "1715d18d-b411-407d-9b52-d7b0bbd850f4"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:59:21 crc kubenswrapper[4853]: I0127 18:59:21.296362 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1715d18d-b411-407d-9b52-d7b0bbd850f4-kube-api-access-wj9lm" (OuterVolumeSpecName: "kube-api-access-wj9lm") pod "1715d18d-b411-407d-9b52-d7b0bbd850f4" (UID: "1715d18d-b411-407d-9b52-d7b0bbd850f4"). InnerVolumeSpecName "kube-api-access-wj9lm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:59:21 crc kubenswrapper[4853]: I0127 18:59:21.314251 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1715d18d-b411-407d-9b52-d7b0bbd850f4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1715d18d-b411-407d-9b52-d7b0bbd850f4" (UID: "1715d18d-b411-407d-9b52-d7b0bbd850f4"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:59:21 crc kubenswrapper[4853]: I0127 18:59:21.331945 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1715d18d-b411-407d-9b52-d7b0bbd850f4-config-data" (OuterVolumeSpecName: "config-data") pod "1715d18d-b411-407d-9b52-d7b0bbd850f4" (UID: "1715d18d-b411-407d-9b52-d7b0bbd850f4"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:59:21 crc kubenswrapper[4853]: I0127 18:59:21.393097 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wj9lm\" (UniqueName: \"kubernetes.io/projected/1715d18d-b411-407d-9b52-d7b0bbd850f4-kube-api-access-wj9lm\") on node \"crc\" DevicePath \"\"" Jan 27 18:59:21 crc kubenswrapper[4853]: I0127 18:59:21.393158 4853 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1715d18d-b411-407d-9b52-d7b0bbd850f4-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 18:59:21 crc kubenswrapper[4853]: I0127 18:59:21.393169 4853 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1715d18d-b411-407d-9b52-d7b0bbd850f4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 18:59:21 crc kubenswrapper[4853]: I0127 18:59:21.393178 4853 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/1715d18d-b411-407d-9b52-d7b0bbd850f4-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 18:59:21 crc kubenswrapper[4853]: I0127 18:59:21.748773 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-bpr58" event={"ID":"1715d18d-b411-407d-9b52-d7b0bbd850f4","Type":"ContainerDied","Data":"c1b25e6aab80600ae80247d3d9ce7aa9357ad0d50128e5dc0a1740b47689a7f5"} Jan 27 18:59:21 crc kubenswrapper[4853]: I0127 18:59:21.748817 4853 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c1b25e6aab80600ae80247d3d9ce7aa9357ad0d50128e5dc0a1740b47689a7f5" Jan 27 18:59:21 crc kubenswrapper[4853]: I0127 18:59:21.748853 4853 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-bpr58" Jan 27 18:59:22 crc kubenswrapper[4853]: I0127 18:59:22.218412 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-764c5664d7-4cnb5"] Jan 27 18:59:22 crc kubenswrapper[4853]: I0127 18:59:22.219049 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-764c5664d7-4cnb5" podUID="11a54861-b4fe-494f-92cf-4144b7decdde" containerName="dnsmasq-dns" containerID="cri-o://d47c4325301d6fa9f879d39577ae782d6928bebc066eea3a2d3bc24e104af167" gracePeriod=10 Jan 27 18:59:22 crc kubenswrapper[4853]: I0127 18:59:22.221315 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-764c5664d7-4cnb5" Jan 27 18:59:22 crc kubenswrapper[4853]: I0127 18:59:22.245539 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-74f6bcbc87-h8qfz"] Jan 27 18:59:22 crc kubenswrapper[4853]: E0127 18:59:22.246179 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1715d18d-b411-407d-9b52-d7b0bbd850f4" containerName="glance-db-sync" Jan 27 18:59:22 crc kubenswrapper[4853]: I0127 18:59:22.246277 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="1715d18d-b411-407d-9b52-d7b0bbd850f4" containerName="glance-db-sync" Jan 27 18:59:22 crc kubenswrapper[4853]: E0127 18:59:22.246368 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="629cf11a-7331-4de3-bd0b-71519649f0b6" containerName="ovn-config" Jan 27 18:59:22 crc kubenswrapper[4853]: I0127 18:59:22.246463 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="629cf11a-7331-4de3-bd0b-71519649f0b6" containerName="ovn-config" Jan 27 18:59:22 crc kubenswrapper[4853]: I0127 18:59:22.246773 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="629cf11a-7331-4de3-bd0b-71519649f0b6" containerName="ovn-config" Jan 27 18:59:22 crc kubenswrapper[4853]: I0127 18:59:22.246881 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="1715d18d-b411-407d-9b52-d7b0bbd850f4" containerName="glance-db-sync" Jan 27 18:59:22 crc kubenswrapper[4853]: I0127 18:59:22.248342 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-74f6bcbc87-h8qfz" Jan 27 18:59:22 crc kubenswrapper[4853]: I0127 18:59:22.257433 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-74f6bcbc87-h8qfz"] Jan 27 18:59:22 crc kubenswrapper[4853]: I0127 18:59:22.313053 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7922b820-92c1-46d8-a5a0-6f58e05674b5-dns-svc\") pod \"dnsmasq-dns-74f6bcbc87-h8qfz\" (UID: \"7922b820-92c1-46d8-a5a0-6f58e05674b5\") " pod="openstack/dnsmasq-dns-74f6bcbc87-h8qfz" Jan 27 18:59:22 crc kubenswrapper[4853]: I0127 18:59:22.313490 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7922b820-92c1-46d8-a5a0-6f58e05674b5-config\") pod \"dnsmasq-dns-74f6bcbc87-h8qfz\" (UID: \"7922b820-92c1-46d8-a5a0-6f58e05674b5\") " pod="openstack/dnsmasq-dns-74f6bcbc87-h8qfz" Jan 27 18:59:22 crc kubenswrapper[4853]: I0127 18:59:22.313594 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7922b820-92c1-46d8-a5a0-6f58e05674b5-ovsdbserver-nb\") pod \"dnsmasq-dns-74f6bcbc87-h8qfz\" (UID: \"7922b820-92c1-46d8-a5a0-6f58e05674b5\") " pod="openstack/dnsmasq-dns-74f6bcbc87-h8qfz" Jan 27 18:59:22 crc kubenswrapper[4853]: I0127 18:59:22.313746 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m8k4d\" (UniqueName: \"kubernetes.io/projected/7922b820-92c1-46d8-a5a0-6f58e05674b5-kube-api-access-m8k4d\") pod \"dnsmasq-dns-74f6bcbc87-h8qfz\" (UID: \"7922b820-92c1-46d8-a5a0-6f58e05674b5\") " pod="openstack/dnsmasq-dns-74f6bcbc87-h8qfz" Jan 27 18:59:22 crc kubenswrapper[4853]: I0127 18:59:22.313917 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7922b820-92c1-46d8-a5a0-6f58e05674b5-dns-swift-storage-0\") pod \"dnsmasq-dns-74f6bcbc87-h8qfz\" (UID: \"7922b820-92c1-46d8-a5a0-6f58e05674b5\") " pod="openstack/dnsmasq-dns-74f6bcbc87-h8qfz" Jan 27 18:59:22 crc kubenswrapper[4853]: I0127 18:59:22.314031 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7922b820-92c1-46d8-a5a0-6f58e05674b5-ovsdbserver-sb\") pod \"dnsmasq-dns-74f6bcbc87-h8qfz\" (UID: \"7922b820-92c1-46d8-a5a0-6f58e05674b5\") " pod="openstack/dnsmasq-dns-74f6bcbc87-h8qfz" Jan 27 18:59:22 crc kubenswrapper[4853]: I0127 18:59:22.415320 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7922b820-92c1-46d8-a5a0-6f58e05674b5-config\") pod \"dnsmasq-dns-74f6bcbc87-h8qfz\" (UID: \"7922b820-92c1-46d8-a5a0-6f58e05674b5\") " pod="openstack/dnsmasq-dns-74f6bcbc87-h8qfz" Jan 27 18:59:22 crc kubenswrapper[4853]: I0127 18:59:22.415385 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7922b820-92c1-46d8-a5a0-6f58e05674b5-ovsdbserver-nb\") pod \"dnsmasq-dns-74f6bcbc87-h8qfz\" (UID: \"7922b820-92c1-46d8-a5a0-6f58e05674b5\") " pod="openstack/dnsmasq-dns-74f6bcbc87-h8qfz" Jan 27 18:59:22 crc kubenswrapper[4853]: I0127 18:59:22.415431 4853 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-m8k4d\" (UniqueName: \"kubernetes.io/projected/7922b820-92c1-46d8-a5a0-6f58e05674b5-kube-api-access-m8k4d\") pod \"dnsmasq-dns-74f6bcbc87-h8qfz\" (UID: \"7922b820-92c1-46d8-a5a0-6f58e05674b5\") " pod="openstack/dnsmasq-dns-74f6bcbc87-h8qfz" Jan 27 18:59:22 crc kubenswrapper[4853]: I0127 18:59:22.415503 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7922b820-92c1-46d8-a5a0-6f58e05674b5-dns-swift-storage-0\") pod \"dnsmasq-dns-74f6bcbc87-h8qfz\" (UID: \"7922b820-92c1-46d8-a5a0-6f58e05674b5\") " pod="openstack/dnsmasq-dns-74f6bcbc87-h8qfz" Jan 27 18:59:22 crc kubenswrapper[4853]: I0127 18:59:22.415525 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7922b820-92c1-46d8-a5a0-6f58e05674b5-ovsdbserver-sb\") pod \"dnsmasq-dns-74f6bcbc87-h8qfz\" (UID: \"7922b820-92c1-46d8-a5a0-6f58e05674b5\") " pod="openstack/dnsmasq-dns-74f6bcbc87-h8qfz" Jan 27 18:59:22 crc kubenswrapper[4853]: I0127 18:59:22.415590 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7922b820-92c1-46d8-a5a0-6f58e05674b5-dns-svc\") pod \"dnsmasq-dns-74f6bcbc87-h8qfz\" (UID: \"7922b820-92c1-46d8-a5a0-6f58e05674b5\") " pod="openstack/dnsmasq-dns-74f6bcbc87-h8qfz" Jan 27 18:59:22 crc kubenswrapper[4853]: I0127 18:59:22.416728 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7922b820-92c1-46d8-a5a0-6f58e05674b5-dns-svc\") pod \"dnsmasq-dns-74f6bcbc87-h8qfz\" (UID: \"7922b820-92c1-46d8-a5a0-6f58e05674b5\") " pod="openstack/dnsmasq-dns-74f6bcbc87-h8qfz" Jan 27 18:59:22 crc kubenswrapper[4853]: I0127 18:59:22.417415 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7922b820-92c1-46d8-a5a0-6f58e05674b5-config\") pod \"dnsmasq-dns-74f6bcbc87-h8qfz\" (UID: \"7922b820-92c1-46d8-a5a0-6f58e05674b5\") " pod="openstack/dnsmasq-dns-74f6bcbc87-h8qfz" Jan 27 18:59:22 crc kubenswrapper[4853]: I0127 18:59:22.417977 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7922b820-92c1-46d8-a5a0-6f58e05674b5-ovsdbserver-nb\") pod \"dnsmasq-dns-74f6bcbc87-h8qfz\" (UID: \"7922b820-92c1-46d8-a5a0-6f58e05674b5\") " pod="openstack/dnsmasq-dns-74f6bcbc87-h8qfz" Jan 27 18:59:22 crc kubenswrapper[4853]: I0127 18:59:22.418265 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7922b820-92c1-46d8-a5a0-6f58e05674b5-ovsdbserver-sb\") pod \"dnsmasq-dns-74f6bcbc87-h8qfz\" (UID: \"7922b820-92c1-46d8-a5a0-6f58e05674b5\") " pod="openstack/dnsmasq-dns-74f6bcbc87-h8qfz" Jan 27 18:59:22 crc kubenswrapper[4853]: I0127 18:59:22.418283 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7922b820-92c1-46d8-a5a0-6f58e05674b5-dns-swift-storage-0\") pod \"dnsmasq-dns-74f6bcbc87-h8qfz\" (UID: \"7922b820-92c1-46d8-a5a0-6f58e05674b5\") " pod="openstack/dnsmasq-dns-74f6bcbc87-h8qfz" Jan 27 18:59:22 crc kubenswrapper[4853]: I0127 18:59:22.441543 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m8k4d\" (UniqueName: 
\"kubernetes.io/projected/7922b820-92c1-46d8-a5a0-6f58e05674b5-kube-api-access-m8k4d\") pod \"dnsmasq-dns-74f6bcbc87-h8qfz\" (UID: \"7922b820-92c1-46d8-a5a0-6f58e05674b5\") " pod="openstack/dnsmasq-dns-74f6bcbc87-h8qfz" Jan 27 18:59:22 crc kubenswrapper[4853]: I0127 18:59:22.571282 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-74f6bcbc87-h8qfz" Jan 27 18:59:22 crc kubenswrapper[4853]: I0127 18:59:22.737652 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-764c5664d7-4cnb5" Jan 27 18:59:22 crc kubenswrapper[4853]: I0127 18:59:22.767076 4853 generic.go:334] "Generic (PLEG): container finished" podID="11a54861-b4fe-494f-92cf-4144b7decdde" containerID="d47c4325301d6fa9f879d39577ae782d6928bebc066eea3a2d3bc24e104af167" exitCode=0 Jan 27 18:59:22 crc kubenswrapper[4853]: I0127 18:59:22.767171 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-764c5664d7-4cnb5" event={"ID":"11a54861-b4fe-494f-92cf-4144b7decdde","Type":"ContainerDied","Data":"d47c4325301d6fa9f879d39577ae782d6928bebc066eea3a2d3bc24e104af167"} Jan 27 18:59:22 crc kubenswrapper[4853]: I0127 18:59:22.767207 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-764c5664d7-4cnb5" event={"ID":"11a54861-b4fe-494f-92cf-4144b7decdde","Type":"ContainerDied","Data":"10a3dc1c516d6d2d746e71b0d28bd6e880139a3e7316dc2defe545d350f6780b"} Jan 27 18:59:22 crc kubenswrapper[4853]: I0127 18:59:22.767228 4853 scope.go:117] "RemoveContainer" containerID="d47c4325301d6fa9f879d39577ae782d6928bebc066eea3a2d3bc24e104af167" Jan 27 18:59:22 crc kubenswrapper[4853]: I0127 18:59:22.767376 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-764c5664d7-4cnb5" Jan 27 18:59:22 crc kubenswrapper[4853]: I0127 18:59:22.831835 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/11a54861-b4fe-494f-92cf-4144b7decdde-ovsdbserver-sb\") pod \"11a54861-b4fe-494f-92cf-4144b7decdde\" (UID: \"11a54861-b4fe-494f-92cf-4144b7decdde\") " Jan 27 18:59:22 crc kubenswrapper[4853]: I0127 18:59:22.831943 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/11a54861-b4fe-494f-92cf-4144b7decdde-dns-swift-storage-0\") pod \"11a54861-b4fe-494f-92cf-4144b7decdde\" (UID: \"11a54861-b4fe-494f-92cf-4144b7decdde\") " Jan 27 18:59:22 crc kubenswrapper[4853]: I0127 18:59:22.832031 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/11a54861-b4fe-494f-92cf-4144b7decdde-config\") pod \"11a54861-b4fe-494f-92cf-4144b7decdde\" (UID: \"11a54861-b4fe-494f-92cf-4144b7decdde\") " Jan 27 18:59:22 crc kubenswrapper[4853]: I0127 18:59:22.832100 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2pwzx\" (UniqueName: \"kubernetes.io/projected/11a54861-b4fe-494f-92cf-4144b7decdde-kube-api-access-2pwzx\") pod \"11a54861-b4fe-494f-92cf-4144b7decdde\" (UID: \"11a54861-b4fe-494f-92cf-4144b7decdde\") " Jan 27 18:59:22 crc kubenswrapper[4853]: I0127 18:59:22.832134 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/11a54861-b4fe-494f-92cf-4144b7decdde-dns-svc\") pod 
\"11a54861-b4fe-494f-92cf-4144b7decdde\" (UID: \"11a54861-b4fe-494f-92cf-4144b7decdde\") " Jan 27 18:59:22 crc kubenswrapper[4853]: I0127 18:59:22.832152 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/11a54861-b4fe-494f-92cf-4144b7decdde-ovsdbserver-nb\") pod \"11a54861-b4fe-494f-92cf-4144b7decdde\" (UID: \"11a54861-b4fe-494f-92cf-4144b7decdde\") " Jan 27 18:59:22 crc kubenswrapper[4853]: I0127 18:59:22.837009 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/11a54861-b4fe-494f-92cf-4144b7decdde-kube-api-access-2pwzx" (OuterVolumeSpecName: "kube-api-access-2pwzx") pod "11a54861-b4fe-494f-92cf-4144b7decdde" (UID: "11a54861-b4fe-494f-92cf-4144b7decdde"). InnerVolumeSpecName "kube-api-access-2pwzx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:59:22 crc kubenswrapper[4853]: I0127 18:59:22.875186 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/11a54861-b4fe-494f-92cf-4144b7decdde-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "11a54861-b4fe-494f-92cf-4144b7decdde" (UID: "11a54861-b4fe-494f-92cf-4144b7decdde"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:59:22 crc kubenswrapper[4853]: I0127 18:59:22.876651 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/11a54861-b4fe-494f-92cf-4144b7decdde-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "11a54861-b4fe-494f-92cf-4144b7decdde" (UID: "11a54861-b4fe-494f-92cf-4144b7decdde"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:59:22 crc kubenswrapper[4853]: I0127 18:59:22.886267 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/11a54861-b4fe-494f-92cf-4144b7decdde-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "11a54861-b4fe-494f-92cf-4144b7decdde" (UID: "11a54861-b4fe-494f-92cf-4144b7decdde"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:59:22 crc kubenswrapper[4853]: I0127 18:59:22.889247 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/11a54861-b4fe-494f-92cf-4144b7decdde-config" (OuterVolumeSpecName: "config") pod "11a54861-b4fe-494f-92cf-4144b7decdde" (UID: "11a54861-b4fe-494f-92cf-4144b7decdde"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:59:22 crc kubenswrapper[4853]: I0127 18:59:22.897690 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/11a54861-b4fe-494f-92cf-4144b7decdde-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "11a54861-b4fe-494f-92cf-4144b7decdde" (UID: "11a54861-b4fe-494f-92cf-4144b7decdde"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:59:22 crc kubenswrapper[4853]: I0127 18:59:22.934510 4853 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/11a54861-b4fe-494f-92cf-4144b7decdde-config\") on node \"crc\" DevicePath \"\"" Jan 27 18:59:22 crc kubenswrapper[4853]: I0127 18:59:22.934557 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2pwzx\" (UniqueName: \"kubernetes.io/projected/11a54861-b4fe-494f-92cf-4144b7decdde-kube-api-access-2pwzx\") on node \"crc\" DevicePath \"\"" Jan 27 18:59:22 crc kubenswrapper[4853]: I0127 18:59:22.934579 4853 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/11a54861-b4fe-494f-92cf-4144b7decdde-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 27 18:59:22 crc kubenswrapper[4853]: I0127 18:59:22.934592 4853 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/11a54861-b4fe-494f-92cf-4144b7decdde-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 27 18:59:22 crc kubenswrapper[4853]: I0127 18:59:22.934603 4853 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/11a54861-b4fe-494f-92cf-4144b7decdde-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 27 18:59:22 crc kubenswrapper[4853]: I0127 18:59:22.934615 4853 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/11a54861-b4fe-494f-92cf-4144b7decdde-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 27 18:59:23 crc kubenswrapper[4853]: I0127 18:59:23.061665 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-74f6bcbc87-h8qfz"] Jan 27 18:59:23 crc kubenswrapper[4853]: I0127 18:59:23.097218 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-764c5664d7-4cnb5"] Jan 27 18:59:23 crc kubenswrapper[4853]: I0127 18:59:23.104898 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-764c5664d7-4cnb5"] Jan 27 18:59:23 crc kubenswrapper[4853]: I0127 18:59:23.998161 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Jan 27 18:59:24 crc kubenswrapper[4853]: I0127 18:59:24.125435 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="11a54861-b4fe-494f-92cf-4144b7decdde" path="/var/lib/kubelet/pods/11a54861-b4fe-494f-92cf-4144b7decdde/volumes" Jan 27 18:59:24 crc kubenswrapper[4853]: I0127 18:59:24.346171 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-create-tnf7r"] Jan 27 18:59:24 crc kubenswrapper[4853]: E0127 18:59:24.346610 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="11a54861-b4fe-494f-92cf-4144b7decdde" containerName="dnsmasq-dns" Jan 27 18:59:24 crc kubenswrapper[4853]: I0127 18:59:24.346631 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="11a54861-b4fe-494f-92cf-4144b7decdde" containerName="dnsmasq-dns" Jan 27 18:59:24 crc kubenswrapper[4853]: E0127 18:59:24.346651 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="11a54861-b4fe-494f-92cf-4144b7decdde" containerName="init" Jan 27 18:59:24 crc kubenswrapper[4853]: I0127 18:59:24.346660 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="11a54861-b4fe-494f-92cf-4144b7decdde" containerName="init" Jan 27 18:59:24 crc kubenswrapper[4853]: I0127 18:59:24.346868 
Jan 27 18:59:24 crc kubenswrapper[4853]: I0127 18:59:24.346868 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="11a54861-b4fe-494f-92cf-4144b7decdde" containerName="dnsmasq-dns"
Jan 27 18:59:24 crc kubenswrapper[4853]: I0127 18:59:24.347538 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-tnf7r"
Jan 27 18:59:24 crc kubenswrapper[4853]: I0127 18:59:24.356663 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-tnf7r"]
Jan 27 18:59:24 crc kubenswrapper[4853]: I0127 18:59:24.439593 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-create-9z979"]
Jan 27 18:59:24 crc kubenswrapper[4853]: I0127 18:59:24.440644 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-9z979"
Jan 27 18:59:24 crc kubenswrapper[4853]: I0127 18:59:24.457451 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ed915364-aeae-47d1-9cc2-b9e14ce361e7-operator-scripts\") pod \"cinder-db-create-tnf7r\" (UID: \"ed915364-aeae-47d1-9cc2-b9e14ce361e7\") " pod="openstack/cinder-db-create-tnf7r"
Jan 27 18:59:24 crc kubenswrapper[4853]: I0127 18:59:24.457526 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kthft\" (UniqueName: \"kubernetes.io/projected/ed915364-aeae-47d1-9cc2-b9e14ce361e7-kube-api-access-kthft\") pod \"cinder-db-create-tnf7r\" (UID: \"ed915364-aeae-47d1-9cc2-b9e14ce361e7\") " pod="openstack/cinder-db-create-tnf7r"
Jan 27 18:59:24 crc kubenswrapper[4853]: I0127 18:59:24.486792 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-9z979"]
Jan 27 18:59:24 crc kubenswrapper[4853]: I0127 18:59:24.556274 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-219d-account-create-update-5qm8g"]
Jan 27 18:59:24 crc kubenswrapper[4853]: I0127 18:59:24.557405 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-219d-account-create-update-5qm8g"
Jan 27 18:59:24 crc kubenswrapper[4853]: I0127 18:59:24.558543 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ce113b00-48db-4a7d-b15a-fafe0a932311-operator-scripts\") pod \"barbican-db-create-9z979\" (UID: \"ce113b00-48db-4a7d-b15a-fafe0a932311\") " pod="openstack/barbican-db-create-9z979"
Jan 27 18:59:24 crc kubenswrapper[4853]: I0127 18:59:24.558630 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ed915364-aeae-47d1-9cc2-b9e14ce361e7-operator-scripts\") pod \"cinder-db-create-tnf7r\" (UID: \"ed915364-aeae-47d1-9cc2-b9e14ce361e7\") " pod="openstack/cinder-db-create-tnf7r"
Jan 27 18:59:24 crc kubenswrapper[4853]: I0127 18:59:24.558813 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kthft\" (UniqueName: \"kubernetes.io/projected/ed915364-aeae-47d1-9cc2-b9e14ce361e7-kube-api-access-kthft\") pod \"cinder-db-create-tnf7r\" (UID: \"ed915364-aeae-47d1-9cc2-b9e14ce361e7\") " pod="openstack/cinder-db-create-tnf7r"
Jan 27 18:59:24 crc kubenswrapper[4853]: I0127 18:59:24.558921 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qrsft\" (UniqueName: \"kubernetes.io/projected/ce113b00-48db-4a7d-b15a-fafe0a932311-kube-api-access-qrsft\") pod \"barbican-db-create-9z979\" (UID: \"ce113b00-48db-4a7d-b15a-fafe0a932311\") " pod="openstack/barbican-db-create-9z979"
Jan 27 18:59:24 crc kubenswrapper[4853]: I0127 18:59:24.559242 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret"
Jan 27 18:59:24 crc kubenswrapper[4853]: I0127 18:59:24.559403 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ed915364-aeae-47d1-9cc2-b9e14ce361e7-operator-scripts\") pod \"cinder-db-create-tnf7r\" (UID: \"ed915364-aeae-47d1-9cc2-b9e14ce361e7\") " pod="openstack/cinder-db-create-tnf7r"
Jan 27 18:59:24 crc kubenswrapper[4853]: I0127 18:59:24.573348 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-219d-account-create-update-5qm8g"]
Jan 27 18:59:24 crc kubenswrapper[4853]: I0127 18:59:24.601281 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kthft\" (UniqueName: \"kubernetes.io/projected/ed915364-aeae-47d1-9cc2-b9e14ce361e7-kube-api-access-kthft\") pod \"cinder-db-create-tnf7r\" (UID: \"ed915364-aeae-47d1-9cc2-b9e14ce361e7\") " pod="openstack/cinder-db-create-tnf7r"
Jan 27 18:59:24 crc kubenswrapper[4853]: I0127 18:59:24.630510 4853 scope.go:117] "RemoveContainer" containerID="c21e9a0e83694ce373e250d179a45a2d563aa456d1ad938e4ecc12f4bcca5157"
Need to start a new one" pod="openstack/cinder-db-create-tnf7r" Jan 27 18:59:24 crc kubenswrapper[4853]: I0127 18:59:24.666253 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qrsft\" (UniqueName: \"kubernetes.io/projected/ce113b00-48db-4a7d-b15a-fafe0a932311-kube-api-access-qrsft\") pod \"barbican-db-create-9z979\" (UID: \"ce113b00-48db-4a7d-b15a-fafe0a932311\") " pod="openstack/barbican-db-create-9z979" Jan 27 18:59:24 crc kubenswrapper[4853]: I0127 18:59:24.666309 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bqtcs\" (UniqueName: \"kubernetes.io/projected/15035d73-a434-4ff6-9ec0-ecdf17c78d5d-kube-api-access-bqtcs\") pod \"cinder-219d-account-create-update-5qm8g\" (UID: \"15035d73-a434-4ff6-9ec0-ecdf17c78d5d\") " pod="openstack/cinder-219d-account-create-update-5qm8g" Jan 27 18:59:24 crc kubenswrapper[4853]: I0127 18:59:24.666346 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/15035d73-a434-4ff6-9ec0-ecdf17c78d5d-operator-scripts\") pod \"cinder-219d-account-create-update-5qm8g\" (UID: \"15035d73-a434-4ff6-9ec0-ecdf17c78d5d\") " pod="openstack/cinder-219d-account-create-update-5qm8g" Jan 27 18:59:24 crc kubenswrapper[4853]: I0127 18:59:24.666389 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ce113b00-48db-4a7d-b15a-fafe0a932311-operator-scripts\") pod \"barbican-db-create-9z979\" (UID: \"ce113b00-48db-4a7d-b15a-fafe0a932311\") " pod="openstack/barbican-db-create-9z979" Jan 27 18:59:24 crc kubenswrapper[4853]: I0127 18:59:24.667830 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ce113b00-48db-4a7d-b15a-fafe0a932311-operator-scripts\") pod \"barbican-db-create-9z979\" (UID: \"ce113b00-48db-4a7d-b15a-fafe0a932311\") " pod="openstack/barbican-db-create-9z979" Jan 27 18:59:24 crc kubenswrapper[4853]: I0127 18:59:24.677251 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-sync-2pwl2"] Jan 27 18:59:24 crc kubenswrapper[4853]: I0127 18:59:24.687247 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-2pwl2" Jan 27 18:59:24 crc kubenswrapper[4853]: I0127 18:59:24.700563 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qrsft\" (UniqueName: \"kubernetes.io/projected/ce113b00-48db-4a7d-b15a-fafe0a932311-kube-api-access-qrsft\") pod \"barbican-db-create-9z979\" (UID: \"ce113b00-48db-4a7d-b15a-fafe0a932311\") " pod="openstack/barbican-db-create-9z979" Jan 27 18:59:24 crc kubenswrapper[4853]: I0127 18:59:24.701042 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-kgs4m" Jan 27 18:59:24 crc kubenswrapper[4853]: I0127 18:59:24.705280 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-2pwl2"] Jan 27 18:59:24 crc kubenswrapper[4853]: I0127 18:59:24.706689 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 27 18:59:24 crc kubenswrapper[4853]: I0127 18:59:24.706890 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 27 18:59:24 crc kubenswrapper[4853]: I0127 18:59:24.707027 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 27 18:59:24 crc kubenswrapper[4853]: I0127 18:59:24.708600 4853 scope.go:117] "RemoveContainer" containerID="d47c4325301d6fa9f879d39577ae782d6928bebc066eea3a2d3bc24e104af167" Jan 27 18:59:24 crc kubenswrapper[4853]: E0127 18:59:24.709336 4853 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d47c4325301d6fa9f879d39577ae782d6928bebc066eea3a2d3bc24e104af167\": container with ID starting with d47c4325301d6fa9f879d39577ae782d6928bebc066eea3a2d3bc24e104af167 not found: ID does not exist" containerID="d47c4325301d6fa9f879d39577ae782d6928bebc066eea3a2d3bc24e104af167" Jan 27 18:59:24 crc kubenswrapper[4853]: I0127 18:59:24.709403 4853 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d47c4325301d6fa9f879d39577ae782d6928bebc066eea3a2d3bc24e104af167"} err="failed to get container status \"d47c4325301d6fa9f879d39577ae782d6928bebc066eea3a2d3bc24e104af167\": rpc error: code = NotFound desc = could not find container \"d47c4325301d6fa9f879d39577ae782d6928bebc066eea3a2d3bc24e104af167\": container with ID starting with d47c4325301d6fa9f879d39577ae782d6928bebc066eea3a2d3bc24e104af167 not found: ID does not exist" Jan 27 18:59:24 crc kubenswrapper[4853]: I0127 18:59:24.709490 4853 scope.go:117] "RemoveContainer" containerID="c21e9a0e83694ce373e250d179a45a2d563aa456d1ad938e4ecc12f4bcca5157" Jan 27 18:59:24 crc kubenswrapper[4853]: E0127 18:59:24.709934 4853 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c21e9a0e83694ce373e250d179a45a2d563aa456d1ad938e4ecc12f4bcca5157\": container with ID starting with c21e9a0e83694ce373e250d179a45a2d563aa456d1ad938e4ecc12f4bcca5157 not found: ID does not exist" containerID="c21e9a0e83694ce373e250d179a45a2d563aa456d1ad938e4ecc12f4bcca5157" Jan 27 18:59:24 crc kubenswrapper[4853]: I0127 18:59:24.709989 4853 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c21e9a0e83694ce373e250d179a45a2d563aa456d1ad938e4ecc12f4bcca5157"} err="failed to get container status \"c21e9a0e83694ce373e250d179a45a2d563aa456d1ad938e4ecc12f4bcca5157\": rpc error: code = NotFound desc = could not find container 
\"c21e9a0e83694ce373e250d179a45a2d563aa456d1ad938e4ecc12f4bcca5157\": container with ID starting with c21e9a0e83694ce373e250d179a45a2d563aa456d1ad938e4ecc12f4bcca5157 not found: ID does not exist" Jan 27 18:59:24 crc kubenswrapper[4853]: I0127 18:59:24.716888 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-create-74qwr"] Jan 27 18:59:24 crc kubenswrapper[4853]: I0127 18:59:24.718085 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-74qwr" Jan 27 18:59:24 crc kubenswrapper[4853]: I0127 18:59:24.731847 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-3de5-account-create-update-dnzb9"] Jan 27 18:59:24 crc kubenswrapper[4853]: I0127 18:59:24.733833 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-3de5-account-create-update-dnzb9" Jan 27 18:59:24 crc kubenswrapper[4853]: I0127 18:59:24.739833 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret" Jan 27 18:59:24 crc kubenswrapper[4853]: I0127 18:59:24.756618 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-3de5-account-create-update-dnzb9"] Jan 27 18:59:24 crc kubenswrapper[4853]: I0127 18:59:24.768015 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ca0f2528-c8dc-4170-b594-94945759de99-operator-scripts\") pod \"neutron-db-create-74qwr\" (UID: \"ca0f2528-c8dc-4170-b594-94945759de99\") " pod="openstack/neutron-db-create-74qwr" Jan 27 18:59:24 crc kubenswrapper[4853]: I0127 18:59:24.768062 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lnl7x\" (UniqueName: \"kubernetes.io/projected/ca0f2528-c8dc-4170-b594-94945759de99-kube-api-access-lnl7x\") pod \"neutron-db-create-74qwr\" (UID: \"ca0f2528-c8dc-4170-b594-94945759de99\") " pod="openstack/neutron-db-create-74qwr" Jan 27 18:59:24 crc kubenswrapper[4853]: I0127 18:59:24.768093 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0d107577-39f6-4463-80b0-374fc14e89e8-combined-ca-bundle\") pod \"keystone-db-sync-2pwl2\" (UID: \"0d107577-39f6-4463-80b0-374fc14e89e8\") " pod="openstack/keystone-db-sync-2pwl2" Jan 27 18:59:24 crc kubenswrapper[4853]: I0127 18:59:24.768155 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bqtcs\" (UniqueName: \"kubernetes.io/projected/15035d73-a434-4ff6-9ec0-ecdf17c78d5d-kube-api-access-bqtcs\") pod \"cinder-219d-account-create-update-5qm8g\" (UID: \"15035d73-a434-4ff6-9ec0-ecdf17c78d5d\") " pod="openstack/cinder-219d-account-create-update-5qm8g" Jan 27 18:59:24 crc kubenswrapper[4853]: I0127 18:59:24.768195 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/15035d73-a434-4ff6-9ec0-ecdf17c78d5d-operator-scripts\") pod \"cinder-219d-account-create-update-5qm8g\" (UID: \"15035d73-a434-4ff6-9ec0-ecdf17c78d5d\") " pod="openstack/cinder-219d-account-create-update-5qm8g" Jan 27 18:59:24 crc kubenswrapper[4853]: I0127 18:59:24.768213 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gl8h2\" (UniqueName: 
\"kubernetes.io/projected/0d107577-39f6-4463-80b0-374fc14e89e8-kube-api-access-gl8h2\") pod \"keystone-db-sync-2pwl2\" (UID: \"0d107577-39f6-4463-80b0-374fc14e89e8\") " pod="openstack/keystone-db-sync-2pwl2" Jan 27 18:59:24 crc kubenswrapper[4853]: I0127 18:59:24.768252 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0d107577-39f6-4463-80b0-374fc14e89e8-config-data\") pod \"keystone-db-sync-2pwl2\" (UID: \"0d107577-39f6-4463-80b0-374fc14e89e8\") " pod="openstack/keystone-db-sync-2pwl2" Jan 27 18:59:24 crc kubenswrapper[4853]: I0127 18:59:24.769314 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/15035d73-a434-4ff6-9ec0-ecdf17c78d5d-operator-scripts\") pod \"cinder-219d-account-create-update-5qm8g\" (UID: \"15035d73-a434-4ff6-9ec0-ecdf17c78d5d\") " pod="openstack/cinder-219d-account-create-update-5qm8g" Jan 27 18:59:24 crc kubenswrapper[4853]: I0127 18:59:24.776938 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-74qwr"] Jan 27 18:59:24 crc kubenswrapper[4853]: I0127 18:59:24.792010 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-9z979" Jan 27 18:59:24 crc kubenswrapper[4853]: I0127 18:59:24.795523 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bqtcs\" (UniqueName: \"kubernetes.io/projected/15035d73-a434-4ff6-9ec0-ecdf17c78d5d-kube-api-access-bqtcs\") pod \"cinder-219d-account-create-update-5qm8g\" (UID: \"15035d73-a434-4ff6-9ec0-ecdf17c78d5d\") " pod="openstack/cinder-219d-account-create-update-5qm8g" Jan 27 18:59:24 crc kubenswrapper[4853]: I0127 18:59:24.800737 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74f6bcbc87-h8qfz" event={"ID":"7922b820-92c1-46d8-a5a0-6f58e05674b5","Type":"ContainerStarted","Data":"3daa75f666886a5113d69dcf3f3874071d73397973416e61f132d0a8ad84ef17"} Jan 27 18:59:24 crc kubenswrapper[4853]: I0127 18:59:24.870040 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0d107577-39f6-4463-80b0-374fc14e89e8-combined-ca-bundle\") pod \"keystone-db-sync-2pwl2\" (UID: \"0d107577-39f6-4463-80b0-374fc14e89e8\") " pod="openstack/keystone-db-sync-2pwl2" Jan 27 18:59:24 crc kubenswrapper[4853]: I0127 18:59:24.870110 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zsvn9\" (UniqueName: \"kubernetes.io/projected/8f709328-85ea-42f5-8d5b-d302b907bee3-kube-api-access-zsvn9\") pod \"barbican-3de5-account-create-update-dnzb9\" (UID: \"8f709328-85ea-42f5-8d5b-d302b907bee3\") " pod="openstack/barbican-3de5-account-create-update-dnzb9" Jan 27 18:59:24 crc kubenswrapper[4853]: I0127 18:59:24.870202 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gl8h2\" (UniqueName: \"kubernetes.io/projected/0d107577-39f6-4463-80b0-374fc14e89e8-kube-api-access-gl8h2\") pod \"keystone-db-sync-2pwl2\" (UID: \"0d107577-39f6-4463-80b0-374fc14e89e8\") " pod="openstack/keystone-db-sync-2pwl2" Jan 27 18:59:24 crc kubenswrapper[4853]: I0127 18:59:24.870220 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/8f709328-85ea-42f5-8d5b-d302b907bee3-operator-scripts\") pod \"barbican-3de5-account-create-update-dnzb9\" (UID: \"8f709328-85ea-42f5-8d5b-d302b907bee3\") " pod="openstack/barbican-3de5-account-create-update-dnzb9" Jan 27 18:59:24 crc kubenswrapper[4853]: I0127 18:59:24.870264 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0d107577-39f6-4463-80b0-374fc14e89e8-config-data\") pod \"keystone-db-sync-2pwl2\" (UID: \"0d107577-39f6-4463-80b0-374fc14e89e8\") " pod="openstack/keystone-db-sync-2pwl2" Jan 27 18:59:24 crc kubenswrapper[4853]: I0127 18:59:24.870305 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ca0f2528-c8dc-4170-b594-94945759de99-operator-scripts\") pod \"neutron-db-create-74qwr\" (UID: \"ca0f2528-c8dc-4170-b594-94945759de99\") " pod="openstack/neutron-db-create-74qwr" Jan 27 18:59:24 crc kubenswrapper[4853]: I0127 18:59:24.870323 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lnl7x\" (UniqueName: \"kubernetes.io/projected/ca0f2528-c8dc-4170-b594-94945759de99-kube-api-access-lnl7x\") pod \"neutron-db-create-74qwr\" (UID: \"ca0f2528-c8dc-4170-b594-94945759de99\") " pod="openstack/neutron-db-create-74qwr" Jan 27 18:59:24 crc kubenswrapper[4853]: I0127 18:59:24.871653 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ca0f2528-c8dc-4170-b594-94945759de99-operator-scripts\") pod \"neutron-db-create-74qwr\" (UID: \"ca0f2528-c8dc-4170-b594-94945759de99\") " pod="openstack/neutron-db-create-74qwr" Jan 27 18:59:24 crc kubenswrapper[4853]: I0127 18:59:24.872030 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-219d-account-create-update-5qm8g" Jan 27 18:59:24 crc kubenswrapper[4853]: I0127 18:59:24.875717 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0d107577-39f6-4463-80b0-374fc14e89e8-config-data\") pod \"keystone-db-sync-2pwl2\" (UID: \"0d107577-39f6-4463-80b0-374fc14e89e8\") " pod="openstack/keystone-db-sync-2pwl2" Jan 27 18:59:24 crc kubenswrapper[4853]: I0127 18:59:24.877147 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0d107577-39f6-4463-80b0-374fc14e89e8-combined-ca-bundle\") pod \"keystone-db-sync-2pwl2\" (UID: \"0d107577-39f6-4463-80b0-374fc14e89e8\") " pod="openstack/keystone-db-sync-2pwl2" Jan 27 18:59:24 crc kubenswrapper[4853]: I0127 18:59:24.893704 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gl8h2\" (UniqueName: \"kubernetes.io/projected/0d107577-39f6-4463-80b0-374fc14e89e8-kube-api-access-gl8h2\") pod \"keystone-db-sync-2pwl2\" (UID: \"0d107577-39f6-4463-80b0-374fc14e89e8\") " pod="openstack/keystone-db-sync-2pwl2" Jan 27 18:59:24 crc kubenswrapper[4853]: I0127 18:59:24.897517 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lnl7x\" (UniqueName: \"kubernetes.io/projected/ca0f2528-c8dc-4170-b594-94945759de99-kube-api-access-lnl7x\") pod \"neutron-db-create-74qwr\" (UID: \"ca0f2528-c8dc-4170-b594-94945759de99\") " pod="openstack/neutron-db-create-74qwr" Jan 27 18:59:24 crc kubenswrapper[4853]: I0127 18:59:24.960228 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-8c34-account-create-update-mrws4"] Jan 27 18:59:24 crc kubenswrapper[4853]: I0127 18:59:24.964323 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-8c34-account-create-update-mrws4" Jan 27 18:59:24 crc kubenswrapper[4853]: I0127 18:59:24.973650 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret" Jan 27 18:59:24 crc kubenswrapper[4853]: I0127 18:59:24.974335 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zsvn9\" (UniqueName: \"kubernetes.io/projected/8f709328-85ea-42f5-8d5b-d302b907bee3-kube-api-access-zsvn9\") pod \"barbican-3de5-account-create-update-dnzb9\" (UID: \"8f709328-85ea-42f5-8d5b-d302b907bee3\") " pod="openstack/barbican-3de5-account-create-update-dnzb9" Jan 27 18:59:24 crc kubenswrapper[4853]: I0127 18:59:24.974432 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8f709328-85ea-42f5-8d5b-d302b907bee3-operator-scripts\") pod \"barbican-3de5-account-create-update-dnzb9\" (UID: \"8f709328-85ea-42f5-8d5b-d302b907bee3\") " pod="openstack/barbican-3de5-account-create-update-dnzb9" Jan 27 18:59:24 crc kubenswrapper[4853]: I0127 18:59:24.975528 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8f709328-85ea-42f5-8d5b-d302b907bee3-operator-scripts\") pod \"barbican-3de5-account-create-update-dnzb9\" (UID: \"8f709328-85ea-42f5-8d5b-d302b907bee3\") " pod="openstack/barbican-3de5-account-create-update-dnzb9" Jan 27 18:59:24 crc kubenswrapper[4853]: I0127 18:59:24.988132 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-8c34-account-create-update-mrws4"] Jan 27 18:59:25 crc kubenswrapper[4853]: I0127 18:59:25.018023 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zsvn9\" (UniqueName: \"kubernetes.io/projected/8f709328-85ea-42f5-8d5b-d302b907bee3-kube-api-access-zsvn9\") pod \"barbican-3de5-account-create-update-dnzb9\" (UID: \"8f709328-85ea-42f5-8d5b-d302b907bee3\") " pod="openstack/barbican-3de5-account-create-update-dnzb9" Jan 27 18:59:25 crc kubenswrapper[4853]: I0127 18:59:25.076055 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-556ss\" (UniqueName: \"kubernetes.io/projected/3d31d088-3d59-4f93-bd69-5c656de66500-kube-api-access-556ss\") pod \"neutron-8c34-account-create-update-mrws4\" (UID: \"3d31d088-3d59-4f93-bd69-5c656de66500\") " pod="openstack/neutron-8c34-account-create-update-mrws4" Jan 27 18:59:25 crc kubenswrapper[4853]: I0127 18:59:25.076222 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3d31d088-3d59-4f93-bd69-5c656de66500-operator-scripts\") pod \"neutron-8c34-account-create-update-mrws4\" (UID: \"3d31d088-3d59-4f93-bd69-5c656de66500\") " pod="openstack/neutron-8c34-account-create-update-mrws4" Jan 27 18:59:25 crc kubenswrapper[4853]: I0127 18:59:25.101618 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-2pwl2" Jan 27 18:59:25 crc kubenswrapper[4853]: I0127 18:59:25.115215 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-74qwr" Jan 27 18:59:25 crc kubenswrapper[4853]: I0127 18:59:25.135430 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-3de5-account-create-update-dnzb9" Jan 27 18:59:25 crc kubenswrapper[4853]: I0127 18:59:25.177792 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3d31d088-3d59-4f93-bd69-5c656de66500-operator-scripts\") pod \"neutron-8c34-account-create-update-mrws4\" (UID: \"3d31d088-3d59-4f93-bd69-5c656de66500\") " pod="openstack/neutron-8c34-account-create-update-mrws4" Jan 27 18:59:25 crc kubenswrapper[4853]: I0127 18:59:25.177927 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-556ss\" (UniqueName: \"kubernetes.io/projected/3d31d088-3d59-4f93-bd69-5c656de66500-kube-api-access-556ss\") pod \"neutron-8c34-account-create-update-mrws4\" (UID: \"3d31d088-3d59-4f93-bd69-5c656de66500\") " pod="openstack/neutron-8c34-account-create-update-mrws4" Jan 27 18:59:25 crc kubenswrapper[4853]: I0127 18:59:25.178762 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3d31d088-3d59-4f93-bd69-5c656de66500-operator-scripts\") pod \"neutron-8c34-account-create-update-mrws4\" (UID: \"3d31d088-3d59-4f93-bd69-5c656de66500\") " pod="openstack/neutron-8c34-account-create-update-mrws4" Jan 27 18:59:25 crc kubenswrapper[4853]: I0127 18:59:25.212394 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-556ss\" (UniqueName: \"kubernetes.io/projected/3d31d088-3d59-4f93-bd69-5c656de66500-kube-api-access-556ss\") pod \"neutron-8c34-account-create-update-mrws4\" (UID: \"3d31d088-3d59-4f93-bd69-5c656de66500\") " pod="openstack/neutron-8c34-account-create-update-mrws4" Jan 27 18:59:25 crc kubenswrapper[4853]: I0127 18:59:25.292554 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-8c34-account-create-update-mrws4" Jan 27 18:59:25 crc kubenswrapper[4853]: I0127 18:59:25.395330 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-tnf7r"] Jan 27 18:59:25 crc kubenswrapper[4853]: I0127 18:59:25.401619 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-9z979"] Jan 27 18:59:25 crc kubenswrapper[4853]: I0127 18:59:25.419457 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-219d-account-create-update-5qm8g"] Jan 27 18:59:25 crc kubenswrapper[4853]: W0127 18:59:25.424239 4853 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podce113b00_48db_4a7d_b15a_fafe0a932311.slice/crio-9d4c50f038643b194c24e165e3da18b7763c4463b8ec70a1e9bfa864abfb640e WatchSource:0}: Error finding container 9d4c50f038643b194c24e165e3da18b7763c4463b8ec70a1e9bfa864abfb640e: Status 404 returned error can't find the container with id 9d4c50f038643b194c24e165e3da18b7763c4463b8ec70a1e9bfa864abfb640e Jan 27 18:59:25 crc kubenswrapper[4853]: I0127 18:59:25.626768 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-74qwr"] Jan 27 18:59:25 crc kubenswrapper[4853]: I0127 18:59:25.725252 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-2pwl2"] Jan 27 18:59:25 crc kubenswrapper[4853]: I0127 18:59:25.783189 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-8c34-account-create-update-mrws4"] Jan 27 18:59:25 crc kubenswrapper[4853]: I0127 18:59:25.812882 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-3de5-account-create-update-dnzb9"] Jan 27 18:59:25 crc kubenswrapper[4853]: I0127 18:59:25.846914 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-9z979" event={"ID":"ce113b00-48db-4a7d-b15a-fafe0a932311","Type":"ContainerStarted","Data":"9d4c50f038643b194c24e165e3da18b7763c4463b8ec70a1e9bfa864abfb640e"} Jan 27 18:59:25 crc kubenswrapper[4853]: I0127 18:59:25.849294 4853 generic.go:334] "Generic (PLEG): container finished" podID="7922b820-92c1-46d8-a5a0-6f58e05674b5" containerID="f663b0207f40dfac7950dc5d929690911d398ea74cc5358b75e1ec1e77b5c37b" exitCode=0 Jan 27 18:59:25 crc kubenswrapper[4853]: I0127 18:59:25.849340 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74f6bcbc87-h8qfz" event={"ID":"7922b820-92c1-46d8-a5a0-6f58e05674b5","Type":"ContainerDied","Data":"f663b0207f40dfac7950dc5d929690911d398ea74cc5358b75e1ec1e77b5c37b"} Jan 27 18:59:25 crc kubenswrapper[4853]: I0127 18:59:25.854848 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-tnf7r" event={"ID":"ed915364-aeae-47d1-9cc2-b9e14ce361e7","Type":"ContainerStarted","Data":"f064794107a6f837a966c497311dcc5f7dccf9701e2c817eb7a8078749e2d4e8"} Jan 27 18:59:25 crc kubenswrapper[4853]: I0127 18:59:25.857872 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-2pwl2" event={"ID":"0d107577-39f6-4463-80b0-374fc14e89e8","Type":"ContainerStarted","Data":"a87d639d2a23f35d258d052827b86c5a8af182cf5b71f118ae94a0bd3d77c3ec"} Jan 27 18:59:25 crc kubenswrapper[4853]: I0127 18:59:25.863889 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-74qwr" 
event={"ID":"ca0f2528-c8dc-4170-b594-94945759de99","Type":"ContainerStarted","Data":"db58cc8f0e80aab3ae0d3854cfd80cd055662fef283dc0542d2831c9d7bb5e0c"} Jan 27 18:59:25 crc kubenswrapper[4853]: I0127 18:59:25.867075 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-219d-account-create-update-5qm8g" event={"ID":"15035d73-a434-4ff6-9ec0-ecdf17c78d5d","Type":"ContainerStarted","Data":"324972faa9f884817eb56c41d9903a1f462a8f87b8c7bdc2c90efba6f4a3ca77"} Jan 27 18:59:25 crc kubenswrapper[4853]: I0127 18:59:25.867104 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-219d-account-create-update-5qm8g" event={"ID":"15035d73-a434-4ff6-9ec0-ecdf17c78d5d","Type":"ContainerStarted","Data":"8f3b2b91f900ab24bd61494b059a64d19ea1360b0586e83b38bd2a90f4251f7b"} Jan 27 18:59:25 crc kubenswrapper[4853]: I0127 18:59:25.894319 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-219d-account-create-update-5qm8g" podStartSLOduration=1.894298188 podStartE2EDuration="1.894298188s" podCreationTimestamp="2026-01-27 18:59:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:59:25.888051161 +0000 UTC m=+1008.350594044" watchObservedRunningTime="2026-01-27 18:59:25.894298188 +0000 UTC m=+1008.356841071" Jan 27 18:59:26 crc kubenswrapper[4853]: I0127 18:59:26.880595 4853 generic.go:334] "Generic (PLEG): container finished" podID="ce113b00-48db-4a7d-b15a-fafe0a932311" containerID="a61ddbb1693f2925428b27552e31b531d897249ecdc1762a39ad6bbc9f380d5d" exitCode=0 Jan 27 18:59:26 crc kubenswrapper[4853]: I0127 18:59:26.880683 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-9z979" event={"ID":"ce113b00-48db-4a7d-b15a-fafe0a932311","Type":"ContainerDied","Data":"a61ddbb1693f2925428b27552e31b531d897249ecdc1762a39ad6bbc9f380d5d"} Jan 27 18:59:26 crc kubenswrapper[4853]: I0127 18:59:26.886814 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74f6bcbc87-h8qfz" event={"ID":"7922b820-92c1-46d8-a5a0-6f58e05674b5","Type":"ContainerStarted","Data":"18efbdd72c9acab9bb286b14193bfa5e0ac7ffc5bc930f0495a2906166d13a1e"} Jan 27 18:59:26 crc kubenswrapper[4853]: I0127 18:59:26.886906 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-74f6bcbc87-h8qfz" Jan 27 18:59:26 crc kubenswrapper[4853]: I0127 18:59:26.888764 4853 generic.go:334] "Generic (PLEG): container finished" podID="ed915364-aeae-47d1-9cc2-b9e14ce361e7" containerID="7977e4339b32d6f810f532e9e707fe5b75af47a06d4dfa33f7dba4b304dc2ccb" exitCode=0 Jan 27 18:59:26 crc kubenswrapper[4853]: I0127 18:59:26.888913 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-tnf7r" event={"ID":"ed915364-aeae-47d1-9cc2-b9e14ce361e7","Type":"ContainerDied","Data":"7977e4339b32d6f810f532e9e707fe5b75af47a06d4dfa33f7dba4b304dc2ccb"} Jan 27 18:59:26 crc kubenswrapper[4853]: I0127 18:59:26.890602 4853 generic.go:334] "Generic (PLEG): container finished" podID="8f709328-85ea-42f5-8d5b-d302b907bee3" containerID="ab2dfdc4e62727303feef23135af719101525667d4c3d0f07869b9d505457de4" exitCode=0 Jan 27 18:59:26 crc kubenswrapper[4853]: I0127 18:59:26.890647 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-3de5-account-create-update-dnzb9" 
event={"ID":"8f709328-85ea-42f5-8d5b-d302b907bee3","Type":"ContainerDied","Data":"ab2dfdc4e62727303feef23135af719101525667d4c3d0f07869b9d505457de4"} Jan 27 18:59:26 crc kubenswrapper[4853]: I0127 18:59:26.890729 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-3de5-account-create-update-dnzb9" event={"ID":"8f709328-85ea-42f5-8d5b-d302b907bee3","Type":"ContainerStarted","Data":"c7518c546990d31bd1347792d868486333d4e87b7b93326326078bb2bf5210d1"} Jan 27 18:59:26 crc kubenswrapper[4853]: I0127 18:59:26.895095 4853 generic.go:334] "Generic (PLEG): container finished" podID="3d31d088-3d59-4f93-bd69-5c656de66500" containerID="e27540d77a4365a2d956743cf62b9896a8be95c214116ad9409a586b4f68e375" exitCode=0 Jan 27 18:59:26 crc kubenswrapper[4853]: I0127 18:59:26.895385 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-8c34-account-create-update-mrws4" event={"ID":"3d31d088-3d59-4f93-bd69-5c656de66500","Type":"ContainerDied","Data":"e27540d77a4365a2d956743cf62b9896a8be95c214116ad9409a586b4f68e375"} Jan 27 18:59:26 crc kubenswrapper[4853]: I0127 18:59:26.895438 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-8c34-account-create-update-mrws4" event={"ID":"3d31d088-3d59-4f93-bd69-5c656de66500","Type":"ContainerStarted","Data":"0e72cc7097affd58e3a5e1bd7d9206c833c9e3e4700a9ab72bd5bc0fbf1c8233"} Jan 27 18:59:26 crc kubenswrapper[4853]: I0127 18:59:26.896931 4853 generic.go:334] "Generic (PLEG): container finished" podID="ca0f2528-c8dc-4170-b594-94945759de99" containerID="a0a9992d6dddcf961c34e75bb4eb48ef1eda0ccbd08b79240ae3c8208b4b7c55" exitCode=0 Jan 27 18:59:26 crc kubenswrapper[4853]: I0127 18:59:26.897072 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-74qwr" event={"ID":"ca0f2528-c8dc-4170-b594-94945759de99","Type":"ContainerDied","Data":"a0a9992d6dddcf961c34e75bb4eb48ef1eda0ccbd08b79240ae3c8208b4b7c55"} Jan 27 18:59:26 crc kubenswrapper[4853]: I0127 18:59:26.902126 4853 generic.go:334] "Generic (PLEG): container finished" podID="15035d73-a434-4ff6-9ec0-ecdf17c78d5d" containerID="324972faa9f884817eb56c41d9903a1f462a8f87b8c7bdc2c90efba6f4a3ca77" exitCode=0 Jan 27 18:59:26 crc kubenswrapper[4853]: I0127 18:59:26.902284 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-219d-account-create-update-5qm8g" event={"ID":"15035d73-a434-4ff6-9ec0-ecdf17c78d5d","Type":"ContainerDied","Data":"324972faa9f884817eb56c41d9903a1f462a8f87b8c7bdc2c90efba6f4a3ca77"} Jan 27 18:59:26 crc kubenswrapper[4853]: I0127 18:59:26.922363 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-74f6bcbc87-h8qfz" podStartSLOduration=4.922344442 podStartE2EDuration="4.922344442s" podCreationTimestamp="2026-01-27 18:59:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:59:26.92192154 +0000 UTC m=+1009.384464423" watchObservedRunningTime="2026-01-27 18:59:26.922344442 +0000 UTC m=+1009.384887325" Jan 27 18:59:30 crc kubenswrapper[4853]: I0127 18:59:30.618647 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-74qwr" Jan 27 18:59:30 crc kubenswrapper[4853]: I0127 18:59:30.625052 4853 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-8c34-account-create-update-mrws4" Jan 27 18:59:30 crc kubenswrapper[4853]: I0127 18:59:30.641954 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-9z979" Jan 27 18:59:30 crc kubenswrapper[4853]: I0127 18:59:30.693311 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-219d-account-create-update-5qm8g" Jan 27 18:59:30 crc kubenswrapper[4853]: I0127 18:59:30.696297 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3d31d088-3d59-4f93-bd69-5c656de66500-operator-scripts\") pod \"3d31d088-3d59-4f93-bd69-5c656de66500\" (UID: \"3d31d088-3d59-4f93-bd69-5c656de66500\") " Jan 27 18:59:30 crc kubenswrapper[4853]: I0127 18:59:30.696394 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ca0f2528-c8dc-4170-b594-94945759de99-operator-scripts\") pod \"ca0f2528-c8dc-4170-b594-94945759de99\" (UID: \"ca0f2528-c8dc-4170-b594-94945759de99\") " Jan 27 18:59:30 crc kubenswrapper[4853]: I0127 18:59:30.696425 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-556ss\" (UniqueName: \"kubernetes.io/projected/3d31d088-3d59-4f93-bd69-5c656de66500-kube-api-access-556ss\") pod \"3d31d088-3d59-4f93-bd69-5c656de66500\" (UID: \"3d31d088-3d59-4f93-bd69-5c656de66500\") " Jan 27 18:59:30 crc kubenswrapper[4853]: I0127 18:59:30.696497 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ce113b00-48db-4a7d-b15a-fafe0a932311-operator-scripts\") pod \"ce113b00-48db-4a7d-b15a-fafe0a932311\" (UID: \"ce113b00-48db-4a7d-b15a-fafe0a932311\") " Jan 27 18:59:30 crc kubenswrapper[4853]: I0127 18:59:30.696583 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qrsft\" (UniqueName: \"kubernetes.io/projected/ce113b00-48db-4a7d-b15a-fafe0a932311-kube-api-access-qrsft\") pod \"ce113b00-48db-4a7d-b15a-fafe0a932311\" (UID: \"ce113b00-48db-4a7d-b15a-fafe0a932311\") " Jan 27 18:59:30 crc kubenswrapper[4853]: I0127 18:59:30.696615 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lnl7x\" (UniqueName: \"kubernetes.io/projected/ca0f2528-c8dc-4170-b594-94945759de99-kube-api-access-lnl7x\") pod \"ca0f2528-c8dc-4170-b594-94945759de99\" (UID: \"ca0f2528-c8dc-4170-b594-94945759de99\") " Jan 27 18:59:30 crc kubenswrapper[4853]: I0127 18:59:30.698042 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ce113b00-48db-4a7d-b15a-fafe0a932311-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "ce113b00-48db-4a7d-b15a-fafe0a932311" (UID: "ce113b00-48db-4a7d-b15a-fafe0a932311"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:59:30 crc kubenswrapper[4853]: I0127 18:59:30.698056 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3d31d088-3d59-4f93-bd69-5c656de66500-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "3d31d088-3d59-4f93-bd69-5c656de66500" (UID: "3d31d088-3d59-4f93-bd69-5c656de66500"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:59:30 crc kubenswrapper[4853]: I0127 18:59:30.698738 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ca0f2528-c8dc-4170-b594-94945759de99-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "ca0f2528-c8dc-4170-b594-94945759de99" (UID: "ca0f2528-c8dc-4170-b594-94945759de99"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:59:30 crc kubenswrapper[4853]: I0127 18:59:30.704501 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ca0f2528-c8dc-4170-b594-94945759de99-kube-api-access-lnl7x" (OuterVolumeSpecName: "kube-api-access-lnl7x") pod "ca0f2528-c8dc-4170-b594-94945759de99" (UID: "ca0f2528-c8dc-4170-b594-94945759de99"). InnerVolumeSpecName "kube-api-access-lnl7x". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:59:30 crc kubenswrapper[4853]: I0127 18:59:30.704569 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3d31d088-3d59-4f93-bd69-5c656de66500-kube-api-access-556ss" (OuterVolumeSpecName: "kube-api-access-556ss") pod "3d31d088-3d59-4f93-bd69-5c656de66500" (UID: "3d31d088-3d59-4f93-bd69-5c656de66500"). InnerVolumeSpecName "kube-api-access-556ss". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:59:30 crc kubenswrapper[4853]: I0127 18:59:30.718233 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ce113b00-48db-4a7d-b15a-fafe0a932311-kube-api-access-qrsft" (OuterVolumeSpecName: "kube-api-access-qrsft") pod "ce113b00-48db-4a7d-b15a-fafe0a932311" (UID: "ce113b00-48db-4a7d-b15a-fafe0a932311"). InnerVolumeSpecName "kube-api-access-qrsft". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:59:30 crc kubenswrapper[4853]: I0127 18:59:30.758566 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-3de5-account-create-update-dnzb9" Jan 27 18:59:30 crc kubenswrapper[4853]: I0127 18:59:30.770662 4853 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-tnf7r" Jan 27 18:59:30 crc kubenswrapper[4853]: I0127 18:59:30.798203 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bqtcs\" (UniqueName: \"kubernetes.io/projected/15035d73-a434-4ff6-9ec0-ecdf17c78d5d-kube-api-access-bqtcs\") pod \"15035d73-a434-4ff6-9ec0-ecdf17c78d5d\" (UID: \"15035d73-a434-4ff6-9ec0-ecdf17c78d5d\") " Jan 27 18:59:30 crc kubenswrapper[4853]: I0127 18:59:30.798416 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8f709328-85ea-42f5-8d5b-d302b907bee3-operator-scripts\") pod \"8f709328-85ea-42f5-8d5b-d302b907bee3\" (UID: \"8f709328-85ea-42f5-8d5b-d302b907bee3\") " Jan 27 18:59:30 crc kubenswrapper[4853]: I0127 18:59:30.798450 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/15035d73-a434-4ff6-9ec0-ecdf17c78d5d-operator-scripts\") pod \"15035d73-a434-4ff6-9ec0-ecdf17c78d5d\" (UID: \"15035d73-a434-4ff6-9ec0-ecdf17c78d5d\") " Jan 27 18:59:30 crc kubenswrapper[4853]: I0127 18:59:30.798536 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zsvn9\" (UniqueName: \"kubernetes.io/projected/8f709328-85ea-42f5-8d5b-d302b907bee3-kube-api-access-zsvn9\") pod \"8f709328-85ea-42f5-8d5b-d302b907bee3\" (UID: \"8f709328-85ea-42f5-8d5b-d302b907bee3\") " Jan 27 18:59:30 crc kubenswrapper[4853]: I0127 18:59:30.798871 4853 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ca0f2528-c8dc-4170-b594-94945759de99-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 18:59:30 crc kubenswrapper[4853]: I0127 18:59:30.798885 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-556ss\" (UniqueName: \"kubernetes.io/projected/3d31d088-3d59-4f93-bd69-5c656de66500-kube-api-access-556ss\") on node \"crc\" DevicePath \"\"" Jan 27 18:59:30 crc kubenswrapper[4853]: I0127 18:59:30.798896 4853 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ce113b00-48db-4a7d-b15a-fafe0a932311-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 18:59:30 crc kubenswrapper[4853]: I0127 18:59:30.798905 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qrsft\" (UniqueName: \"kubernetes.io/projected/ce113b00-48db-4a7d-b15a-fafe0a932311-kube-api-access-qrsft\") on node \"crc\" DevicePath \"\"" Jan 27 18:59:30 crc kubenswrapper[4853]: I0127 18:59:30.798916 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lnl7x\" (UniqueName: \"kubernetes.io/projected/ca0f2528-c8dc-4170-b594-94945759de99-kube-api-access-lnl7x\") on node \"crc\" DevicePath \"\"" Jan 27 18:59:30 crc kubenswrapper[4853]: I0127 18:59:30.798928 4853 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3d31d088-3d59-4f93-bd69-5c656de66500-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 18:59:30 crc kubenswrapper[4853]: I0127 18:59:30.799144 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f709328-85ea-42f5-8d5b-d302b907bee3-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "8f709328-85ea-42f5-8d5b-d302b907bee3" (UID: "8f709328-85ea-42f5-8d5b-d302b907bee3"). 
InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:59:30 crc kubenswrapper[4853]: I0127 18:59:30.799140 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/15035d73-a434-4ff6-9ec0-ecdf17c78d5d-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "15035d73-a434-4ff6-9ec0-ecdf17c78d5d" (UID: "15035d73-a434-4ff6-9ec0-ecdf17c78d5d"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:59:30 crc kubenswrapper[4853]: I0127 18:59:30.803082 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f709328-85ea-42f5-8d5b-d302b907bee3-kube-api-access-zsvn9" (OuterVolumeSpecName: "kube-api-access-zsvn9") pod "8f709328-85ea-42f5-8d5b-d302b907bee3" (UID: "8f709328-85ea-42f5-8d5b-d302b907bee3"). InnerVolumeSpecName "kube-api-access-zsvn9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:59:30 crc kubenswrapper[4853]: I0127 18:59:30.806927 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/15035d73-a434-4ff6-9ec0-ecdf17c78d5d-kube-api-access-bqtcs" (OuterVolumeSpecName: "kube-api-access-bqtcs") pod "15035d73-a434-4ff6-9ec0-ecdf17c78d5d" (UID: "15035d73-a434-4ff6-9ec0-ecdf17c78d5d"). InnerVolumeSpecName "kube-api-access-bqtcs". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:59:30 crc kubenswrapper[4853]: I0127 18:59:30.899749 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kthft\" (UniqueName: \"kubernetes.io/projected/ed915364-aeae-47d1-9cc2-b9e14ce361e7-kube-api-access-kthft\") pod \"ed915364-aeae-47d1-9cc2-b9e14ce361e7\" (UID: \"ed915364-aeae-47d1-9cc2-b9e14ce361e7\") " Jan 27 18:59:30 crc kubenswrapper[4853]: I0127 18:59:30.899831 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ed915364-aeae-47d1-9cc2-b9e14ce361e7-operator-scripts\") pod \"ed915364-aeae-47d1-9cc2-b9e14ce361e7\" (UID: \"ed915364-aeae-47d1-9cc2-b9e14ce361e7\") " Jan 27 18:59:30 crc kubenswrapper[4853]: I0127 18:59:30.900480 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ed915364-aeae-47d1-9cc2-b9e14ce361e7-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "ed915364-aeae-47d1-9cc2-b9e14ce361e7" (UID: "ed915364-aeae-47d1-9cc2-b9e14ce361e7"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:59:30 crc kubenswrapper[4853]: I0127 18:59:30.900656 4853 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ed915364-aeae-47d1-9cc2-b9e14ce361e7-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 18:59:30 crc kubenswrapper[4853]: I0127 18:59:30.900680 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zsvn9\" (UniqueName: \"kubernetes.io/projected/8f709328-85ea-42f5-8d5b-d302b907bee3-kube-api-access-zsvn9\") on node \"crc\" DevicePath \"\"" Jan 27 18:59:30 crc kubenswrapper[4853]: I0127 18:59:30.900690 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bqtcs\" (UniqueName: \"kubernetes.io/projected/15035d73-a434-4ff6-9ec0-ecdf17c78d5d-kube-api-access-bqtcs\") on node \"crc\" DevicePath \"\"" Jan 27 18:59:30 crc kubenswrapper[4853]: I0127 18:59:30.900699 4853 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8f709328-85ea-42f5-8d5b-d302b907bee3-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 18:59:30 crc kubenswrapper[4853]: I0127 18:59:30.900737 4853 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/15035d73-a434-4ff6-9ec0-ecdf17c78d5d-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 18:59:30 crc kubenswrapper[4853]: I0127 18:59:30.903515 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ed915364-aeae-47d1-9cc2-b9e14ce361e7-kube-api-access-kthft" (OuterVolumeSpecName: "kube-api-access-kthft") pod "ed915364-aeae-47d1-9cc2-b9e14ce361e7" (UID: "ed915364-aeae-47d1-9cc2-b9e14ce361e7"). InnerVolumeSpecName "kube-api-access-kthft". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:59:30 crc kubenswrapper[4853]: I0127 18:59:30.938475 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-tnf7r" event={"ID":"ed915364-aeae-47d1-9cc2-b9e14ce361e7","Type":"ContainerDied","Data":"f064794107a6f837a966c497311dcc5f7dccf9701e2c817eb7a8078749e2d4e8"} Jan 27 18:59:30 crc kubenswrapper[4853]: I0127 18:59:30.938506 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-tnf7r" Jan 27 18:59:30 crc kubenswrapper[4853]: I0127 18:59:30.938531 4853 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f064794107a6f837a966c497311dcc5f7dccf9701e2c817eb7a8078749e2d4e8" Jan 27 18:59:30 crc kubenswrapper[4853]: I0127 18:59:30.939965 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-3de5-account-create-update-dnzb9" event={"ID":"8f709328-85ea-42f5-8d5b-d302b907bee3","Type":"ContainerDied","Data":"c7518c546990d31bd1347792d868486333d4e87b7b93326326078bb2bf5210d1"} Jan 27 18:59:30 crc kubenswrapper[4853]: I0127 18:59:30.940002 4853 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c7518c546990d31bd1347792d868486333d4e87b7b93326326078bb2bf5210d1" Jan 27 18:59:30 crc kubenswrapper[4853]: I0127 18:59:30.939978 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-3de5-account-create-update-dnzb9" Jan 27 18:59:30 crc kubenswrapper[4853]: I0127 18:59:30.946881 4853 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-8c34-account-create-update-mrws4" Jan 27 18:59:30 crc kubenswrapper[4853]: I0127 18:59:30.946866 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-8c34-account-create-update-mrws4" event={"ID":"3d31d088-3d59-4f93-bd69-5c656de66500","Type":"ContainerDied","Data":"0e72cc7097affd58e3a5e1bd7d9206c833c9e3e4700a9ab72bd5bc0fbf1c8233"} Jan 27 18:59:30 crc kubenswrapper[4853]: I0127 18:59:30.947074 4853 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0e72cc7097affd58e3a5e1bd7d9206c833c9e3e4700a9ab72bd5bc0fbf1c8233" Jan 27 18:59:30 crc kubenswrapper[4853]: I0127 18:59:30.949830 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-2pwl2" event={"ID":"0d107577-39f6-4463-80b0-374fc14e89e8","Type":"ContainerStarted","Data":"e7ef370f5c0148d0a484beb9a0a87fcffadf5e7c6eee89e34ce7f539d156a007"} Jan 27 18:59:30 crc kubenswrapper[4853]: I0127 18:59:30.953018 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-74qwr" event={"ID":"ca0f2528-c8dc-4170-b594-94945759de99","Type":"ContainerDied","Data":"db58cc8f0e80aab3ae0d3854cfd80cd055662fef283dc0542d2831c9d7bb5e0c"} Jan 27 18:59:30 crc kubenswrapper[4853]: I0127 18:59:30.953073 4853 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="db58cc8f0e80aab3ae0d3854cfd80cd055662fef283dc0542d2831c9d7bb5e0c" Jan 27 18:59:30 crc kubenswrapper[4853]: I0127 18:59:30.953035 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-74qwr" Jan 27 18:59:30 crc kubenswrapper[4853]: I0127 18:59:30.957409 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-219d-account-create-update-5qm8g" Jan 27 18:59:30 crc kubenswrapper[4853]: I0127 18:59:30.957401 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-219d-account-create-update-5qm8g" event={"ID":"15035d73-a434-4ff6-9ec0-ecdf17c78d5d","Type":"ContainerDied","Data":"8f3b2b91f900ab24bd61494b059a64d19ea1360b0586e83b38bd2a90f4251f7b"} Jan 27 18:59:30 crc kubenswrapper[4853]: I0127 18:59:30.957588 4853 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8f3b2b91f900ab24bd61494b059a64d19ea1360b0586e83b38bd2a90f4251f7b" Jan 27 18:59:30 crc kubenswrapper[4853]: I0127 18:59:30.959135 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-9z979" event={"ID":"ce113b00-48db-4a7d-b15a-fafe0a932311","Type":"ContainerDied","Data":"9d4c50f038643b194c24e165e3da18b7763c4463b8ec70a1e9bfa864abfb640e"} Jan 27 18:59:30 crc kubenswrapper[4853]: I0127 18:59:30.959263 4853 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9d4c50f038643b194c24e165e3da18b7763c4463b8ec70a1e9bfa864abfb640e" Jan 27 18:59:30 crc kubenswrapper[4853]: I0127 18:59:30.959208 4853 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-9z979" Jan 27 18:59:30 crc kubenswrapper[4853]: I0127 18:59:30.982740 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-sync-2pwl2" podStartSLOduration=2.3233821519999998 podStartE2EDuration="6.982707736s" podCreationTimestamp="2026-01-27 18:59:24 +0000 UTC" firstStartedPulling="2026-01-27 18:59:25.821074362 +0000 UTC m=+1008.283617235" lastFinishedPulling="2026-01-27 18:59:30.480399936 +0000 UTC m=+1012.942942819" observedRunningTime="2026-01-27 18:59:30.968514643 +0000 UTC m=+1013.431057567" watchObservedRunningTime="2026-01-27 18:59:30.982707736 +0000 UTC m=+1013.445250629" Jan 27 18:59:31 crc kubenswrapper[4853]: I0127 18:59:31.002829 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kthft\" (UniqueName: \"kubernetes.io/projected/ed915364-aeae-47d1-9cc2-b9e14ce361e7-kube-api-access-kthft\") on node \"crc\" DevicePath \"\"" Jan 27 18:59:32 crc kubenswrapper[4853]: I0127 18:59:32.574528 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-74f6bcbc87-h8qfz" Jan 27 18:59:32 crc kubenswrapper[4853]: I0127 18:59:32.667104 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-698758b865-6t7wq"] Jan 27 18:59:32 crc kubenswrapper[4853]: I0127 18:59:32.667404 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-698758b865-6t7wq" podUID="3ba5b341-34ed-484d-ae1d-fe08f998eac4" containerName="dnsmasq-dns" containerID="cri-o://781a77592d5a87c22d418c18ca082f3fc37812941ad2e02f82d6da689b1529c9" gracePeriod=10 Jan 27 18:59:32 crc kubenswrapper[4853]: I0127 18:59:32.979023 4853 generic.go:334] "Generic (PLEG): container finished" podID="3ba5b341-34ed-484d-ae1d-fe08f998eac4" containerID="781a77592d5a87c22d418c18ca082f3fc37812941ad2e02f82d6da689b1529c9" exitCode=0 Jan 27 18:59:32 crc kubenswrapper[4853]: I0127 18:59:32.979297 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-6t7wq" event={"ID":"3ba5b341-34ed-484d-ae1d-fe08f998eac4","Type":"ContainerDied","Data":"781a77592d5a87c22d418c18ca082f3fc37812941ad2e02f82d6da689b1529c9"} Jan 27 18:59:33 crc kubenswrapper[4853]: I0127 18:59:33.201754 4853 util.go:48] "No ready sandbox for pod can be found. 
Jan 27 18:59:33 crc kubenswrapper[4853]: I0127 18:59:33.255964 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3ba5b341-34ed-484d-ae1d-fe08f998eac4-dns-svc\") pod \"3ba5b341-34ed-484d-ae1d-fe08f998eac4\" (UID: \"3ba5b341-34ed-484d-ae1d-fe08f998eac4\") "
Jan 27 18:59:33 crc kubenswrapper[4853]: I0127 18:59:33.256022 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4rkx\" (UniqueName: \"kubernetes.io/projected/3ba5b341-34ed-484d-ae1d-fe08f998eac4-kube-api-access-s4rkx\") pod \"3ba5b341-34ed-484d-ae1d-fe08f998eac4\" (UID: \"3ba5b341-34ed-484d-ae1d-fe08f998eac4\") "
Jan 27 18:59:33 crc kubenswrapper[4853]: I0127 18:59:33.256140 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3ba5b341-34ed-484d-ae1d-fe08f998eac4-ovsdbserver-nb\") pod \"3ba5b341-34ed-484d-ae1d-fe08f998eac4\" (UID: \"3ba5b341-34ed-484d-ae1d-fe08f998eac4\") "
Jan 27 18:59:33 crc kubenswrapper[4853]: I0127 18:59:33.256169 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3ba5b341-34ed-484d-ae1d-fe08f998eac4-config\") pod \"3ba5b341-34ed-484d-ae1d-fe08f998eac4\" (UID: \"3ba5b341-34ed-484d-ae1d-fe08f998eac4\") "
Jan 27 18:59:33 crc kubenswrapper[4853]: I0127 18:59:33.256190 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3ba5b341-34ed-484d-ae1d-fe08f998eac4-ovsdbserver-sb\") pod \"3ba5b341-34ed-484d-ae1d-fe08f998eac4\" (UID: \"3ba5b341-34ed-484d-ae1d-fe08f998eac4\") "
Jan 27 18:59:33 crc kubenswrapper[4853]: I0127 18:59:33.267058 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ba5b341-34ed-484d-ae1d-fe08f998eac4-kube-api-access-s4rkx" (OuterVolumeSpecName: "kube-api-access-s4rkx") pod "3ba5b341-34ed-484d-ae1d-fe08f998eac4" (UID: "3ba5b341-34ed-484d-ae1d-fe08f998eac4"). InnerVolumeSpecName "kube-api-access-s4rkx". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 18:59:33 crc kubenswrapper[4853]: I0127 18:59:33.301577 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3ba5b341-34ed-484d-ae1d-fe08f998eac4-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "3ba5b341-34ed-484d-ae1d-fe08f998eac4" (UID: "3ba5b341-34ed-484d-ae1d-fe08f998eac4"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 18:59:33 crc kubenswrapper[4853]: I0127 18:59:33.316090 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3ba5b341-34ed-484d-ae1d-fe08f998eac4-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "3ba5b341-34ed-484d-ae1d-fe08f998eac4" (UID: "3ba5b341-34ed-484d-ae1d-fe08f998eac4"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 18:59:33 crc kubenswrapper[4853]: I0127 18:59:33.322365 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3ba5b341-34ed-484d-ae1d-fe08f998eac4-config" (OuterVolumeSpecName: "config") pod "3ba5b341-34ed-484d-ae1d-fe08f998eac4" (UID: "3ba5b341-34ed-484d-ae1d-fe08f998eac4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 18:59:33 crc kubenswrapper[4853]: I0127 18:59:33.324395 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3ba5b341-34ed-484d-ae1d-fe08f998eac4-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "3ba5b341-34ed-484d-ae1d-fe08f998eac4" (UID: "3ba5b341-34ed-484d-ae1d-fe08f998eac4"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 18:59:33 crc kubenswrapper[4853]: I0127 18:59:33.358013 4853 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3ba5b341-34ed-484d-ae1d-fe08f998eac4-dns-svc\") on node \"crc\" DevicePath \"\""
Jan 27 18:59:33 crc kubenswrapper[4853]: I0127 18:59:33.358073 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4rkx\" (UniqueName: \"kubernetes.io/projected/3ba5b341-34ed-484d-ae1d-fe08f998eac4-kube-api-access-s4rkx\") on node \"crc\" DevicePath \"\""
Jan 27 18:59:33 crc kubenswrapper[4853]: I0127 18:59:33.358088 4853 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3ba5b341-34ed-484d-ae1d-fe08f998eac4-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Jan 27 18:59:33 crc kubenswrapper[4853]: I0127 18:59:33.358099 4853 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3ba5b341-34ed-484d-ae1d-fe08f998eac4-config\") on node \"crc\" DevicePath \"\""
Jan 27 18:59:33 crc kubenswrapper[4853]: I0127 18:59:33.358110 4853 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3ba5b341-34ed-484d-ae1d-fe08f998eac4-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Jan 27 18:59:33 crc kubenswrapper[4853]: I0127 18:59:33.990973 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-6t7wq" event={"ID":"3ba5b341-34ed-484d-ae1d-fe08f998eac4","Type":"ContainerDied","Data":"5cf7d85bedbb4624075dac8335097026e432efa55af408f280cc412774e8bfdd"}
Jan 27 18:59:33 crc kubenswrapper[4853]: I0127 18:59:33.991017 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-6t7wq"
Jan 27 18:59:33 crc kubenswrapper[4853]: I0127 18:59:33.991031 4853 scope.go:117] "RemoveContainer" containerID="781a77592d5a87c22d418c18ca082f3fc37812941ad2e02f82d6da689b1529c9"
Jan 27 18:59:33 crc kubenswrapper[4853]: I0127 18:59:33.993212 4853 generic.go:334] "Generic (PLEG): container finished" podID="0d107577-39f6-4463-80b0-374fc14e89e8" containerID="e7ef370f5c0148d0a484beb9a0a87fcffadf5e7c6eee89e34ce7f539d156a007" exitCode=0
Jan 27 18:59:33 crc kubenswrapper[4853]: I0127 18:59:33.993260 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-2pwl2" event={"ID":"0d107577-39f6-4463-80b0-374fc14e89e8","Type":"ContainerDied","Data":"e7ef370f5c0148d0a484beb9a0a87fcffadf5e7c6eee89e34ce7f539d156a007"}
Jan 27 18:59:34 crc kubenswrapper[4853]: I0127 18:59:34.019507 4853 scope.go:117] "RemoveContainer" containerID="e9da18a2d3905bdae2a58a40e150d319a91d279c41a01e577d322df9cc99d59d"
Jan 27 18:59:34 crc kubenswrapper[4853]: I0127 18:59:34.046703 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-698758b865-6t7wq"]
Jan 27 18:59:34 crc kubenswrapper[4853]: I0127 18:59:34.048739 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-698758b865-6t7wq"]
Jan 27 18:59:34 crc kubenswrapper[4853]: I0127 18:59:34.122654 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ba5b341-34ed-484d-ae1d-fe08f998eac4" path="/var/lib/kubelet/pods/3ba5b341-34ed-484d-ae1d-fe08f998eac4/volumes"
Jan 27 18:59:34 crc kubenswrapper[4853]: I0127 18:59:34.345428 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0"
Jan 27 18:59:35 crc kubenswrapper[4853]: I0127 18:59:35.359641 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-2pwl2"
Jan 27 18:59:35 crc kubenswrapper[4853]: I0127 18:59:35.487917 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0d107577-39f6-4463-80b0-374fc14e89e8-combined-ca-bundle\") pod \"0d107577-39f6-4463-80b0-374fc14e89e8\" (UID: \"0d107577-39f6-4463-80b0-374fc14e89e8\") "
Jan 27 18:59:35 crc kubenswrapper[4853]: I0127 18:59:35.488071 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gl8h2\" (UniqueName: \"kubernetes.io/projected/0d107577-39f6-4463-80b0-374fc14e89e8-kube-api-access-gl8h2\") pod \"0d107577-39f6-4463-80b0-374fc14e89e8\" (UID: \"0d107577-39f6-4463-80b0-374fc14e89e8\") "
Jan 27 18:59:35 crc kubenswrapper[4853]: I0127 18:59:35.488201 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0d107577-39f6-4463-80b0-374fc14e89e8-config-data\") pod \"0d107577-39f6-4463-80b0-374fc14e89e8\" (UID: \"0d107577-39f6-4463-80b0-374fc14e89e8\") "
Jan 27 18:59:35 crc kubenswrapper[4853]: I0127 18:59:35.495434 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0d107577-39f6-4463-80b0-374fc14e89e8-kube-api-access-gl8h2" (OuterVolumeSpecName: "kube-api-access-gl8h2") pod "0d107577-39f6-4463-80b0-374fc14e89e8" (UID: "0d107577-39f6-4463-80b0-374fc14e89e8"). InnerVolumeSpecName "kube-api-access-gl8h2". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 18:59:35 crc kubenswrapper[4853]: I0127 18:59:35.517247 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0d107577-39f6-4463-80b0-374fc14e89e8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0d107577-39f6-4463-80b0-374fc14e89e8" (UID: "0d107577-39f6-4463-80b0-374fc14e89e8"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 18:59:35 crc kubenswrapper[4853]: I0127 18:59:35.554392 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0d107577-39f6-4463-80b0-374fc14e89e8-config-data" (OuterVolumeSpecName: "config-data") pod "0d107577-39f6-4463-80b0-374fc14e89e8" (UID: "0d107577-39f6-4463-80b0-374fc14e89e8"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 18:59:35 crc kubenswrapper[4853]: I0127 18:59:35.590748 4853 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0d107577-39f6-4463-80b0-374fc14e89e8-config-data\") on node \"crc\" DevicePath \"\""
Jan 27 18:59:35 crc kubenswrapper[4853]: I0127 18:59:35.590796 4853 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0d107577-39f6-4463-80b0-374fc14e89e8-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 27 18:59:35 crc kubenswrapper[4853]: I0127 18:59:35.590812 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gl8h2\" (UniqueName: \"kubernetes.io/projected/0d107577-39f6-4463-80b0-374fc14e89e8-kube-api-access-gl8h2\") on node \"crc\" DevicePath \"\""
Jan 27 18:59:36 crc kubenswrapper[4853]: I0127 18:59:36.037650 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-2pwl2" event={"ID":"0d107577-39f6-4463-80b0-374fc14e89e8","Type":"ContainerDied","Data":"a87d639d2a23f35d258d052827b86c5a8af182cf5b71f118ae94a0bd3d77c3ec"}
Jan 27 18:59:36 crc kubenswrapper[4853]: I0127 18:59:36.037706 4853 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a87d639d2a23f35d258d052827b86c5a8af182cf5b71f118ae94a0bd3d77c3ec"
Jan 27 18:59:36 crc kubenswrapper[4853]: I0127 18:59:36.037733 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-2pwl2"
Jan 27 18:59:36 crc kubenswrapper[4853]: I0127 18:59:36.317022 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-847c4cc679-sz8dr"]
Jan 27 18:59:36 crc kubenswrapper[4853]: E0127 18:59:36.317412 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0d107577-39f6-4463-80b0-374fc14e89e8" containerName="keystone-db-sync"
Jan 27 18:59:36 crc kubenswrapper[4853]: I0127 18:59:36.317431 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="0d107577-39f6-4463-80b0-374fc14e89e8" containerName="keystone-db-sync"
Jan 27 18:59:36 crc kubenswrapper[4853]: E0127 18:59:36.317448 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="15035d73-a434-4ff6-9ec0-ecdf17c78d5d" containerName="mariadb-account-create-update"
Jan 27 18:59:36 crc kubenswrapper[4853]: I0127 18:59:36.317457 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="15035d73-a434-4ff6-9ec0-ecdf17c78d5d" containerName="mariadb-account-create-update"
Jan 27 18:59:36 crc kubenswrapper[4853]: E0127 18:59:36.317469 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3d31d088-3d59-4f93-bd69-5c656de66500" containerName="mariadb-account-create-update"
Jan 27 18:59:36 crc kubenswrapper[4853]: I0127 18:59:36.317481 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="3d31d088-3d59-4f93-bd69-5c656de66500" containerName="mariadb-account-create-update"
Jan 27 18:59:36 crc kubenswrapper[4853]: E0127 18:59:36.317495 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ed915364-aeae-47d1-9cc2-b9e14ce361e7" containerName="mariadb-database-create"
Jan 27 18:59:36 crc kubenswrapper[4853]: I0127 18:59:36.317500 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="ed915364-aeae-47d1-9cc2-b9e14ce361e7" containerName="mariadb-database-create"
Jan 27 18:59:36 crc kubenswrapper[4853]: E0127 18:59:36.317510 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3ba5b341-34ed-484d-ae1d-fe08f998eac4" containerName="init"
Jan 27 18:59:36 crc kubenswrapper[4853]: I0127 18:59:36.317516 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="3ba5b341-34ed-484d-ae1d-fe08f998eac4" containerName="init"
Jan 27 18:59:36 crc kubenswrapper[4853]: E0127 18:59:36.317528 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ce113b00-48db-4a7d-b15a-fafe0a932311" containerName="mariadb-database-create"
Jan 27 18:59:36 crc kubenswrapper[4853]: I0127 18:59:36.317534 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="ce113b00-48db-4a7d-b15a-fafe0a932311" containerName="mariadb-database-create"
Jan 27 18:59:36 crc kubenswrapper[4853]: E0127 18:59:36.317550 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8f709328-85ea-42f5-8d5b-d302b907bee3" containerName="mariadb-account-create-update"
Jan 27 18:59:36 crc kubenswrapper[4853]: I0127 18:59:36.317555 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="8f709328-85ea-42f5-8d5b-d302b907bee3" containerName="mariadb-account-create-update"
Jan 27 18:59:36 crc kubenswrapper[4853]: E0127 18:59:36.317567 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ca0f2528-c8dc-4170-b594-94945759de99" containerName="mariadb-database-create"
Jan 27 18:59:36 crc kubenswrapper[4853]: I0127 18:59:36.317573 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="ca0f2528-c8dc-4170-b594-94945759de99" containerName="mariadb-database-create"
Jan 27 18:59:36 crc kubenswrapper[4853]: E0127 18:59:36.317585 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3ba5b341-34ed-484d-ae1d-fe08f998eac4" containerName="dnsmasq-dns"
Jan 27 18:59:36 crc kubenswrapper[4853]: I0127 18:59:36.317590 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="3ba5b341-34ed-484d-ae1d-fe08f998eac4" containerName="dnsmasq-dns"
Jan 27 18:59:36 crc kubenswrapper[4853]: I0127 18:59:36.317737 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="8f709328-85ea-42f5-8d5b-d302b907bee3" containerName="mariadb-account-create-update"
Jan 27 18:59:36 crc kubenswrapper[4853]: I0127 18:59:36.317750 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="ed915364-aeae-47d1-9cc2-b9e14ce361e7" containerName="mariadb-database-create"
Jan 27 18:59:36 crc kubenswrapper[4853]: I0127 18:59:36.317763 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="15035d73-a434-4ff6-9ec0-ecdf17c78d5d" containerName="mariadb-account-create-update"
Jan 27 18:59:36 crc kubenswrapper[4853]: I0127 18:59:36.317775 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="ce113b00-48db-4a7d-b15a-fafe0a932311" containerName="mariadb-database-create"
Jan 27 18:59:36 crc kubenswrapper[4853]: I0127 18:59:36.317783 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="ca0f2528-c8dc-4170-b594-94945759de99" containerName="mariadb-database-create"
Jan 27 18:59:36 crc kubenswrapper[4853]: I0127 18:59:36.317819 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="0d107577-39f6-4463-80b0-374fc14e89e8" containerName="keystone-db-sync"
Jan 27 18:59:36 crc kubenswrapper[4853]: I0127 18:59:36.317828 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="3d31d088-3d59-4f93-bd69-5c656de66500" containerName="mariadb-account-create-update"
Jan 27 18:59:36 crc kubenswrapper[4853]: I0127 18:59:36.317836 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="3ba5b341-34ed-484d-ae1d-fe08f998eac4" containerName="dnsmasq-dns"
Jan 27 18:59:36 crc kubenswrapper[4853]: I0127 18:59:36.318668 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-847c4cc679-sz8dr"
Jan 27 18:59:36 crc kubenswrapper[4853]: I0127 18:59:36.352732 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-427vl"]
Jan 27 18:59:36 crc kubenswrapper[4853]: I0127 18:59:36.354174 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-427vl"
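A note on the prefixes in the RemoveStaleState block: klog encodes severity in the first character (I=info, W=warning, E=error, F=fatal), followed by MMDD, the wall time, the PID, and the emitting file:line. The cpu_manager records this routine cleanup at error level while state_mem and memory_manager log the same work at info level, which is why E and I lines alternate above. A small parser for that prefix (a sketch; the exact spacing in the regex is an assumption, not a published grammar):

    # Split a klog prefix such as "E0127 18:59:36.317585 4853 cpu_manager.go:410]".
    import re

    KLOG = re.compile(
        r'(?P<sev>[IWEF])(?P<mmdd>\d{4}) (?P<time>\d{2}:\d{2}:\d{2}\.\d{6}) +'
        r'(?P<pid>\d+) (?P<src>[\w.]+:\d+)\]'
    )

    line = 'E0127 18:59:36.317585 4853 cpu_manager.go:410] "RemoveStaleState: removing container"'
    m = KLOG.match(line)
    print(m.group("sev"), m.group("src"))  # -> E cpu_manager.go:410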
Jan 27 18:59:36 crc kubenswrapper[4853]: I0127 18:59:36.360702 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret"
Jan 27 18:59:36 crc kubenswrapper[4853]: I0127 18:59:36.364982 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts"
Jan 27 18:59:36 crc kubenswrapper[4853]: I0127 18:59:36.365260 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone"
Jan 27 18:59:36 crc kubenswrapper[4853]: I0127 18:59:36.365428 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data"
Jan 27 18:59:36 crc kubenswrapper[4853]: I0127 18:59:36.367548 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-847c4cc679-sz8dr"]
Jan 27 18:59:36 crc kubenswrapper[4853]: I0127 18:59:36.369528 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-kgs4m"
Jan 27 18:59:36 crc kubenswrapper[4853]: I0127 18:59:36.398630 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-427vl"]
Jan 27 18:59:36 crc kubenswrapper[4853]: I0127 18:59:36.508831 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2cca663f-f2e6-4c77-94c5-f7ad0bcc3ed0-config\") pod \"dnsmasq-dns-847c4cc679-sz8dr\" (UID: \"2cca663f-f2e6-4c77-94c5-f7ad0bcc3ed0\") " pod="openstack/dnsmasq-dns-847c4cc679-sz8dr"
Jan 27 18:59:36 crc kubenswrapper[4853]: I0127 18:59:36.508899 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/9275e5f7-1630-4266-abb5-0ba701de33cb-fernet-keys\") pod \"keystone-bootstrap-427vl\" (UID: \"9275e5f7-1630-4266-abb5-0ba701de33cb\") " pod="openstack/keystone-bootstrap-427vl"
Jan 27 18:59:36 crc kubenswrapper[4853]: I0127 18:59:36.508946 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2cca663f-f2e6-4c77-94c5-f7ad0bcc3ed0-ovsdbserver-nb\") pod \"dnsmasq-dns-847c4cc679-sz8dr\" (UID: \"2cca663f-f2e6-4c77-94c5-f7ad0bcc3ed0\") " pod="openstack/dnsmasq-dns-847c4cc679-sz8dr"
Jan 27 18:59:36 crc kubenswrapper[4853]: I0127 18:59:36.508975 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/9275e5f7-1630-4266-abb5-0ba701de33cb-credential-keys\") pod \"keystone-bootstrap-427vl\" (UID: \"9275e5f7-1630-4266-abb5-0ba701de33cb\") " pod="openstack/keystone-bootstrap-427vl"
Jan 27 18:59:36 crc kubenswrapper[4853]: I0127 18:59:36.509000 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9275e5f7-1630-4266-abb5-0ba701de33cb-scripts\") pod \"keystone-bootstrap-427vl\" (UID: \"9275e5f7-1630-4266-abb5-0ba701de33cb\") " pod="openstack/keystone-bootstrap-427vl"
Jan 27 18:59:36 crc kubenswrapper[4853]: I0127 18:59:36.509018 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mxbk6\" (UniqueName: \"kubernetes.io/projected/2cca663f-f2e6-4c77-94c5-f7ad0bcc3ed0-kube-api-access-mxbk6\") pod \"dnsmasq-dns-847c4cc679-sz8dr\" (UID: \"2cca663f-f2e6-4c77-94c5-f7ad0bcc3ed0\") " pod="openstack/dnsmasq-dns-847c4cc679-sz8dr"
Jan 27 18:59:36 crc kubenswrapper[4853]: I0127 18:59:36.509040 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9275e5f7-1630-4266-abb5-0ba701de33cb-config-data\") pod \"keystone-bootstrap-427vl\" (UID: \"9275e5f7-1630-4266-abb5-0ba701de33cb\") " pod="openstack/keystone-bootstrap-427vl"
Jan 27 18:59:36 crc kubenswrapper[4853]: I0127 18:59:36.509058 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vpkzm\" (UniqueName: \"kubernetes.io/projected/9275e5f7-1630-4266-abb5-0ba701de33cb-kube-api-access-vpkzm\") pod \"keystone-bootstrap-427vl\" (UID: \"9275e5f7-1630-4266-abb5-0ba701de33cb\") " pod="openstack/keystone-bootstrap-427vl"
Jan 27 18:59:36 crc kubenswrapper[4853]: I0127 18:59:36.509088 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2cca663f-f2e6-4c77-94c5-f7ad0bcc3ed0-dns-swift-storage-0\") pod \"dnsmasq-dns-847c4cc679-sz8dr\" (UID: \"2cca663f-f2e6-4c77-94c5-f7ad0bcc3ed0\") " pod="openstack/dnsmasq-dns-847c4cc679-sz8dr"
Jan 27 18:59:36 crc kubenswrapper[4853]: I0127 18:59:36.509177 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2cca663f-f2e6-4c77-94c5-f7ad0bcc3ed0-ovsdbserver-sb\") pod \"dnsmasq-dns-847c4cc679-sz8dr\" (UID: \"2cca663f-f2e6-4c77-94c5-f7ad0bcc3ed0\") " pod="openstack/dnsmasq-dns-847c4cc679-sz8dr"
Jan 27 18:59:36 crc kubenswrapper[4853]: I0127 18:59:36.509196 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9275e5f7-1630-4266-abb5-0ba701de33cb-combined-ca-bundle\") pod \"keystone-bootstrap-427vl\" (UID: \"9275e5f7-1630-4266-abb5-0ba701de33cb\") " pod="openstack/keystone-bootstrap-427vl"
Jan 27 18:59:36 crc kubenswrapper[4853]: I0127 18:59:36.509216 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2cca663f-f2e6-4c77-94c5-f7ad0bcc3ed0-dns-svc\") pod \"dnsmasq-dns-847c4cc679-sz8dr\" (UID: \"2cca663f-f2e6-4c77-94c5-f7ad0bcc3ed0\") " pod="openstack/dnsmasq-dns-847c4cc679-sz8dr"
Jan 27 18:59:36 crc kubenswrapper[4853]: I0127 18:59:36.607190 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-6c8b67f5cc-gmbgv"]
Jan 27 18:59:36 crc kubenswrapper[4853]: I0127 18:59:36.609223 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-6c8b67f5cc-gmbgv"
Jan 27 18:59:36 crc kubenswrapper[4853]: I0127 18:59:36.612021 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b62f23c7-d81a-4925-a2c3-10c410912a0f-scripts\") pod \"horizon-6c8b67f5cc-gmbgv\" (UID: \"b62f23c7-d81a-4925-a2c3-10c410912a0f\") " pod="openstack/horizon-6c8b67f5cc-gmbgv"
Jan 27 18:59:36 crc kubenswrapper[4853]: I0127 18:59:36.612079 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b62f23c7-d81a-4925-a2c3-10c410912a0f-config-data\") pod \"horizon-6c8b67f5cc-gmbgv\" (UID: \"b62f23c7-d81a-4925-a2c3-10c410912a0f\") " pod="openstack/horizon-6c8b67f5cc-gmbgv"
Jan 27 18:59:36 crc kubenswrapper[4853]: I0127 18:59:36.612109 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2cca663f-f2e6-4c77-94c5-f7ad0bcc3ed0-ovsdbserver-sb\") pod \"dnsmasq-dns-847c4cc679-sz8dr\" (UID: \"2cca663f-f2e6-4c77-94c5-f7ad0bcc3ed0\") " pod="openstack/dnsmasq-dns-847c4cc679-sz8dr"
Jan 27 18:59:36 crc kubenswrapper[4853]: I0127 18:59:36.612144 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9275e5f7-1630-4266-abb5-0ba701de33cb-combined-ca-bundle\") pod \"keystone-bootstrap-427vl\" (UID: \"9275e5f7-1630-4266-abb5-0ba701de33cb\") " pod="openstack/keystone-bootstrap-427vl"
Jan 27 18:59:36 crc kubenswrapper[4853]: I0127 18:59:36.612169 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2cca663f-f2e6-4c77-94c5-f7ad0bcc3ed0-dns-svc\") pod \"dnsmasq-dns-847c4cc679-sz8dr\" (UID: \"2cca663f-f2e6-4c77-94c5-f7ad0bcc3ed0\") " pod="openstack/dnsmasq-dns-847c4cc679-sz8dr"
Jan 27 18:59:36 crc kubenswrapper[4853]: I0127 18:59:36.612192 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2cca663f-f2e6-4c77-94c5-f7ad0bcc3ed0-config\") pod \"dnsmasq-dns-847c4cc679-sz8dr\" (UID: \"2cca663f-f2e6-4c77-94c5-f7ad0bcc3ed0\") " pod="openstack/dnsmasq-dns-847c4cc679-sz8dr"
Jan 27 18:59:36 crc kubenswrapper[4853]: I0127 18:59:36.612222 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/9275e5f7-1630-4266-abb5-0ba701de33cb-fernet-keys\") pod \"keystone-bootstrap-427vl\" (UID: \"9275e5f7-1630-4266-abb5-0ba701de33cb\") " pod="openstack/keystone-bootstrap-427vl"
Jan 27 18:59:36 crc kubenswrapper[4853]: I0127 18:59:36.612244 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qc48m\" (UniqueName: \"kubernetes.io/projected/b62f23c7-d81a-4925-a2c3-10c410912a0f-kube-api-access-qc48m\") pod \"horizon-6c8b67f5cc-gmbgv\" (UID: \"b62f23c7-d81a-4925-a2c3-10c410912a0f\") " pod="openstack/horizon-6c8b67f5cc-gmbgv"
Jan 27 18:59:36 crc kubenswrapper[4853]: I0127 18:59:36.612281 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/b62f23c7-d81a-4925-a2c3-10c410912a0f-horizon-secret-key\") pod \"horizon-6c8b67f5cc-gmbgv\" (UID: \"b62f23c7-d81a-4925-a2c3-10c410912a0f\") " pod="openstack/horizon-6c8b67f5cc-gmbgv"
Jan 27 18:59:36 crc kubenswrapper[4853]: I0127 18:59:36.612302 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2cca663f-f2e6-4c77-94c5-f7ad0bcc3ed0-ovsdbserver-nb\") pod \"dnsmasq-dns-847c4cc679-sz8dr\" (UID: \"2cca663f-f2e6-4c77-94c5-f7ad0bcc3ed0\") " pod="openstack/dnsmasq-dns-847c4cc679-sz8dr"
Jan 27 18:59:36 crc kubenswrapper[4853]: I0127 18:59:36.612329 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/9275e5f7-1630-4266-abb5-0ba701de33cb-credential-keys\") pod \"keystone-bootstrap-427vl\" (UID: \"9275e5f7-1630-4266-abb5-0ba701de33cb\") " pod="openstack/keystone-bootstrap-427vl"
Jan 27 18:59:36 crc kubenswrapper[4853]: I0127 18:59:36.612351 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9275e5f7-1630-4266-abb5-0ba701de33cb-scripts\") pod \"keystone-bootstrap-427vl\" (UID: \"9275e5f7-1630-4266-abb5-0ba701de33cb\") " pod="openstack/keystone-bootstrap-427vl"
Jan 27 18:59:36 crc kubenswrapper[4853]: I0127 18:59:36.612369 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mxbk6\" (UniqueName: \"kubernetes.io/projected/2cca663f-f2e6-4c77-94c5-f7ad0bcc3ed0-kube-api-access-mxbk6\") pod \"dnsmasq-dns-847c4cc679-sz8dr\" (UID: \"2cca663f-f2e6-4c77-94c5-f7ad0bcc3ed0\") " pod="openstack/dnsmasq-dns-847c4cc679-sz8dr"
Jan 27 18:59:36 crc kubenswrapper[4853]: I0127 18:59:36.612393 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9275e5f7-1630-4266-abb5-0ba701de33cb-config-data\") pod \"keystone-bootstrap-427vl\" (UID: \"9275e5f7-1630-4266-abb5-0ba701de33cb\") " pod="openstack/keystone-bootstrap-427vl"
Jan 27 18:59:36 crc kubenswrapper[4853]: I0127 18:59:36.612415 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b62f23c7-d81a-4925-a2c3-10c410912a0f-logs\") pod \"horizon-6c8b67f5cc-gmbgv\" (UID: \"b62f23c7-d81a-4925-a2c3-10c410912a0f\") " pod="openstack/horizon-6c8b67f5cc-gmbgv"
Jan 27 18:59:36 crc kubenswrapper[4853]: I0127 18:59:36.612441 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vpkzm\" (UniqueName: \"kubernetes.io/projected/9275e5f7-1630-4266-abb5-0ba701de33cb-kube-api-access-vpkzm\") pod \"keystone-bootstrap-427vl\" (UID: \"9275e5f7-1630-4266-abb5-0ba701de33cb\") " pod="openstack/keystone-bootstrap-427vl"
Jan 27 18:59:36 crc kubenswrapper[4853]: I0127 18:59:36.612479 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2cca663f-f2e6-4c77-94c5-f7ad0bcc3ed0-dns-swift-storage-0\") pod \"dnsmasq-dns-847c4cc679-sz8dr\" (UID: \"2cca663f-f2e6-4c77-94c5-f7ad0bcc3ed0\") " pod="openstack/dnsmasq-dns-847c4cc679-sz8dr"
Jan 27 18:59:36 crc kubenswrapper[4853]: I0127 18:59:36.613584 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2cca663f-f2e6-4c77-94c5-f7ad0bcc3ed0-config\") pod \"dnsmasq-dns-847c4cc679-sz8dr\" (UID: \"2cca663f-f2e6-4c77-94c5-f7ad0bcc3ed0\") " pod="openstack/dnsmasq-dns-847c4cc679-sz8dr"
Jan 27 18:59:36 crc kubenswrapper[4853]: I0127 18:59:36.613644 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2cca663f-f2e6-4c77-94c5-f7ad0bcc3ed0-dns-swift-storage-0\") pod \"dnsmasq-dns-847c4cc679-sz8dr\" (UID: \"2cca663f-f2e6-4c77-94c5-f7ad0bcc3ed0\") " pod="openstack/dnsmasq-dns-847c4cc679-sz8dr"
Jan 27 18:59:36 crc kubenswrapper[4853]: I0127 18:59:36.614233 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2cca663f-f2e6-4c77-94c5-f7ad0bcc3ed0-ovsdbserver-sb\") pod \"dnsmasq-dns-847c4cc679-sz8dr\" (UID: \"2cca663f-f2e6-4c77-94c5-f7ad0bcc3ed0\") " pod="openstack/dnsmasq-dns-847c4cc679-sz8dr"
Jan 27 18:59:36 crc kubenswrapper[4853]: I0127 18:59:36.615102 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-scripts"
Jan 27 18:59:36 crc kubenswrapper[4853]: I0127 18:59:36.615439 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon-horizon-dockercfg-fcj6z"
Jan 27 18:59:36 crc kubenswrapper[4853]: I0127 18:59:36.615568 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon"
Jan 27 18:59:36 crc kubenswrapper[4853]: I0127 18:59:36.622577 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-config-data"
Jan 27 18:59:36 crc kubenswrapper[4853]: I0127 18:59:36.623913 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-6c8b67f5cc-gmbgv"]
Jan 27 18:59:36 crc kubenswrapper[4853]: I0127 18:59:36.626910 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2cca663f-f2e6-4c77-94c5-f7ad0bcc3ed0-dns-svc\") pod \"dnsmasq-dns-847c4cc679-sz8dr\" (UID: \"2cca663f-f2e6-4c77-94c5-f7ad0bcc3ed0\") " pod="openstack/dnsmasq-dns-847c4cc679-sz8dr"
Jan 27 18:59:36 crc kubenswrapper[4853]: I0127 18:59:36.629233 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2cca663f-f2e6-4c77-94c5-f7ad0bcc3ed0-ovsdbserver-nb\") pod \"dnsmasq-dns-847c4cc679-sz8dr\" (UID: \"2cca663f-f2e6-4c77-94c5-f7ad0bcc3ed0\") " pod="openstack/dnsmasq-dns-847c4cc679-sz8dr"
Jan 27 18:59:36 crc kubenswrapper[4853]: I0127 18:59:36.636009 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9275e5f7-1630-4266-abb5-0ba701de33cb-scripts\") pod \"keystone-bootstrap-427vl\" (UID: \"9275e5f7-1630-4266-abb5-0ba701de33cb\") " pod="openstack/keystone-bootstrap-427vl"
Jan 27 18:59:36 crc kubenswrapper[4853]: I0127 18:59:36.636496 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9275e5f7-1630-4266-abb5-0ba701de33cb-config-data\") pod \"keystone-bootstrap-427vl\" (UID: \"9275e5f7-1630-4266-abb5-0ba701de33cb\") " pod="openstack/keystone-bootstrap-427vl"
Jan 27 18:59:36 crc kubenswrapper[4853]: I0127 18:59:36.640294 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-sync-rfjdk"]
Jan 27 18:59:36 crc kubenswrapper[4853]: I0127 18:59:36.642824 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-rfjdk"
Jan 27 18:59:36 crc kubenswrapper[4853]: I0127 18:59:36.645768 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/9275e5f7-1630-4266-abb5-0ba701de33cb-fernet-keys\") pod \"keystone-bootstrap-427vl\" (UID: \"9275e5f7-1630-4266-abb5-0ba701de33cb\") " pod="openstack/keystone-bootstrap-427vl"
Jan 27 18:59:36 crc kubenswrapper[4853]: I0127 18:59:36.657109 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-sync-9bclj"]
Jan 27 18:59:36 crc kubenswrapper[4853]: I0127 18:59:36.669706 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-9bclj"
Jan 27 18:59:36 crc kubenswrapper[4853]: I0127 18:59:36.670792 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/9275e5f7-1630-4266-abb5-0ba701de33cb-credential-keys\") pod \"keystone-bootstrap-427vl\" (UID: \"9275e5f7-1630-4266-abb5-0ba701de33cb\") " pod="openstack/keystone-bootstrap-427vl"
Jan 27 18:59:36 crc kubenswrapper[4853]: I0127 18:59:36.671213 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts"
Jan 27 18:59:36 crc kubenswrapper[4853]: I0127 18:59:36.671860 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-fjxb2"
Jan 27 18:59:36 crc kubenswrapper[4853]: I0127 18:59:36.680401 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9275e5f7-1630-4266-abb5-0ba701de33cb-combined-ca-bundle\") pod \"keystone-bootstrap-427vl\" (UID: \"9275e5f7-1630-4266-abb5-0ba701de33cb\") " pod="openstack/keystone-bootstrap-427vl"
Jan 27 18:59:36 crc kubenswrapper[4853]: I0127 18:59:36.695710 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config"
Jan 27 18:59:36 crc kubenswrapper[4853]: I0127 18:59:36.695909 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mxbk6\" (UniqueName: \"kubernetes.io/projected/2cca663f-f2e6-4c77-94c5-f7ad0bcc3ed0-kube-api-access-mxbk6\") pod \"dnsmasq-dns-847c4cc679-sz8dr\" (UID: \"2cca663f-f2e6-4c77-94c5-f7ad0bcc3ed0\") " pod="openstack/dnsmasq-dns-847c4cc679-sz8dr"
Jan 27 18:59:36 crc kubenswrapper[4853]: I0127 18:59:36.695998 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config"
Jan 27 18:59:36 crc kubenswrapper[4853]: I0127 18:59:36.696106 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-25lg4"
Jan 27 18:59:36 crc kubenswrapper[4853]: I0127 18:59:36.702972 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vpkzm\" (UniqueName: \"kubernetes.io/projected/9275e5f7-1630-4266-abb5-0ba701de33cb-kube-api-access-vpkzm\") pod \"keystone-bootstrap-427vl\" (UID: \"9275e5f7-1630-4266-abb5-0ba701de33cb\") " pod="openstack/keystone-bootstrap-427vl"
Jan 27 18:59:36 crc kubenswrapper[4853]: I0127 18:59:36.712540 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data"
Jan 27 18:59:36 crc kubenswrapper[4853]: I0127 18:59:36.715552 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8e865945-20c8-4b2d-a52b-62dd1450181b-combined-ca-bundle\") pod \"neutron-db-sync-9bclj\" (UID: \"8e865945-20c8-4b2d-a52b-62dd1450181b\") " pod="openstack/neutron-db-sync-9bclj"
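Every volume above climbs the same reconciler ladder: VerifyControllerAttachedVolume started (reconciler_common.go:245), MountVolume started (reconciler_common.go:218), then MountVolume.SetUp succeeded (operation_generator.go:637). A sketch that tallies how far each volume of one pod has progressed (a hypothetical log-analysis helper, not a kubelet tool; it assumes journal text on stdin and matches the escaped \"name\" form shown above):

    # Track each volume of one pod through the kubelet volume reconciler.
    import re
    import sys

    POD = 'pod="openstack/keystone-bootstrap-427vl"'
    vol = re.compile(r'volume \\"([\w-]+)\\"')  # matches: volume \"scripts\"
    state = {}

    for line in sys.stdin:
        if POD not in line:
            continue
        m = vol.search(line)
        if not m:
            continue
        name = m.group(1)
        if "VerifyControllerAttachedVolume started" in line:
            state.setdefault(name, "attach verified")
        elif "MountVolume started" in line:
            state[name] = "mounting"
        elif "MountVolume.SetUp succeeded" in line:
            state[name] = "mounted"

    for name, s in sorted(state.items()):
        print(f"{name}: {s}")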
\"kubernetes.io/secret/8e865945-20c8-4b2d-a52b-62dd1450181b-combined-ca-bundle\") pod \"neutron-db-sync-9bclj\" (UID: \"8e865945-20c8-4b2d-a52b-62dd1450181b\") " pod="openstack/neutron-db-sync-9bclj" Jan 27 18:59:36 crc kubenswrapper[4853]: I0127 18:59:36.715605 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b62f23c7-d81a-4925-a2c3-10c410912a0f-scripts\") pod \"horizon-6c8b67f5cc-gmbgv\" (UID: \"b62f23c7-d81a-4925-a2c3-10c410912a0f\") " pod="openstack/horizon-6c8b67f5cc-gmbgv" Jan 27 18:59:36 crc kubenswrapper[4853]: I0127 18:59:36.715631 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b62f23c7-d81a-4925-a2c3-10c410912a0f-config-data\") pod \"horizon-6c8b67f5cc-gmbgv\" (UID: \"b62f23c7-d81a-4925-a2c3-10c410912a0f\") " pod="openstack/horizon-6c8b67f5cc-gmbgv" Jan 27 18:59:36 crc kubenswrapper[4853]: I0127 18:59:36.715678 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qc48m\" (UniqueName: \"kubernetes.io/projected/b62f23c7-d81a-4925-a2c3-10c410912a0f-kube-api-access-qc48m\") pod \"horizon-6c8b67f5cc-gmbgv\" (UID: \"b62f23c7-d81a-4925-a2c3-10c410912a0f\") " pod="openstack/horizon-6c8b67f5cc-gmbgv" Jan 27 18:59:36 crc kubenswrapper[4853]: I0127 18:59:36.715711 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/b62f23c7-d81a-4925-a2c3-10c410912a0f-horizon-secret-key\") pod \"horizon-6c8b67f5cc-gmbgv\" (UID: \"b62f23c7-d81a-4925-a2c3-10c410912a0f\") " pod="openstack/horizon-6c8b67f5cc-gmbgv" Jan 27 18:59:36 crc kubenswrapper[4853]: I0127 18:59:36.715732 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/8e865945-20c8-4b2d-a52b-62dd1450181b-config\") pod \"neutron-db-sync-9bclj\" (UID: \"8e865945-20c8-4b2d-a52b-62dd1450181b\") " pod="openstack/neutron-db-sync-9bclj" Jan 27 18:59:36 crc kubenswrapper[4853]: I0127 18:59:36.715758 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wtgnl\" (UniqueName: \"kubernetes.io/projected/8e865945-20c8-4b2d-a52b-62dd1450181b-kube-api-access-wtgnl\") pod \"neutron-db-sync-9bclj\" (UID: \"8e865945-20c8-4b2d-a52b-62dd1450181b\") " pod="openstack/neutron-db-sync-9bclj" Jan 27 18:59:36 crc kubenswrapper[4853]: I0127 18:59:36.715797 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b62f23c7-d81a-4925-a2c3-10c410912a0f-logs\") pod \"horizon-6c8b67f5cc-gmbgv\" (UID: \"b62f23c7-d81a-4925-a2c3-10c410912a0f\") " pod="openstack/horizon-6c8b67f5cc-gmbgv" Jan 27 18:59:36 crc kubenswrapper[4853]: I0127 18:59:36.719105 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b62f23c7-d81a-4925-a2c3-10c410912a0f-scripts\") pod \"horizon-6c8b67f5cc-gmbgv\" (UID: \"b62f23c7-d81a-4925-a2c3-10c410912a0f\") " pod="openstack/horizon-6c8b67f5cc-gmbgv" Jan 27 18:59:36 crc kubenswrapper[4853]: I0127 18:59:36.730617 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-9bclj"] Jan 27 18:59:36 crc kubenswrapper[4853]: I0127 18:59:36.744193 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/configmap/b62f23c7-d81a-4925-a2c3-10c410912a0f-config-data\") pod \"horizon-6c8b67f5cc-gmbgv\" (UID: \"b62f23c7-d81a-4925-a2c3-10c410912a0f\") " pod="openstack/horizon-6c8b67f5cc-gmbgv" Jan 27 18:59:36 crc kubenswrapper[4853]: I0127 18:59:36.747286 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/b62f23c7-d81a-4925-a2c3-10c410912a0f-horizon-secret-key\") pod \"horizon-6c8b67f5cc-gmbgv\" (UID: \"b62f23c7-d81a-4925-a2c3-10c410912a0f\") " pod="openstack/horizon-6c8b67f5cc-gmbgv" Jan 27 18:59:36 crc kubenswrapper[4853]: I0127 18:59:36.747426 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b62f23c7-d81a-4925-a2c3-10c410912a0f-logs\") pod \"horizon-6c8b67f5cc-gmbgv\" (UID: \"b62f23c7-d81a-4925-a2c3-10c410912a0f\") " pod="openstack/horizon-6c8b67f5cc-gmbgv" Jan 27 18:59:36 crc kubenswrapper[4853]: I0127 18:59:36.768843 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qc48m\" (UniqueName: \"kubernetes.io/projected/b62f23c7-d81a-4925-a2c3-10c410912a0f-kube-api-access-qc48m\") pod \"horizon-6c8b67f5cc-gmbgv\" (UID: \"b62f23c7-d81a-4925-a2c3-10c410912a0f\") " pod="openstack/horizon-6c8b67f5cc-gmbgv" Jan 27 18:59:36 crc kubenswrapper[4853]: I0127 18:59:36.771929 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-rfjdk"] Jan 27 18:59:36 crc kubenswrapper[4853]: I0127 18:59:36.815953 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-6c8b67f5cc-gmbgv" Jan 27 18:59:36 crc kubenswrapper[4853]: I0127 18:59:36.826740 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/b1d33900-476d-4c86-a501-4490c01000ca-db-sync-config-data\") pod \"cinder-db-sync-rfjdk\" (UID: \"b1d33900-476d-4c86-a501-4490c01000ca\") " pod="openstack/cinder-db-sync-rfjdk" Jan 27 18:59:36 crc kubenswrapper[4853]: I0127 18:59:36.826947 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8e865945-20c8-4b2d-a52b-62dd1450181b-combined-ca-bundle\") pod \"neutron-db-sync-9bclj\" (UID: \"8e865945-20c8-4b2d-a52b-62dd1450181b\") " pod="openstack/neutron-db-sync-9bclj" Jan 27 18:59:36 crc kubenswrapper[4853]: I0127 18:59:36.827069 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t8l6t\" (UniqueName: \"kubernetes.io/projected/b1d33900-476d-4c86-a501-4490c01000ca-kube-api-access-t8l6t\") pod \"cinder-db-sync-rfjdk\" (UID: \"b1d33900-476d-4c86-a501-4490c01000ca\") " pod="openstack/cinder-db-sync-rfjdk" Jan 27 18:59:36 crc kubenswrapper[4853]: I0127 18:59:36.827095 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b1d33900-476d-4c86-a501-4490c01000ca-combined-ca-bundle\") pod \"cinder-db-sync-rfjdk\" (UID: \"b1d33900-476d-4c86-a501-4490c01000ca\") " pod="openstack/cinder-db-sync-rfjdk" Jan 27 18:59:36 crc kubenswrapper[4853]: I0127 18:59:36.827264 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/b1d33900-476d-4c86-a501-4490c01000ca-etc-machine-id\") pod \"cinder-db-sync-rfjdk\" (UID: 
\"b1d33900-476d-4c86-a501-4490c01000ca\") " pod="openstack/cinder-db-sync-rfjdk" Jan 27 18:59:36 crc kubenswrapper[4853]: I0127 18:59:36.830707 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8e865945-20c8-4b2d-a52b-62dd1450181b-combined-ca-bundle\") pod \"neutron-db-sync-9bclj\" (UID: \"8e865945-20c8-4b2d-a52b-62dd1450181b\") " pod="openstack/neutron-db-sync-9bclj" Jan 27 18:59:36 crc kubenswrapper[4853]: I0127 18:59:36.836459 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/8e865945-20c8-4b2d-a52b-62dd1450181b-config\") pod \"neutron-db-sync-9bclj\" (UID: \"8e865945-20c8-4b2d-a52b-62dd1450181b\") " pod="openstack/neutron-db-sync-9bclj" Jan 27 18:59:36 crc kubenswrapper[4853]: I0127 18:59:36.836584 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b1d33900-476d-4c86-a501-4490c01000ca-scripts\") pod \"cinder-db-sync-rfjdk\" (UID: \"b1d33900-476d-4c86-a501-4490c01000ca\") " pod="openstack/cinder-db-sync-rfjdk" Jan 27 18:59:36 crc kubenswrapper[4853]: I0127 18:59:36.836653 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wtgnl\" (UniqueName: \"kubernetes.io/projected/8e865945-20c8-4b2d-a52b-62dd1450181b-kube-api-access-wtgnl\") pod \"neutron-db-sync-9bclj\" (UID: \"8e865945-20c8-4b2d-a52b-62dd1450181b\") " pod="openstack/neutron-db-sync-9bclj" Jan 27 18:59:36 crc kubenswrapper[4853]: I0127 18:59:36.837065 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b1d33900-476d-4c86-a501-4490c01000ca-config-data\") pod \"cinder-db-sync-rfjdk\" (UID: \"b1d33900-476d-4c86-a501-4490c01000ca\") " pod="openstack/cinder-db-sync-rfjdk" Jan 27 18:59:36 crc kubenswrapper[4853]: I0127 18:59:36.840919 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/8e865945-20c8-4b2d-a52b-62dd1450181b-config\") pod \"neutron-db-sync-9bclj\" (UID: \"8e865945-20c8-4b2d-a52b-62dd1450181b\") " pod="openstack/neutron-db-sync-9bclj" Jan 27 18:59:36 crc kubenswrapper[4853]: I0127 18:59:36.853944 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-sync-b7gbn"] Jan 27 18:59:36 crc kubenswrapper[4853]: I0127 18:59:36.870556 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-b7gbn" Jan 27 18:59:36 crc kubenswrapper[4853]: I0127 18:59:36.875345 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-jl89l" Jan 27 18:59:36 crc kubenswrapper[4853]: I0127 18:59:36.875663 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Jan 27 18:59:36 crc kubenswrapper[4853]: I0127 18:59:36.915023 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-b7gbn"] Jan 27 18:59:36 crc kubenswrapper[4853]: I0127 18:59:36.916586 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wtgnl\" (UniqueName: \"kubernetes.io/projected/8e865945-20c8-4b2d-a52b-62dd1450181b-kube-api-access-wtgnl\") pod \"neutron-db-sync-9bclj\" (UID: \"8e865945-20c8-4b2d-a52b-62dd1450181b\") " pod="openstack/neutron-db-sync-9bclj" Jan 27 18:59:36 crc kubenswrapper[4853]: I0127 18:59:36.925735 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-sync-nzrdc"] Jan 27 18:59:36 crc kubenswrapper[4853]: I0127 18:59:36.928035 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-nzrdc" Jan 27 18:59:36 crc kubenswrapper[4853]: I0127 18:59:36.942764 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-847c4cc679-sz8dr" Jan 27 18:59:36 crc kubenswrapper[4853]: I0127 18:59:36.944713 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Jan 27 18:59:36 crc kubenswrapper[4853]: I0127 18:59:36.945941 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-st74w" Jan 27 18:59:36 crc kubenswrapper[4853]: I0127 18:59:36.946378 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b1d33900-476d-4c86-a501-4490c01000ca-scripts\") pod \"cinder-db-sync-rfjdk\" (UID: \"b1d33900-476d-4c86-a501-4490c01000ca\") " pod="openstack/cinder-db-sync-rfjdk" Jan 27 18:59:36 crc kubenswrapper[4853]: I0127 18:59:36.946446 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3bb20c48-23bc-4c0d-92de-f87015fac932-config-data\") pod \"placement-db-sync-nzrdc\" (UID: \"3bb20c48-23bc-4c0d-92de-f87015fac932\") " pod="openstack/placement-db-sync-nzrdc" Jan 27 18:59:36 crc kubenswrapper[4853]: I0127 18:59:36.946582 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b1d33900-476d-4c86-a501-4490c01000ca-config-data\") pod \"cinder-db-sync-rfjdk\" (UID: \"b1d33900-476d-4c86-a501-4490c01000ca\") " pod="openstack/cinder-db-sync-rfjdk" Jan 27 18:59:36 crc kubenswrapper[4853]: I0127 18:59:36.946618 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/b1d33900-476d-4c86-a501-4490c01000ca-db-sync-config-data\") pod \"cinder-db-sync-rfjdk\" (UID: \"b1d33900-476d-4c86-a501-4490c01000ca\") " pod="openstack/cinder-db-sync-rfjdk" Jan 27 18:59:36 crc kubenswrapper[4853]: I0127 18:59:36.946672 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nztcc\" (UniqueName: 
\"kubernetes.io/projected/8ae89dc3-4a08-42bd-a234-b5e8f948dc23-kube-api-access-nztcc\") pod \"barbican-db-sync-b7gbn\" (UID: \"8ae89dc3-4a08-42bd-a234-b5e8f948dc23\") " pod="openstack/barbican-db-sync-b7gbn" Jan 27 18:59:36 crc kubenswrapper[4853]: I0127 18:59:36.946711 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3bb20c48-23bc-4c0d-92de-f87015fac932-logs\") pod \"placement-db-sync-nzrdc\" (UID: \"3bb20c48-23bc-4c0d-92de-f87015fac932\") " pod="openstack/placement-db-sync-nzrdc" Jan 27 18:59:36 crc kubenswrapper[4853]: I0127 18:59:36.946748 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t8l6t\" (UniqueName: \"kubernetes.io/projected/b1d33900-476d-4c86-a501-4490c01000ca-kube-api-access-t8l6t\") pod \"cinder-db-sync-rfjdk\" (UID: \"b1d33900-476d-4c86-a501-4490c01000ca\") " pod="openstack/cinder-db-sync-rfjdk" Jan 27 18:59:36 crc kubenswrapper[4853]: I0127 18:59:36.946774 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b1d33900-476d-4c86-a501-4490c01000ca-combined-ca-bundle\") pod \"cinder-db-sync-rfjdk\" (UID: \"b1d33900-476d-4c86-a501-4490c01000ca\") " pod="openstack/cinder-db-sync-rfjdk" Jan 27 18:59:36 crc kubenswrapper[4853]: I0127 18:59:36.946806 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3bb20c48-23bc-4c0d-92de-f87015fac932-scripts\") pod \"placement-db-sync-nzrdc\" (UID: \"3bb20c48-23bc-4c0d-92de-f87015fac932\") " pod="openstack/placement-db-sync-nzrdc" Jan 27 18:59:36 crc kubenswrapper[4853]: I0127 18:59:36.946844 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3bb20c48-23bc-4c0d-92de-f87015fac932-combined-ca-bundle\") pod \"placement-db-sync-nzrdc\" (UID: \"3bb20c48-23bc-4c0d-92de-f87015fac932\") " pod="openstack/placement-db-sync-nzrdc" Jan 27 18:59:36 crc kubenswrapper[4853]: I0127 18:59:36.946879 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ae89dc3-4a08-42bd-a234-b5e8f948dc23-combined-ca-bundle\") pod \"barbican-db-sync-b7gbn\" (UID: \"8ae89dc3-4a08-42bd-a234-b5e8f948dc23\") " pod="openstack/barbican-db-sync-b7gbn" Jan 27 18:59:36 crc kubenswrapper[4853]: I0127 18:59:36.946920 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/8ae89dc3-4a08-42bd-a234-b5e8f948dc23-db-sync-config-data\") pod \"barbican-db-sync-b7gbn\" (UID: \"8ae89dc3-4a08-42bd-a234-b5e8f948dc23\") " pod="openstack/barbican-db-sync-b7gbn" Jan 27 18:59:36 crc kubenswrapper[4853]: I0127 18:59:36.946958 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/b1d33900-476d-4c86-a501-4490c01000ca-etc-machine-id\") pod \"cinder-db-sync-rfjdk\" (UID: \"b1d33900-476d-4c86-a501-4490c01000ca\") " pod="openstack/cinder-db-sync-rfjdk" Jan 27 18:59:36 crc kubenswrapper[4853]: I0127 18:59:36.947001 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sr9k7\" (UniqueName: 
\"kubernetes.io/projected/3bb20c48-23bc-4c0d-92de-f87015fac932-kube-api-access-sr9k7\") pod \"placement-db-sync-nzrdc\" (UID: \"3bb20c48-23bc-4c0d-92de-f87015fac932\") " pod="openstack/placement-db-sync-nzrdc" Jan 27 18:59:36 crc kubenswrapper[4853]: I0127 18:59:36.949140 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Jan 27 18:59:36 crc kubenswrapper[4853]: I0127 18:59:36.951342 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/b1d33900-476d-4c86-a501-4490c01000ca-etc-machine-id\") pod \"cinder-db-sync-rfjdk\" (UID: \"b1d33900-476d-4c86-a501-4490c01000ca\") " pod="openstack/cinder-db-sync-rfjdk" Jan 27 18:59:36 crc kubenswrapper[4853]: I0127 18:59:36.951473 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-nzrdc"] Jan 27 18:59:36 crc kubenswrapper[4853]: I0127 18:59:36.974782 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b1d33900-476d-4c86-a501-4490c01000ca-scripts\") pod \"cinder-db-sync-rfjdk\" (UID: \"b1d33900-476d-4c86-a501-4490c01000ca\") " pod="openstack/cinder-db-sync-rfjdk" Jan 27 18:59:36 crc kubenswrapper[4853]: I0127 18:59:36.977163 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/b1d33900-476d-4c86-a501-4490c01000ca-db-sync-config-data\") pod \"cinder-db-sync-rfjdk\" (UID: \"b1d33900-476d-4c86-a501-4490c01000ca\") " pod="openstack/cinder-db-sync-rfjdk" Jan 27 18:59:36 crc kubenswrapper[4853]: I0127 18:59:36.980337 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b1d33900-476d-4c86-a501-4490c01000ca-config-data\") pod \"cinder-db-sync-rfjdk\" (UID: \"b1d33900-476d-4c86-a501-4490c01000ca\") " pod="openstack/cinder-db-sync-rfjdk" Jan 27 18:59:36 crc kubenswrapper[4853]: I0127 18:59:36.980617 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-427vl" Jan 27 18:59:36 crc kubenswrapper[4853]: I0127 18:59:36.987191 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 27 18:59:36 crc kubenswrapper[4853]: I0127 18:59:36.989330 4853 util.go:30] "No sandbox for pod can be found. 
Jan 27 18:59:36 crc kubenswrapper[4853]: I0127 18:59:36.994336 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b1d33900-476d-4c86-a501-4490c01000ca-combined-ca-bundle\") pod \"cinder-db-sync-rfjdk\" (UID: \"b1d33900-476d-4c86-a501-4490c01000ca\") " pod="openstack/cinder-db-sync-rfjdk"
Jan 27 18:59:36 crc kubenswrapper[4853]: I0127 18:59:36.994364 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data"
Jan 27 18:59:37 crc kubenswrapper[4853]: I0127 18:59:37.001034 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts"
Jan 27 18:59:37 crc kubenswrapper[4853]: I0127 18:59:37.015069 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-847c4cc679-sz8dr"]
Jan 27 18:59:37 crc kubenswrapper[4853]: I0127 18:59:37.023709 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Jan 27 18:59:37 crc kubenswrapper[4853]: I0127 18:59:37.036964 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t8l6t\" (UniqueName: \"kubernetes.io/projected/b1d33900-476d-4c86-a501-4490c01000ca-kube-api-access-t8l6t\") pod \"cinder-db-sync-rfjdk\" (UID: \"b1d33900-476d-4c86-a501-4490c01000ca\") " pod="openstack/cinder-db-sync-rfjdk"
Jan 27 18:59:37 crc kubenswrapper[4853]: I0127 18:59:37.049240 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nztcc\" (UniqueName: \"kubernetes.io/projected/8ae89dc3-4a08-42bd-a234-b5e8f948dc23-kube-api-access-nztcc\") pod \"barbican-db-sync-b7gbn\" (UID: \"8ae89dc3-4a08-42bd-a234-b5e8f948dc23\") " pod="openstack/barbican-db-sync-b7gbn"
Jan 27 18:59:37 crc kubenswrapper[4853]: I0127 18:59:37.049303 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3bb20c48-23bc-4c0d-92de-f87015fac932-logs\") pod \"placement-db-sync-nzrdc\" (UID: \"3bb20c48-23bc-4c0d-92de-f87015fac932\") " pod="openstack/placement-db-sync-nzrdc"
Jan 27 18:59:37 crc kubenswrapper[4853]: I0127 18:59:37.049335 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/afc78a65-bfa6-42ff-a84a-f90dd740ffbf-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"afc78a65-bfa6-42ff-a84a-f90dd740ffbf\") " pod="openstack/ceilometer-0"
Jan 27 18:59:37 crc kubenswrapper[4853]: I0127 18:59:37.049354 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-slfzv\" (UniqueName: \"kubernetes.io/projected/afc78a65-bfa6-42ff-a84a-f90dd740ffbf-kube-api-access-slfzv\") pod \"ceilometer-0\" (UID: \"afc78a65-bfa6-42ff-a84a-f90dd740ffbf\") " pod="openstack/ceilometer-0"
Jan 27 18:59:37 crc kubenswrapper[4853]: I0127 18:59:37.049371 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3bb20c48-23bc-4c0d-92de-f87015fac932-scripts\") pod \"placement-db-sync-nzrdc\" (UID: \"3bb20c48-23bc-4c0d-92de-f87015fac932\") " pod="openstack/placement-db-sync-nzrdc"
Jan 27 18:59:37 crc kubenswrapper[4853]: I0127 18:59:37.049392 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/afc78a65-bfa6-42ff-a84a-f90dd740ffbf-config-data\") pod \"ceilometer-0\" (UID: \"afc78a65-bfa6-42ff-a84a-f90dd740ffbf\") " pod="openstack/ceilometer-0"
Jan 27 18:59:37 crc kubenswrapper[4853]: I0127 18:59:37.049413 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3bb20c48-23bc-4c0d-92de-f87015fac932-combined-ca-bundle\") pod \"placement-db-sync-nzrdc\" (UID: \"3bb20c48-23bc-4c0d-92de-f87015fac932\") " pod="openstack/placement-db-sync-nzrdc"
Jan 27 18:59:37 crc kubenswrapper[4853]: I0127 18:59:37.049441 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ae89dc3-4a08-42bd-a234-b5e8f948dc23-combined-ca-bundle\") pod \"barbican-db-sync-b7gbn\" (UID: \"8ae89dc3-4a08-42bd-a234-b5e8f948dc23\") " pod="openstack/barbican-db-sync-b7gbn"
Jan 27 18:59:37 crc kubenswrapper[4853]: I0127 18:59:37.049462 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/afc78a65-bfa6-42ff-a84a-f90dd740ffbf-run-httpd\") pod \"ceilometer-0\" (UID: \"afc78a65-bfa6-42ff-a84a-f90dd740ffbf\") " pod="openstack/ceilometer-0"
Jan 27 18:59:37 crc kubenswrapper[4853]: I0127 18:59:37.049479 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/8ae89dc3-4a08-42bd-a234-b5e8f948dc23-db-sync-config-data\") pod \"barbican-db-sync-b7gbn\" (UID: \"8ae89dc3-4a08-42bd-a234-b5e8f948dc23\") " pod="openstack/barbican-db-sync-b7gbn"
Jan 27 18:59:37 crc kubenswrapper[4853]: I0127 18:59:37.049513 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sr9k7\" (UniqueName: \"kubernetes.io/projected/3bb20c48-23bc-4c0d-92de-f87015fac932-kube-api-access-sr9k7\") pod \"placement-db-sync-nzrdc\" (UID: \"3bb20c48-23bc-4c0d-92de-f87015fac932\") " pod="openstack/placement-db-sync-nzrdc"
Jan 27 18:59:37 crc kubenswrapper[4853]: I0127 18:59:37.049536 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/afc78a65-bfa6-42ff-a84a-f90dd740ffbf-scripts\") pod \"ceilometer-0\" (UID: \"afc78a65-bfa6-42ff-a84a-f90dd740ffbf\") " pod="openstack/ceilometer-0"
Jan 27 18:59:37 crc kubenswrapper[4853]: I0127 18:59:37.049564 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3bb20c48-23bc-4c0d-92de-f87015fac932-config-data\") pod \"placement-db-sync-nzrdc\" (UID: \"3bb20c48-23bc-4c0d-92de-f87015fac932\") " pod="openstack/placement-db-sync-nzrdc"
Jan 27 18:59:37 crc kubenswrapper[4853]: I0127 18:59:37.049593 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/afc78a65-bfa6-42ff-a84a-f90dd740ffbf-log-httpd\") pod \"ceilometer-0\" (UID: \"afc78a65-bfa6-42ff-a84a-f90dd740ffbf\") " pod="openstack/ceilometer-0"
Jan 27 18:59:37 crc kubenswrapper[4853]: I0127 18:59:37.049616 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/afc78a65-bfa6-42ff-a84a-f90dd740ffbf-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"afc78a65-bfa6-42ff-a84a-f90dd740ffbf\") " pod="openstack/ceilometer-0"
Jan 27 18:59:37 crc kubenswrapper[4853]: I0127 18:59:37.050346 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3bb20c48-23bc-4c0d-92de-f87015fac932-logs\") pod \"placement-db-sync-nzrdc\" (UID: \"3bb20c48-23bc-4c0d-92de-f87015fac932\") " pod="openstack/placement-db-sync-nzrdc"
Jan 27 18:59:37 crc kubenswrapper[4853]: I0127 18:59:37.081199 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/8ae89dc3-4a08-42bd-a234-b5e8f948dc23-db-sync-config-data\") pod \"barbican-db-sync-b7gbn\" (UID: \"8ae89dc3-4a08-42bd-a234-b5e8f948dc23\") " pod="openstack/barbican-db-sync-b7gbn"
Jan 27 18:59:37 crc kubenswrapper[4853]: I0127 18:59:37.082608 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3bb20c48-23bc-4c0d-92de-f87015fac932-scripts\") pod \"placement-db-sync-nzrdc\" (UID: \"3bb20c48-23bc-4c0d-92de-f87015fac932\") " pod="openstack/placement-db-sync-nzrdc"
Jan 27 18:59:37 crc kubenswrapper[4853]: I0127 18:59:37.086146 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ae89dc3-4a08-42bd-a234-b5e8f948dc23-combined-ca-bundle\") pod \"barbican-db-sync-b7gbn\" (UID: \"8ae89dc3-4a08-42bd-a234-b5e8f948dc23\") " pod="openstack/barbican-db-sync-b7gbn"
Jan 27 18:59:37 crc kubenswrapper[4853]: I0127 18:59:37.091861 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3bb20c48-23bc-4c0d-92de-f87015fac932-combined-ca-bundle\") pod \"placement-db-sync-nzrdc\" (UID: \"3bb20c48-23bc-4c0d-92de-f87015fac932\") " pod="openstack/placement-db-sync-nzrdc"
Jan 27 18:59:37 crc kubenswrapper[4853]: I0127 18:59:37.102318 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-snr4r"]
Jan 27 18:59:37 crc kubenswrapper[4853]: I0127 18:59:37.103907 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-785d8bcb8c-snr4r" Jan 27 18:59:37 crc kubenswrapper[4853]: I0127 18:59:37.106262 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3bb20c48-23bc-4c0d-92de-f87015fac932-config-data\") pod \"placement-db-sync-nzrdc\" (UID: \"3bb20c48-23bc-4c0d-92de-f87015fac932\") " pod="openstack/placement-db-sync-nzrdc" Jan 27 18:59:37 crc kubenswrapper[4853]: I0127 18:59:37.109857 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nztcc\" (UniqueName: \"kubernetes.io/projected/8ae89dc3-4a08-42bd-a234-b5e8f948dc23-kube-api-access-nztcc\") pod \"barbican-db-sync-b7gbn\" (UID: \"8ae89dc3-4a08-42bd-a234-b5e8f948dc23\") " pod="openstack/barbican-db-sync-b7gbn" Jan 27 18:59:37 crc kubenswrapper[4853]: I0127 18:59:37.152459 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/afc78a65-bfa6-42ff-a84a-f90dd740ffbf-log-httpd\") pod \"ceilometer-0\" (UID: \"afc78a65-bfa6-42ff-a84a-f90dd740ffbf\") " pod="openstack/ceilometer-0" Jan 27 18:59:37 crc kubenswrapper[4853]: I0127 18:59:37.152504 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6d63a071-50d0-4387-a817-9d65506ac62b-config\") pod \"dnsmasq-dns-785d8bcb8c-snr4r\" (UID: \"6d63a071-50d0-4387-a817-9d65506ac62b\") " pod="openstack/dnsmasq-dns-785d8bcb8c-snr4r" Jan 27 18:59:37 crc kubenswrapper[4853]: I0127 18:59:37.152526 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/afc78a65-bfa6-42ff-a84a-f90dd740ffbf-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"afc78a65-bfa6-42ff-a84a-f90dd740ffbf\") " pod="openstack/ceilometer-0" Jan 27 18:59:37 crc kubenswrapper[4853]: I0127 18:59:37.152595 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6d63a071-50d0-4387-a817-9d65506ac62b-ovsdbserver-nb\") pod \"dnsmasq-dns-785d8bcb8c-snr4r\" (UID: \"6d63a071-50d0-4387-a817-9d65506ac62b\") " pod="openstack/dnsmasq-dns-785d8bcb8c-snr4r" Jan 27 18:59:37 crc kubenswrapper[4853]: I0127 18:59:37.152613 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/afc78a65-bfa6-42ff-a84a-f90dd740ffbf-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"afc78a65-bfa6-42ff-a84a-f90dd740ffbf\") " pod="openstack/ceilometer-0" Jan 27 18:59:37 crc kubenswrapper[4853]: I0127 18:59:37.152629 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-slfzv\" (UniqueName: \"kubernetes.io/projected/afc78a65-bfa6-42ff-a84a-f90dd740ffbf-kube-api-access-slfzv\") pod \"ceilometer-0\" (UID: \"afc78a65-bfa6-42ff-a84a-f90dd740ffbf\") " pod="openstack/ceilometer-0" Jan 27 18:59:37 crc kubenswrapper[4853]: I0127 18:59:37.152665 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/afc78a65-bfa6-42ff-a84a-f90dd740ffbf-config-data\") pod \"ceilometer-0\" (UID: \"afc78a65-bfa6-42ff-a84a-f90dd740ffbf\") " pod="openstack/ceilometer-0" Jan 27 18:59:37 crc kubenswrapper[4853]: I0127 18:59:37.152699 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6d63a071-50d0-4387-a817-9d65506ac62b-dns-swift-storage-0\") pod \"dnsmasq-dns-785d8bcb8c-snr4r\" (UID: \"6d63a071-50d0-4387-a817-9d65506ac62b\") " pod="openstack/dnsmasq-dns-785d8bcb8c-snr4r" Jan 27 18:59:37 crc kubenswrapper[4853]: I0127 18:59:37.152728 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/afc78a65-bfa6-42ff-a84a-f90dd740ffbf-run-httpd\") pod \"ceilometer-0\" (UID: \"afc78a65-bfa6-42ff-a84a-f90dd740ffbf\") " pod="openstack/ceilometer-0" Jan 27 18:59:37 crc kubenswrapper[4853]: I0127 18:59:37.152745 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gksjl\" (UniqueName: \"kubernetes.io/projected/6d63a071-50d0-4387-a817-9d65506ac62b-kube-api-access-gksjl\") pod \"dnsmasq-dns-785d8bcb8c-snr4r\" (UID: \"6d63a071-50d0-4387-a817-9d65506ac62b\") " pod="openstack/dnsmasq-dns-785d8bcb8c-snr4r" Jan 27 18:59:37 crc kubenswrapper[4853]: I0127 18:59:37.152780 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6d63a071-50d0-4387-a817-9d65506ac62b-ovsdbserver-sb\") pod \"dnsmasq-dns-785d8bcb8c-snr4r\" (UID: \"6d63a071-50d0-4387-a817-9d65506ac62b\") " pod="openstack/dnsmasq-dns-785d8bcb8c-snr4r" Jan 27 18:59:37 crc kubenswrapper[4853]: I0127 18:59:37.152795 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6d63a071-50d0-4387-a817-9d65506ac62b-dns-svc\") pod \"dnsmasq-dns-785d8bcb8c-snr4r\" (UID: \"6d63a071-50d0-4387-a817-9d65506ac62b\") " pod="openstack/dnsmasq-dns-785d8bcb8c-snr4r" Jan 27 18:59:37 crc kubenswrapper[4853]: I0127 18:59:37.152829 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/afc78a65-bfa6-42ff-a84a-f90dd740ffbf-scripts\") pod \"ceilometer-0\" (UID: \"afc78a65-bfa6-42ff-a84a-f90dd740ffbf\") " pod="openstack/ceilometer-0" Jan 27 18:59:37 crc kubenswrapper[4853]: I0127 18:59:37.153656 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/afc78a65-bfa6-42ff-a84a-f90dd740ffbf-log-httpd\") pod \"ceilometer-0\" (UID: \"afc78a65-bfa6-42ff-a84a-f90dd740ffbf\") " pod="openstack/ceilometer-0" Jan 27 18:59:37 crc kubenswrapper[4853]: I0127 18:59:37.158888 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/afc78a65-bfa6-42ff-a84a-f90dd740ffbf-run-httpd\") pod \"ceilometer-0\" (UID: \"afc78a65-bfa6-42ff-a84a-f90dd740ffbf\") " pod="openstack/ceilometer-0" Jan 27 18:59:37 crc kubenswrapper[4853]: I0127 18:59:37.161900 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/afc78a65-bfa6-42ff-a84a-f90dd740ffbf-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"afc78a65-bfa6-42ff-a84a-f90dd740ffbf\") " pod="openstack/ceilometer-0" Jan 27 18:59:37 crc kubenswrapper[4853]: I0127 18:59:37.194554 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/afc78a65-bfa6-42ff-a84a-f90dd740ffbf-scripts\") pod \"ceilometer-0\" (UID: \"afc78a65-bfa6-42ff-a84a-f90dd740ffbf\") " pod="openstack/ceilometer-0" Jan 27 
18:59:37 crc kubenswrapper[4853]: I0127 18:59:37.199889 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-rfjdk" Jan 27 18:59:37 crc kubenswrapper[4853]: I0127 18:59:37.213926 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/afc78a65-bfa6-42ff-a84a-f90dd740ffbf-config-data\") pod \"ceilometer-0\" (UID: \"afc78a65-bfa6-42ff-a84a-f90dd740ffbf\") " pod="openstack/ceilometer-0" Jan 27 18:59:37 crc kubenswrapper[4853]: I0127 18:59:37.216359 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sr9k7\" (UniqueName: \"kubernetes.io/projected/3bb20c48-23bc-4c0d-92de-f87015fac932-kube-api-access-sr9k7\") pod \"placement-db-sync-nzrdc\" (UID: \"3bb20c48-23bc-4c0d-92de-f87015fac932\") " pod="openstack/placement-db-sync-nzrdc" Jan 27 18:59:37 crc kubenswrapper[4853]: I0127 18:59:37.216831 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/afc78a65-bfa6-42ff-a84a-f90dd740ffbf-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"afc78a65-bfa6-42ff-a84a-f90dd740ffbf\") " pod="openstack/ceilometer-0" Jan 27 18:59:37 crc kubenswrapper[4853]: I0127 18:59:37.221395 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-9bclj" Jan 27 18:59:37 crc kubenswrapper[4853]: I0127 18:59:37.279532 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-b7gbn" Jan 27 18:59:37 crc kubenswrapper[4853]: I0127 18:59:37.280545 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Jan 27 18:59:37 crc kubenswrapper[4853]: I0127 18:59:37.321750 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6d63a071-50d0-4387-a817-9d65506ac62b-ovsdbserver-sb\") pod \"dnsmasq-dns-785d8bcb8c-snr4r\" (UID: \"6d63a071-50d0-4387-a817-9d65506ac62b\") " pod="openstack/dnsmasq-dns-785d8bcb8c-snr4r" Jan 27 18:59:37 crc kubenswrapper[4853]: I0127 18:59:37.321796 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6d63a071-50d0-4387-a817-9d65506ac62b-dns-svc\") pod \"dnsmasq-dns-785d8bcb8c-snr4r\" (UID: \"6d63a071-50d0-4387-a817-9d65506ac62b\") " pod="openstack/dnsmasq-dns-785d8bcb8c-snr4r" Jan 27 18:59:37 crc kubenswrapper[4853]: I0127 18:59:37.321941 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6d63a071-50d0-4387-a817-9d65506ac62b-config\") pod \"dnsmasq-dns-785d8bcb8c-snr4r\" (UID: \"6d63a071-50d0-4387-a817-9d65506ac62b\") " pod="openstack/dnsmasq-dns-785d8bcb8c-snr4r" Jan 27 18:59:37 crc kubenswrapper[4853]: I0127 18:59:37.322089 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6d63a071-50d0-4387-a817-9d65506ac62b-ovsdbserver-nb\") pod \"dnsmasq-dns-785d8bcb8c-snr4r\" (UID: \"6d63a071-50d0-4387-a817-9d65506ac62b\") " pod="openstack/dnsmasq-dns-785d8bcb8c-snr4r" Jan 27 18:59:37 crc kubenswrapper[4853]: I0127 18:59:37.322891 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: 
\"kubernetes.io/configmap/6d63a071-50d0-4387-a817-9d65506ac62b-dns-swift-storage-0\") pod \"dnsmasq-dns-785d8bcb8c-snr4r\" (UID: \"6d63a071-50d0-4387-a817-9d65506ac62b\") " pod="openstack/dnsmasq-dns-785d8bcb8c-snr4r" Jan 27 18:59:37 crc kubenswrapper[4853]: I0127 18:59:37.322952 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gksjl\" (UniqueName: \"kubernetes.io/projected/6d63a071-50d0-4387-a817-9d65506ac62b-kube-api-access-gksjl\") pod \"dnsmasq-dns-785d8bcb8c-snr4r\" (UID: \"6d63a071-50d0-4387-a817-9d65506ac62b\") " pod="openstack/dnsmasq-dns-785d8bcb8c-snr4r" Jan 27 18:59:37 crc kubenswrapper[4853]: I0127 18:59:37.426176 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6d63a071-50d0-4387-a817-9d65506ac62b-ovsdbserver-nb\") pod \"dnsmasq-dns-785d8bcb8c-snr4r\" (UID: \"6d63a071-50d0-4387-a817-9d65506ac62b\") " pod="openstack/dnsmasq-dns-785d8bcb8c-snr4r" Jan 27 18:59:37 crc kubenswrapper[4853]: I0127 18:59:37.431260 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6d63a071-50d0-4387-a817-9d65506ac62b-dns-svc\") pod \"dnsmasq-dns-785d8bcb8c-snr4r\" (UID: \"6d63a071-50d0-4387-a817-9d65506ac62b\") " pod="openstack/dnsmasq-dns-785d8bcb8c-snr4r" Jan 27 18:59:37 crc kubenswrapper[4853]: I0127 18:59:37.431736 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 27 18:59:37 crc kubenswrapper[4853]: I0127 18:59:37.432537 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6d63a071-50d0-4387-a817-9d65506ac62b-config\") pod \"dnsmasq-dns-785d8bcb8c-snr4r\" (UID: \"6d63a071-50d0-4387-a817-9d65506ac62b\") " pod="openstack/dnsmasq-dns-785d8bcb8c-snr4r" Jan 27 18:59:37 crc kubenswrapper[4853]: I0127 18:59:37.434506 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gksjl\" (UniqueName: \"kubernetes.io/projected/6d63a071-50d0-4387-a817-9d65506ac62b-kube-api-access-gksjl\") pod \"dnsmasq-dns-785d8bcb8c-snr4r\" (UID: \"6d63a071-50d0-4387-a817-9d65506ac62b\") " pod="openstack/dnsmasq-dns-785d8bcb8c-snr4r" Jan 27 18:59:37 crc kubenswrapper[4853]: I0127 18:59:37.434915 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6d63a071-50d0-4387-a817-9d65506ac62b-dns-swift-storage-0\") pod \"dnsmasq-dns-785d8bcb8c-snr4r\" (UID: \"6d63a071-50d0-4387-a817-9d65506ac62b\") " pod="openstack/dnsmasq-dns-785d8bcb8c-snr4r" Jan 27 18:59:37 crc kubenswrapper[4853]: I0127 18:59:37.436598 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-nzrdc" Jan 27 18:59:37 crc kubenswrapper[4853]: I0127 18:59:37.440090 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-slfzv\" (UniqueName: \"kubernetes.io/projected/afc78a65-bfa6-42ff-a84a-f90dd740ffbf-kube-api-access-slfzv\") pod \"ceilometer-0\" (UID: \"afc78a65-bfa6-42ff-a84a-f90dd740ffbf\") " pod="openstack/ceilometer-0" Jan 27 18:59:37 crc kubenswrapper[4853]: I0127 18:59:37.471164 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Jan 27 18:59:37 crc kubenswrapper[4853]: I0127 18:59:37.471953 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Jan 27 18:59:37 crc kubenswrapper[4853]: I0127 18:59:37.472066 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-84jgd" Jan 27 18:59:37 crc kubenswrapper[4853]: I0127 18:59:37.472277 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Jan 27 18:59:37 crc kubenswrapper[4853]: I0127 18:59:37.486473 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6d63a071-50d0-4387-a817-9d65506ac62b-ovsdbserver-sb\") pod \"dnsmasq-dns-785d8bcb8c-snr4r\" (UID: \"6d63a071-50d0-4387-a817-9d65506ac62b\") " pod="openstack/dnsmasq-dns-785d8bcb8c-snr4r" Jan 27 18:59:37 crc kubenswrapper[4853]: I0127 18:59:37.508064 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 27 18:59:37 crc kubenswrapper[4853]: I0127 18:59:37.541605 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pnncm\" (UniqueName: \"kubernetes.io/projected/9bd9f2ec-70e8-40ef-8903-fcca47efbc95-kube-api-access-pnncm\") pod \"glance-default-external-api-0\" (UID: \"9bd9f2ec-70e8-40ef-8903-fcca47efbc95\") " pod="openstack/glance-default-external-api-0" Jan 27 18:59:37 crc kubenswrapper[4853]: I0127 18:59:37.541678 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9bd9f2ec-70e8-40ef-8903-fcca47efbc95-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"9bd9f2ec-70e8-40ef-8903-fcca47efbc95\") " pod="openstack/glance-default-external-api-0" Jan 27 18:59:37 crc kubenswrapper[4853]: I0127 18:59:37.541772 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9bd9f2ec-70e8-40ef-8903-fcca47efbc95-scripts\") pod \"glance-default-external-api-0\" (UID: \"9bd9f2ec-70e8-40ef-8903-fcca47efbc95\") " pod="openstack/glance-default-external-api-0" Jan 27 18:59:37 crc kubenswrapper[4853]: I0127 18:59:37.541796 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9bd9f2ec-70e8-40ef-8903-fcca47efbc95-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"9bd9f2ec-70e8-40ef-8903-fcca47efbc95\") " pod="openstack/glance-default-external-api-0" Jan 27 18:59:37 crc kubenswrapper[4853]: I0127 18:59:37.541837 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage06-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-external-api-0\" (UID: \"9bd9f2ec-70e8-40ef-8903-fcca47efbc95\") " pod="openstack/glance-default-external-api-0" Jan 27 18:59:37 crc kubenswrapper[4853]: I0127 18:59:37.541856 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9bd9f2ec-70e8-40ef-8903-fcca47efbc95-config-data\") pod \"glance-default-external-api-0\" (UID: \"9bd9f2ec-70e8-40ef-8903-fcca47efbc95\") " pod="openstack/glance-default-external-api-0" Jan 27 18:59:37 crc kubenswrapper[4853]: I0127 18:59:37.541874 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9bd9f2ec-70e8-40ef-8903-fcca47efbc95-logs\") pod \"glance-default-external-api-0\" (UID: \"9bd9f2ec-70e8-40ef-8903-fcca47efbc95\") " pod="openstack/glance-default-external-api-0" Jan 27 18:59:37 crc kubenswrapper[4853]: I0127 18:59:37.541907 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/9bd9f2ec-70e8-40ef-8903-fcca47efbc95-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"9bd9f2ec-70e8-40ef-8903-fcca47efbc95\") " pod="openstack/glance-default-external-api-0" Jan 27 18:59:37 crc kubenswrapper[4853]: I0127 18:59:37.546225 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-snr4r"] Jan 27 18:59:37 crc kubenswrapper[4853]: I0127 18:59:37.557269 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-785d8bcb8c-snr4r" Jan 27 18:59:37 crc kubenswrapper[4853]: I0127 18:59:37.623541 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-5dbbdbb6d9-g899w"] Jan 27 18:59:37 crc kubenswrapper[4853]: I0127 18:59:37.625219 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-5dbbdbb6d9-g899w" Jan 27 18:59:37 crc kubenswrapper[4853]: I0127 18:59:37.643139 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9bd9f2ec-70e8-40ef-8903-fcca47efbc95-scripts\") pod \"glance-default-external-api-0\" (UID: \"9bd9f2ec-70e8-40ef-8903-fcca47efbc95\") " pod="openstack/glance-default-external-api-0" Jan 27 18:59:37 crc kubenswrapper[4853]: I0127 18:59:37.643190 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9bd9f2ec-70e8-40ef-8903-fcca47efbc95-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"9bd9f2ec-70e8-40ef-8903-fcca47efbc95\") " pod="openstack/glance-default-external-api-0" Jan 27 18:59:37 crc kubenswrapper[4853]: I0127 18:59:37.643225 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-external-api-0\" (UID: \"9bd9f2ec-70e8-40ef-8903-fcca47efbc95\") " pod="openstack/glance-default-external-api-0" Jan 27 18:59:37 crc kubenswrapper[4853]: I0127 18:59:37.643245 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9bd9f2ec-70e8-40ef-8903-fcca47efbc95-config-data\") pod \"glance-default-external-api-0\" (UID: \"9bd9f2ec-70e8-40ef-8903-fcca47efbc95\") " pod="openstack/glance-default-external-api-0" Jan 27 18:59:37 crc kubenswrapper[4853]: I0127 18:59:37.643263 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9bd9f2ec-70e8-40ef-8903-fcca47efbc95-logs\") pod \"glance-default-external-api-0\" (UID: \"9bd9f2ec-70e8-40ef-8903-fcca47efbc95\") " pod="openstack/glance-default-external-api-0" Jan 27 18:59:37 crc kubenswrapper[4853]: I0127 18:59:37.643304 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/9bd9f2ec-70e8-40ef-8903-fcca47efbc95-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"9bd9f2ec-70e8-40ef-8903-fcca47efbc95\") " pod="openstack/glance-default-external-api-0" Jan 27 18:59:37 crc kubenswrapper[4853]: I0127 18:59:37.643365 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pnncm\" (UniqueName: \"kubernetes.io/projected/9bd9f2ec-70e8-40ef-8903-fcca47efbc95-kube-api-access-pnncm\") pod \"glance-default-external-api-0\" (UID: \"9bd9f2ec-70e8-40ef-8903-fcca47efbc95\") " pod="openstack/glance-default-external-api-0" Jan 27 18:59:37 crc kubenswrapper[4853]: I0127 18:59:37.643384 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9bd9f2ec-70e8-40ef-8903-fcca47efbc95-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"9bd9f2ec-70e8-40ef-8903-fcca47efbc95\") " pod="openstack/glance-default-external-api-0" Jan 27 18:59:37 crc kubenswrapper[4853]: I0127 18:59:37.648743 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-5dbbdbb6d9-g899w"] Jan 27 18:59:37 crc kubenswrapper[4853]: I0127 18:59:37.649337 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/9bd9f2ec-70e8-40ef-8903-fcca47efbc95-httpd-run\") pod 
\"glance-default-external-api-0\" (UID: \"9bd9f2ec-70e8-40ef-8903-fcca47efbc95\") " pod="openstack/glance-default-external-api-0" Jan 27 18:59:37 crc kubenswrapper[4853]: I0127 18:59:37.649604 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9bd9f2ec-70e8-40ef-8903-fcca47efbc95-logs\") pod \"glance-default-external-api-0\" (UID: \"9bd9f2ec-70e8-40ef-8903-fcca47efbc95\") " pod="openstack/glance-default-external-api-0" Jan 27 18:59:37 crc kubenswrapper[4853]: I0127 18:59:37.653781 4853 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-external-api-0\" (UID: \"9bd9f2ec-70e8-40ef-8903-fcca47efbc95\") device mount path \"/mnt/openstack/pv06\"" pod="openstack/glance-default-external-api-0" Jan 27 18:59:37 crc kubenswrapper[4853]: I0127 18:59:37.693425 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 27 18:59:37 crc kubenswrapper[4853]: I0127 18:59:37.709183 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9bd9f2ec-70e8-40ef-8903-fcca47efbc95-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"9bd9f2ec-70e8-40ef-8903-fcca47efbc95\") " pod="openstack/glance-default-external-api-0" Jan 27 18:59:37 crc kubenswrapper[4853]: I0127 18:59:37.709621 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9bd9f2ec-70e8-40ef-8903-fcca47efbc95-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"9bd9f2ec-70e8-40ef-8903-fcca47efbc95\") " pod="openstack/glance-default-external-api-0" Jan 27 18:59:37 crc kubenswrapper[4853]: I0127 18:59:37.715665 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9bd9f2ec-70e8-40ef-8903-fcca47efbc95-config-data\") pod \"glance-default-external-api-0\" (UID: \"9bd9f2ec-70e8-40ef-8903-fcca47efbc95\") " pod="openstack/glance-default-external-api-0" Jan 27 18:59:37 crc kubenswrapper[4853]: I0127 18:59:37.715595 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9bd9f2ec-70e8-40ef-8903-fcca47efbc95-scripts\") pod \"glance-default-external-api-0\" (UID: \"9bd9f2ec-70e8-40ef-8903-fcca47efbc95\") " pod="openstack/glance-default-external-api-0" Jan 27 18:59:37 crc kubenswrapper[4853]: I0127 18:59:37.720420 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-external-api-0\" (UID: \"9bd9f2ec-70e8-40ef-8903-fcca47efbc95\") " pod="openstack/glance-default-external-api-0" Jan 27 18:59:37 crc kubenswrapper[4853]: I0127 18:59:37.735387 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 27 18:59:37 crc kubenswrapper[4853]: I0127 18:59:37.736961 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pnncm\" (UniqueName: \"kubernetes.io/projected/9bd9f2ec-70e8-40ef-8903-fcca47efbc95-kube-api-access-pnncm\") pod \"glance-default-external-api-0\" (UID: \"9bd9f2ec-70e8-40ef-8903-fcca47efbc95\") " pod="openstack/glance-default-external-api-0" Jan 27 18:59:37 crc kubenswrapper[4853]: I0127 
18:59:37.739951 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 27 18:59:37 crc kubenswrapper[4853]: I0127 18:59:37.747484 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Jan 27 18:59:37 crc kubenswrapper[4853]: I0127 18:59:37.751948 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/e556ea12-6992-4aba-be03-e6d4a2823b74-horizon-secret-key\") pod \"horizon-5dbbdbb6d9-g899w\" (UID: \"e556ea12-6992-4aba-be03-e6d4a2823b74\") " pod="openstack/horizon-5dbbdbb6d9-g899w" Jan 27 18:59:37 crc kubenswrapper[4853]: I0127 18:59:37.752109 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e556ea12-6992-4aba-be03-e6d4a2823b74-logs\") pod \"horizon-5dbbdbb6d9-g899w\" (UID: \"e556ea12-6992-4aba-be03-e6d4a2823b74\") " pod="openstack/horizon-5dbbdbb6d9-g899w" Jan 27 18:59:37 crc kubenswrapper[4853]: I0127 18:59:37.752222 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e556ea12-6992-4aba-be03-e6d4a2823b74-scripts\") pod \"horizon-5dbbdbb6d9-g899w\" (UID: \"e556ea12-6992-4aba-be03-e6d4a2823b74\") " pod="openstack/horizon-5dbbdbb6d9-g899w" Jan 27 18:59:37 crc kubenswrapper[4853]: I0127 18:59:37.752307 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e556ea12-6992-4aba-be03-e6d4a2823b74-config-data\") pod \"horizon-5dbbdbb6d9-g899w\" (UID: \"e556ea12-6992-4aba-be03-e6d4a2823b74\") " pod="openstack/horizon-5dbbdbb6d9-g899w" Jan 27 18:59:37 crc kubenswrapper[4853]: I0127 18:59:37.752379 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8bjpg\" (UniqueName: \"kubernetes.io/projected/e556ea12-6992-4aba-be03-e6d4a2823b74-kube-api-access-8bjpg\") pod \"horizon-5dbbdbb6d9-g899w\" (UID: \"e556ea12-6992-4aba-be03-e6d4a2823b74\") " pod="openstack/horizon-5dbbdbb6d9-g899w" Jan 27 18:59:37 crc kubenswrapper[4853]: I0127 18:59:37.752601 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Jan 27 18:59:37 crc kubenswrapper[4853]: I0127 18:59:37.756627 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 27 18:59:37 crc kubenswrapper[4853]: I0127 18:59:37.840306 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 27 18:59:37 crc kubenswrapper[4853]: I0127 18:59:37.854965 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8bjpg\" (UniqueName: \"kubernetes.io/projected/e556ea12-6992-4aba-be03-e6d4a2823b74-kube-api-access-8bjpg\") pod \"horizon-5dbbdbb6d9-g899w\" (UID: \"e556ea12-6992-4aba-be03-e6d4a2823b74\") " pod="openstack/horizon-5dbbdbb6d9-g899w" Jan 27 18:59:37 crc kubenswrapper[4853]: I0127 18:59:37.855056 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8311e980-1d3c-456d-9c17-5890f55976eb-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"8311e980-1d3c-456d-9c17-5890f55976eb\") " pod="openstack/glance-default-internal-api-0" Jan 27 18:59:37 crc kubenswrapper[4853]: I0127 18:59:37.855098 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/e556ea12-6992-4aba-be03-e6d4a2823b74-horizon-secret-key\") pod \"horizon-5dbbdbb6d9-g899w\" (UID: \"e556ea12-6992-4aba-be03-e6d4a2823b74\") " pod="openstack/horizon-5dbbdbb6d9-g899w" Jan 27 18:59:37 crc kubenswrapper[4853]: I0127 18:59:37.855233 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/8311e980-1d3c-456d-9c17-5890f55976eb-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"8311e980-1d3c-456d-9c17-5890f55976eb\") " pod="openstack/glance-default-internal-api-0" Jan 27 18:59:37 crc kubenswrapper[4853]: I0127 18:59:37.856654 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e556ea12-6992-4aba-be03-e6d4a2823b74-logs\") pod \"horizon-5dbbdbb6d9-g899w\" (UID: \"e556ea12-6992-4aba-be03-e6d4a2823b74\") " pod="openstack/horizon-5dbbdbb6d9-g899w" Jan 27 18:59:37 crc kubenswrapper[4853]: I0127 18:59:37.857327 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8311e980-1d3c-456d-9c17-5890f55976eb-config-data\") pod \"glance-default-internal-api-0\" (UID: \"8311e980-1d3c-456d-9c17-5890f55976eb\") " pod="openstack/glance-default-internal-api-0" Jan 27 18:59:37 crc kubenswrapper[4853]: I0127 18:59:37.857499 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8311e980-1d3c-456d-9c17-5890f55976eb-logs\") pod \"glance-default-internal-api-0\" (UID: \"8311e980-1d3c-456d-9c17-5890f55976eb\") " pod="openstack/glance-default-internal-api-0" Jan 27 18:59:37 crc kubenswrapper[4853]: I0127 18:59:37.857670 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-internal-api-0\" (UID: \"8311e980-1d3c-456d-9c17-5890f55976eb\") " pod="openstack/glance-default-internal-api-0" Jan 27 18:59:37 crc kubenswrapper[4853]: I0127 18:59:37.857819 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e556ea12-6992-4aba-be03-e6d4a2823b74-scripts\") pod \"horizon-5dbbdbb6d9-g899w\" (UID: \"e556ea12-6992-4aba-be03-e6d4a2823b74\") " 
pod="openstack/horizon-5dbbdbb6d9-g899w" Jan 27 18:59:37 crc kubenswrapper[4853]: I0127 18:59:37.858017 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8311e980-1d3c-456d-9c17-5890f55976eb-scripts\") pod \"glance-default-internal-api-0\" (UID: \"8311e980-1d3c-456d-9c17-5890f55976eb\") " pod="openstack/glance-default-internal-api-0" Jan 27 18:59:37 crc kubenswrapper[4853]: I0127 18:59:37.858380 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8311e980-1d3c-456d-9c17-5890f55976eb-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"8311e980-1d3c-456d-9c17-5890f55976eb\") " pod="openstack/glance-default-internal-api-0" Jan 27 18:59:37 crc kubenswrapper[4853]: I0127 18:59:37.858548 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s9fsm\" (UniqueName: \"kubernetes.io/projected/8311e980-1d3c-456d-9c17-5890f55976eb-kube-api-access-s9fsm\") pod \"glance-default-internal-api-0\" (UID: \"8311e980-1d3c-456d-9c17-5890f55976eb\") " pod="openstack/glance-default-internal-api-0" Jan 27 18:59:37 crc kubenswrapper[4853]: I0127 18:59:37.858668 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e556ea12-6992-4aba-be03-e6d4a2823b74-logs\") pod \"horizon-5dbbdbb6d9-g899w\" (UID: \"e556ea12-6992-4aba-be03-e6d4a2823b74\") " pod="openstack/horizon-5dbbdbb6d9-g899w" Jan 27 18:59:37 crc kubenswrapper[4853]: I0127 18:59:37.858703 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e556ea12-6992-4aba-be03-e6d4a2823b74-config-data\") pod \"horizon-5dbbdbb6d9-g899w\" (UID: \"e556ea12-6992-4aba-be03-e6d4a2823b74\") " pod="openstack/horizon-5dbbdbb6d9-g899w" Jan 27 18:59:37 crc kubenswrapper[4853]: I0127 18:59:37.860903 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e556ea12-6992-4aba-be03-e6d4a2823b74-scripts\") pod \"horizon-5dbbdbb6d9-g899w\" (UID: \"e556ea12-6992-4aba-be03-e6d4a2823b74\") " pod="openstack/horizon-5dbbdbb6d9-g899w" Jan 27 18:59:37 crc kubenswrapper[4853]: I0127 18:59:37.862352 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e556ea12-6992-4aba-be03-e6d4a2823b74-config-data\") pod \"horizon-5dbbdbb6d9-g899w\" (UID: \"e556ea12-6992-4aba-be03-e6d4a2823b74\") " pod="openstack/horizon-5dbbdbb6d9-g899w" Jan 27 18:59:37 crc kubenswrapper[4853]: I0127 18:59:37.871098 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/e556ea12-6992-4aba-be03-e6d4a2823b74-horizon-secret-key\") pod \"horizon-5dbbdbb6d9-g899w\" (UID: \"e556ea12-6992-4aba-be03-e6d4a2823b74\") " pod="openstack/horizon-5dbbdbb6d9-g899w" Jan 27 18:59:37 crc kubenswrapper[4853]: I0127 18:59:37.890385 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8bjpg\" (UniqueName: \"kubernetes.io/projected/e556ea12-6992-4aba-be03-e6d4a2823b74-kube-api-access-8bjpg\") pod \"horizon-5dbbdbb6d9-g899w\" (UID: \"e556ea12-6992-4aba-be03-e6d4a2823b74\") " pod="openstack/horizon-5dbbdbb6d9-g899w" Jan 27 18:59:37 crc kubenswrapper[4853]: I0127 18:59:37.963007 
4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8311e980-1d3c-456d-9c17-5890f55976eb-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"8311e980-1d3c-456d-9c17-5890f55976eb\") " pod="openstack/glance-default-internal-api-0" Jan 27 18:59:37 crc kubenswrapper[4853]: I0127 18:59:37.963393 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/8311e980-1d3c-456d-9c17-5890f55976eb-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"8311e980-1d3c-456d-9c17-5890f55976eb\") " pod="openstack/glance-default-internal-api-0" Jan 27 18:59:37 crc kubenswrapper[4853]: I0127 18:59:37.963464 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8311e980-1d3c-456d-9c17-5890f55976eb-config-data\") pod \"glance-default-internal-api-0\" (UID: \"8311e980-1d3c-456d-9c17-5890f55976eb\") " pod="openstack/glance-default-internal-api-0" Jan 27 18:59:37 crc kubenswrapper[4853]: I0127 18:59:37.963489 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8311e980-1d3c-456d-9c17-5890f55976eb-logs\") pod \"glance-default-internal-api-0\" (UID: \"8311e980-1d3c-456d-9c17-5890f55976eb\") " pod="openstack/glance-default-internal-api-0" Jan 27 18:59:37 crc kubenswrapper[4853]: I0127 18:59:37.963515 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-internal-api-0\" (UID: \"8311e980-1d3c-456d-9c17-5890f55976eb\") " pod="openstack/glance-default-internal-api-0" Jan 27 18:59:37 crc kubenswrapper[4853]: I0127 18:59:37.963544 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8311e980-1d3c-456d-9c17-5890f55976eb-scripts\") pod \"glance-default-internal-api-0\" (UID: \"8311e980-1d3c-456d-9c17-5890f55976eb\") " pod="openstack/glance-default-internal-api-0" Jan 27 18:59:37 crc kubenswrapper[4853]: I0127 18:59:37.963564 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8311e980-1d3c-456d-9c17-5890f55976eb-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"8311e980-1d3c-456d-9c17-5890f55976eb\") " pod="openstack/glance-default-internal-api-0" Jan 27 18:59:37 crc kubenswrapper[4853]: I0127 18:59:37.963591 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s9fsm\" (UniqueName: \"kubernetes.io/projected/8311e980-1d3c-456d-9c17-5890f55976eb-kube-api-access-s9fsm\") pod \"glance-default-internal-api-0\" (UID: \"8311e980-1d3c-456d-9c17-5890f55976eb\") " pod="openstack/glance-default-internal-api-0" Jan 27 18:59:37 crc kubenswrapper[4853]: I0127 18:59:37.964933 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8311e980-1d3c-456d-9c17-5890f55976eb-logs\") pod \"glance-default-internal-api-0\" (UID: \"8311e980-1d3c-456d-9c17-5890f55976eb\") " pod="openstack/glance-default-internal-api-0" Jan 27 18:59:37 crc kubenswrapper[4853]: I0127 18:59:37.965230 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: 
\"kubernetes.io/empty-dir/8311e980-1d3c-456d-9c17-5890f55976eb-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"8311e980-1d3c-456d-9c17-5890f55976eb\") " pod="openstack/glance-default-internal-api-0" Jan 27 18:59:37 crc kubenswrapper[4853]: I0127 18:59:37.972657 4853 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-internal-api-0\" (UID: \"8311e980-1d3c-456d-9c17-5890f55976eb\") device mount path \"/mnt/openstack/pv01\"" pod="openstack/glance-default-internal-api-0" Jan 27 18:59:37 crc kubenswrapper[4853]: I0127 18:59:37.980015 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8311e980-1d3c-456d-9c17-5890f55976eb-config-data\") pod \"glance-default-internal-api-0\" (UID: \"8311e980-1d3c-456d-9c17-5890f55976eb\") " pod="openstack/glance-default-internal-api-0" Jan 27 18:59:37 crc kubenswrapper[4853]: I0127 18:59:37.984319 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8311e980-1d3c-456d-9c17-5890f55976eb-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"8311e980-1d3c-456d-9c17-5890f55976eb\") " pod="openstack/glance-default-internal-api-0" Jan 27 18:59:37 crc kubenswrapper[4853]: I0127 18:59:37.990004 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s9fsm\" (UniqueName: \"kubernetes.io/projected/8311e980-1d3c-456d-9c17-5890f55976eb-kube-api-access-s9fsm\") pod \"glance-default-internal-api-0\" (UID: \"8311e980-1d3c-456d-9c17-5890f55976eb\") " pod="openstack/glance-default-internal-api-0" Jan 27 18:59:37 crc kubenswrapper[4853]: I0127 18:59:37.989585 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8311e980-1d3c-456d-9c17-5890f55976eb-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"8311e980-1d3c-456d-9c17-5890f55976eb\") " pod="openstack/glance-default-internal-api-0" Jan 27 18:59:37 crc kubenswrapper[4853]: I0127 18:59:37.992953 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8311e980-1d3c-456d-9c17-5890f55976eb-scripts\") pod \"glance-default-internal-api-0\" (UID: \"8311e980-1d3c-456d-9c17-5890f55976eb\") " pod="openstack/glance-default-internal-api-0" Jan 27 18:59:38 crc kubenswrapper[4853]: I0127 18:59:38.022834 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-internal-api-0\" (UID: \"8311e980-1d3c-456d-9c17-5890f55976eb\") " pod="openstack/glance-default-internal-api-0" Jan 27 18:59:38 crc kubenswrapper[4853]: I0127 18:59:38.204374 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-5dbbdbb6d9-g899w" Jan 27 18:59:38 crc kubenswrapper[4853]: I0127 18:59:38.219079 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 27 18:59:38 crc kubenswrapper[4853]: I0127 18:59:38.325407 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-847c4cc679-sz8dr"] Jan 27 18:59:38 crc kubenswrapper[4853]: I0127 18:59:38.429939 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-6c8b67f5cc-gmbgv"] Jan 27 18:59:38 crc kubenswrapper[4853]: W0127 18:59:38.430756 4853 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb62f23c7_d81a_4925_a2c3_10c410912a0f.slice/crio-901d9248fb74793e8ef2227c299562487faf838e63445e2e8c0b54a1bb4be7dc WatchSource:0}: Error finding container 901d9248fb74793e8ef2227c299562487faf838e63445e2e8c0b54a1bb4be7dc: Status 404 returned error can't find the container with id 901d9248fb74793e8ef2227c299562487faf838e63445e2e8c0b54a1bb4be7dc Jan 27 18:59:38 crc kubenswrapper[4853]: W0127 18:59:38.567478 4853 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9275e5f7_1630_4266_abb5_0ba701de33cb.slice/crio-7bf3a61e50951217f32fd03b0e41a6ce429e8b04ccfafce4b37c2af7e8be915a WatchSource:0}: Error finding container 7bf3a61e50951217f32fd03b0e41a6ce429e8b04ccfafce4b37c2af7e8be915a: Status 404 returned error can't find the container with id 7bf3a61e50951217f32fd03b0e41a6ce429e8b04ccfafce4b37c2af7e8be915a Jan 27 18:59:38 crc kubenswrapper[4853]: I0127 18:59:38.572753 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-427vl"] Jan 27 18:59:38 crc kubenswrapper[4853]: I0127 18:59:38.590401 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Jan 27 18:59:38 crc kubenswrapper[4853]: I0127 18:59:38.733536 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-rfjdk"] Jan 27 18:59:38 crc kubenswrapper[4853]: I0127 18:59:38.769455 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-b7gbn"] Jan 27 18:59:39 crc kubenswrapper[4853]: I0127 18:59:39.152872 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-snr4r"] Jan 27 18:59:39 crc kubenswrapper[4853]: I0127 18:59:39.173201 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-nzrdc"] Jan 27 18:59:39 crc kubenswrapper[4853]: I0127 18:59:39.175751 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6c8b67f5cc-gmbgv" event={"ID":"b62f23c7-d81a-4925-a2c3-10c410912a0f","Type":"ContainerStarted","Data":"901d9248fb74793e8ef2227c299562487faf838e63445e2e8c0b54a1bb4be7dc"} Jan 27 18:59:39 crc kubenswrapper[4853]: I0127 18:59:39.198511 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-b7gbn" event={"ID":"8ae89dc3-4a08-42bd-a234-b5e8f948dc23","Type":"ContainerStarted","Data":"7245d827850f957f8f9c822bed4b84cf235312a73f07aebd24c35196310a0cc2"} Jan 27 18:59:39 crc kubenswrapper[4853]: I0127 18:59:39.218527 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-rfjdk" event={"ID":"b1d33900-476d-4c86-a501-4490c01000ca","Type":"ContainerStarted","Data":"08f659cff19f95e0e6be362f184c26b863f7c22df64a15a03265d516a58926fe"} Jan 27 18:59:39 crc kubenswrapper[4853]: I0127 18:59:39.219639 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 27 18:59:39 crc kubenswrapper[4853]: I0127 
18:59:39.232756 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-9bclj"] Jan 27 18:59:39 crc kubenswrapper[4853]: I0127 18:59:39.235184 4853 generic.go:334] "Generic (PLEG): container finished" podID="2cca663f-f2e6-4c77-94c5-f7ad0bcc3ed0" containerID="3f1df4ef2a2fca313a3091b931c1145ef7ba07b1fd663a5909f7a6364da9fc85" exitCode=0 Jan 27 18:59:39 crc kubenswrapper[4853]: I0127 18:59:39.235256 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-847c4cc679-sz8dr" event={"ID":"2cca663f-f2e6-4c77-94c5-f7ad0bcc3ed0","Type":"ContainerDied","Data":"3f1df4ef2a2fca313a3091b931c1145ef7ba07b1fd663a5909f7a6364da9fc85"} Jan 27 18:59:39 crc kubenswrapper[4853]: I0127 18:59:39.235284 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-847c4cc679-sz8dr" event={"ID":"2cca663f-f2e6-4c77-94c5-f7ad0bcc3ed0","Type":"ContainerStarted","Data":"acc1ca212d9e69b1e46e3498b800622576bde8c04eff30ae97a15efabc0c7d98"} Jan 27 18:59:39 crc kubenswrapper[4853]: I0127 18:59:39.250030 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-427vl" event={"ID":"9275e5f7-1630-4266-abb5-0ba701de33cb","Type":"ContainerStarted","Data":"95f7f716d21983ac00fd2ad48c591221d3265f4a4551f121bc2eb408014d170e"} Jan 27 18:59:39 crc kubenswrapper[4853]: I0127 18:59:39.250079 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-427vl" event={"ID":"9275e5f7-1630-4266-abb5-0ba701de33cb","Type":"ContainerStarted","Data":"7bf3a61e50951217f32fd03b0e41a6ce429e8b04ccfafce4b37c2af7e8be915a"} Jan 27 18:59:39 crc kubenswrapper[4853]: I0127 18:59:39.275033 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 27 18:59:39 crc kubenswrapper[4853]: I0127 18:59:39.301856 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-5dbbdbb6d9-g899w"] Jan 27 18:59:39 crc kubenswrapper[4853]: I0127 18:59:39.310853 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-427vl" podStartSLOduration=3.310832462 podStartE2EDuration="3.310832462s" podCreationTimestamp="2026-01-27 18:59:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:59:39.292624556 +0000 UTC m=+1021.755167439" watchObservedRunningTime="2026-01-27 18:59:39.310832462 +0000 UTC m=+1021.773375345" Jan 27 18:59:39 crc kubenswrapper[4853]: I0127 18:59:39.423136 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 27 18:59:39 crc kubenswrapper[4853]: W0127 18:59:39.465750 4853 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8311e980_1d3c_456d_9c17_5890f55976eb.slice/crio-9887a4b4087c2acfb38aeb2b58a2e64bc6efb1e76015a83acd8ea8d18e14b520 WatchSource:0}: Error finding container 9887a4b4087c2acfb38aeb2b58a2e64bc6efb1e76015a83acd8ea8d18e14b520: Status 404 returned error can't find the container with id 9887a4b4087c2acfb38aeb2b58a2e64bc6efb1e76015a83acd8ea8d18e14b520 Jan 27 18:59:39 crc kubenswrapper[4853]: I0127 18:59:39.879314 4853 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-847c4cc679-sz8dr" Jan 27 18:59:39 crc kubenswrapper[4853]: I0127 18:59:39.933199 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2cca663f-f2e6-4c77-94c5-f7ad0bcc3ed0-ovsdbserver-nb\") pod \"2cca663f-f2e6-4c77-94c5-f7ad0bcc3ed0\" (UID: \"2cca663f-f2e6-4c77-94c5-f7ad0bcc3ed0\") " Jan 27 18:59:39 crc kubenswrapper[4853]: I0127 18:59:39.933308 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2cca663f-f2e6-4c77-94c5-f7ad0bcc3ed0-dns-swift-storage-0\") pod \"2cca663f-f2e6-4c77-94c5-f7ad0bcc3ed0\" (UID: \"2cca663f-f2e6-4c77-94c5-f7ad0bcc3ed0\") " Jan 27 18:59:39 crc kubenswrapper[4853]: I0127 18:59:39.933383 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2cca663f-f2e6-4c77-94c5-f7ad0bcc3ed0-config\") pod \"2cca663f-f2e6-4c77-94c5-f7ad0bcc3ed0\" (UID: \"2cca663f-f2e6-4c77-94c5-f7ad0bcc3ed0\") " Jan 27 18:59:39 crc kubenswrapper[4853]: I0127 18:59:39.933426 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2cca663f-f2e6-4c77-94c5-f7ad0bcc3ed0-ovsdbserver-sb\") pod \"2cca663f-f2e6-4c77-94c5-f7ad0bcc3ed0\" (UID: \"2cca663f-f2e6-4c77-94c5-f7ad0bcc3ed0\") " Jan 27 18:59:39 crc kubenswrapper[4853]: I0127 18:59:39.933448 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mxbk6\" (UniqueName: \"kubernetes.io/projected/2cca663f-f2e6-4c77-94c5-f7ad0bcc3ed0-kube-api-access-mxbk6\") pod \"2cca663f-f2e6-4c77-94c5-f7ad0bcc3ed0\" (UID: \"2cca663f-f2e6-4c77-94c5-f7ad0bcc3ed0\") " Jan 27 18:59:39 crc kubenswrapper[4853]: I0127 18:59:39.933539 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2cca663f-f2e6-4c77-94c5-f7ad0bcc3ed0-dns-svc\") pod \"2cca663f-f2e6-4c77-94c5-f7ad0bcc3ed0\" (UID: \"2cca663f-f2e6-4c77-94c5-f7ad0bcc3ed0\") " Jan 27 18:59:39 crc kubenswrapper[4853]: I0127 18:59:39.942217 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2cca663f-f2e6-4c77-94c5-f7ad0bcc3ed0-kube-api-access-mxbk6" (OuterVolumeSpecName: "kube-api-access-mxbk6") pod "2cca663f-f2e6-4c77-94c5-f7ad0bcc3ed0" (UID: "2cca663f-f2e6-4c77-94c5-f7ad0bcc3ed0"). InnerVolumeSpecName "kube-api-access-mxbk6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:59:39 crc kubenswrapper[4853]: I0127 18:59:39.963033 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2cca663f-f2e6-4c77-94c5-f7ad0bcc3ed0-config" (OuterVolumeSpecName: "config") pod "2cca663f-f2e6-4c77-94c5-f7ad0bcc3ed0" (UID: "2cca663f-f2e6-4c77-94c5-f7ad0bcc3ed0"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:59:39 crc kubenswrapper[4853]: I0127 18:59:39.987344 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2cca663f-f2e6-4c77-94c5-f7ad0bcc3ed0-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "2cca663f-f2e6-4c77-94c5-f7ad0bcc3ed0" (UID: "2cca663f-f2e6-4c77-94c5-f7ad0bcc3ed0"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:59:39 crc kubenswrapper[4853]: I0127 18:59:39.993100 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2cca663f-f2e6-4c77-94c5-f7ad0bcc3ed0-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "2cca663f-f2e6-4c77-94c5-f7ad0bcc3ed0" (UID: "2cca663f-f2e6-4c77-94c5-f7ad0bcc3ed0"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:59:39 crc kubenswrapper[4853]: I0127 18:59:39.994669 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2cca663f-f2e6-4c77-94c5-f7ad0bcc3ed0-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "2cca663f-f2e6-4c77-94c5-f7ad0bcc3ed0" (UID: "2cca663f-f2e6-4c77-94c5-f7ad0bcc3ed0"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:59:40 crc kubenswrapper[4853]: I0127 18:59:40.005343 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2cca663f-f2e6-4c77-94c5-f7ad0bcc3ed0-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "2cca663f-f2e6-4c77-94c5-f7ad0bcc3ed0" (UID: "2cca663f-f2e6-4c77-94c5-f7ad0bcc3ed0"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:59:40 crc kubenswrapper[4853]: I0127 18:59:40.038189 4853 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2cca663f-f2e6-4c77-94c5-f7ad0bcc3ed0-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 27 18:59:40 crc kubenswrapper[4853]: I0127 18:59:40.038262 4853 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2cca663f-f2e6-4c77-94c5-f7ad0bcc3ed0-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 27 18:59:40 crc kubenswrapper[4853]: I0127 18:59:40.038276 4853 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2cca663f-f2e6-4c77-94c5-f7ad0bcc3ed0-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 27 18:59:40 crc kubenswrapper[4853]: I0127 18:59:40.038286 4853 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2cca663f-f2e6-4c77-94c5-f7ad0bcc3ed0-config\") on node \"crc\" DevicePath \"\"" Jan 27 18:59:40 crc kubenswrapper[4853]: I0127 18:59:40.038295 4853 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2cca663f-f2e6-4c77-94c5-f7ad0bcc3ed0-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 27 18:59:40 crc kubenswrapper[4853]: I0127 18:59:40.038303 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mxbk6\" (UniqueName: \"kubernetes.io/projected/2cca663f-f2e6-4c77-94c5-f7ad0bcc3ed0-kube-api-access-mxbk6\") on node \"crc\" DevicePath \"\"" Jan 27 18:59:40 crc kubenswrapper[4853]: I0127 18:59:40.328163 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-9bclj" event={"ID":"8e865945-20c8-4b2d-a52b-62dd1450181b","Type":"ContainerStarted","Data":"72f8704515accc7e94f5ee597aea700d0822e5bfa077b1dfd32254b94ad59eac"} Jan 27 18:59:40 crc kubenswrapper[4853]: I0127 18:59:40.328242 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-9bclj" 
event={"ID":"8e865945-20c8-4b2d-a52b-62dd1450181b","Type":"ContainerStarted","Data":"ab49a4e87eec020a273dead26c44c58eec5d09587d812c8fe3b11238807d2947"} Jan 27 18:59:40 crc kubenswrapper[4853]: I0127 18:59:40.334392 4853 generic.go:334] "Generic (PLEG): container finished" podID="6d63a071-50d0-4387-a817-9d65506ac62b" containerID="ab90fa3dd5e59b14892093256632cf9e0ed63f6454fc14368dfa96c81e3892e4" exitCode=0 Jan 27 18:59:40 crc kubenswrapper[4853]: I0127 18:59:40.334461 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-785d8bcb8c-snr4r" event={"ID":"6d63a071-50d0-4387-a817-9d65506ac62b","Type":"ContainerDied","Data":"ab90fa3dd5e59b14892093256632cf9e0ed63f6454fc14368dfa96c81e3892e4"} Jan 27 18:59:40 crc kubenswrapper[4853]: I0127 18:59:40.334483 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-785d8bcb8c-snr4r" event={"ID":"6d63a071-50d0-4387-a817-9d65506ac62b","Type":"ContainerStarted","Data":"b2bf4edb243956411b36cace855f295752e45d2d34e9de3674924a640baa6f0c"} Jan 27 18:59:40 crc kubenswrapper[4853]: I0127 18:59:40.339821 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"8311e980-1d3c-456d-9c17-5890f55976eb","Type":"ContainerStarted","Data":"9887a4b4087c2acfb38aeb2b58a2e64bc6efb1e76015a83acd8ea8d18e14b520"} Jan 27 18:59:40 crc kubenswrapper[4853]: I0127 18:59:40.349426 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-nzrdc" event={"ID":"3bb20c48-23bc-4c0d-92de-f87015fac932","Type":"ContainerStarted","Data":"7042937814d05dd4548b47cedcb0095aa57f178b542b734ff19a928621d6519d"} Jan 27 18:59:40 crc kubenswrapper[4853]: I0127 18:59:40.353660 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"9bd9f2ec-70e8-40ef-8903-fcca47efbc95","Type":"ContainerStarted","Data":"48ff19c9a9c57a4947f68619867cdea39c7b7ee316e706e2e321c4e743179604"} Jan 27 18:59:40 crc kubenswrapper[4853]: I0127 18:59:40.360160 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-847c4cc679-sz8dr" event={"ID":"2cca663f-f2e6-4c77-94c5-f7ad0bcc3ed0","Type":"ContainerDied","Data":"acc1ca212d9e69b1e46e3498b800622576bde8c04eff30ae97a15efabc0c7d98"} Jan 27 18:59:40 crc kubenswrapper[4853]: I0127 18:59:40.360218 4853 scope.go:117] "RemoveContainer" containerID="3f1df4ef2a2fca313a3091b931c1145ef7ba07b1fd663a5909f7a6364da9fc85" Jan 27 18:59:40 crc kubenswrapper[4853]: I0127 18:59:40.361145 4853 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-847c4cc679-sz8dr" Jan 27 18:59:40 crc kubenswrapper[4853]: I0127 18:59:40.371637 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-sync-9bclj" podStartSLOduration=4.371609994 podStartE2EDuration="4.371609994s" podCreationTimestamp="2026-01-27 18:59:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:59:40.365578423 +0000 UTC m=+1022.828121306" watchObservedRunningTime="2026-01-27 18:59:40.371609994 +0000 UTC m=+1022.834152877" Jan 27 18:59:40 crc kubenswrapper[4853]: I0127 18:59:40.378246 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"afc78a65-bfa6-42ff-a84a-f90dd740ffbf","Type":"ContainerStarted","Data":"e32ee15220f7d891f3df54753a53f9fb6b061f6089387558882f9812b6e18926"} Jan 27 18:59:40 crc kubenswrapper[4853]: I0127 18:59:40.408235 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5dbbdbb6d9-g899w" event={"ID":"e556ea12-6992-4aba-be03-e6d4a2823b74","Type":"ContainerStarted","Data":"87003f46385ca18c520ee4054c0e8c556667e16034f4a15d97f739722da61cab"} Jan 27 18:59:40 crc kubenswrapper[4853]: I0127 18:59:40.734137 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-847c4cc679-sz8dr"] Jan 27 18:59:40 crc kubenswrapper[4853]: I0127 18:59:40.745932 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-847c4cc679-sz8dr"] Jan 27 18:59:40 crc kubenswrapper[4853]: I0127 18:59:40.776205 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 27 18:59:40 crc kubenswrapper[4853]: I0127 18:59:40.800191 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 27 18:59:40 crc kubenswrapper[4853]: I0127 18:59:40.809860 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-6c8b67f5cc-gmbgv"] Jan 27 18:59:40 crc kubenswrapper[4853]: I0127 18:59:40.837615 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 27 18:59:40 crc kubenswrapper[4853]: I0127 18:59:40.850765 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-549f7c8989-k8qw5"] Jan 27 18:59:40 crc kubenswrapper[4853]: E0127 18:59:40.851713 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2cca663f-f2e6-4c77-94c5-f7ad0bcc3ed0" containerName="init" Jan 27 18:59:40 crc kubenswrapper[4853]: I0127 18:59:40.851738 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="2cca663f-f2e6-4c77-94c5-f7ad0bcc3ed0" containerName="init" Jan 27 18:59:40 crc kubenswrapper[4853]: I0127 18:59:40.852034 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="2cca663f-f2e6-4c77-94c5-f7ad0bcc3ed0" containerName="init" Jan 27 18:59:40 crc kubenswrapper[4853]: I0127 18:59:40.853423 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-549f7c8989-k8qw5" Jan 27 18:59:40 crc kubenswrapper[4853]: I0127 18:59:40.865404 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-549f7c8989-k8qw5"] Jan 27 18:59:41 crc kubenswrapper[4853]: I0127 18:59:41.039273 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9c93d763-a677-4df8-9846-5fa96f76e0ab-scripts\") pod \"horizon-549f7c8989-k8qw5\" (UID: \"9c93d763-a677-4df8-9846-5fa96f76e0ab\") " pod="openstack/horizon-549f7c8989-k8qw5" Jan 27 18:59:41 crc kubenswrapper[4853]: I0127 18:59:41.039373 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9c93d763-a677-4df8-9846-5fa96f76e0ab-config-data\") pod \"horizon-549f7c8989-k8qw5\" (UID: \"9c93d763-a677-4df8-9846-5fa96f76e0ab\") " pod="openstack/horizon-549f7c8989-k8qw5" Jan 27 18:59:41 crc kubenswrapper[4853]: I0127 18:59:41.039469 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9c93d763-a677-4df8-9846-5fa96f76e0ab-logs\") pod \"horizon-549f7c8989-k8qw5\" (UID: \"9c93d763-a677-4df8-9846-5fa96f76e0ab\") " pod="openstack/horizon-549f7c8989-k8qw5" Jan 27 18:59:41 crc kubenswrapper[4853]: I0127 18:59:41.039506 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wn6p5\" (UniqueName: \"kubernetes.io/projected/9c93d763-a677-4df8-9846-5fa96f76e0ab-kube-api-access-wn6p5\") pod \"horizon-549f7c8989-k8qw5\" (UID: \"9c93d763-a677-4df8-9846-5fa96f76e0ab\") " pod="openstack/horizon-549f7c8989-k8qw5" Jan 27 18:59:41 crc kubenswrapper[4853]: I0127 18:59:41.039528 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/9c93d763-a677-4df8-9846-5fa96f76e0ab-horizon-secret-key\") pod \"horizon-549f7c8989-k8qw5\" (UID: \"9c93d763-a677-4df8-9846-5fa96f76e0ab\") " pod="openstack/horizon-549f7c8989-k8qw5" Jan 27 18:59:41 crc kubenswrapper[4853]: I0127 18:59:41.144293 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9c93d763-a677-4df8-9846-5fa96f76e0ab-logs\") pod \"horizon-549f7c8989-k8qw5\" (UID: \"9c93d763-a677-4df8-9846-5fa96f76e0ab\") " pod="openstack/horizon-549f7c8989-k8qw5" Jan 27 18:59:41 crc kubenswrapper[4853]: I0127 18:59:41.144359 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wn6p5\" (UniqueName: \"kubernetes.io/projected/9c93d763-a677-4df8-9846-5fa96f76e0ab-kube-api-access-wn6p5\") pod \"horizon-549f7c8989-k8qw5\" (UID: \"9c93d763-a677-4df8-9846-5fa96f76e0ab\") " pod="openstack/horizon-549f7c8989-k8qw5" Jan 27 18:59:41 crc kubenswrapper[4853]: I0127 18:59:41.144387 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/9c93d763-a677-4df8-9846-5fa96f76e0ab-horizon-secret-key\") pod \"horizon-549f7c8989-k8qw5\" (UID: \"9c93d763-a677-4df8-9846-5fa96f76e0ab\") " pod="openstack/horizon-549f7c8989-k8qw5" Jan 27 18:59:41 crc kubenswrapper[4853]: I0127 18:59:41.144453 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/configmap/9c93d763-a677-4df8-9846-5fa96f76e0ab-scripts\") pod \"horizon-549f7c8989-k8qw5\" (UID: \"9c93d763-a677-4df8-9846-5fa96f76e0ab\") " pod="openstack/horizon-549f7c8989-k8qw5" Jan 27 18:59:41 crc kubenswrapper[4853]: I0127 18:59:41.144544 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9c93d763-a677-4df8-9846-5fa96f76e0ab-config-data\") pod \"horizon-549f7c8989-k8qw5\" (UID: \"9c93d763-a677-4df8-9846-5fa96f76e0ab\") " pod="openstack/horizon-549f7c8989-k8qw5" Jan 27 18:59:41 crc kubenswrapper[4853]: I0127 18:59:41.144774 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9c93d763-a677-4df8-9846-5fa96f76e0ab-logs\") pod \"horizon-549f7c8989-k8qw5\" (UID: \"9c93d763-a677-4df8-9846-5fa96f76e0ab\") " pod="openstack/horizon-549f7c8989-k8qw5" Jan 27 18:59:41 crc kubenswrapper[4853]: I0127 18:59:41.145840 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9c93d763-a677-4df8-9846-5fa96f76e0ab-config-data\") pod \"horizon-549f7c8989-k8qw5\" (UID: \"9c93d763-a677-4df8-9846-5fa96f76e0ab\") " pod="openstack/horizon-549f7c8989-k8qw5" Jan 27 18:59:41 crc kubenswrapper[4853]: I0127 18:59:41.146039 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9c93d763-a677-4df8-9846-5fa96f76e0ab-scripts\") pod \"horizon-549f7c8989-k8qw5\" (UID: \"9c93d763-a677-4df8-9846-5fa96f76e0ab\") " pod="openstack/horizon-549f7c8989-k8qw5" Jan 27 18:59:41 crc kubenswrapper[4853]: I0127 18:59:41.170307 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/9c93d763-a677-4df8-9846-5fa96f76e0ab-horizon-secret-key\") pod \"horizon-549f7c8989-k8qw5\" (UID: \"9c93d763-a677-4df8-9846-5fa96f76e0ab\") " pod="openstack/horizon-549f7c8989-k8qw5" Jan 27 18:59:41 crc kubenswrapper[4853]: I0127 18:59:41.170712 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wn6p5\" (UniqueName: \"kubernetes.io/projected/9c93d763-a677-4df8-9846-5fa96f76e0ab-kube-api-access-wn6p5\") pod \"horizon-549f7c8989-k8qw5\" (UID: \"9c93d763-a677-4df8-9846-5fa96f76e0ab\") " pod="openstack/horizon-549f7c8989-k8qw5" Jan 27 18:59:41 crc kubenswrapper[4853]: I0127 18:59:41.249041 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-549f7c8989-k8qw5" Jan 27 18:59:41 crc kubenswrapper[4853]: I0127 18:59:41.402893 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-785d8bcb8c-snr4r" event={"ID":"6d63a071-50d0-4387-a817-9d65506ac62b","Type":"ContainerStarted","Data":"afc470f3509df01b0e202009191a6d8823df4dba071a2a16b5568dfa46544a66"} Jan 27 18:59:41 crc kubenswrapper[4853]: I0127 18:59:41.879746 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-549f7c8989-k8qw5"] Jan 27 18:59:42 crc kubenswrapper[4853]: I0127 18:59:42.158255 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2cca663f-f2e6-4c77-94c5-f7ad0bcc3ed0" path="/var/lib/kubelet/pods/2cca663f-f2e6-4c77-94c5-f7ad0bcc3ed0/volumes" Jan 27 18:59:42 crc kubenswrapper[4853]: I0127 18:59:42.420071 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"8311e980-1d3c-456d-9c17-5890f55976eb","Type":"ContainerStarted","Data":"727e3bd34178cd0a74c35dab1daaab86879e94a8f0d557ea63af726811459935"} Jan 27 18:59:42 crc kubenswrapper[4853]: I0127 18:59:42.437636 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"9bd9f2ec-70e8-40ef-8903-fcca47efbc95","Type":"ContainerStarted","Data":"9a0d5065adb04fe451ec46d7dc537dc2f2c62e4343095f159ac567f8c87756c1"} Jan 27 18:59:42 crc kubenswrapper[4853]: I0127 18:59:42.443382 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-549f7c8989-k8qw5" event={"ID":"9c93d763-a677-4df8-9846-5fa96f76e0ab","Type":"ContainerStarted","Data":"8948182430b8915801e47adaaac9fb0d858320eaeafbd9d8cf660c91ecbcd575"} Jan 27 18:59:42 crc kubenswrapper[4853]: I0127 18:59:42.443452 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-785d8bcb8c-snr4r" Jan 27 18:59:42 crc kubenswrapper[4853]: I0127 18:59:42.472593 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-785d8bcb8c-snr4r" podStartSLOduration=6.472560312 podStartE2EDuration="6.472560312s" podCreationTimestamp="2026-01-27 18:59:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:59:42.466902931 +0000 UTC m=+1024.929445814" watchObservedRunningTime="2026-01-27 18:59:42.472560312 +0000 UTC m=+1024.935103205" Jan 27 18:59:43 crc kubenswrapper[4853]: I0127 18:59:43.467204 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"9bd9f2ec-70e8-40ef-8903-fcca47efbc95","Type":"ContainerStarted","Data":"ada1e654060816ea35e2c8a59f1e25a5ecdd6719b8d6e8599c5449052b3ebf1b"} Jan 27 18:59:43 crc kubenswrapper[4853]: I0127 18:59:43.467523 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="9bd9f2ec-70e8-40ef-8903-fcca47efbc95" containerName="glance-log" containerID="cri-o://9a0d5065adb04fe451ec46d7dc537dc2f2c62e4343095f159ac567f8c87756c1" gracePeriod=30 Jan 27 18:59:43 crc kubenswrapper[4853]: I0127 18:59:43.467913 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="9bd9f2ec-70e8-40ef-8903-fcca47efbc95" containerName="glance-httpd" containerID="cri-o://ada1e654060816ea35e2c8a59f1e25a5ecdd6719b8d6e8599c5449052b3ebf1b" gracePeriod=30 Jan 27 18:59:43 crc 
kubenswrapper[4853]: I0127 18:59:43.483940 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"8311e980-1d3c-456d-9c17-5890f55976eb","Type":"ContainerStarted","Data":"ff9e4fd4a34ff59577d435a0bf8d7a19850256a7d26cf78d9df1648705263513"} Jan 27 18:59:43 crc kubenswrapper[4853]: I0127 18:59:43.484304 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="8311e980-1d3c-456d-9c17-5890f55976eb" containerName="glance-httpd" containerID="cri-o://ff9e4fd4a34ff59577d435a0bf8d7a19850256a7d26cf78d9df1648705263513" gracePeriod=30 Jan 27 18:59:43 crc kubenswrapper[4853]: I0127 18:59:43.484299 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="8311e980-1d3c-456d-9c17-5890f55976eb" containerName="glance-log" containerID="cri-o://727e3bd34178cd0a74c35dab1daaab86879e94a8f0d557ea63af726811459935" gracePeriod=30 Jan 27 18:59:43 crc kubenswrapper[4853]: I0127 18:59:43.521500 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=7.521481267 podStartE2EDuration="7.521481267s" podCreationTimestamp="2026-01-27 18:59:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:59:43.515677773 +0000 UTC m=+1025.978220676" watchObservedRunningTime="2026-01-27 18:59:43.521481267 +0000 UTC m=+1025.984024150" Jan 27 18:59:43 crc kubenswrapper[4853]: I0127 18:59:43.566850 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=6.566825833 podStartE2EDuration="6.566825833s" podCreationTimestamp="2026-01-27 18:59:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:59:43.548503843 +0000 UTC m=+1026.011046726" watchObservedRunningTime="2026-01-27 18:59:43.566825833 +0000 UTC m=+1026.029368716" Jan 27 18:59:44 crc kubenswrapper[4853]: I0127 18:59:44.502793 4853 generic.go:334] "Generic (PLEG): container finished" podID="8311e980-1d3c-456d-9c17-5890f55976eb" containerID="ff9e4fd4a34ff59577d435a0bf8d7a19850256a7d26cf78d9df1648705263513" exitCode=0 Jan 27 18:59:44 crc kubenswrapper[4853]: I0127 18:59:44.503069 4853 generic.go:334] "Generic (PLEG): container finished" podID="8311e980-1d3c-456d-9c17-5890f55976eb" containerID="727e3bd34178cd0a74c35dab1daaab86879e94a8f0d557ea63af726811459935" exitCode=143 Jan 27 18:59:44 crc kubenswrapper[4853]: I0127 18:59:44.503148 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"8311e980-1d3c-456d-9c17-5890f55976eb","Type":"ContainerDied","Data":"ff9e4fd4a34ff59577d435a0bf8d7a19850256a7d26cf78d9df1648705263513"} Jan 27 18:59:44 crc kubenswrapper[4853]: I0127 18:59:44.503183 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"8311e980-1d3c-456d-9c17-5890f55976eb","Type":"ContainerDied","Data":"727e3bd34178cd0a74c35dab1daaab86879e94a8f0d557ea63af726811459935"} Jan 27 18:59:44 crc kubenswrapper[4853]: I0127 18:59:44.505682 4853 generic.go:334] "Generic (PLEG): container finished" podID="9bd9f2ec-70e8-40ef-8903-fcca47efbc95" containerID="ada1e654060816ea35e2c8a59f1e25a5ecdd6719b8d6e8599c5449052b3ebf1b" exitCode=0 
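The paired "Killing container with a grace period" and ContainerDied entries above show the kubelet's graceful-shutdown convention for the glance pods: glance-httpd handles SIGTERM and exits 0 within the 30-second gracePeriod, while glance-log is terminated by the signal itself and reports exitCode=143, the POSIX encoding 128 + 15 (SIGTERM). A minimal Go sketch of that decoding follows, assuming only the standard 128+N shell convention; the helper below is illustrative and not kubelet source code.

package main

import "fmt"

// decodeExitCode interprets a container exit code using the common
// POSIX convention: values above 128 usually mean "terminated by
// signal (code - 128)". Illustrative helper only, not kubelet code.
func decodeExitCode(code int) string {
	switch {
	case code == 0:
		return "exited cleanly (handled SIGTERM within the grace period)"
	case code > 128:
		return fmt.Sprintf("killed by signal %d (128+%d)", code-128, code-128)
	default:
		return fmt.Sprintf("exited with error status %d", code)
	}
}

func main() {
	// The exit codes reported in the ContainerDied events for the
	// glance-httpd (0) and glance-log (143) containers above.
	for _, c := range []int{0, 143} {
		fmt.Printf("exitCode=%d: %s\n", c, decodeExitCode(c))
	}
}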
Jan 27 18:59:44 crc kubenswrapper[4853]: I0127 18:59:44.505698 4853 generic.go:334] "Generic (PLEG): container finished" podID="9bd9f2ec-70e8-40ef-8903-fcca47efbc95" containerID="9a0d5065adb04fe451ec46d7dc537dc2f2c62e4343095f159ac567f8c87756c1" exitCode=143 Jan 27 18:59:44 crc kubenswrapper[4853]: I0127 18:59:44.505729 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"9bd9f2ec-70e8-40ef-8903-fcca47efbc95","Type":"ContainerDied","Data":"ada1e654060816ea35e2c8a59f1e25a5ecdd6719b8d6e8599c5449052b3ebf1b"} Jan 27 18:59:44 crc kubenswrapper[4853]: I0127 18:59:44.505746 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"9bd9f2ec-70e8-40ef-8903-fcca47efbc95","Type":"ContainerDied","Data":"9a0d5065adb04fe451ec46d7dc537dc2f2c62e4343095f159ac567f8c87756c1"} Jan 27 18:59:44 crc kubenswrapper[4853]: I0127 18:59:44.511007 4853 generic.go:334] "Generic (PLEG): container finished" podID="9275e5f7-1630-4266-abb5-0ba701de33cb" containerID="95f7f716d21983ac00fd2ad48c591221d3265f4a4551f121bc2eb408014d170e" exitCode=0 Jan 27 18:59:44 crc kubenswrapper[4853]: I0127 18:59:44.511046 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-427vl" event={"ID":"9275e5f7-1630-4266-abb5-0ba701de33cb","Type":"ContainerDied","Data":"95f7f716d21983ac00fd2ad48c591221d3265f4a4551f121bc2eb408014d170e"} Jan 27 18:59:45 crc kubenswrapper[4853]: I0127 18:59:45.935875 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-5dbbdbb6d9-g899w"] Jan 27 18:59:45 crc kubenswrapper[4853]: I0127 18:59:45.981191 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-c78c8d4f6-bchzm"] Jan 27 18:59:45 crc kubenswrapper[4853]: I0127 18:59:45.983278 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-c78c8d4f6-bchzm" Jan 27 18:59:45 crc kubenswrapper[4853]: I0127 18:59:45.990418 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-horizon-svc" Jan 27 18:59:45 crc kubenswrapper[4853]: I0127 18:59:45.990619 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-c78c8d4f6-bchzm"] Jan 27 18:59:46 crc kubenswrapper[4853]: I0127 18:59:46.068761 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/28f114cd-daca-4c71-9ecd-64b8008ddbef-config-data\") pod \"horizon-c78c8d4f6-bchzm\" (UID: \"28f114cd-daca-4c71-9ecd-64b8008ddbef\") " pod="openstack/horizon-c78c8d4f6-bchzm" Jan 27 18:59:46 crc kubenswrapper[4853]: I0127 18:59:46.068870 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/28f114cd-daca-4c71-9ecd-64b8008ddbef-logs\") pod \"horizon-c78c8d4f6-bchzm\" (UID: \"28f114cd-daca-4c71-9ecd-64b8008ddbef\") " pod="openstack/horizon-c78c8d4f6-bchzm" Jan 27 18:59:46 crc kubenswrapper[4853]: I0127 18:59:46.068894 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lfxrw\" (UniqueName: \"kubernetes.io/projected/28f114cd-daca-4c71-9ecd-64b8008ddbef-kube-api-access-lfxrw\") pod \"horizon-c78c8d4f6-bchzm\" (UID: \"28f114cd-daca-4c71-9ecd-64b8008ddbef\") " pod="openstack/horizon-c78c8d4f6-bchzm" Jan 27 18:59:46 crc kubenswrapper[4853]: I0127 18:59:46.068932 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/28f114cd-daca-4c71-9ecd-64b8008ddbef-combined-ca-bundle\") pod \"horizon-c78c8d4f6-bchzm\" (UID: \"28f114cd-daca-4c71-9ecd-64b8008ddbef\") " pod="openstack/horizon-c78c8d4f6-bchzm" Jan 27 18:59:46 crc kubenswrapper[4853]: I0127 18:59:46.068970 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/28f114cd-daca-4c71-9ecd-64b8008ddbef-horizon-tls-certs\") pod \"horizon-c78c8d4f6-bchzm\" (UID: \"28f114cd-daca-4c71-9ecd-64b8008ddbef\") " pod="openstack/horizon-c78c8d4f6-bchzm" Jan 27 18:59:46 crc kubenswrapper[4853]: I0127 18:59:46.068992 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/28f114cd-daca-4c71-9ecd-64b8008ddbef-scripts\") pod \"horizon-c78c8d4f6-bchzm\" (UID: \"28f114cd-daca-4c71-9ecd-64b8008ddbef\") " pod="openstack/horizon-c78c8d4f6-bchzm" Jan 27 18:59:46 crc kubenswrapper[4853]: I0127 18:59:46.069018 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/28f114cd-daca-4c71-9ecd-64b8008ddbef-horizon-secret-key\") pod \"horizon-c78c8d4f6-bchzm\" (UID: \"28f114cd-daca-4c71-9ecd-64b8008ddbef\") " pod="openstack/horizon-c78c8d4f6-bchzm" Jan 27 18:59:46 crc kubenswrapper[4853]: I0127 18:59:46.096761 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-549f7c8989-k8qw5"] Jan 27 18:59:46 crc kubenswrapper[4853]: I0127 18:59:46.171022 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/28f114cd-daca-4c71-9ecd-64b8008ddbef-logs\") 
pod \"horizon-c78c8d4f6-bchzm\" (UID: \"28f114cd-daca-4c71-9ecd-64b8008ddbef\") " pod="openstack/horizon-c78c8d4f6-bchzm" Jan 27 18:59:46 crc kubenswrapper[4853]: I0127 18:59:46.171078 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lfxrw\" (UniqueName: \"kubernetes.io/projected/28f114cd-daca-4c71-9ecd-64b8008ddbef-kube-api-access-lfxrw\") pod \"horizon-c78c8d4f6-bchzm\" (UID: \"28f114cd-daca-4c71-9ecd-64b8008ddbef\") " pod="openstack/horizon-c78c8d4f6-bchzm" Jan 27 18:59:46 crc kubenswrapper[4853]: I0127 18:59:46.171151 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/28f114cd-daca-4c71-9ecd-64b8008ddbef-combined-ca-bundle\") pod \"horizon-c78c8d4f6-bchzm\" (UID: \"28f114cd-daca-4c71-9ecd-64b8008ddbef\") " pod="openstack/horizon-c78c8d4f6-bchzm" Jan 27 18:59:46 crc kubenswrapper[4853]: I0127 18:59:46.171193 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/28f114cd-daca-4c71-9ecd-64b8008ddbef-horizon-tls-certs\") pod \"horizon-c78c8d4f6-bchzm\" (UID: \"28f114cd-daca-4c71-9ecd-64b8008ddbef\") " pod="openstack/horizon-c78c8d4f6-bchzm" Jan 27 18:59:46 crc kubenswrapper[4853]: I0127 18:59:46.171217 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/28f114cd-daca-4c71-9ecd-64b8008ddbef-scripts\") pod \"horizon-c78c8d4f6-bchzm\" (UID: \"28f114cd-daca-4c71-9ecd-64b8008ddbef\") " pod="openstack/horizon-c78c8d4f6-bchzm" Jan 27 18:59:46 crc kubenswrapper[4853]: I0127 18:59:46.171244 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/28f114cd-daca-4c71-9ecd-64b8008ddbef-horizon-secret-key\") pod \"horizon-c78c8d4f6-bchzm\" (UID: \"28f114cd-daca-4c71-9ecd-64b8008ddbef\") " pod="openstack/horizon-c78c8d4f6-bchzm" Jan 27 18:59:46 crc kubenswrapper[4853]: I0127 18:59:46.171332 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/28f114cd-daca-4c71-9ecd-64b8008ddbef-config-data\") pod \"horizon-c78c8d4f6-bchzm\" (UID: \"28f114cd-daca-4c71-9ecd-64b8008ddbef\") " pod="openstack/horizon-c78c8d4f6-bchzm" Jan 27 18:59:46 crc kubenswrapper[4853]: I0127 18:59:46.172821 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/28f114cd-daca-4c71-9ecd-64b8008ddbef-config-data\") pod \"horizon-c78c8d4f6-bchzm\" (UID: \"28f114cd-daca-4c71-9ecd-64b8008ddbef\") " pod="openstack/horizon-c78c8d4f6-bchzm" Jan 27 18:59:46 crc kubenswrapper[4853]: I0127 18:59:46.173098 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/28f114cd-daca-4c71-9ecd-64b8008ddbef-logs\") pod \"horizon-c78c8d4f6-bchzm\" (UID: \"28f114cd-daca-4c71-9ecd-64b8008ddbef\") " pod="openstack/horizon-c78c8d4f6-bchzm" Jan 27 18:59:46 crc kubenswrapper[4853]: I0127 18:59:46.175771 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/28f114cd-daca-4c71-9ecd-64b8008ddbef-scripts\") pod \"horizon-c78c8d4f6-bchzm\" (UID: \"28f114cd-daca-4c71-9ecd-64b8008ddbef\") " pod="openstack/horizon-c78c8d4f6-bchzm" Jan 27 18:59:46 crc kubenswrapper[4853]: I0127 18:59:46.184995 4853 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/28f114cd-daca-4c71-9ecd-64b8008ddbef-horizon-secret-key\") pod \"horizon-c78c8d4f6-bchzm\" (UID: \"28f114cd-daca-4c71-9ecd-64b8008ddbef\") " pod="openstack/horizon-c78c8d4f6-bchzm" Jan 27 18:59:46 crc kubenswrapper[4853]: I0127 18:59:46.188266 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/28f114cd-daca-4c71-9ecd-64b8008ddbef-horizon-tls-certs\") pod \"horizon-c78c8d4f6-bchzm\" (UID: \"28f114cd-daca-4c71-9ecd-64b8008ddbef\") " pod="openstack/horizon-c78c8d4f6-bchzm" Jan 27 18:59:46 crc kubenswrapper[4853]: I0127 18:59:46.189749 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/28f114cd-daca-4c71-9ecd-64b8008ddbef-combined-ca-bundle\") pod \"horizon-c78c8d4f6-bchzm\" (UID: \"28f114cd-daca-4c71-9ecd-64b8008ddbef\") " pod="openstack/horizon-c78c8d4f6-bchzm" Jan 27 18:59:46 crc kubenswrapper[4853]: I0127 18:59:46.192835 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-69967664fb-pbqhr"] Jan 27 18:59:46 crc kubenswrapper[4853]: I0127 18:59:46.194888 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-69967664fb-pbqhr"] Jan 27 18:59:46 crc kubenswrapper[4853]: I0127 18:59:46.194992 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-69967664fb-pbqhr" Jan 27 18:59:46 crc kubenswrapper[4853]: I0127 18:59:46.249975 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lfxrw\" (UniqueName: \"kubernetes.io/projected/28f114cd-daca-4c71-9ecd-64b8008ddbef-kube-api-access-lfxrw\") pod \"horizon-c78c8d4f6-bchzm\" (UID: \"28f114cd-daca-4c71-9ecd-64b8008ddbef\") " pod="openstack/horizon-c78c8d4f6-bchzm" Jan 27 18:59:46 crc kubenswrapper[4853]: I0127 18:59:46.273559 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/66d621f7-387b-470d-8e42-bebbfada3bbc-logs\") pod \"horizon-69967664fb-pbqhr\" (UID: \"66d621f7-387b-470d-8e42-bebbfada3bbc\") " pod="openstack/horizon-69967664fb-pbqhr" Jan 27 18:59:46 crc kubenswrapper[4853]: I0127 18:59:46.273632 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/66d621f7-387b-470d-8e42-bebbfada3bbc-combined-ca-bundle\") pod \"horizon-69967664fb-pbqhr\" (UID: \"66d621f7-387b-470d-8e42-bebbfada3bbc\") " pod="openstack/horizon-69967664fb-pbqhr" Jan 27 18:59:46 crc kubenswrapper[4853]: I0127 18:59:46.273676 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/66d621f7-387b-470d-8e42-bebbfada3bbc-scripts\") pod \"horizon-69967664fb-pbqhr\" (UID: \"66d621f7-387b-470d-8e42-bebbfada3bbc\") " pod="openstack/horizon-69967664fb-pbqhr" Jan 27 18:59:46 crc kubenswrapper[4853]: I0127 18:59:46.273695 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/66d621f7-387b-470d-8e42-bebbfada3bbc-config-data\") pod \"horizon-69967664fb-pbqhr\" (UID: \"66d621f7-387b-470d-8e42-bebbfada3bbc\") " pod="openstack/horizon-69967664fb-pbqhr" Jan 27 18:59:46 crc kubenswrapper[4853]: I0127 
18:59:46.273731 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/66d621f7-387b-470d-8e42-bebbfada3bbc-horizon-tls-certs\") pod \"horizon-69967664fb-pbqhr\" (UID: \"66d621f7-387b-470d-8e42-bebbfada3bbc\") " pod="openstack/horizon-69967664fb-pbqhr" Jan 27 18:59:46 crc kubenswrapper[4853]: I0127 18:59:46.273755 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/66d621f7-387b-470d-8e42-bebbfada3bbc-horizon-secret-key\") pod \"horizon-69967664fb-pbqhr\" (UID: \"66d621f7-387b-470d-8e42-bebbfada3bbc\") " pod="openstack/horizon-69967664fb-pbqhr" Jan 27 18:59:46 crc kubenswrapper[4853]: I0127 18:59:46.273838 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pmk5c\" (UniqueName: \"kubernetes.io/projected/66d621f7-387b-470d-8e42-bebbfada3bbc-kube-api-access-pmk5c\") pod \"horizon-69967664fb-pbqhr\" (UID: \"66d621f7-387b-470d-8e42-bebbfada3bbc\") " pod="openstack/horizon-69967664fb-pbqhr" Jan 27 18:59:46 crc kubenswrapper[4853]: I0127 18:59:46.333572 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-c78c8d4f6-bchzm" Jan 27 18:59:46 crc kubenswrapper[4853]: I0127 18:59:46.375188 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/66d621f7-387b-470d-8e42-bebbfada3bbc-scripts\") pod \"horizon-69967664fb-pbqhr\" (UID: \"66d621f7-387b-470d-8e42-bebbfada3bbc\") " pod="openstack/horizon-69967664fb-pbqhr" Jan 27 18:59:46 crc kubenswrapper[4853]: I0127 18:59:46.375235 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/66d621f7-387b-470d-8e42-bebbfada3bbc-config-data\") pod \"horizon-69967664fb-pbqhr\" (UID: \"66d621f7-387b-470d-8e42-bebbfada3bbc\") " pod="openstack/horizon-69967664fb-pbqhr" Jan 27 18:59:46 crc kubenswrapper[4853]: I0127 18:59:46.375276 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/66d621f7-387b-470d-8e42-bebbfada3bbc-horizon-tls-certs\") pod \"horizon-69967664fb-pbqhr\" (UID: \"66d621f7-387b-470d-8e42-bebbfada3bbc\") " pod="openstack/horizon-69967664fb-pbqhr" Jan 27 18:59:46 crc kubenswrapper[4853]: I0127 18:59:46.375302 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/66d621f7-387b-470d-8e42-bebbfada3bbc-horizon-secret-key\") pod \"horizon-69967664fb-pbqhr\" (UID: \"66d621f7-387b-470d-8e42-bebbfada3bbc\") " pod="openstack/horizon-69967664fb-pbqhr" Jan 27 18:59:46 crc kubenswrapper[4853]: I0127 18:59:46.375341 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pmk5c\" (UniqueName: \"kubernetes.io/projected/66d621f7-387b-470d-8e42-bebbfada3bbc-kube-api-access-pmk5c\") pod \"horizon-69967664fb-pbqhr\" (UID: \"66d621f7-387b-470d-8e42-bebbfada3bbc\") " pod="openstack/horizon-69967664fb-pbqhr" Jan 27 18:59:46 crc kubenswrapper[4853]: I0127 18:59:46.375392 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/66d621f7-387b-470d-8e42-bebbfada3bbc-logs\") pod \"horizon-69967664fb-pbqhr\" (UID: 
\"66d621f7-387b-470d-8e42-bebbfada3bbc\") " pod="openstack/horizon-69967664fb-pbqhr" Jan 27 18:59:46 crc kubenswrapper[4853]: I0127 18:59:46.375427 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/66d621f7-387b-470d-8e42-bebbfada3bbc-combined-ca-bundle\") pod \"horizon-69967664fb-pbqhr\" (UID: \"66d621f7-387b-470d-8e42-bebbfada3bbc\") " pod="openstack/horizon-69967664fb-pbqhr" Jan 27 18:59:46 crc kubenswrapper[4853]: I0127 18:59:46.381798 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/66d621f7-387b-470d-8e42-bebbfada3bbc-logs\") pod \"horizon-69967664fb-pbqhr\" (UID: \"66d621f7-387b-470d-8e42-bebbfada3bbc\") " pod="openstack/horizon-69967664fb-pbqhr" Jan 27 18:59:46 crc kubenswrapper[4853]: I0127 18:59:46.382229 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/66d621f7-387b-470d-8e42-bebbfada3bbc-scripts\") pod \"horizon-69967664fb-pbqhr\" (UID: \"66d621f7-387b-470d-8e42-bebbfada3bbc\") " pod="openstack/horizon-69967664fb-pbqhr" Jan 27 18:59:46 crc kubenswrapper[4853]: I0127 18:59:46.382956 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/66d621f7-387b-470d-8e42-bebbfada3bbc-config-data\") pod \"horizon-69967664fb-pbqhr\" (UID: \"66d621f7-387b-470d-8e42-bebbfada3bbc\") " pod="openstack/horizon-69967664fb-pbqhr" Jan 27 18:59:46 crc kubenswrapper[4853]: I0127 18:59:46.383205 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/66d621f7-387b-470d-8e42-bebbfada3bbc-combined-ca-bundle\") pod \"horizon-69967664fb-pbqhr\" (UID: \"66d621f7-387b-470d-8e42-bebbfada3bbc\") " pod="openstack/horizon-69967664fb-pbqhr" Jan 27 18:59:46 crc kubenswrapper[4853]: I0127 18:59:46.389654 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/66d621f7-387b-470d-8e42-bebbfada3bbc-horizon-secret-key\") pod \"horizon-69967664fb-pbqhr\" (UID: \"66d621f7-387b-470d-8e42-bebbfada3bbc\") " pod="openstack/horizon-69967664fb-pbqhr" Jan 27 18:59:46 crc kubenswrapper[4853]: I0127 18:59:46.395358 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/66d621f7-387b-470d-8e42-bebbfada3bbc-horizon-tls-certs\") pod \"horizon-69967664fb-pbqhr\" (UID: \"66d621f7-387b-470d-8e42-bebbfada3bbc\") " pod="openstack/horizon-69967664fb-pbqhr" Jan 27 18:59:46 crc kubenswrapper[4853]: I0127 18:59:46.413474 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pmk5c\" (UniqueName: \"kubernetes.io/projected/66d621f7-387b-470d-8e42-bebbfada3bbc-kube-api-access-pmk5c\") pod \"horizon-69967664fb-pbqhr\" (UID: \"66d621f7-387b-470d-8e42-bebbfada3bbc\") " pod="openstack/horizon-69967664fb-pbqhr" Jan 27 18:59:46 crc kubenswrapper[4853]: I0127 18:59:46.637598 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-69967664fb-pbqhr" Jan 27 18:59:47 crc kubenswrapper[4853]: I0127 18:59:47.559677 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-785d8bcb8c-snr4r" Jan 27 18:59:47 crc kubenswrapper[4853]: I0127 18:59:47.636470 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-74f6bcbc87-h8qfz"] Jan 27 18:59:47 crc kubenswrapper[4853]: I0127 18:59:47.636854 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-74f6bcbc87-h8qfz" podUID="7922b820-92c1-46d8-a5a0-6f58e05674b5" containerName="dnsmasq-dns" containerID="cri-o://18efbdd72c9acab9bb286b14193bfa5e0ac7ffc5bc930f0495a2906166d13a1e" gracePeriod=10 Jan 27 18:59:48 crc kubenswrapper[4853]: I0127 18:59:48.564294 4853 generic.go:334] "Generic (PLEG): container finished" podID="7922b820-92c1-46d8-a5a0-6f58e05674b5" containerID="18efbdd72c9acab9bb286b14193bfa5e0ac7ffc5bc930f0495a2906166d13a1e" exitCode=0 Jan 27 18:59:48 crc kubenswrapper[4853]: I0127 18:59:48.564353 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74f6bcbc87-h8qfz" event={"ID":"7922b820-92c1-46d8-a5a0-6f58e05674b5","Type":"ContainerDied","Data":"18efbdd72c9acab9bb286b14193bfa5e0ac7ffc5bc930f0495a2906166d13a1e"} Jan 27 18:59:52 crc kubenswrapper[4853]: I0127 18:59:52.573407 4853 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-74f6bcbc87-h8qfz" podUID="7922b820-92c1-46d8-a5a0-6f58e05674b5" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.127:5353: connect: connection refused" Jan 27 18:59:54 crc kubenswrapper[4853]: I0127 18:59:54.951194 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 27 18:59:54 crc kubenswrapper[4853]: I0127 18:59:54.958387 4853 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 27 18:59:55 crc kubenswrapper[4853]: I0127 18:59:55.048065 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8311e980-1d3c-456d-9c17-5890f55976eb-config-data\") pod \"8311e980-1d3c-456d-9c17-5890f55976eb\" (UID: \"8311e980-1d3c-456d-9c17-5890f55976eb\") " Jan 27 18:59:55 crc kubenswrapper[4853]: I0127 18:59:55.048171 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9bd9f2ec-70e8-40ef-8903-fcca47efbc95-scripts\") pod \"9bd9f2ec-70e8-40ef-8903-fcca47efbc95\" (UID: \"9bd9f2ec-70e8-40ef-8903-fcca47efbc95\") " Jan 27 18:59:55 crc kubenswrapper[4853]: I0127 18:59:55.048254 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9bd9f2ec-70e8-40ef-8903-fcca47efbc95-combined-ca-bundle\") pod \"9bd9f2ec-70e8-40ef-8903-fcca47efbc95\" (UID: \"9bd9f2ec-70e8-40ef-8903-fcca47efbc95\") " Jan 27 18:59:55 crc kubenswrapper[4853]: I0127 18:59:55.048276 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s9fsm\" (UniqueName: \"kubernetes.io/projected/8311e980-1d3c-456d-9c17-5890f55976eb-kube-api-access-s9fsm\") pod \"8311e980-1d3c-456d-9c17-5890f55976eb\" (UID: \"8311e980-1d3c-456d-9c17-5890f55976eb\") " Jan 27 18:59:55 crc kubenswrapper[4853]: I0127 18:59:55.048306 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9bd9f2ec-70e8-40ef-8903-fcca47efbc95-public-tls-certs\") pod \"9bd9f2ec-70e8-40ef-8903-fcca47efbc95\" (UID: \"9bd9f2ec-70e8-40ef-8903-fcca47efbc95\") " Jan 27 18:59:55 crc kubenswrapper[4853]: I0127 18:59:55.048341 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pnncm\" (UniqueName: \"kubernetes.io/projected/9bd9f2ec-70e8-40ef-8903-fcca47efbc95-kube-api-access-pnncm\") pod \"9bd9f2ec-70e8-40ef-8903-fcca47efbc95\" (UID: \"9bd9f2ec-70e8-40ef-8903-fcca47efbc95\") " Jan 27 18:59:55 crc kubenswrapper[4853]: I0127 18:59:55.048364 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"8311e980-1d3c-456d-9c17-5890f55976eb\" (UID: \"8311e980-1d3c-456d-9c17-5890f55976eb\") " Jan 27 18:59:55 crc kubenswrapper[4853]: I0127 18:59:55.048389 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9bd9f2ec-70e8-40ef-8903-fcca47efbc95-logs\") pod \"9bd9f2ec-70e8-40ef-8903-fcca47efbc95\" (UID: \"9bd9f2ec-70e8-40ef-8903-fcca47efbc95\") " Jan 27 18:59:55 crc kubenswrapper[4853]: I0127 18:59:55.048412 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8311e980-1d3c-456d-9c17-5890f55976eb-scripts\") pod \"8311e980-1d3c-456d-9c17-5890f55976eb\" (UID: \"8311e980-1d3c-456d-9c17-5890f55976eb\") " Jan 27 18:59:55 crc kubenswrapper[4853]: I0127 18:59:55.048446 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/9bd9f2ec-70e8-40ef-8903-fcca47efbc95-httpd-run\") pod \"9bd9f2ec-70e8-40ef-8903-fcca47efbc95\" (UID: \"9bd9f2ec-70e8-40ef-8903-fcca47efbc95\") " Jan 27 18:59:55 crc 
kubenswrapper[4853]: I0127 18:59:55.048488 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8311e980-1d3c-456d-9c17-5890f55976eb-internal-tls-certs\") pod \"8311e980-1d3c-456d-9c17-5890f55976eb\" (UID: \"8311e980-1d3c-456d-9c17-5890f55976eb\") " Jan 27 18:59:55 crc kubenswrapper[4853]: I0127 18:59:55.048563 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8311e980-1d3c-456d-9c17-5890f55976eb-combined-ca-bundle\") pod \"8311e980-1d3c-456d-9c17-5890f55976eb\" (UID: \"8311e980-1d3c-456d-9c17-5890f55976eb\") " Jan 27 18:59:55 crc kubenswrapper[4853]: I0127 18:59:55.048588 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8311e980-1d3c-456d-9c17-5890f55976eb-logs\") pod \"8311e980-1d3c-456d-9c17-5890f55976eb\" (UID: \"8311e980-1d3c-456d-9c17-5890f55976eb\") " Jan 27 18:59:55 crc kubenswrapper[4853]: I0127 18:59:55.048646 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9bd9f2ec-70e8-40ef-8903-fcca47efbc95-config-data\") pod \"9bd9f2ec-70e8-40ef-8903-fcca47efbc95\" (UID: \"9bd9f2ec-70e8-40ef-8903-fcca47efbc95\") " Jan 27 18:59:55 crc kubenswrapper[4853]: I0127 18:59:55.048698 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/8311e980-1d3c-456d-9c17-5890f55976eb-httpd-run\") pod \"8311e980-1d3c-456d-9c17-5890f55976eb\" (UID: \"8311e980-1d3c-456d-9c17-5890f55976eb\") " Jan 27 18:59:55 crc kubenswrapper[4853]: I0127 18:59:55.048725 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"9bd9f2ec-70e8-40ef-8903-fcca47efbc95\" (UID: \"9bd9f2ec-70e8-40ef-8903-fcca47efbc95\") " Jan 27 18:59:55 crc kubenswrapper[4853]: I0127 18:59:55.050364 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8311e980-1d3c-456d-9c17-5890f55976eb-logs" (OuterVolumeSpecName: "logs") pod "8311e980-1d3c-456d-9c17-5890f55976eb" (UID: "8311e980-1d3c-456d-9c17-5890f55976eb"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 18:59:55 crc kubenswrapper[4853]: I0127 18:59:55.050509 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9bd9f2ec-70e8-40ef-8903-fcca47efbc95-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "9bd9f2ec-70e8-40ef-8903-fcca47efbc95" (UID: "9bd9f2ec-70e8-40ef-8903-fcca47efbc95"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 18:59:55 crc kubenswrapper[4853]: I0127 18:59:55.051936 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8311e980-1d3c-456d-9c17-5890f55976eb-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "8311e980-1d3c-456d-9c17-5890f55976eb" (UID: "8311e980-1d3c-456d-9c17-5890f55976eb"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 18:59:55 crc kubenswrapper[4853]: I0127 18:59:55.056891 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9bd9f2ec-70e8-40ef-8903-fcca47efbc95-logs" (OuterVolumeSpecName: "logs") pod "9bd9f2ec-70e8-40ef-8903-fcca47efbc95" (UID: "9bd9f2ec-70e8-40ef-8903-fcca47efbc95"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 18:59:55 crc kubenswrapper[4853]: I0127 18:59:55.057000 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8311e980-1d3c-456d-9c17-5890f55976eb-scripts" (OuterVolumeSpecName: "scripts") pod "8311e980-1d3c-456d-9c17-5890f55976eb" (UID: "8311e980-1d3c-456d-9c17-5890f55976eb"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:59:55 crc kubenswrapper[4853]: I0127 18:59:55.059806 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage06-crc" (OuterVolumeSpecName: "glance") pod "9bd9f2ec-70e8-40ef-8903-fcca47efbc95" (UID: "9bd9f2ec-70e8-40ef-8903-fcca47efbc95"). InnerVolumeSpecName "local-storage06-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 27 18:59:55 crc kubenswrapper[4853]: I0127 18:59:55.059850 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8311e980-1d3c-456d-9c17-5890f55976eb-kube-api-access-s9fsm" (OuterVolumeSpecName: "kube-api-access-s9fsm") pod "8311e980-1d3c-456d-9c17-5890f55976eb" (UID: "8311e980-1d3c-456d-9c17-5890f55976eb"). InnerVolumeSpecName "kube-api-access-s9fsm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:59:55 crc kubenswrapper[4853]: I0127 18:59:55.063148 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9bd9f2ec-70e8-40ef-8903-fcca47efbc95-kube-api-access-pnncm" (OuterVolumeSpecName: "kube-api-access-pnncm") pod "9bd9f2ec-70e8-40ef-8903-fcca47efbc95" (UID: "9bd9f2ec-70e8-40ef-8903-fcca47efbc95"). InnerVolumeSpecName "kube-api-access-pnncm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:59:55 crc kubenswrapper[4853]: I0127 18:59:55.063653 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9bd9f2ec-70e8-40ef-8903-fcca47efbc95-scripts" (OuterVolumeSpecName: "scripts") pod "9bd9f2ec-70e8-40ef-8903-fcca47efbc95" (UID: "9bd9f2ec-70e8-40ef-8903-fcca47efbc95"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:59:55 crc kubenswrapper[4853]: I0127 18:59:55.065819 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage01-crc" (OuterVolumeSpecName: "glance") pod "8311e980-1d3c-456d-9c17-5890f55976eb" (UID: "8311e980-1d3c-456d-9c17-5890f55976eb"). InnerVolumeSpecName "local-storage01-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 27 18:59:55 crc kubenswrapper[4853]: I0127 18:59:55.089404 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8311e980-1d3c-456d-9c17-5890f55976eb-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8311e980-1d3c-456d-9c17-5890f55976eb" (UID: "8311e980-1d3c-456d-9c17-5890f55976eb"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:59:55 crc kubenswrapper[4853]: I0127 18:59:55.101386 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9bd9f2ec-70e8-40ef-8903-fcca47efbc95-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9bd9f2ec-70e8-40ef-8903-fcca47efbc95" (UID: "9bd9f2ec-70e8-40ef-8903-fcca47efbc95"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:59:55 crc kubenswrapper[4853]: I0127 18:59:55.112901 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9bd9f2ec-70e8-40ef-8903-fcca47efbc95-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "9bd9f2ec-70e8-40ef-8903-fcca47efbc95" (UID: "9bd9f2ec-70e8-40ef-8903-fcca47efbc95"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:59:55 crc kubenswrapper[4853]: I0127 18:59:55.114009 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9bd9f2ec-70e8-40ef-8903-fcca47efbc95-config-data" (OuterVolumeSpecName: "config-data") pod "9bd9f2ec-70e8-40ef-8903-fcca47efbc95" (UID: "9bd9f2ec-70e8-40ef-8903-fcca47efbc95"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:59:55 crc kubenswrapper[4853]: I0127 18:59:55.136553 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8311e980-1d3c-456d-9c17-5890f55976eb-config-data" (OuterVolumeSpecName: "config-data") pod "8311e980-1d3c-456d-9c17-5890f55976eb" (UID: "8311e980-1d3c-456d-9c17-5890f55976eb"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:59:55 crc kubenswrapper[4853]: I0127 18:59:55.140189 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8311e980-1d3c-456d-9c17-5890f55976eb-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "8311e980-1d3c-456d-9c17-5890f55976eb" (UID: "8311e980-1d3c-456d-9c17-5890f55976eb"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:59:55 crc kubenswrapper[4853]: I0127 18:59:55.151733 4853 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/8311e980-1d3c-456d-9c17-5890f55976eb-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 27 18:59:55 crc kubenswrapper[4853]: I0127 18:59:55.151943 4853 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") on node \"crc\" " Jan 27 18:59:55 crc kubenswrapper[4853]: I0127 18:59:55.152016 4853 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8311e980-1d3c-456d-9c17-5890f55976eb-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 18:59:55 crc kubenswrapper[4853]: I0127 18:59:55.152084 4853 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9bd9f2ec-70e8-40ef-8903-fcca47efbc95-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 18:59:55 crc kubenswrapper[4853]: I0127 18:59:55.152184 4853 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9bd9f2ec-70e8-40ef-8903-fcca47efbc95-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 18:59:55 crc kubenswrapper[4853]: I0127 18:59:55.152284 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s9fsm\" (UniqueName: \"kubernetes.io/projected/8311e980-1d3c-456d-9c17-5890f55976eb-kube-api-access-s9fsm\") on node \"crc\" DevicePath \"\"" Jan 27 18:59:55 crc kubenswrapper[4853]: I0127 18:59:55.152343 4853 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9bd9f2ec-70e8-40ef-8903-fcca47efbc95-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 27 18:59:55 crc kubenswrapper[4853]: I0127 18:59:55.152410 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pnncm\" (UniqueName: \"kubernetes.io/projected/9bd9f2ec-70e8-40ef-8903-fcca47efbc95-kube-api-access-pnncm\") on node \"crc\" DevicePath \"\"" Jan 27 18:59:55 crc kubenswrapper[4853]: I0127 18:59:55.152476 4853 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" " Jan 27 18:59:55 crc kubenswrapper[4853]: I0127 18:59:55.152528 4853 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9bd9f2ec-70e8-40ef-8903-fcca47efbc95-logs\") on node \"crc\" DevicePath \"\"" Jan 27 18:59:55 crc kubenswrapper[4853]: I0127 18:59:55.152579 4853 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8311e980-1d3c-456d-9c17-5890f55976eb-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 18:59:55 crc kubenswrapper[4853]: I0127 18:59:55.152630 4853 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/9bd9f2ec-70e8-40ef-8903-fcca47efbc95-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 27 18:59:55 crc kubenswrapper[4853]: I0127 18:59:55.152694 4853 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8311e980-1d3c-456d-9c17-5890f55976eb-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 27 18:59:55 crc kubenswrapper[4853]: I0127 18:59:55.152750 4853 
reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8311e980-1d3c-456d-9c17-5890f55976eb-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 18:59:55 crc kubenswrapper[4853]: I0127 18:59:55.152803 4853 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8311e980-1d3c-456d-9c17-5890f55976eb-logs\") on node \"crc\" DevicePath \"\"" Jan 27 18:59:55 crc kubenswrapper[4853]: I0127 18:59:55.152865 4853 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9bd9f2ec-70e8-40ef-8903-fcca47efbc95-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 18:59:55 crc kubenswrapper[4853]: I0127 18:59:55.175458 4853 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage06-crc" (UniqueName: "kubernetes.io/local-volume/local-storage06-crc") on node "crc" Jan 27 18:59:55 crc kubenswrapper[4853]: I0127 18:59:55.206390 4853 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage01-crc" (UniqueName: "kubernetes.io/local-volume/local-storage01-crc") on node "crc" Jan 27 18:59:55 crc kubenswrapper[4853]: I0127 18:59:55.255220 4853 reconciler_common.go:293] "Volume detached for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") on node \"crc\" DevicePath \"\"" Jan 27 18:59:55 crc kubenswrapper[4853]: I0127 18:59:55.255259 4853 reconciler_common.go:293] "Volume detached for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" DevicePath \"\"" Jan 27 18:59:55 crc kubenswrapper[4853]: I0127 18:59:55.624081 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"9bd9f2ec-70e8-40ef-8903-fcca47efbc95","Type":"ContainerDied","Data":"48ff19c9a9c57a4947f68619867cdea39c7b7ee316e706e2e321c4e743179604"} Jan 27 18:59:55 crc kubenswrapper[4853]: I0127 18:59:55.624166 4853 scope.go:117] "RemoveContainer" containerID="ada1e654060816ea35e2c8a59f1e25a5ecdd6719b8d6e8599c5449052b3ebf1b" Jan 27 18:59:55 crc kubenswrapper[4853]: I0127 18:59:55.624337 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 27 18:59:55 crc kubenswrapper[4853]: I0127 18:59:55.630829 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"8311e980-1d3c-456d-9c17-5890f55976eb","Type":"ContainerDied","Data":"9887a4b4087c2acfb38aeb2b58a2e64bc6efb1e76015a83acd8ea8d18e14b520"} Jan 27 18:59:55 crc kubenswrapper[4853]: I0127 18:59:55.630955 4853 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 27 18:59:55 crc kubenswrapper[4853]: I0127 18:59:55.663266 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 27 18:59:55 crc kubenswrapper[4853]: I0127 18:59:55.684131 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 27 18:59:55 crc kubenswrapper[4853]: I0127 18:59:55.694938 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 27 18:59:55 crc kubenswrapper[4853]: I0127 18:59:55.703505 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 27 18:59:55 crc kubenswrapper[4853]: I0127 18:59:55.719498 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Jan 27 18:59:55 crc kubenswrapper[4853]: E0127 18:59:55.719882 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9bd9f2ec-70e8-40ef-8903-fcca47efbc95" containerName="glance-httpd" Jan 27 18:59:55 crc kubenswrapper[4853]: I0127 18:59:55.719894 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="9bd9f2ec-70e8-40ef-8903-fcca47efbc95" containerName="glance-httpd" Jan 27 18:59:55 crc kubenswrapper[4853]: E0127 18:59:55.719902 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8311e980-1d3c-456d-9c17-5890f55976eb" containerName="glance-httpd" Jan 27 18:59:55 crc kubenswrapper[4853]: I0127 18:59:55.719908 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="8311e980-1d3c-456d-9c17-5890f55976eb" containerName="glance-httpd" Jan 27 18:59:55 crc kubenswrapper[4853]: E0127 18:59:55.719926 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9bd9f2ec-70e8-40ef-8903-fcca47efbc95" containerName="glance-log" Jan 27 18:59:55 crc kubenswrapper[4853]: I0127 18:59:55.719932 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="9bd9f2ec-70e8-40ef-8903-fcca47efbc95" containerName="glance-log" Jan 27 18:59:55 crc kubenswrapper[4853]: E0127 18:59:55.719965 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8311e980-1d3c-456d-9c17-5890f55976eb" containerName="glance-log" Jan 27 18:59:55 crc kubenswrapper[4853]: I0127 18:59:55.719973 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="8311e980-1d3c-456d-9c17-5890f55976eb" containerName="glance-log" Jan 27 18:59:55 crc kubenswrapper[4853]: I0127 18:59:55.720144 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="8311e980-1d3c-456d-9c17-5890f55976eb" containerName="glance-httpd" Jan 27 18:59:55 crc kubenswrapper[4853]: I0127 18:59:55.720157 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="9bd9f2ec-70e8-40ef-8903-fcca47efbc95" containerName="glance-httpd" Jan 27 18:59:55 crc kubenswrapper[4853]: I0127 18:59:55.720169 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="8311e980-1d3c-456d-9c17-5890f55976eb" containerName="glance-log" Jan 27 18:59:55 crc kubenswrapper[4853]: I0127 18:59:55.720185 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="9bd9f2ec-70e8-40ef-8903-fcca47efbc95" containerName="glance-log" Jan 27 18:59:55 crc kubenswrapper[4853]: I0127 18:59:55.721071 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 27 18:59:55 crc kubenswrapper[4853]: I0127 18:59:55.723087 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Jan 27 18:59:55 crc kubenswrapper[4853]: I0127 18:59:55.723832 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-84jgd" Jan 27 18:59:55 crc kubenswrapper[4853]: I0127 18:59:55.724066 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Jan 27 18:59:55 crc kubenswrapper[4853]: I0127 18:59:55.724252 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Jan 27 18:59:55 crc kubenswrapper[4853]: I0127 18:59:55.729835 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 27 18:59:55 crc kubenswrapper[4853]: I0127 18:59:55.731317 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 27 18:59:55 crc kubenswrapper[4853]: I0127 18:59:55.733294 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Jan 27 18:59:55 crc kubenswrapper[4853]: I0127 18:59:55.734660 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Jan 27 18:59:55 crc kubenswrapper[4853]: I0127 18:59:55.740645 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 27 18:59:55 crc kubenswrapper[4853]: I0127 18:59:55.747479 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 27 18:59:55 crc kubenswrapper[4853]: I0127 18:59:55.868160 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/383a8cff-14ac-4c26-a428-302b30622b4b-config-data\") pod \"glance-default-external-api-0\" (UID: \"383a8cff-14ac-4c26-a428-302b30622b4b\") " pod="openstack/glance-default-external-api-0" Jan 27 18:59:55 crc kubenswrapper[4853]: I0127 18:59:55.868215 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/383a8cff-14ac-4c26-a428-302b30622b4b-logs\") pod \"glance-default-external-api-0\" (UID: \"383a8cff-14ac-4c26-a428-302b30622b4b\") " pod="openstack/glance-default-external-api-0" Jan 27 18:59:55 crc kubenswrapper[4853]: I0127 18:59:55.868240 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1447491c-5e5b-412d-9cbd-b7bdc9a87797-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"1447491c-5e5b-412d-9cbd-b7bdc9a87797\") " pod="openstack/glance-default-internal-api-0" Jan 27 18:59:55 crc kubenswrapper[4853]: I0127 18:59:55.868263 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1447491c-5e5b-412d-9cbd-b7bdc9a87797-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"1447491c-5e5b-412d-9cbd-b7bdc9a87797\") " pod="openstack/glance-default-internal-api-0" Jan 27 18:59:55 crc kubenswrapper[4853]: I0127 18:59:55.868362 4853 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/1447491c-5e5b-412d-9cbd-b7bdc9a87797-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"1447491c-5e5b-412d-9cbd-b7bdc9a87797\") " pod="openstack/glance-default-internal-api-0" Jan 27 18:59:55 crc kubenswrapper[4853]: I0127 18:59:55.868418 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1447491c-5e5b-412d-9cbd-b7bdc9a87797-logs\") pod \"glance-default-internal-api-0\" (UID: \"1447491c-5e5b-412d-9cbd-b7bdc9a87797\") " pod="openstack/glance-default-internal-api-0" Jan 27 18:59:55 crc kubenswrapper[4853]: I0127 18:59:55.868518 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1447491c-5e5b-412d-9cbd-b7bdc9a87797-config-data\") pod \"glance-default-internal-api-0\" (UID: \"1447491c-5e5b-412d-9cbd-b7bdc9a87797\") " pod="openstack/glance-default-internal-api-0" Jan 27 18:59:55 crc kubenswrapper[4853]: I0127 18:59:55.868569 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/383a8cff-14ac-4c26-a428-302b30622b4b-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"383a8cff-14ac-4c26-a428-302b30622b4b\") " pod="openstack/glance-default-external-api-0" Jan 27 18:59:55 crc kubenswrapper[4853]: I0127 18:59:55.868586 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7rpzk\" (UniqueName: \"kubernetes.io/projected/383a8cff-14ac-4c26-a428-302b30622b4b-kube-api-access-7rpzk\") pod \"glance-default-external-api-0\" (UID: \"383a8cff-14ac-4c26-a428-302b30622b4b\") " pod="openstack/glance-default-external-api-0" Jan 27 18:59:55 crc kubenswrapper[4853]: I0127 18:59:55.868725 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/383a8cff-14ac-4c26-a428-302b30622b4b-scripts\") pod \"glance-default-external-api-0\" (UID: \"383a8cff-14ac-4c26-a428-302b30622b4b\") " pod="openstack/glance-default-external-api-0" Jan 27 18:59:55 crc kubenswrapper[4853]: I0127 18:59:55.868817 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-internal-api-0\" (UID: \"1447491c-5e5b-412d-9cbd-b7bdc9a87797\") " pod="openstack/glance-default-internal-api-0" Jan 27 18:59:55 crc kubenswrapper[4853]: I0127 18:59:55.868895 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1447491c-5e5b-412d-9cbd-b7bdc9a87797-scripts\") pod \"glance-default-internal-api-0\" (UID: \"1447491c-5e5b-412d-9cbd-b7bdc9a87797\") " pod="openstack/glance-default-internal-api-0" Jan 27 18:59:55 crc kubenswrapper[4853]: I0127 18:59:55.868914 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-external-api-0\" (UID: \"383a8cff-14ac-4c26-a428-302b30622b4b\") " pod="openstack/glance-default-external-api-0" Jan 27 18:59:55 crc kubenswrapper[4853]: I0127 
18:59:55.868978 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/383a8cff-14ac-4c26-a428-302b30622b4b-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"383a8cff-14ac-4c26-a428-302b30622b4b\") " pod="openstack/glance-default-external-api-0" Jan 27 18:59:55 crc kubenswrapper[4853]: I0127 18:59:55.869060 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nk6t9\" (UniqueName: \"kubernetes.io/projected/1447491c-5e5b-412d-9cbd-b7bdc9a87797-kube-api-access-nk6t9\") pod \"glance-default-internal-api-0\" (UID: \"1447491c-5e5b-412d-9cbd-b7bdc9a87797\") " pod="openstack/glance-default-internal-api-0" Jan 27 18:59:55 crc kubenswrapper[4853]: I0127 18:59:55.869094 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/383a8cff-14ac-4c26-a428-302b30622b4b-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"383a8cff-14ac-4c26-a428-302b30622b4b\") " pod="openstack/glance-default-external-api-0" Jan 27 18:59:55 crc kubenswrapper[4853]: I0127 18:59:55.970986 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/383a8cff-14ac-4c26-a428-302b30622b4b-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"383a8cff-14ac-4c26-a428-302b30622b4b\") " pod="openstack/glance-default-external-api-0" Jan 27 18:59:55 crc kubenswrapper[4853]: I0127 18:59:55.971055 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nk6t9\" (UniqueName: \"kubernetes.io/projected/1447491c-5e5b-412d-9cbd-b7bdc9a87797-kube-api-access-nk6t9\") pod \"glance-default-internal-api-0\" (UID: \"1447491c-5e5b-412d-9cbd-b7bdc9a87797\") " pod="openstack/glance-default-internal-api-0" Jan 27 18:59:55 crc kubenswrapper[4853]: I0127 18:59:55.971074 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/383a8cff-14ac-4c26-a428-302b30622b4b-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"383a8cff-14ac-4c26-a428-302b30622b4b\") " pod="openstack/glance-default-external-api-0" Jan 27 18:59:55 crc kubenswrapper[4853]: I0127 18:59:55.971107 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/383a8cff-14ac-4c26-a428-302b30622b4b-config-data\") pod \"glance-default-external-api-0\" (UID: \"383a8cff-14ac-4c26-a428-302b30622b4b\") " pod="openstack/glance-default-external-api-0" Jan 27 18:59:55 crc kubenswrapper[4853]: I0127 18:59:55.971162 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/383a8cff-14ac-4c26-a428-302b30622b4b-logs\") pod \"glance-default-external-api-0\" (UID: \"383a8cff-14ac-4c26-a428-302b30622b4b\") " pod="openstack/glance-default-external-api-0" Jan 27 18:59:55 crc kubenswrapper[4853]: I0127 18:59:55.971185 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1447491c-5e5b-412d-9cbd-b7bdc9a87797-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"1447491c-5e5b-412d-9cbd-b7bdc9a87797\") " pod="openstack/glance-default-internal-api-0" Jan 27 18:59:55 crc 
kubenswrapper[4853]: I0127 18:59:55.971205 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1447491c-5e5b-412d-9cbd-b7bdc9a87797-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"1447491c-5e5b-412d-9cbd-b7bdc9a87797\") " pod="openstack/glance-default-internal-api-0" Jan 27 18:59:55 crc kubenswrapper[4853]: I0127 18:59:55.971221 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/1447491c-5e5b-412d-9cbd-b7bdc9a87797-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"1447491c-5e5b-412d-9cbd-b7bdc9a87797\") " pod="openstack/glance-default-internal-api-0" Jan 27 18:59:55 crc kubenswrapper[4853]: I0127 18:59:55.971239 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1447491c-5e5b-412d-9cbd-b7bdc9a87797-logs\") pod \"glance-default-internal-api-0\" (UID: \"1447491c-5e5b-412d-9cbd-b7bdc9a87797\") " pod="openstack/glance-default-internal-api-0" Jan 27 18:59:55 crc kubenswrapper[4853]: I0127 18:59:55.971265 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1447491c-5e5b-412d-9cbd-b7bdc9a87797-config-data\") pod \"glance-default-internal-api-0\" (UID: \"1447491c-5e5b-412d-9cbd-b7bdc9a87797\") " pod="openstack/glance-default-internal-api-0" Jan 27 18:59:55 crc kubenswrapper[4853]: I0127 18:59:55.971302 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/383a8cff-14ac-4c26-a428-302b30622b4b-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"383a8cff-14ac-4c26-a428-302b30622b4b\") " pod="openstack/glance-default-external-api-0" Jan 27 18:59:55 crc kubenswrapper[4853]: I0127 18:59:55.971518 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7rpzk\" (UniqueName: \"kubernetes.io/projected/383a8cff-14ac-4c26-a428-302b30622b4b-kube-api-access-7rpzk\") pod \"glance-default-external-api-0\" (UID: \"383a8cff-14ac-4c26-a428-302b30622b4b\") " pod="openstack/glance-default-external-api-0" Jan 27 18:59:55 crc kubenswrapper[4853]: I0127 18:59:55.971703 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/383a8cff-14ac-4c26-a428-302b30622b4b-scripts\") pod \"glance-default-external-api-0\" (UID: \"383a8cff-14ac-4c26-a428-302b30622b4b\") " pod="openstack/glance-default-external-api-0" Jan 27 18:59:55 crc kubenswrapper[4853]: I0127 18:59:55.971739 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-internal-api-0\" (UID: \"1447491c-5e5b-412d-9cbd-b7bdc9a87797\") " pod="openstack/glance-default-internal-api-0" Jan 27 18:59:55 crc kubenswrapper[4853]: I0127 18:59:55.971782 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1447491c-5e5b-412d-9cbd-b7bdc9a87797-scripts\") pod \"glance-default-internal-api-0\" (UID: \"1447491c-5e5b-412d-9cbd-b7bdc9a87797\") " pod="openstack/glance-default-internal-api-0" Jan 27 18:59:55 crc kubenswrapper[4853]: I0127 18:59:55.971803 4853 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-external-api-0\" (UID: \"383a8cff-14ac-4c26-a428-302b30622b4b\") " pod="openstack/glance-default-external-api-0" Jan 27 18:59:55 crc kubenswrapper[4853]: I0127 18:59:55.971905 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/1447491c-5e5b-412d-9cbd-b7bdc9a87797-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"1447491c-5e5b-412d-9cbd-b7bdc9a87797\") " pod="openstack/glance-default-internal-api-0" Jan 27 18:59:55 crc kubenswrapper[4853]: I0127 18:59:55.972015 4853 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-internal-api-0\" (UID: \"1447491c-5e5b-412d-9cbd-b7bdc9a87797\") device mount path \"/mnt/openstack/pv01\"" pod="openstack/glance-default-internal-api-0" Jan 27 18:59:55 crc kubenswrapper[4853]: I0127 18:59:55.971909 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/383a8cff-14ac-4c26-a428-302b30622b4b-logs\") pod \"glance-default-external-api-0\" (UID: \"383a8cff-14ac-4c26-a428-302b30622b4b\") " pod="openstack/glance-default-external-api-0" Jan 27 18:59:55 crc kubenswrapper[4853]: I0127 18:59:55.971915 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1447491c-5e5b-412d-9cbd-b7bdc9a87797-logs\") pod \"glance-default-internal-api-0\" (UID: \"1447491c-5e5b-412d-9cbd-b7bdc9a87797\") " pod="openstack/glance-default-internal-api-0" Jan 27 18:59:55 crc kubenswrapper[4853]: I0127 18:59:55.971996 4853 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-external-api-0\" (UID: \"383a8cff-14ac-4c26-a428-302b30622b4b\") device mount path \"/mnt/openstack/pv06\"" pod="openstack/glance-default-external-api-0" Jan 27 18:59:55 crc kubenswrapper[4853]: I0127 18:59:55.972560 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/383a8cff-14ac-4c26-a428-302b30622b4b-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"383a8cff-14ac-4c26-a428-302b30622b4b\") " pod="openstack/glance-default-external-api-0" Jan 27 18:59:55 crc kubenswrapper[4853]: I0127 18:59:55.978993 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1447491c-5e5b-412d-9cbd-b7bdc9a87797-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"1447491c-5e5b-412d-9cbd-b7bdc9a87797\") " pod="openstack/glance-default-internal-api-0" Jan 27 18:59:55 crc kubenswrapper[4853]: I0127 18:59:55.978993 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1447491c-5e5b-412d-9cbd-b7bdc9a87797-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"1447491c-5e5b-412d-9cbd-b7bdc9a87797\") " pod="openstack/glance-default-internal-api-0" Jan 27 18:59:55 crc kubenswrapper[4853]: I0127 18:59:55.984139 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/383a8cff-14ac-4c26-a428-302b30622b4b-config-data\") pod 
\"glance-default-external-api-0\" (UID: \"383a8cff-14ac-4c26-a428-302b30622b4b\") " pod="openstack/glance-default-external-api-0" Jan 27 18:59:55 crc kubenswrapper[4853]: I0127 18:59:55.987649 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1447491c-5e5b-412d-9cbd-b7bdc9a87797-config-data\") pod \"glance-default-internal-api-0\" (UID: \"1447491c-5e5b-412d-9cbd-b7bdc9a87797\") " pod="openstack/glance-default-internal-api-0" Jan 27 18:59:55 crc kubenswrapper[4853]: I0127 18:59:55.987777 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1447491c-5e5b-412d-9cbd-b7bdc9a87797-scripts\") pod \"glance-default-internal-api-0\" (UID: \"1447491c-5e5b-412d-9cbd-b7bdc9a87797\") " pod="openstack/glance-default-internal-api-0" Jan 27 18:59:55 crc kubenswrapper[4853]: I0127 18:59:55.990005 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/383a8cff-14ac-4c26-a428-302b30622b4b-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"383a8cff-14ac-4c26-a428-302b30622b4b\") " pod="openstack/glance-default-external-api-0" Jan 27 18:59:55 crc kubenswrapper[4853]: I0127 18:59:55.992437 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7rpzk\" (UniqueName: \"kubernetes.io/projected/383a8cff-14ac-4c26-a428-302b30622b4b-kube-api-access-7rpzk\") pod \"glance-default-external-api-0\" (UID: \"383a8cff-14ac-4c26-a428-302b30622b4b\") " pod="openstack/glance-default-external-api-0" Jan 27 18:59:55 crc kubenswrapper[4853]: I0127 18:59:55.992973 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/383a8cff-14ac-4c26-a428-302b30622b4b-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"383a8cff-14ac-4c26-a428-302b30622b4b\") " pod="openstack/glance-default-external-api-0" Jan 27 18:59:56 crc kubenswrapper[4853]: I0127 18:59:56.000480 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/383a8cff-14ac-4c26-a428-302b30622b4b-scripts\") pod \"glance-default-external-api-0\" (UID: \"383a8cff-14ac-4c26-a428-302b30622b4b\") " pod="openstack/glance-default-external-api-0" Jan 27 18:59:56 crc kubenswrapper[4853]: I0127 18:59:56.006687 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nk6t9\" (UniqueName: \"kubernetes.io/projected/1447491c-5e5b-412d-9cbd-b7bdc9a87797-kube-api-access-nk6t9\") pod \"glance-default-internal-api-0\" (UID: \"1447491c-5e5b-412d-9cbd-b7bdc9a87797\") " pod="openstack/glance-default-internal-api-0" Jan 27 18:59:56 crc kubenswrapper[4853]: I0127 18:59:56.012942 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-internal-api-0\" (UID: \"1447491c-5e5b-412d-9cbd-b7bdc9a87797\") " pod="openstack/glance-default-internal-api-0" Jan 27 18:59:56 crc kubenswrapper[4853]: I0127 18:59:56.028267 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-external-api-0\" (UID: \"383a8cff-14ac-4c26-a428-302b30622b4b\") " pod="openstack/glance-default-external-api-0" Jan 27 18:59:56 crc 
kubenswrapper[4853]: I0127 18:59:56.094257 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 27 18:59:56 crc kubenswrapper[4853]: I0127 18:59:56.109571 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 27 18:59:56 crc kubenswrapper[4853]: I0127 18:59:56.130562 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8311e980-1d3c-456d-9c17-5890f55976eb" path="/var/lib/kubelet/pods/8311e980-1d3c-456d-9c17-5890f55976eb/volumes" Jan 27 18:59:56 crc kubenswrapper[4853]: I0127 18:59:56.131256 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9bd9f2ec-70e8-40ef-8903-fcca47efbc95" path="/var/lib/kubelet/pods/9bd9f2ec-70e8-40ef-8903-fcca47efbc95/volumes" Jan 27 18:59:56 crc kubenswrapper[4853]: E0127 18:59:56.964242 4853 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-placement-api:current-podified" Jan 27 18:59:56 crc kubenswrapper[4853]: E0127 18:59:56.964430 4853 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:placement-db-sync,Image:quay.io/podified-antelope-centos9/openstack-placement-api:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/placement,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:placement-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-sr9k7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42482,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-db-sync-nzrdc_openstack(3bb20c48-23bc-4c0d-92de-f87015fac932): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 27 
18:59:56 crc kubenswrapper[4853]: E0127 18:59:56.965607 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"placement-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/placement-db-sync-nzrdc" podUID="3bb20c48-23bc-4c0d-92de-f87015fac932" Jan 27 18:59:57 crc kubenswrapper[4853]: I0127 18:59:57.046409 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-427vl" Jan 27 18:59:57 crc kubenswrapper[4853]: I0127 18:59:57.105875 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vpkzm\" (UniqueName: \"kubernetes.io/projected/9275e5f7-1630-4266-abb5-0ba701de33cb-kube-api-access-vpkzm\") pod \"9275e5f7-1630-4266-abb5-0ba701de33cb\" (UID: \"9275e5f7-1630-4266-abb5-0ba701de33cb\") " Jan 27 18:59:57 crc kubenswrapper[4853]: I0127 18:59:57.106148 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9275e5f7-1630-4266-abb5-0ba701de33cb-config-data\") pod \"9275e5f7-1630-4266-abb5-0ba701de33cb\" (UID: \"9275e5f7-1630-4266-abb5-0ba701de33cb\") " Jan 27 18:59:57 crc kubenswrapper[4853]: I0127 18:59:57.106180 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/9275e5f7-1630-4266-abb5-0ba701de33cb-credential-keys\") pod \"9275e5f7-1630-4266-abb5-0ba701de33cb\" (UID: \"9275e5f7-1630-4266-abb5-0ba701de33cb\") " Jan 27 18:59:57 crc kubenswrapper[4853]: I0127 18:59:57.106298 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9275e5f7-1630-4266-abb5-0ba701de33cb-scripts\") pod \"9275e5f7-1630-4266-abb5-0ba701de33cb\" (UID: \"9275e5f7-1630-4266-abb5-0ba701de33cb\") " Jan 27 18:59:57 crc kubenswrapper[4853]: I0127 18:59:57.106329 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9275e5f7-1630-4266-abb5-0ba701de33cb-combined-ca-bundle\") pod \"9275e5f7-1630-4266-abb5-0ba701de33cb\" (UID: \"9275e5f7-1630-4266-abb5-0ba701de33cb\") " Jan 27 18:59:57 crc kubenswrapper[4853]: I0127 18:59:57.106396 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/9275e5f7-1630-4266-abb5-0ba701de33cb-fernet-keys\") pod \"9275e5f7-1630-4266-abb5-0ba701de33cb\" (UID: \"9275e5f7-1630-4266-abb5-0ba701de33cb\") " Jan 27 18:59:57 crc kubenswrapper[4853]: I0127 18:59:57.112631 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9275e5f7-1630-4266-abb5-0ba701de33cb-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "9275e5f7-1630-4266-abb5-0ba701de33cb" (UID: "9275e5f7-1630-4266-abb5-0ba701de33cb"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:59:57 crc kubenswrapper[4853]: I0127 18:59:57.116740 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9275e5f7-1630-4266-abb5-0ba701de33cb-scripts" (OuterVolumeSpecName: "scripts") pod "9275e5f7-1630-4266-abb5-0ba701de33cb" (UID: "9275e5f7-1630-4266-abb5-0ba701de33cb"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:59:57 crc kubenswrapper[4853]: I0127 18:59:57.139732 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9275e5f7-1630-4266-abb5-0ba701de33cb-kube-api-access-vpkzm" (OuterVolumeSpecName: "kube-api-access-vpkzm") pod "9275e5f7-1630-4266-abb5-0ba701de33cb" (UID: "9275e5f7-1630-4266-abb5-0ba701de33cb"). InnerVolumeSpecName "kube-api-access-vpkzm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:59:57 crc kubenswrapper[4853]: I0127 18:59:57.170427 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9275e5f7-1630-4266-abb5-0ba701de33cb-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "9275e5f7-1630-4266-abb5-0ba701de33cb" (UID: "9275e5f7-1630-4266-abb5-0ba701de33cb"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:59:57 crc kubenswrapper[4853]: I0127 18:59:57.176175 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9275e5f7-1630-4266-abb5-0ba701de33cb-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9275e5f7-1630-4266-abb5-0ba701de33cb" (UID: "9275e5f7-1630-4266-abb5-0ba701de33cb"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:59:57 crc kubenswrapper[4853]: I0127 18:59:57.180402 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9275e5f7-1630-4266-abb5-0ba701de33cb-config-data" (OuterVolumeSpecName: "config-data") pod "9275e5f7-1630-4266-abb5-0ba701de33cb" (UID: "9275e5f7-1630-4266-abb5-0ba701de33cb"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:59:57 crc kubenswrapper[4853]: I0127 18:59:57.209754 4853 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9275e5f7-1630-4266-abb5-0ba701de33cb-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 18:59:57 crc kubenswrapper[4853]: I0127 18:59:57.210224 4853 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9275e5f7-1630-4266-abb5-0ba701de33cb-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 18:59:57 crc kubenswrapper[4853]: I0127 18:59:57.210297 4853 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/9275e5f7-1630-4266-abb5-0ba701de33cb-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 27 18:59:57 crc kubenswrapper[4853]: I0127 18:59:57.210396 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vpkzm\" (UniqueName: \"kubernetes.io/projected/9275e5f7-1630-4266-abb5-0ba701de33cb-kube-api-access-vpkzm\") on node \"crc\" DevicePath \"\"" Jan 27 18:59:57 crc kubenswrapper[4853]: I0127 18:59:57.210468 4853 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9275e5f7-1630-4266-abb5-0ba701de33cb-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 18:59:57 crc kubenswrapper[4853]: I0127 18:59:57.210519 4853 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/9275e5f7-1630-4266-abb5-0ba701de33cb-credential-keys\") on node \"crc\" DevicePath \"\"" Jan 27 18:59:57 crc kubenswrapper[4853]: I0127 18:59:57.649275 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/keystone-bootstrap-427vl" event={"ID":"9275e5f7-1630-4266-abb5-0ba701de33cb","Type":"ContainerDied","Data":"7bf3a61e50951217f32fd03b0e41a6ce429e8b04ccfafce4b37c2af7e8be915a"} Jan 27 18:59:57 crc kubenswrapper[4853]: I0127 18:59:57.649563 4853 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7bf3a61e50951217f32fd03b0e41a6ce429e8b04ccfafce4b37c2af7e8be915a" Jan 27 18:59:57 crc kubenswrapper[4853]: I0127 18:59:57.649301 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-427vl" Jan 27 18:59:57 crc kubenswrapper[4853]: E0127 18:59:57.651354 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"placement-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-placement-api:current-podified\\\"\"" pod="openstack/placement-db-sync-nzrdc" podUID="3bb20c48-23bc-4c0d-92de-f87015fac932" Jan 27 18:59:58 crc kubenswrapper[4853]: I0127 18:59:58.138657 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-427vl"] Jan 27 18:59:58 crc kubenswrapper[4853]: I0127 18:59:58.147307 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-427vl"] Jan 27 18:59:58 crc kubenswrapper[4853]: I0127 18:59:58.248156 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-928cv"] Jan 27 18:59:58 crc kubenswrapper[4853]: E0127 18:59:58.248633 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9275e5f7-1630-4266-abb5-0ba701de33cb" containerName="keystone-bootstrap" Jan 27 18:59:58 crc kubenswrapper[4853]: I0127 18:59:58.248656 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="9275e5f7-1630-4266-abb5-0ba701de33cb" containerName="keystone-bootstrap" Jan 27 18:59:58 crc kubenswrapper[4853]: I0127 18:59:58.248871 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="9275e5f7-1630-4266-abb5-0ba701de33cb" containerName="keystone-bootstrap" Jan 27 18:59:58 crc kubenswrapper[4853]: I0127 18:59:58.249758 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-928cv" Jan 27 18:59:58 crc kubenswrapper[4853]: I0127 18:59:58.252965 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 27 18:59:58 crc kubenswrapper[4853]: I0127 18:59:58.253186 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Jan 27 18:59:58 crc kubenswrapper[4853]: I0127 18:59:58.253332 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 27 18:59:58 crc kubenswrapper[4853]: I0127 18:59:58.253425 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 27 18:59:58 crc kubenswrapper[4853]: I0127 18:59:58.253475 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-kgs4m" Jan 27 18:59:58 crc kubenswrapper[4853]: I0127 18:59:58.269210 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-928cv"] Jan 27 18:59:58 crc kubenswrapper[4853]: I0127 18:59:58.332262 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/e0dddcf5-0747-4132-b14f-f67160ca5f27-credential-keys\") pod \"keystone-bootstrap-928cv\" (UID: \"e0dddcf5-0747-4132-b14f-f67160ca5f27\") " pod="openstack/keystone-bootstrap-928cv" Jan 27 18:59:58 crc kubenswrapper[4853]: I0127 18:59:58.332335 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e0dddcf5-0747-4132-b14f-f67160ca5f27-config-data\") pod \"keystone-bootstrap-928cv\" (UID: \"e0dddcf5-0747-4132-b14f-f67160ca5f27\") " pod="openstack/keystone-bootstrap-928cv" Jan 27 18:59:58 crc kubenswrapper[4853]: I0127 18:59:58.332390 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/e0dddcf5-0747-4132-b14f-f67160ca5f27-fernet-keys\") pod \"keystone-bootstrap-928cv\" (UID: \"e0dddcf5-0747-4132-b14f-f67160ca5f27\") " pod="openstack/keystone-bootstrap-928cv" Jan 27 18:59:58 crc kubenswrapper[4853]: I0127 18:59:58.332421 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e0dddcf5-0747-4132-b14f-f67160ca5f27-scripts\") pod \"keystone-bootstrap-928cv\" (UID: \"e0dddcf5-0747-4132-b14f-f67160ca5f27\") " pod="openstack/keystone-bootstrap-928cv" Jan 27 18:59:58 crc kubenswrapper[4853]: I0127 18:59:58.332466 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e0dddcf5-0747-4132-b14f-f67160ca5f27-combined-ca-bundle\") pod \"keystone-bootstrap-928cv\" (UID: \"e0dddcf5-0747-4132-b14f-f67160ca5f27\") " pod="openstack/keystone-bootstrap-928cv" Jan 27 18:59:58 crc kubenswrapper[4853]: I0127 18:59:58.332560 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rz552\" (UniqueName: \"kubernetes.io/projected/e0dddcf5-0747-4132-b14f-f67160ca5f27-kube-api-access-rz552\") pod \"keystone-bootstrap-928cv\" (UID: \"e0dddcf5-0747-4132-b14f-f67160ca5f27\") " pod="openstack/keystone-bootstrap-928cv" Jan 27 18:59:58 crc kubenswrapper[4853]: I0127 18:59:58.446729 4853 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/e0dddcf5-0747-4132-b14f-f67160ca5f27-fernet-keys\") pod \"keystone-bootstrap-928cv\" (UID: \"e0dddcf5-0747-4132-b14f-f67160ca5f27\") " pod="openstack/keystone-bootstrap-928cv" Jan 27 18:59:58 crc kubenswrapper[4853]: I0127 18:59:58.446802 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e0dddcf5-0747-4132-b14f-f67160ca5f27-scripts\") pod \"keystone-bootstrap-928cv\" (UID: \"e0dddcf5-0747-4132-b14f-f67160ca5f27\") " pod="openstack/keystone-bootstrap-928cv" Jan 27 18:59:58 crc kubenswrapper[4853]: I0127 18:59:58.446850 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e0dddcf5-0747-4132-b14f-f67160ca5f27-combined-ca-bundle\") pod \"keystone-bootstrap-928cv\" (UID: \"e0dddcf5-0747-4132-b14f-f67160ca5f27\") " pod="openstack/keystone-bootstrap-928cv" Jan 27 18:59:58 crc kubenswrapper[4853]: I0127 18:59:58.446927 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rz552\" (UniqueName: \"kubernetes.io/projected/e0dddcf5-0747-4132-b14f-f67160ca5f27-kube-api-access-rz552\") pod \"keystone-bootstrap-928cv\" (UID: \"e0dddcf5-0747-4132-b14f-f67160ca5f27\") " pod="openstack/keystone-bootstrap-928cv" Jan 27 18:59:58 crc kubenswrapper[4853]: I0127 18:59:58.446982 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/e0dddcf5-0747-4132-b14f-f67160ca5f27-credential-keys\") pod \"keystone-bootstrap-928cv\" (UID: \"e0dddcf5-0747-4132-b14f-f67160ca5f27\") " pod="openstack/keystone-bootstrap-928cv" Jan 27 18:59:58 crc kubenswrapper[4853]: I0127 18:59:58.447030 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e0dddcf5-0747-4132-b14f-f67160ca5f27-config-data\") pod \"keystone-bootstrap-928cv\" (UID: \"e0dddcf5-0747-4132-b14f-f67160ca5f27\") " pod="openstack/keystone-bootstrap-928cv" Jan 27 18:59:58 crc kubenswrapper[4853]: I0127 18:59:58.452776 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e0dddcf5-0747-4132-b14f-f67160ca5f27-config-data\") pod \"keystone-bootstrap-928cv\" (UID: \"e0dddcf5-0747-4132-b14f-f67160ca5f27\") " pod="openstack/keystone-bootstrap-928cv" Jan 27 18:59:58 crc kubenswrapper[4853]: I0127 18:59:58.453141 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e0dddcf5-0747-4132-b14f-f67160ca5f27-scripts\") pod \"keystone-bootstrap-928cv\" (UID: \"e0dddcf5-0747-4132-b14f-f67160ca5f27\") " pod="openstack/keystone-bootstrap-928cv" Jan 27 18:59:58 crc kubenswrapper[4853]: I0127 18:59:58.453571 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e0dddcf5-0747-4132-b14f-f67160ca5f27-combined-ca-bundle\") pod \"keystone-bootstrap-928cv\" (UID: \"e0dddcf5-0747-4132-b14f-f67160ca5f27\") " pod="openstack/keystone-bootstrap-928cv" Jan 27 18:59:58 crc kubenswrapper[4853]: I0127 18:59:58.454687 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/e0dddcf5-0747-4132-b14f-f67160ca5f27-credential-keys\") pod \"keystone-bootstrap-928cv\" (UID: 
\"e0dddcf5-0747-4132-b14f-f67160ca5f27\") " pod="openstack/keystone-bootstrap-928cv" Jan 27 18:59:58 crc kubenswrapper[4853]: I0127 18:59:58.459693 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/e0dddcf5-0747-4132-b14f-f67160ca5f27-fernet-keys\") pod \"keystone-bootstrap-928cv\" (UID: \"e0dddcf5-0747-4132-b14f-f67160ca5f27\") " pod="openstack/keystone-bootstrap-928cv" Jan 27 18:59:58 crc kubenswrapper[4853]: I0127 18:59:58.469810 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rz552\" (UniqueName: \"kubernetes.io/projected/e0dddcf5-0747-4132-b14f-f67160ca5f27-kube-api-access-rz552\") pod \"keystone-bootstrap-928cv\" (UID: \"e0dddcf5-0747-4132-b14f-f67160ca5f27\") " pod="openstack/keystone-bootstrap-928cv" Jan 27 18:59:58 crc kubenswrapper[4853]: I0127 18:59:58.633601 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-928cv" Jan 27 19:00:00 crc kubenswrapper[4853]: I0127 19:00:00.122300 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9275e5f7-1630-4266-abb5-0ba701de33cb" path="/var/lib/kubelet/pods/9275e5f7-1630-4266-abb5-0ba701de33cb/volumes" Jan 27 19:00:00 crc kubenswrapper[4853]: I0127 19:00:00.151941 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492340-bdc99"] Jan 27 19:00:00 crc kubenswrapper[4853]: I0127 19:00:00.153351 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492340-bdc99" Jan 27 19:00:00 crc kubenswrapper[4853]: I0127 19:00:00.156504 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 27 19:00:00 crc kubenswrapper[4853]: I0127 19:00:00.156541 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 27 19:00:00 crc kubenswrapper[4853]: I0127 19:00:00.203595 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492340-bdc99"] Jan 27 19:00:00 crc kubenswrapper[4853]: I0127 19:00:00.299777 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d6cxt\" (UniqueName: \"kubernetes.io/projected/b75d2295-47d6-44cb-b492-f2f84fcb7964-kube-api-access-d6cxt\") pod \"collect-profiles-29492340-bdc99\" (UID: \"b75d2295-47d6-44cb-b492-f2f84fcb7964\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492340-bdc99" Jan 27 19:00:00 crc kubenswrapper[4853]: I0127 19:00:00.299860 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b75d2295-47d6-44cb-b492-f2f84fcb7964-secret-volume\") pod \"collect-profiles-29492340-bdc99\" (UID: \"b75d2295-47d6-44cb-b492-f2f84fcb7964\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492340-bdc99" Jan 27 19:00:00 crc kubenswrapper[4853]: I0127 19:00:00.300275 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b75d2295-47d6-44cb-b492-f2f84fcb7964-config-volume\") pod \"collect-profiles-29492340-bdc99\" (UID: \"b75d2295-47d6-44cb-b492-f2f84fcb7964\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29492340-bdc99" Jan 27 19:00:00 crc kubenswrapper[4853]: I0127 19:00:00.403277 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d6cxt\" (UniqueName: \"kubernetes.io/projected/b75d2295-47d6-44cb-b492-f2f84fcb7964-kube-api-access-d6cxt\") pod \"collect-profiles-29492340-bdc99\" (UID: \"b75d2295-47d6-44cb-b492-f2f84fcb7964\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492340-bdc99" Jan 27 19:00:00 crc kubenswrapper[4853]: I0127 19:00:00.403408 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b75d2295-47d6-44cb-b492-f2f84fcb7964-secret-volume\") pod \"collect-profiles-29492340-bdc99\" (UID: \"b75d2295-47d6-44cb-b492-f2f84fcb7964\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492340-bdc99" Jan 27 19:00:00 crc kubenswrapper[4853]: I0127 19:00:00.403615 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b75d2295-47d6-44cb-b492-f2f84fcb7964-config-volume\") pod \"collect-profiles-29492340-bdc99\" (UID: \"b75d2295-47d6-44cb-b492-f2f84fcb7964\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492340-bdc99" Jan 27 19:00:00 crc kubenswrapper[4853]: I0127 19:00:00.503625 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b75d2295-47d6-44cb-b492-f2f84fcb7964-config-volume\") pod \"collect-profiles-29492340-bdc99\" (UID: \"b75d2295-47d6-44cb-b492-f2f84fcb7964\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492340-bdc99" Jan 27 19:00:00 crc kubenswrapper[4853]: I0127 19:00:00.507458 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d6cxt\" (UniqueName: \"kubernetes.io/projected/b75d2295-47d6-44cb-b492-f2f84fcb7964-kube-api-access-d6cxt\") pod \"collect-profiles-29492340-bdc99\" (UID: \"b75d2295-47d6-44cb-b492-f2f84fcb7964\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492340-bdc99" Jan 27 19:00:00 crc kubenswrapper[4853]: I0127 19:00:00.508387 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b75d2295-47d6-44cb-b492-f2f84fcb7964-secret-volume\") pod \"collect-profiles-29492340-bdc99\" (UID: \"b75d2295-47d6-44cb-b492-f2f84fcb7964\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492340-bdc99" Jan 27 19:00:00 crc kubenswrapper[4853]: E0127 19:00:00.625647 4853 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-ceilometer-central:current-podified" Jan 27 19:00:00 crc kubenswrapper[4853]: E0127 19:00:00.626409 4853 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.io/podified-antelope-centos9/openstack-ceilometer-central:current-podified,Command:[/bin/bash],Args:[-c 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n74h5cfh59ch56chc7h577h568h545hbch8bh9h656h68dh66dhdfh65fh5f5h5f4hcdh549h57dh55ch646h567h85h564h8dh699h679h565h654hbdq,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-slfzv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 /var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(afc78a65-bfa6-42ff-a84a-f90dd740ffbf): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 27 19:00:00 crc kubenswrapper[4853]: I0127 19:00:00.778690 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492340-bdc99" Jan 27 19:00:02 crc kubenswrapper[4853]: I0127 19:00:02.572727 4853 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-74f6bcbc87-h8qfz" podUID="7922b820-92c1-46d8-a5a0-6f58e05674b5" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.127:5353: i/o timeout" Jan 27 19:00:02 crc kubenswrapper[4853]: I0127 19:00:02.689145 4853 generic.go:334] "Generic (PLEG): container finished" podID="8e865945-20c8-4b2d-a52b-62dd1450181b" containerID="72f8704515accc7e94f5ee597aea700d0822e5bfa077b1dfd32254b94ad59eac" exitCode=0 Jan 27 19:00:02 crc kubenswrapper[4853]: I0127 19:00:02.689197 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-9bclj" event={"ID":"8e865945-20c8-4b2d-a52b-62dd1450181b","Type":"ContainerDied","Data":"72f8704515accc7e94f5ee597aea700d0822e5bfa077b1dfd32254b94ad59eac"} Jan 27 19:00:07 crc kubenswrapper[4853]: I0127 19:00:07.573407 4853 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-74f6bcbc87-h8qfz" podUID="7922b820-92c1-46d8-a5a0-6f58e05674b5" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.127:5353: i/o timeout" Jan 27 19:00:07 crc kubenswrapper[4853]: I0127 19:00:07.574330 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-74f6bcbc87-h8qfz" Jan 27 19:00:11 crc kubenswrapper[4853]: I0127 19:00:11.652533 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-74f6bcbc87-h8qfz" Jan 27 19:00:11 crc kubenswrapper[4853]: I0127 19:00:11.658788 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-9bclj" Jan 27 19:00:11 crc kubenswrapper[4853]: I0127 19:00:11.770466 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74f6bcbc87-h8qfz" event={"ID":"7922b820-92c1-46d8-a5a0-6f58e05674b5","Type":"ContainerDied","Data":"3daa75f666886a5113d69dcf3f3874071d73397973416e61f132d0a8ad84ef17"} Jan 27 19:00:11 crc kubenswrapper[4853]: I0127 19:00:11.770514 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-74f6bcbc87-h8qfz" Jan 27 19:00:11 crc kubenswrapper[4853]: I0127 19:00:11.773254 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-9bclj" event={"ID":"8e865945-20c8-4b2d-a52b-62dd1450181b","Type":"ContainerDied","Data":"ab49a4e87eec020a273dead26c44c58eec5d09587d812c8fe3b11238807d2947"} Jan 27 19:00:11 crc kubenswrapper[4853]: I0127 19:00:11.773308 4853 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ab49a4e87eec020a273dead26c44c58eec5d09587d812c8fe3b11238807d2947" Jan 27 19:00:11 crc kubenswrapper[4853]: I0127 19:00:11.773369 4853 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-9bclj" Jan 27 19:00:11 crc kubenswrapper[4853]: I0127 19:00:11.786073 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7922b820-92c1-46d8-a5a0-6f58e05674b5-ovsdbserver-sb\") pod \"7922b820-92c1-46d8-a5a0-6f58e05674b5\" (UID: \"7922b820-92c1-46d8-a5a0-6f58e05674b5\") " Jan 27 19:00:11 crc kubenswrapper[4853]: I0127 19:00:11.786207 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7922b820-92c1-46d8-a5a0-6f58e05674b5-dns-swift-storage-0\") pod \"7922b820-92c1-46d8-a5a0-6f58e05674b5\" (UID: \"7922b820-92c1-46d8-a5a0-6f58e05674b5\") " Jan 27 19:00:11 crc kubenswrapper[4853]: I0127 19:00:11.786252 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7922b820-92c1-46d8-a5a0-6f58e05674b5-dns-svc\") pod \"7922b820-92c1-46d8-a5a0-6f58e05674b5\" (UID: \"7922b820-92c1-46d8-a5a0-6f58e05674b5\") " Jan 27 19:00:11 crc kubenswrapper[4853]: I0127 19:00:11.786384 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/8e865945-20c8-4b2d-a52b-62dd1450181b-config\") pod \"8e865945-20c8-4b2d-a52b-62dd1450181b\" (UID: \"8e865945-20c8-4b2d-a52b-62dd1450181b\") " Jan 27 19:00:11 crc kubenswrapper[4853]: I0127 19:00:11.786459 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8e865945-20c8-4b2d-a52b-62dd1450181b-combined-ca-bundle\") pod \"8e865945-20c8-4b2d-a52b-62dd1450181b\" (UID: \"8e865945-20c8-4b2d-a52b-62dd1450181b\") " Jan 27 19:00:11 crc kubenswrapper[4853]: I0127 19:00:11.786597 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7922b820-92c1-46d8-a5a0-6f58e05674b5-ovsdbserver-nb\") pod \"7922b820-92c1-46d8-a5a0-6f58e05674b5\" (UID: \"7922b820-92c1-46d8-a5a0-6f58e05674b5\") " Jan 27 19:00:11 crc kubenswrapper[4853]: I0127 19:00:11.786626 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m8k4d\" (UniqueName: \"kubernetes.io/projected/7922b820-92c1-46d8-a5a0-6f58e05674b5-kube-api-access-m8k4d\") pod \"7922b820-92c1-46d8-a5a0-6f58e05674b5\" (UID: \"7922b820-92c1-46d8-a5a0-6f58e05674b5\") " Jan 27 19:00:11 crc kubenswrapper[4853]: I0127 19:00:11.786665 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wtgnl\" (UniqueName: \"kubernetes.io/projected/8e865945-20c8-4b2d-a52b-62dd1450181b-kube-api-access-wtgnl\") pod \"8e865945-20c8-4b2d-a52b-62dd1450181b\" (UID: \"8e865945-20c8-4b2d-a52b-62dd1450181b\") " Jan 27 19:00:11 crc kubenswrapper[4853]: I0127 19:00:11.786718 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7922b820-92c1-46d8-a5a0-6f58e05674b5-config\") pod \"7922b820-92c1-46d8-a5a0-6f58e05674b5\" (UID: \"7922b820-92c1-46d8-a5a0-6f58e05674b5\") " Jan 27 19:00:11 crc kubenswrapper[4853]: I0127 19:00:11.793206 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7922b820-92c1-46d8-a5a0-6f58e05674b5-kube-api-access-m8k4d" (OuterVolumeSpecName: "kube-api-access-m8k4d") pod 
"7922b820-92c1-46d8-a5a0-6f58e05674b5" (UID: "7922b820-92c1-46d8-a5a0-6f58e05674b5"). InnerVolumeSpecName "kube-api-access-m8k4d". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 19:00:11 crc kubenswrapper[4853]: I0127 19:00:11.795040 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8e865945-20c8-4b2d-a52b-62dd1450181b-kube-api-access-wtgnl" (OuterVolumeSpecName: "kube-api-access-wtgnl") pod "8e865945-20c8-4b2d-a52b-62dd1450181b" (UID: "8e865945-20c8-4b2d-a52b-62dd1450181b"). InnerVolumeSpecName "kube-api-access-wtgnl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 19:00:11 crc kubenswrapper[4853]: I0127 19:00:11.828093 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8e865945-20c8-4b2d-a52b-62dd1450181b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8e865945-20c8-4b2d-a52b-62dd1450181b" (UID: "8e865945-20c8-4b2d-a52b-62dd1450181b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 19:00:11 crc kubenswrapper[4853]: I0127 19:00:11.847511 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8e865945-20c8-4b2d-a52b-62dd1450181b-config" (OuterVolumeSpecName: "config") pod "8e865945-20c8-4b2d-a52b-62dd1450181b" (UID: "8e865945-20c8-4b2d-a52b-62dd1450181b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 19:00:11 crc kubenswrapper[4853]: I0127 19:00:11.858009 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7922b820-92c1-46d8-a5a0-6f58e05674b5-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "7922b820-92c1-46d8-a5a0-6f58e05674b5" (UID: "7922b820-92c1-46d8-a5a0-6f58e05674b5"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 19:00:11 crc kubenswrapper[4853]: I0127 19:00:11.869828 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7922b820-92c1-46d8-a5a0-6f58e05674b5-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "7922b820-92c1-46d8-a5a0-6f58e05674b5" (UID: "7922b820-92c1-46d8-a5a0-6f58e05674b5"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 19:00:11 crc kubenswrapper[4853]: I0127 19:00:11.869893 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7922b820-92c1-46d8-a5a0-6f58e05674b5-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "7922b820-92c1-46d8-a5a0-6f58e05674b5" (UID: "7922b820-92c1-46d8-a5a0-6f58e05674b5"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 19:00:11 crc kubenswrapper[4853]: I0127 19:00:11.870359 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7922b820-92c1-46d8-a5a0-6f58e05674b5-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "7922b820-92c1-46d8-a5a0-6f58e05674b5" (UID: "7922b820-92c1-46d8-a5a0-6f58e05674b5"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 19:00:11 crc kubenswrapper[4853]: I0127 19:00:11.874948 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7922b820-92c1-46d8-a5a0-6f58e05674b5-config" (OuterVolumeSpecName: "config") pod "7922b820-92c1-46d8-a5a0-6f58e05674b5" (UID: "7922b820-92c1-46d8-a5a0-6f58e05674b5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 19:00:11 crc kubenswrapper[4853]: I0127 19:00:11.889663 4853 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7922b820-92c1-46d8-a5a0-6f58e05674b5-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 27 19:00:11 crc kubenswrapper[4853]: I0127 19:00:11.889705 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m8k4d\" (UniqueName: \"kubernetes.io/projected/7922b820-92c1-46d8-a5a0-6f58e05674b5-kube-api-access-m8k4d\") on node \"crc\" DevicePath \"\"" Jan 27 19:00:11 crc kubenswrapper[4853]: I0127 19:00:11.889723 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wtgnl\" (UniqueName: \"kubernetes.io/projected/8e865945-20c8-4b2d-a52b-62dd1450181b-kube-api-access-wtgnl\") on node \"crc\" DevicePath \"\"" Jan 27 19:00:11 crc kubenswrapper[4853]: I0127 19:00:11.889736 4853 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7922b820-92c1-46d8-a5a0-6f58e05674b5-config\") on node \"crc\" DevicePath \"\"" Jan 27 19:00:11 crc kubenswrapper[4853]: I0127 19:00:11.889745 4853 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7922b820-92c1-46d8-a5a0-6f58e05674b5-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 27 19:00:11 crc kubenswrapper[4853]: I0127 19:00:11.889753 4853 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7922b820-92c1-46d8-a5a0-6f58e05674b5-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 27 19:00:11 crc kubenswrapper[4853]: I0127 19:00:11.889762 4853 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7922b820-92c1-46d8-a5a0-6f58e05674b5-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 27 19:00:11 crc kubenswrapper[4853]: I0127 19:00:11.889771 4853 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/8e865945-20c8-4b2d-a52b-62dd1450181b-config\") on node \"crc\" DevicePath \"\"" Jan 27 19:00:11 crc kubenswrapper[4853]: I0127 19:00:11.889779 4853 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8e865945-20c8-4b2d-a52b-62dd1450181b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 19:00:12 crc kubenswrapper[4853]: I0127 19:00:12.153645 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-74f6bcbc87-h8qfz"] Jan 27 19:00:12 crc kubenswrapper[4853]: I0127 19:00:12.153718 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-74f6bcbc87-h8qfz"] Jan 27 19:00:12 crc kubenswrapper[4853]: I0127 19:00:12.575286 4853 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-74f6bcbc87-h8qfz" podUID="7922b820-92c1-46d8-a5a0-6f58e05674b5" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.127:5353: i/o timeout" Jan 27 19:00:13 crc 
kubenswrapper[4853]: E0127 19:00:13.049998 4853 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified" Jan 27 19:00:13 crc kubenswrapper[4853]: E0127 19:00:13.051210 4853 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cinder-db-sync,Image:quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_set_configs && /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-machine-id,ReadOnly:true,MountPath:/etc/machine-id,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/config-data/merged,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/cinder/cinder.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-t8l6t,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-db-sync-rfjdk_openstack(b1d33900-476d-4c86-a501-4490c01000ca): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 27 19:00:13 crc kubenswrapper[4853]: E0127 19:00:13.052591 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/cinder-db-sync-rfjdk" podUID="b1d33900-476d-4c86-a501-4490c01000ca" Jan 27 19:00:13 crc kubenswrapper[4853]: I0127 19:00:13.075631 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-82msw"] Jan 27 19:00:13 crc kubenswrapper[4853]: E0127 19:00:13.076177 4853 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="7922b820-92c1-46d8-a5a0-6f58e05674b5" containerName="init" Jan 27 19:00:13 crc kubenswrapper[4853]: I0127 19:00:13.076200 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="7922b820-92c1-46d8-a5a0-6f58e05674b5" containerName="init" Jan 27 19:00:13 crc kubenswrapper[4853]: E0127 19:00:13.076223 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e865945-20c8-4b2d-a52b-62dd1450181b" containerName="neutron-db-sync" Jan 27 19:00:13 crc kubenswrapper[4853]: I0127 19:00:13.076232 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e865945-20c8-4b2d-a52b-62dd1450181b" containerName="neutron-db-sync" Jan 27 19:00:13 crc kubenswrapper[4853]: E0127 19:00:13.076249 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7922b820-92c1-46d8-a5a0-6f58e05674b5" containerName="dnsmasq-dns" Jan 27 19:00:13 crc kubenswrapper[4853]: I0127 19:00:13.076256 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="7922b820-92c1-46d8-a5a0-6f58e05674b5" containerName="dnsmasq-dns" Jan 27 19:00:13 crc kubenswrapper[4853]: I0127 19:00:13.076437 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="8e865945-20c8-4b2d-a52b-62dd1450181b" containerName="neutron-db-sync" Jan 27 19:00:13 crc kubenswrapper[4853]: I0127 19:00:13.076456 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="7922b820-92c1-46d8-a5a0-6f58e05674b5" containerName="dnsmasq-dns" Jan 27 19:00:13 crc kubenswrapper[4853]: I0127 19:00:13.077477 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-55f844cf75-82msw" Jan 27 19:00:13 crc kubenswrapper[4853]: I0127 19:00:13.088195 4853 scope.go:117] "RemoveContainer" containerID="9a0d5065adb04fe451ec46d7dc537dc2f2c62e4343095f159ac567f8c87756c1" Jan 27 19:00:13 crc kubenswrapper[4853]: I0127 19:00:13.119468 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-82msw"] Jan 27 19:00:13 crc kubenswrapper[4853]: I0127 19:00:13.178052 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-69cd5c4bb8-2fh98"] Jan 27 19:00:13 crc kubenswrapper[4853]: I0127 19:00:13.180161 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-69cd5c4bb8-2fh98" Jan 27 19:00:13 crc kubenswrapper[4853]: I0127 19:00:13.186841 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-69cd5c4bb8-2fh98"] Jan 27 19:00:13 crc kubenswrapper[4853]: I0127 19:00:13.187231 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Jan 27 19:00:13 crc kubenswrapper[4853]: I0127 19:00:13.187505 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-25lg4" Jan 27 19:00:13 crc kubenswrapper[4853]: I0127 19:00:13.187910 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Jan 27 19:00:13 crc kubenswrapper[4853]: I0127 19:00:13.188016 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-ovndbs" Jan 27 19:00:13 crc kubenswrapper[4853]: I0127 19:00:13.221398 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/07e83bfd-c7f7-4795-9ae3-81a358092c4e-dns-svc\") pod \"dnsmasq-dns-55f844cf75-82msw\" (UID: \"07e83bfd-c7f7-4795-9ae3-81a358092c4e\") " pod="openstack/dnsmasq-dns-55f844cf75-82msw" Jan 27 19:00:13 crc kubenswrapper[4853]: I0127 19:00:13.221460 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/07e83bfd-c7f7-4795-9ae3-81a358092c4e-ovsdbserver-nb\") pod \"dnsmasq-dns-55f844cf75-82msw\" (UID: \"07e83bfd-c7f7-4795-9ae3-81a358092c4e\") " pod="openstack/dnsmasq-dns-55f844cf75-82msw" Jan 27 19:00:13 crc kubenswrapper[4853]: I0127 19:00:13.221735 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/07e83bfd-c7f7-4795-9ae3-81a358092c4e-ovsdbserver-sb\") pod \"dnsmasq-dns-55f844cf75-82msw\" (UID: \"07e83bfd-c7f7-4795-9ae3-81a358092c4e\") " pod="openstack/dnsmasq-dns-55f844cf75-82msw" Jan 27 19:00:13 crc kubenswrapper[4853]: I0127 19:00:13.222070 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q6sng\" (UniqueName: \"kubernetes.io/projected/07e83bfd-c7f7-4795-9ae3-81a358092c4e-kube-api-access-q6sng\") pod \"dnsmasq-dns-55f844cf75-82msw\" (UID: \"07e83bfd-c7f7-4795-9ae3-81a358092c4e\") " pod="openstack/dnsmasq-dns-55f844cf75-82msw" Jan 27 19:00:13 crc kubenswrapper[4853]: I0127 19:00:13.222525 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/07e83bfd-c7f7-4795-9ae3-81a358092c4e-dns-swift-storage-0\") pod \"dnsmasq-dns-55f844cf75-82msw\" (UID: \"07e83bfd-c7f7-4795-9ae3-81a358092c4e\") " pod="openstack/dnsmasq-dns-55f844cf75-82msw" Jan 27 19:00:13 crc kubenswrapper[4853]: I0127 19:00:13.222766 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/07e83bfd-c7f7-4795-9ae3-81a358092c4e-config\") pod \"dnsmasq-dns-55f844cf75-82msw\" (UID: \"07e83bfd-c7f7-4795-9ae3-81a358092c4e\") " pod="openstack/dnsmasq-dns-55f844cf75-82msw" Jan 27 19:00:13 crc kubenswrapper[4853]: I0127 19:00:13.325389 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/secret/394c98c3-7f2f-49d7-8f1a-c860eeaffb7e-config\") pod \"neutron-69cd5c4bb8-2fh98\" (UID: \"394c98c3-7f2f-49d7-8f1a-c860eeaffb7e\") " pod="openstack/neutron-69cd5c4bb8-2fh98" Jan 27 19:00:13 crc kubenswrapper[4853]: I0127 19:00:13.325482 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/07e83bfd-c7f7-4795-9ae3-81a358092c4e-config\") pod \"dnsmasq-dns-55f844cf75-82msw\" (UID: \"07e83bfd-c7f7-4795-9ae3-81a358092c4e\") " pod="openstack/dnsmasq-dns-55f844cf75-82msw" Jan 27 19:00:13 crc kubenswrapper[4853]: I0127 19:00:13.325539 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/394c98c3-7f2f-49d7-8f1a-c860eeaffb7e-httpd-config\") pod \"neutron-69cd5c4bb8-2fh98\" (UID: \"394c98c3-7f2f-49d7-8f1a-c860eeaffb7e\") " pod="openstack/neutron-69cd5c4bb8-2fh98" Jan 27 19:00:13 crc kubenswrapper[4853]: I0127 19:00:13.325578 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/07e83bfd-c7f7-4795-9ae3-81a358092c4e-dns-svc\") pod \"dnsmasq-dns-55f844cf75-82msw\" (UID: \"07e83bfd-c7f7-4795-9ae3-81a358092c4e\") " pod="openstack/dnsmasq-dns-55f844cf75-82msw" Jan 27 19:00:13 crc kubenswrapper[4853]: I0127 19:00:13.325596 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/07e83bfd-c7f7-4795-9ae3-81a358092c4e-ovsdbserver-nb\") pod \"dnsmasq-dns-55f844cf75-82msw\" (UID: \"07e83bfd-c7f7-4795-9ae3-81a358092c4e\") " pod="openstack/dnsmasq-dns-55f844cf75-82msw" Jan 27 19:00:13 crc kubenswrapper[4853]: I0127 19:00:13.325633 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/394c98c3-7f2f-49d7-8f1a-c860eeaffb7e-combined-ca-bundle\") pod \"neutron-69cd5c4bb8-2fh98\" (UID: \"394c98c3-7f2f-49d7-8f1a-c860eeaffb7e\") " pod="openstack/neutron-69cd5c4bb8-2fh98" Jan 27 19:00:13 crc kubenswrapper[4853]: I0127 19:00:13.325668 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/07e83bfd-c7f7-4795-9ae3-81a358092c4e-ovsdbserver-sb\") pod \"dnsmasq-dns-55f844cf75-82msw\" (UID: \"07e83bfd-c7f7-4795-9ae3-81a358092c4e\") " pod="openstack/dnsmasq-dns-55f844cf75-82msw" Jan 27 19:00:13 crc kubenswrapper[4853]: I0127 19:00:13.325704 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q6sng\" (UniqueName: \"kubernetes.io/projected/07e83bfd-c7f7-4795-9ae3-81a358092c4e-kube-api-access-q6sng\") pod \"dnsmasq-dns-55f844cf75-82msw\" (UID: \"07e83bfd-c7f7-4795-9ae3-81a358092c4e\") " pod="openstack/dnsmasq-dns-55f844cf75-82msw" Jan 27 19:00:13 crc kubenswrapper[4853]: I0127 19:00:13.325745 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/394c98c3-7f2f-49d7-8f1a-c860eeaffb7e-ovndb-tls-certs\") pod \"neutron-69cd5c4bb8-2fh98\" (UID: \"394c98c3-7f2f-49d7-8f1a-c860eeaffb7e\") " pod="openstack/neutron-69cd5c4bb8-2fh98" Jan 27 19:00:13 crc kubenswrapper[4853]: I0127 19:00:13.325772 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: 
\"kubernetes.io/configmap/07e83bfd-c7f7-4795-9ae3-81a358092c4e-dns-swift-storage-0\") pod \"dnsmasq-dns-55f844cf75-82msw\" (UID: \"07e83bfd-c7f7-4795-9ae3-81a358092c4e\") " pod="openstack/dnsmasq-dns-55f844cf75-82msw" Jan 27 19:00:13 crc kubenswrapper[4853]: I0127 19:00:13.325793 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gzccl\" (UniqueName: \"kubernetes.io/projected/394c98c3-7f2f-49d7-8f1a-c860eeaffb7e-kube-api-access-gzccl\") pod \"neutron-69cd5c4bb8-2fh98\" (UID: \"394c98c3-7f2f-49d7-8f1a-c860eeaffb7e\") " pod="openstack/neutron-69cd5c4bb8-2fh98" Jan 27 19:00:13 crc kubenswrapper[4853]: I0127 19:00:13.329861 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/07e83bfd-c7f7-4795-9ae3-81a358092c4e-config\") pod \"dnsmasq-dns-55f844cf75-82msw\" (UID: \"07e83bfd-c7f7-4795-9ae3-81a358092c4e\") " pod="openstack/dnsmasq-dns-55f844cf75-82msw" Jan 27 19:00:13 crc kubenswrapper[4853]: I0127 19:00:13.330218 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/07e83bfd-c7f7-4795-9ae3-81a358092c4e-ovsdbserver-sb\") pod \"dnsmasq-dns-55f844cf75-82msw\" (UID: \"07e83bfd-c7f7-4795-9ae3-81a358092c4e\") " pod="openstack/dnsmasq-dns-55f844cf75-82msw" Jan 27 19:00:13 crc kubenswrapper[4853]: I0127 19:00:13.330220 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/07e83bfd-c7f7-4795-9ae3-81a358092c4e-dns-svc\") pod \"dnsmasq-dns-55f844cf75-82msw\" (UID: \"07e83bfd-c7f7-4795-9ae3-81a358092c4e\") " pod="openstack/dnsmasq-dns-55f844cf75-82msw" Jan 27 19:00:13 crc kubenswrapper[4853]: I0127 19:00:13.330557 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/07e83bfd-c7f7-4795-9ae3-81a358092c4e-ovsdbserver-nb\") pod \"dnsmasq-dns-55f844cf75-82msw\" (UID: \"07e83bfd-c7f7-4795-9ae3-81a358092c4e\") " pod="openstack/dnsmasq-dns-55f844cf75-82msw" Jan 27 19:00:13 crc kubenswrapper[4853]: I0127 19:00:13.330580 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/07e83bfd-c7f7-4795-9ae3-81a358092c4e-dns-swift-storage-0\") pod \"dnsmasq-dns-55f844cf75-82msw\" (UID: \"07e83bfd-c7f7-4795-9ae3-81a358092c4e\") " pod="openstack/dnsmasq-dns-55f844cf75-82msw" Jan 27 19:00:13 crc kubenswrapper[4853]: I0127 19:00:13.349284 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q6sng\" (UniqueName: \"kubernetes.io/projected/07e83bfd-c7f7-4795-9ae3-81a358092c4e-kube-api-access-q6sng\") pod \"dnsmasq-dns-55f844cf75-82msw\" (UID: \"07e83bfd-c7f7-4795-9ae3-81a358092c4e\") " pod="openstack/dnsmasq-dns-55f844cf75-82msw" Jan 27 19:00:13 crc kubenswrapper[4853]: I0127 19:00:13.428001 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/394c98c3-7f2f-49d7-8f1a-c860eeaffb7e-combined-ca-bundle\") pod \"neutron-69cd5c4bb8-2fh98\" (UID: \"394c98c3-7f2f-49d7-8f1a-c860eeaffb7e\") " pod="openstack/neutron-69cd5c4bb8-2fh98" Jan 27 19:00:13 crc kubenswrapper[4853]: I0127 19:00:13.428104 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/394c98c3-7f2f-49d7-8f1a-c860eeaffb7e-ovndb-tls-certs\") 
pod \"neutron-69cd5c4bb8-2fh98\" (UID: \"394c98c3-7f2f-49d7-8f1a-c860eeaffb7e\") " pod="openstack/neutron-69cd5c4bb8-2fh98" Jan 27 19:00:13 crc kubenswrapper[4853]: I0127 19:00:13.428150 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gzccl\" (UniqueName: \"kubernetes.io/projected/394c98c3-7f2f-49d7-8f1a-c860eeaffb7e-kube-api-access-gzccl\") pod \"neutron-69cd5c4bb8-2fh98\" (UID: \"394c98c3-7f2f-49d7-8f1a-c860eeaffb7e\") " pod="openstack/neutron-69cd5c4bb8-2fh98" Jan 27 19:00:13 crc kubenswrapper[4853]: I0127 19:00:13.428172 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/394c98c3-7f2f-49d7-8f1a-c860eeaffb7e-config\") pod \"neutron-69cd5c4bb8-2fh98\" (UID: \"394c98c3-7f2f-49d7-8f1a-c860eeaffb7e\") " pod="openstack/neutron-69cd5c4bb8-2fh98" Jan 27 19:00:13 crc kubenswrapper[4853]: I0127 19:00:13.428226 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/394c98c3-7f2f-49d7-8f1a-c860eeaffb7e-httpd-config\") pod \"neutron-69cd5c4bb8-2fh98\" (UID: \"394c98c3-7f2f-49d7-8f1a-c860eeaffb7e\") " pod="openstack/neutron-69cd5c4bb8-2fh98" Jan 27 19:00:13 crc kubenswrapper[4853]: I0127 19:00:13.433453 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/394c98c3-7f2f-49d7-8f1a-c860eeaffb7e-httpd-config\") pod \"neutron-69cd5c4bb8-2fh98\" (UID: \"394c98c3-7f2f-49d7-8f1a-c860eeaffb7e\") " pod="openstack/neutron-69cd5c4bb8-2fh98" Jan 27 19:00:13 crc kubenswrapper[4853]: I0127 19:00:13.433880 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/394c98c3-7f2f-49d7-8f1a-c860eeaffb7e-ovndb-tls-certs\") pod \"neutron-69cd5c4bb8-2fh98\" (UID: \"394c98c3-7f2f-49d7-8f1a-c860eeaffb7e\") " pod="openstack/neutron-69cd5c4bb8-2fh98" Jan 27 19:00:13 crc kubenswrapper[4853]: I0127 19:00:13.434088 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/394c98c3-7f2f-49d7-8f1a-c860eeaffb7e-combined-ca-bundle\") pod \"neutron-69cd5c4bb8-2fh98\" (UID: \"394c98c3-7f2f-49d7-8f1a-c860eeaffb7e\") " pod="openstack/neutron-69cd5c4bb8-2fh98" Jan 27 19:00:13 crc kubenswrapper[4853]: I0127 19:00:13.438466 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/394c98c3-7f2f-49d7-8f1a-c860eeaffb7e-config\") pod \"neutron-69cd5c4bb8-2fh98\" (UID: \"394c98c3-7f2f-49d7-8f1a-c860eeaffb7e\") " pod="openstack/neutron-69cd5c4bb8-2fh98" Jan 27 19:00:13 crc kubenswrapper[4853]: I0127 19:00:13.452348 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gzccl\" (UniqueName: \"kubernetes.io/projected/394c98c3-7f2f-49d7-8f1a-c860eeaffb7e-kube-api-access-gzccl\") pod \"neutron-69cd5c4bb8-2fh98\" (UID: \"394c98c3-7f2f-49d7-8f1a-c860eeaffb7e\") " pod="openstack/neutron-69cd5c4bb8-2fh98" Jan 27 19:00:13 crc kubenswrapper[4853]: I0127 19:00:13.491602 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-55f844cf75-82msw" Jan 27 19:00:13 crc kubenswrapper[4853]: I0127 19:00:13.511028 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-69cd5c4bb8-2fh98" Jan 27 19:00:13 crc kubenswrapper[4853]: E0127 19:00:13.815849 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified\\\"\"" pod="openstack/cinder-db-sync-rfjdk" podUID="b1d33900-476d-4c86-a501-4490c01000ca" Jan 27 19:00:13 crc kubenswrapper[4853]: I0127 19:00:13.910053 4853 scope.go:117] "RemoveContainer" containerID="ff9e4fd4a34ff59577d435a0bf8d7a19850256a7d26cf78d9df1648705263513" Jan 27 19:00:14 crc kubenswrapper[4853]: I0127 19:00:14.067675 4853 scope.go:117] "RemoveContainer" containerID="727e3bd34178cd0a74c35dab1daaab86879e94a8f0d557ea63af726811459935" Jan 27 19:00:14 crc kubenswrapper[4853]: I0127 19:00:14.176728 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7922b820-92c1-46d8-a5a0-6f58e05674b5" path="/var/lib/kubelet/pods/7922b820-92c1-46d8-a5a0-6f58e05674b5/volumes" Jan 27 19:00:14 crc kubenswrapper[4853]: I0127 19:00:14.242091 4853 scope.go:117] "RemoveContainer" containerID="18efbdd72c9acab9bb286b14193bfa5e0ac7ffc5bc930f0495a2906166d13a1e" Jan 27 19:00:14 crc kubenswrapper[4853]: I0127 19:00:14.322188 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-69967664fb-pbqhr"] Jan 27 19:00:14 crc kubenswrapper[4853]: I0127 19:00:14.360266 4853 scope.go:117] "RemoveContainer" containerID="f663b0207f40dfac7950dc5d929690911d398ea74cc5358b75e1ec1e77b5c37b" Jan 27 19:00:14 crc kubenswrapper[4853]: I0127 19:00:14.374433 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-c78c8d4f6-bchzm"] Jan 27 19:00:14 crc kubenswrapper[4853]: I0127 19:00:14.400966 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-928cv"] Jan 27 19:00:14 crc kubenswrapper[4853]: I0127 19:00:14.761345 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492340-bdc99"] Jan 27 19:00:14 crc kubenswrapper[4853]: I0127 19:00:14.827983 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-69967664fb-pbqhr" event={"ID":"66d621f7-387b-470d-8e42-bebbfada3bbc","Type":"ContainerStarted","Data":"0318308c7e074a8d226d995b71caa79bddd32debadb290b9ae02f231727b633a"} Jan 27 19:00:14 crc kubenswrapper[4853]: I0127 19:00:14.831597 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-82msw"] Jan 27 19:00:14 crc kubenswrapper[4853]: I0127 19:00:14.833266 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-b7gbn" event={"ID":"8ae89dc3-4a08-42bd-a234-b5e8f948dc23","Type":"ContainerStarted","Data":"e4505d967429000bedd61103c44aeb6c70797166282a33d15cab46e18f4ac744"} Jan 27 19:00:14 crc kubenswrapper[4853]: I0127 19:00:14.865387 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-sync-b7gbn" podStartSLOduration=6.098349135 podStartE2EDuration="38.865367409s" podCreationTimestamp="2026-01-27 18:59:36 +0000 UTC" firstStartedPulling="2026-01-27 18:59:38.778736038 +0000 UTC m=+1021.241278921" lastFinishedPulling="2026-01-27 19:00:11.545754312 +0000 UTC m=+1054.008297195" observedRunningTime="2026-01-27 19:00:14.848108619 +0000 UTC m=+1057.310651502" watchObservedRunningTime="2026-01-27 19:00:14.865367409 +0000 UTC m=+1057.327910292" Jan 27 19:00:14 crc kubenswrapper[4853]: I0127 
19:00:14.868674 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"afc78a65-bfa6-42ff-a84a-f90dd740ffbf","Type":"ContainerStarted","Data":"0c718b1ee29e92b02c677d336336d3b0e5835da55281777ff39e4f1a10cd46ef"} Jan 27 19:00:14 crc kubenswrapper[4853]: I0127 19:00:14.875202 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-928cv" event={"ID":"e0dddcf5-0747-4132-b14f-f67160ca5f27","Type":"ContainerStarted","Data":"4a6aeb2682c85fbfce7e42f57cf794d0e315f736abe722a09af9a227cc0936b9"} Jan 27 19:00:14 crc kubenswrapper[4853]: I0127 19:00:14.879863 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5dbbdbb6d9-g899w" event={"ID":"e556ea12-6992-4aba-be03-e6d4a2823b74","Type":"ContainerStarted","Data":"70a07f68d1290bc4fdd19f8574adc1330f703630dc442e020d19ba65038dbd43"} Jan 27 19:00:14 crc kubenswrapper[4853]: I0127 19:00:14.881401 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29492340-bdc99" event={"ID":"b75d2295-47d6-44cb-b492-f2f84fcb7964","Type":"ContainerStarted","Data":"94155fdf703980df3ece0f9e02bcfead549114af79604200de43f4f96007d6f0"} Jan 27 19:00:14 crc kubenswrapper[4853]: I0127 19:00:14.891521 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-c78c8d4f6-bchzm" event={"ID":"28f114cd-daca-4c71-9ecd-64b8008ddbef","Type":"ContainerStarted","Data":"3de4ac5d0457c3a7e26b839a8d6399efd45f6b550477e5205068db8eeadb4c33"} Jan 27 19:00:14 crc kubenswrapper[4853]: I0127 19:00:14.945956 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 27 19:00:15 crc kubenswrapper[4853]: I0127 19:00:15.054561 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-69cd5c4bb8-2fh98"] Jan 27 19:00:15 crc kubenswrapper[4853]: I0127 19:00:15.437984 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 27 19:00:15 crc kubenswrapper[4853]: I0127 19:00:15.712921 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-54879c9777-tvw4r"] Jan 27 19:00:15 crc kubenswrapper[4853]: I0127 19:00:15.725243 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-54879c9777-tvw4r" Jan 27 19:00:15 crc kubenswrapper[4853]: I0127 19:00:15.727449 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-54879c9777-tvw4r"] Jan 27 19:00:15 crc kubenswrapper[4853]: I0127 19:00:15.735377 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-public-svc" Jan 27 19:00:15 crc kubenswrapper[4853]: I0127 19:00:15.736682 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-internal-svc" Jan 27 19:00:15 crc kubenswrapper[4853]: I0127 19:00:15.909471 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-c78c8d4f6-bchzm" event={"ID":"28f114cd-daca-4c71-9ecd-64b8008ddbef","Type":"ContainerStarted","Data":"bfed2f5cf0fb64f3a0f44767fd92967fbaeb07e2af02fbcb8d268a7117ca39e4"} Jan 27 19:00:15 crc kubenswrapper[4853]: I0127 19:00:15.915058 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-69cd5c4bb8-2fh98" event={"ID":"394c98c3-7f2f-49d7-8f1a-c860eeaffb7e","Type":"ContainerStarted","Data":"ba7d764f9c81bdf6b5c692b9f74ffee844d4bd5782a91a6d3db4fa24a1556773"} Jan 27 19:00:15 crc kubenswrapper[4853]: I0127 19:00:15.916916 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p5stq\" (UniqueName: \"kubernetes.io/projected/cd236b6c-6a86-4c6a-8e4a-f2a459943780-kube-api-access-p5stq\") pod \"neutron-54879c9777-tvw4r\" (UID: \"cd236b6c-6a86-4c6a-8e4a-f2a459943780\") " pod="openstack/neutron-54879c9777-tvw4r" Jan 27 19:00:15 crc kubenswrapper[4853]: I0127 19:00:15.916956 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/cd236b6c-6a86-4c6a-8e4a-f2a459943780-ovndb-tls-certs\") pod \"neutron-54879c9777-tvw4r\" (UID: \"cd236b6c-6a86-4c6a-8e4a-f2a459943780\") " pod="openstack/neutron-54879c9777-tvw4r" Jan 27 19:00:15 crc kubenswrapper[4853]: I0127 19:00:15.916994 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/cd236b6c-6a86-4c6a-8e4a-f2a459943780-public-tls-certs\") pod \"neutron-54879c9777-tvw4r\" (UID: \"cd236b6c-6a86-4c6a-8e4a-f2a459943780\") " pod="openstack/neutron-54879c9777-tvw4r" Jan 27 19:00:15 crc kubenswrapper[4853]: I0127 19:00:15.917032 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/cd236b6c-6a86-4c6a-8e4a-f2a459943780-config\") pod \"neutron-54879c9777-tvw4r\" (UID: \"cd236b6c-6a86-4c6a-8e4a-f2a459943780\") " pod="openstack/neutron-54879c9777-tvw4r" Jan 27 19:00:15 crc kubenswrapper[4853]: I0127 19:00:15.917061 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cd236b6c-6a86-4c6a-8e4a-f2a459943780-combined-ca-bundle\") pod \"neutron-54879c9777-tvw4r\" (UID: \"cd236b6c-6a86-4c6a-8e4a-f2a459943780\") " pod="openstack/neutron-54879c9777-tvw4r" Jan 27 19:00:15 crc kubenswrapper[4853]: I0127 19:00:15.917110 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/cd236b6c-6a86-4c6a-8e4a-f2a459943780-httpd-config\") pod \"neutron-54879c9777-tvw4r\" (UID: \"cd236b6c-6a86-4c6a-8e4a-f2a459943780\") " 
pod="openstack/neutron-54879c9777-tvw4r" Jan 27 19:00:15 crc kubenswrapper[4853]: I0127 19:00:15.917135 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55f844cf75-82msw" event={"ID":"07e83bfd-c7f7-4795-9ae3-81a358092c4e","Type":"ContainerStarted","Data":"587eef84ec7d6ca3faccae6a398d572ff90b7f017a84b9ec04824571d58b3722"} Jan 27 19:00:15 crc kubenswrapper[4853]: I0127 19:00:15.917192 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/cd236b6c-6a86-4c6a-8e4a-f2a459943780-internal-tls-certs\") pod \"neutron-54879c9777-tvw4r\" (UID: \"cd236b6c-6a86-4c6a-8e4a-f2a459943780\") " pod="openstack/neutron-54879c9777-tvw4r" Jan 27 19:00:15 crc kubenswrapper[4853]: I0127 19:00:15.919615 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-928cv" event={"ID":"e0dddcf5-0747-4132-b14f-f67160ca5f27","Type":"ContainerStarted","Data":"91ec58dd7d60be51dc68e2c014bbea8e37215eaabc39edde7c5169a3427f51fa"} Jan 27 19:00:15 crc kubenswrapper[4853]: I0127 19:00:15.927863 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5dbbdbb6d9-g899w" event={"ID":"e556ea12-6992-4aba-be03-e6d4a2823b74","Type":"ContainerStarted","Data":"7b9e25dd8951aba8d4aae2d3a6fcc8cb21e0080a8226a4815f40343ffe16e9da"} Jan 27 19:00:15 crc kubenswrapper[4853]: I0127 19:00:15.928045 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-5dbbdbb6d9-g899w" podUID="e556ea12-6992-4aba-be03-e6d4a2823b74" containerName="horizon-log" containerID="cri-o://70a07f68d1290bc4fdd19f8574adc1330f703630dc442e020d19ba65038dbd43" gracePeriod=30 Jan 27 19:00:15 crc kubenswrapper[4853]: I0127 19:00:15.928212 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-5dbbdbb6d9-g899w" podUID="e556ea12-6992-4aba-be03-e6d4a2823b74" containerName="horizon" containerID="cri-o://7b9e25dd8951aba8d4aae2d3a6fcc8cb21e0080a8226a4815f40343ffe16e9da" gracePeriod=30 Jan 27 19:00:15 crc kubenswrapper[4853]: I0127 19:00:15.942433 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-928cv" podStartSLOduration=17.942411923999998 podStartE2EDuration="17.942411924s" podCreationTimestamp="2026-01-27 18:59:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 19:00:15.940021826 +0000 UTC m=+1058.402564709" watchObservedRunningTime="2026-01-27 19:00:15.942411924 +0000 UTC m=+1058.404954797" Jan 27 19:00:15 crc kubenswrapper[4853]: I0127 19:00:15.963041 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-549f7c8989-k8qw5" event={"ID":"9c93d763-a677-4df8-9846-5fa96f76e0ab","Type":"ContainerStarted","Data":"92dff0c5224d3dee5827961bc46f8c7f5d4896f42ebb8d0818620327c2d3f37d"} Jan 27 19:00:15 crc kubenswrapper[4853]: I0127 19:00:15.967925 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-5dbbdbb6d9-g899w" podStartSLOduration=4.679047804 podStartE2EDuration="38.967904298s" podCreationTimestamp="2026-01-27 18:59:37 +0000 UTC" firstStartedPulling="2026-01-27 18:59:39.326816616 +0000 UTC m=+1021.789359499" lastFinishedPulling="2026-01-27 19:00:13.61567311 +0000 UTC m=+1056.078215993" observedRunningTime="2026-01-27 19:00:15.966252011 +0000 UTC m=+1058.428794894" watchObservedRunningTime="2026-01-27 
19:00:15.967904298 +0000 UTC m=+1058.430447171" Jan 27 19:00:15 crc kubenswrapper[4853]: I0127 19:00:15.982002 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"383a8cff-14ac-4c26-a428-302b30622b4b","Type":"ContainerStarted","Data":"555f621cc34e0d6a8e0dd7fbcc487b80a691655790a0d273a8588d1355d9f9fe"} Jan 27 19:00:15 crc kubenswrapper[4853]: I0127 19:00:15.984789 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-69967664fb-pbqhr" event={"ID":"66d621f7-387b-470d-8e42-bebbfada3bbc","Type":"ContainerStarted","Data":"ea0acb50f875a3fec35e07abfe3614ef21b8e8fb2e0c5a35a700b2010eab6ffb"} Jan 27 19:00:15 crc kubenswrapper[4853]: I0127 19:00:15.999277 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-nzrdc" event={"ID":"3bb20c48-23bc-4c0d-92de-f87015fac932","Type":"ContainerStarted","Data":"cc8940826dd753b567ef87e5a883ab069a721e6a3fd23c08bae813a1448c0845"} Jan 27 19:00:16 crc kubenswrapper[4853]: I0127 19:00:16.004050 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6c8b67f5cc-gmbgv" event={"ID":"b62f23c7-d81a-4925-a2c3-10c410912a0f","Type":"ContainerStarted","Data":"d73065e76bd2c3efa6b0c46086cd6fad7f71858b527127f5594a43afa42ab84a"} Jan 27 19:00:16 crc kubenswrapper[4853]: I0127 19:00:16.011317 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"1447491c-5e5b-412d-9cbd-b7bdc9a87797","Type":"ContainerStarted","Data":"dc6ed92a4b05405666e0620e223a526c65a4c3711a55ee5a1c1405958007fd56"} Jan 27 19:00:16 crc kubenswrapper[4853]: I0127 19:00:16.020706 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/cd236b6c-6a86-4c6a-8e4a-f2a459943780-internal-tls-certs\") pod \"neutron-54879c9777-tvw4r\" (UID: \"cd236b6c-6a86-4c6a-8e4a-f2a459943780\") " pod="openstack/neutron-54879c9777-tvw4r" Jan 27 19:00:16 crc kubenswrapper[4853]: I0127 19:00:16.020869 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p5stq\" (UniqueName: \"kubernetes.io/projected/cd236b6c-6a86-4c6a-8e4a-f2a459943780-kube-api-access-p5stq\") pod \"neutron-54879c9777-tvw4r\" (UID: \"cd236b6c-6a86-4c6a-8e4a-f2a459943780\") " pod="openstack/neutron-54879c9777-tvw4r" Jan 27 19:00:16 crc kubenswrapper[4853]: I0127 19:00:16.020909 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/cd236b6c-6a86-4c6a-8e4a-f2a459943780-ovndb-tls-certs\") pod \"neutron-54879c9777-tvw4r\" (UID: \"cd236b6c-6a86-4c6a-8e4a-f2a459943780\") " pod="openstack/neutron-54879c9777-tvw4r" Jan 27 19:00:16 crc kubenswrapper[4853]: I0127 19:00:16.020963 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/cd236b6c-6a86-4c6a-8e4a-f2a459943780-public-tls-certs\") pod \"neutron-54879c9777-tvw4r\" (UID: \"cd236b6c-6a86-4c6a-8e4a-f2a459943780\") " pod="openstack/neutron-54879c9777-tvw4r" Jan 27 19:00:16 crc kubenswrapper[4853]: I0127 19:00:16.021022 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/cd236b6c-6a86-4c6a-8e4a-f2a459943780-config\") pod \"neutron-54879c9777-tvw4r\" (UID: \"cd236b6c-6a86-4c6a-8e4a-f2a459943780\") " pod="openstack/neutron-54879c9777-tvw4r" Jan 27 19:00:16 crc 
kubenswrapper[4853]: I0127 19:00:16.021068 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cd236b6c-6a86-4c6a-8e4a-f2a459943780-combined-ca-bundle\") pod \"neutron-54879c9777-tvw4r\" (UID: \"cd236b6c-6a86-4c6a-8e4a-f2a459943780\") " pod="openstack/neutron-54879c9777-tvw4r" Jan 27 19:00:16 crc kubenswrapper[4853]: I0127 19:00:16.021103 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/cd236b6c-6a86-4c6a-8e4a-f2a459943780-httpd-config\") pod \"neutron-54879c9777-tvw4r\" (UID: \"cd236b6c-6a86-4c6a-8e4a-f2a459943780\") " pod="openstack/neutron-54879c9777-tvw4r" Jan 27 19:00:16 crc kubenswrapper[4853]: I0127 19:00:16.023057 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-sync-nzrdc" podStartSLOduration=5.309188132 podStartE2EDuration="40.023037165s" podCreationTimestamp="2026-01-27 18:59:36 +0000 UTC" firstStartedPulling="2026-01-27 18:59:39.215405677 +0000 UTC m=+1021.677948560" lastFinishedPulling="2026-01-27 19:00:13.92925471 +0000 UTC m=+1056.391797593" observedRunningTime="2026-01-27 19:00:16.022677965 +0000 UTC m=+1058.485220838" watchObservedRunningTime="2026-01-27 19:00:16.023037165 +0000 UTC m=+1058.485580048" Jan 27 19:00:16 crc kubenswrapper[4853]: I0127 19:00:16.046330 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/cd236b6c-6a86-4c6a-8e4a-f2a459943780-config\") pod \"neutron-54879c9777-tvw4r\" (UID: \"cd236b6c-6a86-4c6a-8e4a-f2a459943780\") " pod="openstack/neutron-54879c9777-tvw4r" Jan 27 19:00:16 crc kubenswrapper[4853]: I0127 19:00:16.047474 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cd236b6c-6a86-4c6a-8e4a-f2a459943780-combined-ca-bundle\") pod \"neutron-54879c9777-tvw4r\" (UID: \"cd236b6c-6a86-4c6a-8e4a-f2a459943780\") " pod="openstack/neutron-54879c9777-tvw4r" Jan 27 19:00:16 crc kubenswrapper[4853]: I0127 19:00:16.048247 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/cd236b6c-6a86-4c6a-8e4a-f2a459943780-ovndb-tls-certs\") pod \"neutron-54879c9777-tvw4r\" (UID: \"cd236b6c-6a86-4c6a-8e4a-f2a459943780\") " pod="openstack/neutron-54879c9777-tvw4r" Jan 27 19:00:16 crc kubenswrapper[4853]: I0127 19:00:16.070098 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/cd236b6c-6a86-4c6a-8e4a-f2a459943780-httpd-config\") pod \"neutron-54879c9777-tvw4r\" (UID: \"cd236b6c-6a86-4c6a-8e4a-f2a459943780\") " pod="openstack/neutron-54879c9777-tvw4r" Jan 27 19:00:16 crc kubenswrapper[4853]: I0127 19:00:16.075457 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/cd236b6c-6a86-4c6a-8e4a-f2a459943780-internal-tls-certs\") pod \"neutron-54879c9777-tvw4r\" (UID: \"cd236b6c-6a86-4c6a-8e4a-f2a459943780\") " pod="openstack/neutron-54879c9777-tvw4r" Jan 27 19:00:16 crc kubenswrapper[4853]: I0127 19:00:16.078001 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/cd236b6c-6a86-4c6a-8e4a-f2a459943780-public-tls-certs\") pod \"neutron-54879c9777-tvw4r\" (UID: \"cd236b6c-6a86-4c6a-8e4a-f2a459943780\") " 
pod="openstack/neutron-54879c9777-tvw4r" Jan 27 19:00:16 crc kubenswrapper[4853]: I0127 19:00:16.081778 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p5stq\" (UniqueName: \"kubernetes.io/projected/cd236b6c-6a86-4c6a-8e4a-f2a459943780-kube-api-access-p5stq\") pod \"neutron-54879c9777-tvw4r\" (UID: \"cd236b6c-6a86-4c6a-8e4a-f2a459943780\") " pod="openstack/neutron-54879c9777-tvw4r" Jan 27 19:00:16 crc kubenswrapper[4853]: I0127 19:00:16.358839 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-54879c9777-tvw4r" Jan 27 19:00:17 crc kubenswrapper[4853]: I0127 19:00:17.056174 4853 generic.go:334] "Generic (PLEG): container finished" podID="07e83bfd-c7f7-4795-9ae3-81a358092c4e" containerID="2430fcf03b8a43a49f359548627efbb5f0eb110307f1bf47ac68a1185e82e3bc" exitCode=0 Jan 27 19:00:17 crc kubenswrapper[4853]: I0127 19:00:17.056698 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55f844cf75-82msw" event={"ID":"07e83bfd-c7f7-4795-9ae3-81a358092c4e","Type":"ContainerDied","Data":"2430fcf03b8a43a49f359548627efbb5f0eb110307f1bf47ac68a1185e82e3bc"} Jan 27 19:00:17 crc kubenswrapper[4853]: I0127 19:00:17.085051 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-69967664fb-pbqhr" event={"ID":"66d621f7-387b-470d-8e42-bebbfada3bbc","Type":"ContainerStarted","Data":"3e74acd3091e36c067f9363770b8672147f720192965c341275042fc68c2d916"} Jan 27 19:00:17 crc kubenswrapper[4853]: I0127 19:00:17.117796 4853 generic.go:334] "Generic (PLEG): container finished" podID="b75d2295-47d6-44cb-b492-f2f84fcb7964" containerID="131a1e60eadf9f011dfb1dde545bd245105f1afecfb6695769a430c4df77d3c2" exitCode=0 Jan 27 19:00:17 crc kubenswrapper[4853]: I0127 19:00:17.117898 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29492340-bdc99" event={"ID":"b75d2295-47d6-44cb-b492-f2f84fcb7964","Type":"ContainerDied","Data":"131a1e60eadf9f011dfb1dde545bd245105f1afecfb6695769a430c4df77d3c2"} Jan 27 19:00:17 crc kubenswrapper[4853]: I0127 19:00:17.136745 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-549f7c8989-k8qw5" event={"ID":"9c93d763-a677-4df8-9846-5fa96f76e0ab","Type":"ContainerStarted","Data":"5339ebcd43ec79faa6c17b919a8bb5ed81f10743f2d6a8ba151e39c4224c95d5"} Jan 27 19:00:17 crc kubenswrapper[4853]: I0127 19:00:17.137029 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-549f7c8989-k8qw5" podUID="9c93d763-a677-4df8-9846-5fa96f76e0ab" containerName="horizon-log" containerID="cri-o://92dff0c5224d3dee5827961bc46f8c7f5d4896f42ebb8d0818620327c2d3f37d" gracePeriod=30 Jan 27 19:00:17 crc kubenswrapper[4853]: I0127 19:00:17.137178 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-549f7c8989-k8qw5" podUID="9c93d763-a677-4df8-9846-5fa96f76e0ab" containerName="horizon" containerID="cri-o://5339ebcd43ec79faa6c17b919a8bb5ed81f10743f2d6a8ba151e39c4224c95d5" gracePeriod=30 Jan 27 19:00:17 crc kubenswrapper[4853]: I0127 19:00:17.165195 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-54879c9777-tvw4r"] Jan 27 19:00:17 crc kubenswrapper[4853]: I0127 19:00:17.180554 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6c8b67f5cc-gmbgv" 
event={"ID":"b62f23c7-d81a-4925-a2c3-10c410912a0f","Type":"ContainerStarted","Data":"bf6e6ce6ee31833377bad084f3c815870cd11b10798c0f3681f38bf6757a0e13"} Jan 27 19:00:17 crc kubenswrapper[4853]: I0127 19:00:17.182708 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-69967664fb-pbqhr" podStartSLOduration=31.182689427 podStartE2EDuration="31.182689427s" podCreationTimestamp="2026-01-27 18:59:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 19:00:17.143115722 +0000 UTC m=+1059.605658605" watchObservedRunningTime="2026-01-27 19:00:17.182689427 +0000 UTC m=+1059.645232310" Jan 27 19:00:17 crc kubenswrapper[4853]: I0127 19:00:17.184386 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-6c8b67f5cc-gmbgv" podUID="b62f23c7-d81a-4925-a2c3-10c410912a0f" containerName="horizon-log" containerID="cri-o://d73065e76bd2c3efa6b0c46086cd6fad7f71858b527127f5594a43afa42ab84a" gracePeriod=30 Jan 27 19:00:17 crc kubenswrapper[4853]: I0127 19:00:17.185235 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-6c8b67f5cc-gmbgv" podUID="b62f23c7-d81a-4925-a2c3-10c410912a0f" containerName="horizon" containerID="cri-o://bf6e6ce6ee31833377bad084f3c815870cd11b10798c0f3681f38bf6757a0e13" gracePeriod=30 Jan 27 19:00:17 crc kubenswrapper[4853]: I0127 19:00:17.228147 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-c78c8d4f6-bchzm" event={"ID":"28f114cd-daca-4c71-9ecd-64b8008ddbef","Type":"ContainerStarted","Data":"5eb06c79644ed85292d51f529bc88f05f3d36c0c73a7d7ccd7b435ebbe58e251"} Jan 27 19:00:17 crc kubenswrapper[4853]: I0127 19:00:17.233520 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"1447491c-5e5b-412d-9cbd-b7bdc9a87797","Type":"ContainerStarted","Data":"8a4cb28c4954303191103cf0fab536e482dde2773b7cbbd9a5dd9878f87ab734"} Jan 27 19:00:17 crc kubenswrapper[4853]: I0127 19:00:17.237402 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-549f7c8989-k8qw5" podStartSLOduration=7.677606518 podStartE2EDuration="37.237388041s" podCreationTimestamp="2026-01-27 18:59:40 +0000 UTC" firstStartedPulling="2026-01-27 18:59:41.983848509 +0000 UTC m=+1024.446391392" lastFinishedPulling="2026-01-27 19:00:11.543630012 +0000 UTC m=+1054.006172915" observedRunningTime="2026-01-27 19:00:17.200638687 +0000 UTC m=+1059.663181570" watchObservedRunningTime="2026-01-27 19:00:17.237388041 +0000 UTC m=+1059.699930914" Jan 27 19:00:17 crc kubenswrapper[4853]: I0127 19:00:17.244967 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-6c8b67f5cc-gmbgv" podStartSLOduration=6.076494308 podStartE2EDuration="41.244948516s" podCreationTimestamp="2026-01-27 18:59:36 +0000 UTC" firstStartedPulling="2026-01-27 18:59:38.446570913 +0000 UTC m=+1020.909113796" lastFinishedPulling="2026-01-27 19:00:13.615025121 +0000 UTC m=+1056.077568004" observedRunningTime="2026-01-27 19:00:17.227591073 +0000 UTC m=+1059.690133976" watchObservedRunningTime="2026-01-27 19:00:17.244948516 +0000 UTC m=+1059.707491399" Jan 27 19:00:17 crc kubenswrapper[4853]: I0127 19:00:17.248378 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-69cd5c4bb8-2fh98" 
event={"ID":"394c98c3-7f2f-49d7-8f1a-c860eeaffb7e","Type":"ContainerStarted","Data":"e12c891a4c8bae88241f3b7296bf145e29b7ea5714735b855be25c79bc090b80"} Jan 27 19:00:17 crc kubenswrapper[4853]: I0127 19:00:17.248409 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-69cd5c4bb8-2fh98" Jan 27 19:00:17 crc kubenswrapper[4853]: I0127 19:00:17.248422 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-69cd5c4bb8-2fh98" event={"ID":"394c98c3-7f2f-49d7-8f1a-c860eeaffb7e","Type":"ContainerStarted","Data":"f4050b05966aa33c4c1aec74cb852056999cd917161316ddc643ab32e9bf7a45"} Jan 27 19:00:17 crc kubenswrapper[4853]: I0127 19:00:17.280908 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-c78c8d4f6-bchzm" podStartSLOduration=32.280890607 podStartE2EDuration="32.280890607s" podCreationTimestamp="2026-01-27 18:59:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 19:00:17.27886861 +0000 UTC m=+1059.741411493" watchObservedRunningTime="2026-01-27 19:00:17.280890607 +0000 UTC m=+1059.743433490" Jan 27 19:00:17 crc kubenswrapper[4853]: I0127 19:00:17.322172 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-69cd5c4bb8-2fh98" podStartSLOduration=4.322148009 podStartE2EDuration="4.322148009s" podCreationTimestamp="2026-01-27 19:00:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 19:00:17.312592138 +0000 UTC m=+1059.775135021" watchObservedRunningTime="2026-01-27 19:00:17.322148009 +0000 UTC m=+1059.784690892" Jan 27 19:00:18 crc kubenswrapper[4853]: I0127 19:00:18.215539 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-5dbbdbb6d9-g899w" Jan 27 19:00:18 crc kubenswrapper[4853]: I0127 19:00:18.309984 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"1447491c-5e5b-412d-9cbd-b7bdc9a87797","Type":"ContainerStarted","Data":"f6bdaca7c5c803233f482527f300d355910196b8505907012cce304a5b68d07b"} Jan 27 19:00:18 crc kubenswrapper[4853]: I0127 19:00:18.316299 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55f844cf75-82msw" event={"ID":"07e83bfd-c7f7-4795-9ae3-81a358092c4e","Type":"ContainerStarted","Data":"b536d9286a81735ddefe0c7015afc49f34eae1789f194627a8a052ba184b8bd1"} Jan 27 19:00:18 crc kubenswrapper[4853]: I0127 19:00:18.317041 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-55f844cf75-82msw" Jan 27 19:00:18 crc kubenswrapper[4853]: I0127 19:00:18.347620 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=23.347595167 podStartE2EDuration="23.347595167s" podCreationTimestamp="2026-01-27 18:59:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 19:00:18.346859196 +0000 UTC m=+1060.809402089" watchObservedRunningTime="2026-01-27 19:00:18.347595167 +0000 UTC m=+1060.810138050" Jan 27 19:00:18 crc kubenswrapper[4853]: I0127 19:00:18.371620 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" 
event={"ID":"383a8cff-14ac-4c26-a428-302b30622b4b","Type":"ContainerStarted","Data":"9f9c738e714425b0c66056ff127e63e1640ef6cefb1e2f509c89157dea2e92d3"} Jan 27 19:00:18 crc kubenswrapper[4853]: I0127 19:00:18.385566 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-54879c9777-tvw4r" event={"ID":"cd236b6c-6a86-4c6a-8e4a-f2a459943780","Type":"ContainerStarted","Data":"8b2023bd66ff9349a1b712e8efedede50d982e90a208c380d91ce33701b15ba0"} Jan 27 19:00:18 crc kubenswrapper[4853]: I0127 19:00:18.385625 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-54879c9777-tvw4r" event={"ID":"cd236b6c-6a86-4c6a-8e4a-f2a459943780","Type":"ContainerStarted","Data":"6d2821afd51a901fc78cd2685c2893d8e03f8f6ef75d9a1a92542b733a0954b5"} Jan 27 19:00:18 crc kubenswrapper[4853]: I0127 19:00:18.387964 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-55f844cf75-82msw" podStartSLOduration=6.387937293 podStartE2EDuration="6.387937293s" podCreationTimestamp="2026-01-27 19:00:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 19:00:18.379160234 +0000 UTC m=+1060.841703117" watchObservedRunningTime="2026-01-27 19:00:18.387937293 +0000 UTC m=+1060.850480176" Jan 27 19:00:18 crc kubenswrapper[4853]: I0127 19:00:18.926995 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492340-bdc99" Jan 27 19:00:19 crc kubenswrapper[4853]: I0127 19:00:19.051925 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b75d2295-47d6-44cb-b492-f2f84fcb7964-secret-volume\") pod \"b75d2295-47d6-44cb-b492-f2f84fcb7964\" (UID: \"b75d2295-47d6-44cb-b492-f2f84fcb7964\") " Jan 27 19:00:19 crc kubenswrapper[4853]: I0127 19:00:19.052073 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b75d2295-47d6-44cb-b492-f2f84fcb7964-config-volume\") pod \"b75d2295-47d6-44cb-b492-f2f84fcb7964\" (UID: \"b75d2295-47d6-44cb-b492-f2f84fcb7964\") " Jan 27 19:00:19 crc kubenswrapper[4853]: I0127 19:00:19.052144 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6cxt\" (UniqueName: \"kubernetes.io/projected/b75d2295-47d6-44cb-b492-f2f84fcb7964-kube-api-access-d6cxt\") pod \"b75d2295-47d6-44cb-b492-f2f84fcb7964\" (UID: \"b75d2295-47d6-44cb-b492-f2f84fcb7964\") " Jan 27 19:00:19 crc kubenswrapper[4853]: I0127 19:00:19.054112 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b75d2295-47d6-44cb-b492-f2f84fcb7964-config-volume" (OuterVolumeSpecName: "config-volume") pod "b75d2295-47d6-44cb-b492-f2f84fcb7964" (UID: "b75d2295-47d6-44cb-b492-f2f84fcb7964"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 19:00:19 crc kubenswrapper[4853]: I0127 19:00:19.092588 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b75d2295-47d6-44cb-b492-f2f84fcb7964-kube-api-access-d6cxt" (OuterVolumeSpecName: "kube-api-access-d6cxt") pod "b75d2295-47d6-44cb-b492-f2f84fcb7964" (UID: "b75d2295-47d6-44cb-b492-f2f84fcb7964"). InnerVolumeSpecName "kube-api-access-d6cxt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 19:00:19 crc kubenswrapper[4853]: I0127 19:00:19.109457 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b75d2295-47d6-44cb-b492-f2f84fcb7964-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "b75d2295-47d6-44cb-b492-f2f84fcb7964" (UID: "b75d2295-47d6-44cb-b492-f2f84fcb7964"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 19:00:19 crc kubenswrapper[4853]: I0127 19:00:19.158157 4853 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b75d2295-47d6-44cb-b492-f2f84fcb7964-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 27 19:00:19 crc kubenswrapper[4853]: I0127 19:00:19.158230 4853 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b75d2295-47d6-44cb-b492-f2f84fcb7964-config-volume\") on node \"crc\" DevicePath \"\"" Jan 27 19:00:19 crc kubenswrapper[4853]: I0127 19:00:19.158246 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6cxt\" (UniqueName: \"kubernetes.io/projected/b75d2295-47d6-44cb-b492-f2f84fcb7964-kube-api-access-d6cxt\") on node \"crc\" DevicePath \"\"" Jan 27 19:00:19 crc kubenswrapper[4853]: I0127 19:00:19.400373 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"383a8cff-14ac-4c26-a428-302b30622b4b","Type":"ContainerStarted","Data":"341c1171cf83f1fd83885a8f5b5ac25bcd09d3f0b8fc03543e699f18bf59ad2e"} Jan 27 19:00:19 crc kubenswrapper[4853]: I0127 19:00:19.403203 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-54879c9777-tvw4r" event={"ID":"cd236b6c-6a86-4c6a-8e4a-f2a459943780","Type":"ContainerStarted","Data":"aa91b36a604ab59007d1904b22f3ee35da626ca5979dec847a44d9bd89b48c9a"} Jan 27 19:00:19 crc kubenswrapper[4853]: I0127 19:00:19.403414 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-54879c9777-tvw4r" Jan 27 19:00:19 crc kubenswrapper[4853]: I0127 19:00:19.406202 4853 util.go:48] "No ready sandbox for pod can be found. 
Jan 27 19:00:19 crc kubenswrapper[4853]: I0127 19:00:19.406568 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29492340-bdc99" event={"ID":"b75d2295-47d6-44cb-b492-f2f84fcb7964","Type":"ContainerDied","Data":"94155fdf703980df3ece0f9e02bcfead549114af79604200de43f4f96007d6f0"}
Jan 27 19:00:19 crc kubenswrapper[4853]: I0127 19:00:19.406596 4853 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="94155fdf703980df3ece0f9e02bcfead549114af79604200de43f4f96007d6f0"
Jan 27 19:00:19 crc kubenswrapper[4853]: I0127 19:00:19.433867 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=24.433823642 podStartE2EDuration="24.433823642s" podCreationTimestamp="2026-01-27 18:59:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 19:00:19.423005265 +0000 UTC m=+1061.885548158" watchObservedRunningTime="2026-01-27 19:00:19.433823642 +0000 UTC m=+1061.896366525"
Jan 27 19:00:19 crc kubenswrapper[4853]: I0127 19:00:19.449271 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-54879c9777-tvw4r" podStartSLOduration=4.449251141 podStartE2EDuration="4.449251141s" podCreationTimestamp="2026-01-27 19:00:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 19:00:19.44396035 +0000 UTC m=+1061.906503233" watchObservedRunningTime="2026-01-27 19:00:19.449251141 +0000 UTC m=+1061.911794024"
Jan 27 19:00:20 crc kubenswrapper[4853]: I0127 19:00:20.418873 4853 generic.go:334] "Generic (PLEG): container finished" podID="3bb20c48-23bc-4c0d-92de-f87015fac932" containerID="cc8940826dd753b567ef87e5a883ab069a721e6a3fd23c08bae813a1448c0845" exitCode=0
Jan 27 19:00:20 crc kubenswrapper[4853]: I0127 19:00:20.420002 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-nzrdc" event={"ID":"3bb20c48-23bc-4c0d-92de-f87015fac932","Type":"ContainerDied","Data":"cc8940826dd753b567ef87e5a883ab069a721e6a3fd23c08bae813a1448c0845"}
Jan 27 19:00:21 crc kubenswrapper[4853]: I0127 19:00:21.250155 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-549f7c8989-k8qw5"
Jan 27 19:00:21 crc kubenswrapper[4853]: I0127 19:00:21.462357 4853 generic.go:334] "Generic (PLEG): container finished" podID="8ae89dc3-4a08-42bd-a234-b5e8f948dc23" containerID="e4505d967429000bedd61103c44aeb6c70797166282a33d15cab46e18f4ac744" exitCode=0
Jan 27 19:00:21 crc kubenswrapper[4853]: I0127 19:00:21.462503 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-b7gbn" event={"ID":"8ae89dc3-4a08-42bd-a234-b5e8f948dc23","Type":"ContainerDied","Data":"e4505d967429000bedd61103c44aeb6c70797166282a33d15cab46e18f4ac744"}
Jan 27 19:00:21 crc kubenswrapper[4853]: I0127 19:00:21.467851 4853 generic.go:334] "Generic (PLEG): container finished" podID="e0dddcf5-0747-4132-b14f-f67160ca5f27" containerID="91ec58dd7d60be51dc68e2c014bbea8e37215eaabc39edde7c5169a3427f51fa" exitCode=0
Jan 27 19:00:21 crc kubenswrapper[4853]: I0127 19:00:21.467956 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-928cv" event={"ID":"e0dddcf5-0747-4132-b14f-f67160ca5f27","Type":"ContainerDied","Data":"91ec58dd7d60be51dc68e2c014bbea8e37215eaabc39edde7c5169a3427f51fa"}
Jan 27 19:00:23 crc kubenswrapper[4853]: I0127 19:00:23.494450 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-55f844cf75-82msw"
Jan 27 19:00:23 crc kubenswrapper[4853]: I0127 19:00:23.499453 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-nzrdc"
Jan 27 19:00:23 crc kubenswrapper[4853]: I0127 19:00:23.509055 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-nzrdc" event={"ID":"3bb20c48-23bc-4c0d-92de-f87015fac932","Type":"ContainerDied","Data":"7042937814d05dd4548b47cedcb0095aa57f178b542b734ff19a928621d6519d"}
Jan 27 19:00:23 crc kubenswrapper[4853]: I0127 19:00:23.509161 4853 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7042937814d05dd4548b47cedcb0095aa57f178b542b734ff19a928621d6519d"
Jan 27 19:00:23 crc kubenswrapper[4853]: I0127 19:00:23.509192 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-nzrdc"
Jan 27 19:00:23 crc kubenswrapper[4853]: I0127 19:00:23.556459 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3bb20c48-23bc-4c0d-92de-f87015fac932-scripts\") pod \"3bb20c48-23bc-4c0d-92de-f87015fac932\" (UID: \"3bb20c48-23bc-4c0d-92de-f87015fac932\") "
Jan 27 19:00:23 crc kubenswrapper[4853]: I0127 19:00:23.556595 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3bb20c48-23bc-4c0d-92de-f87015fac932-config-data\") pod \"3bb20c48-23bc-4c0d-92de-f87015fac932\" (UID: \"3bb20c48-23bc-4c0d-92de-f87015fac932\") "
Jan 27 19:00:23 crc kubenswrapper[4853]: I0127 19:00:23.556657 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3bb20c48-23bc-4c0d-92de-f87015fac932-combined-ca-bundle\") pod \"3bb20c48-23bc-4c0d-92de-f87015fac932\" (UID: \"3bb20c48-23bc-4c0d-92de-f87015fac932\") "
Jan 27 19:00:23 crc kubenswrapper[4853]: I0127 19:00:23.556735 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3bb20c48-23bc-4c0d-92de-f87015fac932-logs\") pod \"3bb20c48-23bc-4c0d-92de-f87015fac932\" (UID: \"3bb20c48-23bc-4c0d-92de-f87015fac932\") "
Jan 27 19:00:23 crc kubenswrapper[4853]: I0127 19:00:23.556774 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sr9k7\" (UniqueName: \"kubernetes.io/projected/3bb20c48-23bc-4c0d-92de-f87015fac932-kube-api-access-sr9k7\") pod \"3bb20c48-23bc-4c0d-92de-f87015fac932\" (UID: \"3bb20c48-23bc-4c0d-92de-f87015fac932\") "
Jan 27 19:00:23 crc kubenswrapper[4853]: I0127 19:00:23.561620 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3bb20c48-23bc-4c0d-92de-f87015fac932-logs" (OuterVolumeSpecName: "logs") pod "3bb20c48-23bc-4c0d-92de-f87015fac932" (UID: "3bb20c48-23bc-4c0d-92de-f87015fac932"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 19:00:23 crc kubenswrapper[4853]: I0127 19:00:23.568247 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3bb20c48-23bc-4c0d-92de-f87015fac932-scripts" (OuterVolumeSpecName: "scripts") pod "3bb20c48-23bc-4c0d-92de-f87015fac932" (UID: "3bb20c48-23bc-4c0d-92de-f87015fac932"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 19:00:23 crc kubenswrapper[4853]: I0127 19:00:23.578371 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3bb20c48-23bc-4c0d-92de-f87015fac932-kube-api-access-sr9k7" (OuterVolumeSpecName: "kube-api-access-sr9k7") pod "3bb20c48-23bc-4c0d-92de-f87015fac932" (UID: "3bb20c48-23bc-4c0d-92de-f87015fac932"). InnerVolumeSpecName "kube-api-access-sr9k7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 19:00:23 crc kubenswrapper[4853]: I0127 19:00:23.610169 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3bb20c48-23bc-4c0d-92de-f87015fac932-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3bb20c48-23bc-4c0d-92de-f87015fac932" (UID: "3bb20c48-23bc-4c0d-92de-f87015fac932"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 19:00:23 crc kubenswrapper[4853]: I0127 19:00:23.626484 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-snr4r"] Jan 27 19:00:23 crc kubenswrapper[4853]: I0127 19:00:23.626781 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-785d8bcb8c-snr4r" podUID="6d63a071-50d0-4387-a817-9d65506ac62b" containerName="dnsmasq-dns" containerID="cri-o://afc470f3509df01b0e202009191a6d8823df4dba071a2a16b5568dfa46544a66" gracePeriod=10 Jan 27 19:00:23 crc kubenswrapper[4853]: I0127 19:00:23.637548 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3bb20c48-23bc-4c0d-92de-f87015fac932-config-data" (OuterVolumeSpecName: "config-data") pod "3bb20c48-23bc-4c0d-92de-f87015fac932" (UID: "3bb20c48-23bc-4c0d-92de-f87015fac932"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 19:00:23 crc kubenswrapper[4853]: I0127 19:00:23.667500 4853 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3bb20c48-23bc-4c0d-92de-f87015fac932-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 19:00:23 crc kubenswrapper[4853]: I0127 19:00:23.667533 4853 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3bb20c48-23bc-4c0d-92de-f87015fac932-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 19:00:23 crc kubenswrapper[4853]: I0127 19:00:23.667542 4853 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3bb20c48-23bc-4c0d-92de-f87015fac932-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 19:00:23 crc kubenswrapper[4853]: I0127 19:00:23.667553 4853 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3bb20c48-23bc-4c0d-92de-f87015fac932-logs\") on node \"crc\" DevicePath \"\"" Jan 27 19:00:23 crc kubenswrapper[4853]: I0127 19:00:23.667562 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sr9k7\" (UniqueName: \"kubernetes.io/projected/3bb20c48-23bc-4c0d-92de-f87015fac932-kube-api-access-sr9k7\") on node \"crc\" DevicePath \"\"" Jan 27 19:00:24 crc kubenswrapper[4853]: I0127 19:00:24.519087 4853 generic.go:334] "Generic (PLEG): container finished" podID="6d63a071-50d0-4387-a817-9d65506ac62b" containerID="afc470f3509df01b0e202009191a6d8823df4dba071a2a16b5568dfa46544a66" exitCode=0 Jan 27 19:00:24 crc kubenswrapper[4853]: I0127 19:00:24.519746 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-785d8bcb8c-snr4r" event={"ID":"6d63a071-50d0-4387-a817-9d65506ac62b","Type":"ContainerDied","Data":"afc470f3509df01b0e202009191a6d8823df4dba071a2a16b5568dfa46544a66"} Jan 27 19:00:24 crc kubenswrapper[4853]: I0127 19:00:24.639475 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-f5866f968-d652z"] Jan 27 19:00:24 crc kubenswrapper[4853]: E0127 19:00:24.639849 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3bb20c48-23bc-4c0d-92de-f87015fac932" containerName="placement-db-sync" Jan 27 19:00:24 crc kubenswrapper[4853]: I0127 19:00:24.639866 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="3bb20c48-23bc-4c0d-92de-f87015fac932" containerName="placement-db-sync" Jan 27 19:00:24 crc kubenswrapper[4853]: E0127 19:00:24.639894 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b75d2295-47d6-44cb-b492-f2f84fcb7964" containerName="collect-profiles" Jan 27 19:00:24 crc kubenswrapper[4853]: I0127 19:00:24.639901 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="b75d2295-47d6-44cb-b492-f2f84fcb7964" containerName="collect-profiles" Jan 27 19:00:24 crc kubenswrapper[4853]: I0127 19:00:24.640080 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="b75d2295-47d6-44cb-b492-f2f84fcb7964" containerName="collect-profiles" Jan 27 19:00:24 crc kubenswrapper[4853]: I0127 19:00:24.640132 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="3bb20c48-23bc-4c0d-92de-f87015fac932" containerName="placement-db-sync" Jan 27 19:00:24 crc kubenswrapper[4853]: I0127 19:00:24.641012 4853 util.go:30] "No sandbox for pod can be found. 
Jan 27 19:00:24 crc kubenswrapper[4853]: I0127 19:00:24.643923 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-internal-svc"
Jan 27 19:00:24 crc kubenswrapper[4853]: I0127 19:00:24.643926 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts"
Jan 27 19:00:24 crc kubenswrapper[4853]: I0127 19:00:24.643949 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-st74w"
Jan 27 19:00:24 crc kubenswrapper[4853]: I0127 19:00:24.643991 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data"
Jan 27 19:00:24 crc kubenswrapper[4853]: I0127 19:00:24.644617 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-public-svc"
Jan 27 19:00:24 crc kubenswrapper[4853]: I0127 19:00:24.666958 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-f5866f968-d652z"]
Jan 27 19:00:24 crc kubenswrapper[4853]: I0127 19:00:24.695321 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/81ce3654-e156-4fa9-9399-3824ff16a228-scripts\") pod \"placement-f5866f968-d652z\" (UID: \"81ce3654-e156-4fa9-9399-3824ff16a228\") " pod="openstack/placement-f5866f968-d652z"
Jan 27 19:00:24 crc kubenswrapper[4853]: I0127 19:00:24.695389 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/81ce3654-e156-4fa9-9399-3824ff16a228-config-data\") pod \"placement-f5866f968-d652z\" (UID: \"81ce3654-e156-4fa9-9399-3824ff16a228\") " pod="openstack/placement-f5866f968-d652z"
Jan 27 19:00:24 crc kubenswrapper[4853]: I0127 19:00:24.695422 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/81ce3654-e156-4fa9-9399-3824ff16a228-combined-ca-bundle\") pod \"placement-f5866f968-d652z\" (UID: \"81ce3654-e156-4fa9-9399-3824ff16a228\") " pod="openstack/placement-f5866f968-d652z"
Jan 27 19:00:24 crc kubenswrapper[4853]: I0127 19:00:24.695457 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/81ce3654-e156-4fa9-9399-3824ff16a228-logs\") pod \"placement-f5866f968-d652z\" (UID: \"81ce3654-e156-4fa9-9399-3824ff16a228\") " pod="openstack/placement-f5866f968-d652z"
Jan 27 19:00:24 crc kubenswrapper[4853]: I0127 19:00:24.695507 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/81ce3654-e156-4fa9-9399-3824ff16a228-public-tls-certs\") pod \"placement-f5866f968-d652z\" (UID: \"81ce3654-e156-4fa9-9399-3824ff16a228\") " pod="openstack/placement-f5866f968-d652z"
Jan 27 19:00:24 crc kubenswrapper[4853]: I0127 19:00:24.695548 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5bxwz\" (UniqueName: \"kubernetes.io/projected/81ce3654-e156-4fa9-9399-3824ff16a228-kube-api-access-5bxwz\") pod \"placement-f5866f968-d652z\" (UID: \"81ce3654-e156-4fa9-9399-3824ff16a228\") " pod="openstack/placement-f5866f968-d652z"
Jan 27 19:00:24 crc kubenswrapper[4853]: I0127 19:00:24.695570 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/81ce3654-e156-4fa9-9399-3824ff16a228-internal-tls-certs\") pod \"placement-f5866f968-d652z\" (UID: \"81ce3654-e156-4fa9-9399-3824ff16a228\") " pod="openstack/placement-f5866f968-d652z"
Jan 27 19:00:24 crc kubenswrapper[4853]: I0127 19:00:24.796946 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5bxwz\" (UniqueName: \"kubernetes.io/projected/81ce3654-e156-4fa9-9399-3824ff16a228-kube-api-access-5bxwz\") pod \"placement-f5866f968-d652z\" (UID: \"81ce3654-e156-4fa9-9399-3824ff16a228\") " pod="openstack/placement-f5866f968-d652z"
Jan 27 19:00:24 crc kubenswrapper[4853]: I0127 19:00:24.796997 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/81ce3654-e156-4fa9-9399-3824ff16a228-internal-tls-certs\") pod \"placement-f5866f968-d652z\" (UID: \"81ce3654-e156-4fa9-9399-3824ff16a228\") " pod="openstack/placement-f5866f968-d652z"
Jan 27 19:00:24 crc kubenswrapper[4853]: I0127 19:00:24.797073 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/81ce3654-e156-4fa9-9399-3824ff16a228-scripts\") pod \"placement-f5866f968-d652z\" (UID: \"81ce3654-e156-4fa9-9399-3824ff16a228\") " pod="openstack/placement-f5866f968-d652z"
Jan 27 19:00:24 crc kubenswrapper[4853]: I0127 19:00:24.797101 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/81ce3654-e156-4fa9-9399-3824ff16a228-config-data\") pod \"placement-f5866f968-d652z\" (UID: \"81ce3654-e156-4fa9-9399-3824ff16a228\") " pod="openstack/placement-f5866f968-d652z"
Jan 27 19:00:24 crc kubenswrapper[4853]: I0127 19:00:24.797133 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/81ce3654-e156-4fa9-9399-3824ff16a228-combined-ca-bundle\") pod \"placement-f5866f968-d652z\" (UID: \"81ce3654-e156-4fa9-9399-3824ff16a228\") " pod="openstack/placement-f5866f968-d652z"
Jan 27 19:00:24 crc kubenswrapper[4853]: I0127 19:00:24.797158 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/81ce3654-e156-4fa9-9399-3824ff16a228-logs\") pod \"placement-f5866f968-d652z\" (UID: \"81ce3654-e156-4fa9-9399-3824ff16a228\") " pod="openstack/placement-f5866f968-d652z"
Jan 27 19:00:24 crc kubenswrapper[4853]: I0127 19:00:24.797206 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/81ce3654-e156-4fa9-9399-3824ff16a228-public-tls-certs\") pod \"placement-f5866f968-d652z\" (UID: \"81ce3654-e156-4fa9-9399-3824ff16a228\") " pod="openstack/placement-f5866f968-d652z"
Jan 27 19:00:24 crc kubenswrapper[4853]: I0127 19:00:24.797894 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/81ce3654-e156-4fa9-9399-3824ff16a228-logs\") pod \"placement-f5866f968-d652z\" (UID: \"81ce3654-e156-4fa9-9399-3824ff16a228\") " pod="openstack/placement-f5866f968-d652z"
Jan 27 19:00:24 crc kubenswrapper[4853]: I0127 19:00:24.813138 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/81ce3654-e156-4fa9-9399-3824ff16a228-combined-ca-bundle\") pod \"placement-f5866f968-d652z\" (UID: \"81ce3654-e156-4fa9-9399-3824ff16a228\") " pod="openstack/placement-f5866f968-d652z"
Jan 27 19:00:24 crc kubenswrapper[4853]: I0127 19:00:24.813836 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/81ce3654-e156-4fa9-9399-3824ff16a228-public-tls-certs\") pod \"placement-f5866f968-d652z\" (UID: \"81ce3654-e156-4fa9-9399-3824ff16a228\") " pod="openstack/placement-f5866f968-d652z"
Jan 27 19:00:24 crc kubenswrapper[4853]: I0127 19:00:24.815536 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/81ce3654-e156-4fa9-9399-3824ff16a228-internal-tls-certs\") pod \"placement-f5866f968-d652z\" (UID: \"81ce3654-e156-4fa9-9399-3824ff16a228\") " pod="openstack/placement-f5866f968-d652z"
Jan 27 19:00:24 crc kubenswrapper[4853]: I0127 19:00:24.816041 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5bxwz\" (UniqueName: \"kubernetes.io/projected/81ce3654-e156-4fa9-9399-3824ff16a228-kube-api-access-5bxwz\") pod \"placement-f5866f968-d652z\" (UID: \"81ce3654-e156-4fa9-9399-3824ff16a228\") " pod="openstack/placement-f5866f968-d652z"
Jan 27 19:00:24 crc kubenswrapper[4853]: I0127 19:00:24.816838 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/81ce3654-e156-4fa9-9399-3824ff16a228-config-data\") pod \"placement-f5866f968-d652z\" (UID: \"81ce3654-e156-4fa9-9399-3824ff16a228\") " pod="openstack/placement-f5866f968-d652z"
Jan 27 19:00:24 crc kubenswrapper[4853]: I0127 19:00:24.818420 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/81ce3654-e156-4fa9-9399-3824ff16a228-scripts\") pod \"placement-f5866f968-d652z\" (UID: \"81ce3654-e156-4fa9-9399-3824ff16a228\") " pod="openstack/placement-f5866f968-d652z"
Jan 27 19:00:24 crc kubenswrapper[4853]: I0127 19:00:24.966673 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-f5866f968-d652z"
Jan 27 19:00:26 crc kubenswrapper[4853]: I0127 19:00:26.095004 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0"
Jan 27 19:00:26 crc kubenswrapper[4853]: I0127 19:00:26.095532 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0"
Jan 27 19:00:26 crc kubenswrapper[4853]: I0127 19:00:26.095543 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0"
Jan 27 19:00:26 crc kubenswrapper[4853]: I0127 19:00:26.095554 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0"
Jan 27 19:00:26 crc kubenswrapper[4853]: I0127 19:00:26.110675 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0"
Jan 27 19:00:26 crc kubenswrapper[4853]: I0127 19:00:26.110741 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0"
Jan 27 19:00:26 crc kubenswrapper[4853]: I0127 19:00:26.110766 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0"
Jan 27 19:00:26 crc kubenswrapper[4853]: I0127 19:00:26.110779 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0"
Jan 27 19:00:26 crc kubenswrapper[4853]: I0127 19:00:26.146453 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0"
Jan 27 19:00:26 crc kubenswrapper[4853]: I0127 19:00:26.159533 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0"
Jan 27 19:00:26 crc kubenswrapper[4853]: I0127 19:00:26.169087 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0"
Jan 27 19:00:26 crc kubenswrapper[4853]: I0127 19:00:26.179493 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0"
Jan 27 19:00:26 crc kubenswrapper[4853]: I0127 19:00:26.334417 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-c78c8d4f6-bchzm"
Jan 27 19:00:26 crc kubenswrapper[4853]: I0127 19:00:26.334476 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-c78c8d4f6-bchzm"
Jan 27 19:00:26 crc kubenswrapper[4853]: I0127 19:00:26.337136 4853 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-c78c8d4f6-bchzm" podUID="28f114cd-daca-4c71-9ecd-64b8008ddbef" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.148:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.148:8443: connect: connection refused"
Jan 27 19:00:26 crc kubenswrapper[4853]: I0127 19:00:26.641089 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-69967664fb-pbqhr"
Jan 27 19:00:26 crc kubenswrapper[4853]: I0127 19:00:26.641167 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-69967664fb-pbqhr"
Jan 27 19:00:26 crc kubenswrapper[4853]: I0127 19:00:26.642879 4853 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-69967664fb-pbqhr" podUID="66d621f7-387b-470d-8e42-bebbfada3bbc" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.149:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.149:8443: connect: connection refused"
Jan 27 19:00:26 crc kubenswrapper[4853]: I0127 19:00:26.817280 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-6c8b67f5cc-gmbgv"
Jan 27 19:00:27 crc kubenswrapper[4853]: I0127 19:00:27.558838 4853 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-785d8bcb8c-snr4r" podUID="6d63a071-50d0-4387-a817-9d65506ac62b" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.143:5353: connect: connection refused"
Jan 27 19:00:28 crc kubenswrapper[4853]: I0127 19:00:28.317611 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-928cv"
Jan 27 19:00:28 crc kubenswrapper[4853]: I0127 19:00:28.366199 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-b7gbn"
Jan 27 19:00:28 crc kubenswrapper[4853]: I0127 19:00:28.480943 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rz552\" (UniqueName: \"kubernetes.io/projected/e0dddcf5-0747-4132-b14f-f67160ca5f27-kube-api-access-rz552\") pod \"e0dddcf5-0747-4132-b14f-f67160ca5f27\" (UID: \"e0dddcf5-0747-4132-b14f-f67160ca5f27\") "
Jan 27 19:00:28 crc kubenswrapper[4853]: I0127 19:00:28.481474 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/8ae89dc3-4a08-42bd-a234-b5e8f948dc23-db-sync-config-data\") pod \"8ae89dc3-4a08-42bd-a234-b5e8f948dc23\" (UID: \"8ae89dc3-4a08-42bd-a234-b5e8f948dc23\") "
Jan 27 19:00:28 crc kubenswrapper[4853]: I0127 19:00:28.486285 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/e0dddcf5-0747-4132-b14f-f67160ca5f27-credential-keys\") pod \"e0dddcf5-0747-4132-b14f-f67160ca5f27\" (UID: \"e0dddcf5-0747-4132-b14f-f67160ca5f27\") "
Jan 27 19:00:28 crc kubenswrapper[4853]: I0127 19:00:28.486364 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e0dddcf5-0747-4132-b14f-f67160ca5f27-config-data\") pod \"e0dddcf5-0747-4132-b14f-f67160ca5f27\" (UID: \"e0dddcf5-0747-4132-b14f-f67160ca5f27\") "
Jan 27 19:00:28 crc kubenswrapper[4853]: I0127 19:00:28.486392 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ae89dc3-4a08-42bd-a234-b5e8f948dc23-combined-ca-bundle\") pod \"8ae89dc3-4a08-42bd-a234-b5e8f948dc23\" (UID: \"8ae89dc3-4a08-42bd-a234-b5e8f948dc23\") "
Jan 27 19:00:28 crc kubenswrapper[4853]: I0127 19:00:28.486427 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e0dddcf5-0747-4132-b14f-f67160ca5f27-scripts\") pod \"e0dddcf5-0747-4132-b14f-f67160ca5f27\" (UID: \"e0dddcf5-0747-4132-b14f-f67160ca5f27\") "
Jan 27 19:00:28 crc kubenswrapper[4853]: I0127 19:00:28.486561 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/e0dddcf5-0747-4132-b14f-f67160ca5f27-fernet-keys\") pod \"e0dddcf5-0747-4132-b14f-f67160ca5f27\" (UID: \"e0dddcf5-0747-4132-b14f-f67160ca5f27\") "
Jan 27 19:00:28 crc kubenswrapper[4853]: I0127 19:00:28.486590 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e0dddcf5-0747-4132-b14f-f67160ca5f27-combined-ca-bundle\") pod \"e0dddcf5-0747-4132-b14f-f67160ca5f27\" (UID: \"e0dddcf5-0747-4132-b14f-f67160ca5f27\") "
Jan 27 19:00:28 crc kubenswrapper[4853]: I0127 19:00:28.486665 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nztcc\" (UniqueName: \"kubernetes.io/projected/8ae89dc3-4a08-42bd-a234-b5e8f948dc23-kube-api-access-nztcc\") pod \"8ae89dc3-4a08-42bd-a234-b5e8f948dc23\" (UID: \"8ae89dc3-4a08-42bd-a234-b5e8f948dc23\") "
Jan 27 19:00:28 crc kubenswrapper[4853]: I0127 19:00:28.488808 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e0dddcf5-0747-4132-b14f-f67160ca5f27-kube-api-access-rz552" (OuterVolumeSpecName: "kube-api-access-rz552") pod "e0dddcf5-0747-4132-b14f-f67160ca5f27" (UID: "e0dddcf5-0747-4132-b14f-f67160ca5f27"). InnerVolumeSpecName "kube-api-access-rz552". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 19:00:28 crc kubenswrapper[4853]: I0127 19:00:28.504806 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8ae89dc3-4a08-42bd-a234-b5e8f948dc23-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "8ae89dc3-4a08-42bd-a234-b5e8f948dc23" (UID: "8ae89dc3-4a08-42bd-a234-b5e8f948dc23"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 19:00:28 crc kubenswrapper[4853]: I0127 19:00:28.505504 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e0dddcf5-0747-4132-b14f-f67160ca5f27-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "e0dddcf5-0747-4132-b14f-f67160ca5f27" (UID: "e0dddcf5-0747-4132-b14f-f67160ca5f27"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 19:00:28 crc kubenswrapper[4853]: I0127 19:00:28.507572 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e0dddcf5-0747-4132-b14f-f67160ca5f27-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "e0dddcf5-0747-4132-b14f-f67160ca5f27" (UID: "e0dddcf5-0747-4132-b14f-f67160ca5f27"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 19:00:28 crc kubenswrapper[4853]: I0127 19:00:28.512332 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8ae89dc3-4a08-42bd-a234-b5e8f948dc23-kube-api-access-nztcc" (OuterVolumeSpecName: "kube-api-access-nztcc") pod "8ae89dc3-4a08-42bd-a234-b5e8f948dc23" (UID: "8ae89dc3-4a08-42bd-a234-b5e8f948dc23"). InnerVolumeSpecName "kube-api-access-nztcc". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 19:00:28 crc kubenswrapper[4853]: I0127 19:00:28.539341 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e0dddcf5-0747-4132-b14f-f67160ca5f27-scripts" (OuterVolumeSpecName: "scripts") pod "e0dddcf5-0747-4132-b14f-f67160ca5f27" (UID: "e0dddcf5-0747-4132-b14f-f67160ca5f27"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 19:00:28 crc kubenswrapper[4853]: I0127 19:00:28.567267 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e0dddcf5-0747-4132-b14f-f67160ca5f27-config-data" (OuterVolumeSpecName: "config-data") pod "e0dddcf5-0747-4132-b14f-f67160ca5f27" (UID: "e0dddcf5-0747-4132-b14f-f67160ca5f27"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 19:00:28 crc kubenswrapper[4853]: I0127 19:00:28.567957 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e0dddcf5-0747-4132-b14f-f67160ca5f27-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e0dddcf5-0747-4132-b14f-f67160ca5f27" (UID: "e0dddcf5-0747-4132-b14f-f67160ca5f27"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 19:00:28 crc kubenswrapper[4853]: I0127 19:00:28.576411 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8ae89dc3-4a08-42bd-a234-b5e8f948dc23-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8ae89dc3-4a08-42bd-a234-b5e8f948dc23" (UID: "8ae89dc3-4a08-42bd-a234-b5e8f948dc23"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 19:00:28 crc kubenswrapper[4853]: I0127 19:00:28.589615 4853 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/e0dddcf5-0747-4132-b14f-f67160ca5f27-credential-keys\") on node \"crc\" DevicePath \"\""
Jan 27 19:00:28 crc kubenswrapper[4853]: I0127 19:00:28.589667 4853 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e0dddcf5-0747-4132-b14f-f67160ca5f27-config-data\") on node \"crc\" DevicePath \"\""
Jan 27 19:00:28 crc kubenswrapper[4853]: I0127 19:00:28.589682 4853 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ae89dc3-4a08-42bd-a234-b5e8f948dc23-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 27 19:00:28 crc kubenswrapper[4853]: I0127 19:00:28.589697 4853 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e0dddcf5-0747-4132-b14f-f67160ca5f27-scripts\") on node \"crc\" DevicePath \"\""
Jan 27 19:00:28 crc kubenswrapper[4853]: I0127 19:00:28.589709 4853 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/e0dddcf5-0747-4132-b14f-f67160ca5f27-fernet-keys\") on node \"crc\" DevicePath \"\""
Jan 27 19:00:28 crc kubenswrapper[4853]: I0127 19:00:28.589720 4853 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e0dddcf5-0747-4132-b14f-f67160ca5f27-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 27 19:00:28 crc kubenswrapper[4853]: I0127 19:00:28.589731 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nztcc\" (UniqueName: \"kubernetes.io/projected/8ae89dc3-4a08-42bd-a234-b5e8f948dc23-kube-api-access-nztcc\") on node \"crc\" DevicePath \"\""
Jan 27 19:00:28 crc kubenswrapper[4853]: I0127 19:00:28.589747 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rz552\" (UniqueName: \"kubernetes.io/projected/e0dddcf5-0747-4132-b14f-f67160ca5f27-kube-api-access-rz552\") on node \"crc\" DevicePath \"\""
Jan 27 19:00:28 crc kubenswrapper[4853]: I0127 19:00:28.589758 4853 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/8ae89dc3-4a08-42bd-a234-b5e8f948dc23-db-sync-config-data\") on node \"crc\" DevicePath \"\""
Jan 27 19:00:28 crc kubenswrapper[4853]: I0127 19:00:28.646649 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-928cv" event={"ID":"e0dddcf5-0747-4132-b14f-f67160ca5f27","Type":"ContainerDied","Data":"4a6aeb2682c85fbfce7e42f57cf794d0e315f736abe722a09af9a227cc0936b9"}
Jan 27 19:00:28 crc kubenswrapper[4853]: I0127 19:00:28.646717 4853 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4a6aeb2682c85fbfce7e42f57cf794d0e315f736abe722a09af9a227cc0936b9"
Jan 27 19:00:28 crc kubenswrapper[4853]: I0127 19:00:28.646812 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-928cv"
Jan 27 19:00:28 crc kubenswrapper[4853]: I0127 19:00:28.668521 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-785d8bcb8c-snr4r"
Jan 27 19:00:28 crc kubenswrapper[4853]: I0127 19:00:28.687583 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-b7gbn" event={"ID":"8ae89dc3-4a08-42bd-a234-b5e8f948dc23","Type":"ContainerDied","Data":"7245d827850f957f8f9c822bed4b84cf235312a73f07aebd24c35196310a0cc2"}
Jan 27 19:00:28 crc kubenswrapper[4853]: I0127 19:00:28.687646 4853 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7245d827850f957f8f9c822bed4b84cf235312a73f07aebd24c35196310a0cc2"
Jan 27 19:00:28 crc kubenswrapper[4853]: I0127 19:00:28.687871 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-b7gbn"
Jan 27 19:00:28 crc kubenswrapper[4853]: I0127 19:00:28.760398 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"afc78a65-bfa6-42ff-a84a-f90dd740ffbf","Type":"ContainerStarted","Data":"07c01fa323e543321032a669c321880305b06ed4ab384d27ea8a800bdbcdc348"}
Jan 27 19:00:28 crc kubenswrapper[4853]: I0127 19:00:28.807186 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6d63a071-50d0-4387-a817-9d65506ac62b-config\") pod \"6d63a071-50d0-4387-a817-9d65506ac62b\" (UID: \"6d63a071-50d0-4387-a817-9d65506ac62b\") "
Jan 27 19:00:28 crc kubenswrapper[4853]: I0127 19:00:28.808798 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gksjl\" (UniqueName: \"kubernetes.io/projected/6d63a071-50d0-4387-a817-9d65506ac62b-kube-api-access-gksjl\") pod \"6d63a071-50d0-4387-a817-9d65506ac62b\" (UID: \"6d63a071-50d0-4387-a817-9d65506ac62b\") "
Jan 27 19:00:28 crc kubenswrapper[4853]: I0127 19:00:28.808949 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6d63a071-50d0-4387-a817-9d65506ac62b-ovsdbserver-sb\") pod \"6d63a071-50d0-4387-a817-9d65506ac62b\" (UID: \"6d63a071-50d0-4387-a817-9d65506ac62b\") "
Jan 27 19:00:28 crc kubenswrapper[4853]: I0127 19:00:28.809192 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6d63a071-50d0-4387-a817-9d65506ac62b-ovsdbserver-nb\") pod \"6d63a071-50d0-4387-a817-9d65506ac62b\" (UID: \"6d63a071-50d0-4387-a817-9d65506ac62b\") "
Jan 27 19:00:28 crc kubenswrapper[4853]: I0127 19:00:28.809298 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6d63a071-50d0-4387-a817-9d65506ac62b-dns-svc\") pod \"6d63a071-50d0-4387-a817-9d65506ac62b\" (UID: \"6d63a071-50d0-4387-a817-9d65506ac62b\") "
Jan 27 19:00:28 crc kubenswrapper[4853]: I0127 19:00:28.809464 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6d63a071-50d0-4387-a817-9d65506ac62b-dns-swift-storage-0\") pod \"6d63a071-50d0-4387-a817-9d65506ac62b\" (UID: \"6d63a071-50d0-4387-a817-9d65506ac62b\") "
Jan 27 19:00:28 crc kubenswrapper[4853]: I0127 19:00:28.831733 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6d63a071-50d0-4387-a817-9d65506ac62b-kube-api-access-gksjl" (OuterVolumeSpecName: "kube-api-access-gksjl") pod "6d63a071-50d0-4387-a817-9d65506ac62b" (UID: "6d63a071-50d0-4387-a817-9d65506ac62b"). InnerVolumeSpecName "kube-api-access-gksjl". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 19:00:28 crc kubenswrapper[4853]: I0127 19:00:28.869166 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-f5866f968-d652z"]
Jan 27 19:00:28 crc kubenswrapper[4853]: I0127 19:00:28.875144 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6d63a071-50d0-4387-a817-9d65506ac62b-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "6d63a071-50d0-4387-a817-9d65506ac62b" (UID: "6d63a071-50d0-4387-a817-9d65506ac62b"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 19:00:28 crc kubenswrapper[4853]: W0127 19:00:28.889462 4853 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod81ce3654_e156_4fa9_9399_3824ff16a228.slice/crio-1871ef9c0d73c26d5a70aca7d164c1d4e6f9f359b0c6ab157dcb31ad80d3c223 WatchSource:0}: Error finding container 1871ef9c0d73c26d5a70aca7d164c1d4e6f9f359b0c6ab157dcb31ad80d3c223: Status 404 returned error can't find the container with id 1871ef9c0d73c26d5a70aca7d164c1d4e6f9f359b0c6ab157dcb31ad80d3c223
Jan 27 19:00:28 crc kubenswrapper[4853]: I0127 19:00:28.903249 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6d63a071-50d0-4387-a817-9d65506ac62b-config" (OuterVolumeSpecName: "config") pod "6d63a071-50d0-4387-a817-9d65506ac62b" (UID: "6d63a071-50d0-4387-a817-9d65506ac62b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 19:00:28 crc kubenswrapper[4853]: I0127 19:00:28.911839 4853 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6d63a071-50d0-4387-a817-9d65506ac62b-config\") on node \"crc\" DevicePath \"\""
Jan 27 19:00:28 crc kubenswrapper[4853]: I0127 19:00:28.911878 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gksjl\" (UniqueName: \"kubernetes.io/projected/6d63a071-50d0-4387-a817-9d65506ac62b-kube-api-access-gksjl\") on node \"crc\" DevicePath \"\""
Jan 27 19:00:28 crc kubenswrapper[4853]: I0127 19:00:28.911893 4853 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6d63a071-50d0-4387-a817-9d65506ac62b-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Jan 27 19:00:28 crc kubenswrapper[4853]: I0127 19:00:28.917610 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6d63a071-50d0-4387-a817-9d65506ac62b-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "6d63a071-50d0-4387-a817-9d65506ac62b" (UID: "6d63a071-50d0-4387-a817-9d65506ac62b"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 19:00:28 crc kubenswrapper[4853]: I0127 19:00:28.948698 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6d63a071-50d0-4387-a817-9d65506ac62b-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "6d63a071-50d0-4387-a817-9d65506ac62b" (UID: "6d63a071-50d0-4387-a817-9d65506ac62b"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 19:00:28 crc kubenswrapper[4853]: I0127 19:00:28.965887 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6d63a071-50d0-4387-a817-9d65506ac62b-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "6d63a071-50d0-4387-a817-9d65506ac62b" (UID: "6d63a071-50d0-4387-a817-9d65506ac62b"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 19:00:29 crc kubenswrapper[4853]: I0127 19:00:29.015149 4853 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6d63a071-50d0-4387-a817-9d65506ac62b-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Jan 27 19:00:29 crc kubenswrapper[4853]: I0127 19:00:29.015225 4853 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6d63a071-50d0-4387-a817-9d65506ac62b-dns-svc\") on node \"crc\" DevicePath \"\""
Jan 27 19:00:29 crc kubenswrapper[4853]: I0127 19:00:29.015239 4853 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6d63a071-50d0-4387-a817-9d65506ac62b-dns-swift-storage-0\") on node \"crc\" DevicePath \"\""
Jan 27 19:00:29 crc kubenswrapper[4853]: I0127 19:00:29.524515 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-54f5975d7b-jvtmz"]
Jan 27 19:00:29 crc kubenswrapper[4853]: E0127 19:00:29.525646 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e0dddcf5-0747-4132-b14f-f67160ca5f27" containerName="keystone-bootstrap"
Jan 27 19:00:29 crc kubenswrapper[4853]: I0127 19:00:29.525731 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="e0dddcf5-0747-4132-b14f-f67160ca5f27" containerName="keystone-bootstrap"
Jan 27 19:00:29 crc kubenswrapper[4853]: E0127 19:00:29.525810 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6d63a071-50d0-4387-a817-9d65506ac62b" containerName="init"
Jan 27 19:00:29 crc kubenswrapper[4853]: I0127 19:00:29.525873 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="6d63a071-50d0-4387-a817-9d65506ac62b" containerName="init"
Jan 27 19:00:29 crc kubenswrapper[4853]: E0127 19:00:29.525951 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8ae89dc3-4a08-42bd-a234-b5e8f948dc23" containerName="barbican-db-sync"
Jan 27 19:00:29 crc kubenswrapper[4853]: I0127 19:00:29.526018 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="8ae89dc3-4a08-42bd-a234-b5e8f948dc23" containerName="barbican-db-sync"
Jan 27 19:00:29 crc kubenswrapper[4853]: E0127 19:00:29.526107 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6d63a071-50d0-4387-a817-9d65506ac62b" containerName="dnsmasq-dns"
Jan 27 19:00:29 crc kubenswrapper[4853]: I0127 19:00:29.526212 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="6d63a071-50d0-4387-a817-9d65506ac62b" containerName="dnsmasq-dns"
Jan 27 19:00:29 crc kubenswrapper[4853]: I0127 19:00:29.526567 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="8ae89dc3-4a08-42bd-a234-b5e8f948dc23" containerName="barbican-db-sync"
Jan 27 19:00:29 crc kubenswrapper[4853]: I0127 19:00:29.526691 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="6d63a071-50d0-4387-a817-9d65506ac62b" containerName="dnsmasq-dns"
Jan 27 19:00:29 crc kubenswrapper[4853]: I0127 19:00:29.526771 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="e0dddcf5-0747-4132-b14f-f67160ca5f27" containerName="keystone-bootstrap"
Jan 27 19:00:29 crc kubenswrapper[4853]: I0127 19:00:29.527648 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-54f5975d7b-jvtmz"
Jan 27 19:00:29 crc kubenswrapper[4853]: I0127 19:00:29.533833 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-public-svc"
Jan 27 19:00:29 crc kubenswrapper[4853]: I0127 19:00:29.534056 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-kgs4m"
Jan 27 19:00:29 crc kubenswrapper[4853]: I0127 19:00:29.534205 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-internal-svc"
Jan 27 19:00:29 crc kubenswrapper[4853]: I0127 19:00:29.534330 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone"
Jan 27 19:00:29 crc kubenswrapper[4853]: I0127 19:00:29.534508 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data"
Jan 27 19:00:29 crc kubenswrapper[4853]: I0127 19:00:29.534335 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts"
Jan 27 19:00:29 crc kubenswrapper[4853]: I0127 19:00:29.594032 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-54f5975d7b-jvtmz"]
Jan 27 19:00:29 crc kubenswrapper[4853]: I0127 19:00:29.653347 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/b033907b-77e1-47e8-8921-6cb6e40f5f06-fernet-keys\") pod \"keystone-54f5975d7b-jvtmz\" (UID: \"b033907b-77e1-47e8-8921-6cb6e40f5f06\") " pod="openstack/keystone-54f5975d7b-jvtmz"
Jan 27 19:00:29 crc kubenswrapper[4853]: I0127 19:00:29.665771 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/b033907b-77e1-47e8-8921-6cb6e40f5f06-credential-keys\") pod \"keystone-54f5975d7b-jvtmz\" (UID: \"b033907b-77e1-47e8-8921-6cb6e40f5f06\") " pod="openstack/keystone-54f5975d7b-jvtmz"
Jan 27 19:00:29 crc kubenswrapper[4853]: I0127 19:00:29.666093 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8tqjr\" (UniqueName: \"kubernetes.io/projected/b033907b-77e1-47e8-8921-6cb6e40f5f06-kube-api-access-8tqjr\") pod \"keystone-54f5975d7b-jvtmz\" (UID: \"b033907b-77e1-47e8-8921-6cb6e40f5f06\") " pod="openstack/keystone-54f5975d7b-jvtmz"
Jan 27 19:00:29 crc kubenswrapper[4853]: I0127 19:00:29.679622 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b033907b-77e1-47e8-8921-6cb6e40f5f06-scripts\") pod \"keystone-54f5975d7b-jvtmz\" (UID: \"b033907b-77e1-47e8-8921-6cb6e40f5f06\") " pod="openstack/keystone-54f5975d7b-jvtmz"
Jan 27 19:00:29 crc kubenswrapper[4853]: I0127 19:00:29.679713 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b033907b-77e1-47e8-8921-6cb6e40f5f06-combined-ca-bundle\") pod \"keystone-54f5975d7b-jvtmz\" (UID: \"b033907b-77e1-47e8-8921-6cb6e40f5f06\") " pod="openstack/keystone-54f5975d7b-jvtmz"
Jan 27 19:00:29 crc kubenswrapper[4853]: I0127 19:00:29.679739 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b033907b-77e1-47e8-8921-6cb6e40f5f06-internal-tls-certs\") pod \"keystone-54f5975d7b-jvtmz\" (UID: \"b033907b-77e1-47e8-8921-6cb6e40f5f06\") " pod="openstack/keystone-54f5975d7b-jvtmz"
Jan 27 19:00:29 crc kubenswrapper[4853]: I0127 19:00:29.679767 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b033907b-77e1-47e8-8921-6cb6e40f5f06-config-data\") pod \"keystone-54f5975d7b-jvtmz\" (UID: \"b033907b-77e1-47e8-8921-6cb6e40f5f06\") " pod="openstack/keystone-54f5975d7b-jvtmz"
Jan 27 19:00:29 crc kubenswrapper[4853]: I0127 19:00:29.680090 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b033907b-77e1-47e8-8921-6cb6e40f5f06-public-tls-certs\") pod \"keystone-54f5975d7b-jvtmz\" (UID: \"b033907b-77e1-47e8-8921-6cb6e40f5f06\") " pod="openstack/keystone-54f5975d7b-jvtmz"
Jan 27 19:00:29 crc kubenswrapper[4853]: I0127 19:00:29.781961 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/b033907b-77e1-47e8-8921-6cb6e40f5f06-fernet-keys\") pod \"keystone-54f5975d7b-jvtmz\" (UID: \"b033907b-77e1-47e8-8921-6cb6e40f5f06\") " pod="openstack/keystone-54f5975d7b-jvtmz"
Jan 27 19:00:29 crc kubenswrapper[4853]: I0127 19:00:29.782024 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/b033907b-77e1-47e8-8921-6cb6e40f5f06-credential-keys\") pod \"keystone-54f5975d7b-jvtmz\" (UID: \"b033907b-77e1-47e8-8921-6cb6e40f5f06\") " pod="openstack/keystone-54f5975d7b-jvtmz"
Jan 27 19:00:29 crc kubenswrapper[4853]: I0127 19:00:29.782049 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8tqjr\" (UniqueName: \"kubernetes.io/projected/b033907b-77e1-47e8-8921-6cb6e40f5f06-kube-api-access-8tqjr\") pod \"keystone-54f5975d7b-jvtmz\" (UID: \"b033907b-77e1-47e8-8921-6cb6e40f5f06\") " pod="openstack/keystone-54f5975d7b-jvtmz"
Jan 27 19:00:29 crc kubenswrapper[4853]: I0127 19:00:29.782098 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b033907b-77e1-47e8-8921-6cb6e40f5f06-scripts\") pod \"keystone-54f5975d7b-jvtmz\" (UID: \"b033907b-77e1-47e8-8921-6cb6e40f5f06\") " pod="openstack/keystone-54f5975d7b-jvtmz"
Jan 27 19:00:29 crc kubenswrapper[4853]: I0127 19:00:29.782211 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b033907b-77e1-47e8-8921-6cb6e40f5f06-internal-tls-certs\") pod \"keystone-54f5975d7b-jvtmz\" (UID: \"b033907b-77e1-47e8-8921-6cb6e40f5f06\") " pod="openstack/keystone-54f5975d7b-jvtmz"
Jan 27 19:00:29 crc kubenswrapper[4853]: I0127 19:00:29.782236 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b033907b-77e1-47e8-8921-6cb6e40f5f06-combined-ca-bundle\") pod \"keystone-54f5975d7b-jvtmz\" (UID: \"b033907b-77e1-47e8-8921-6cb6e40f5f06\") " pod="openstack/keystone-54f5975d7b-jvtmz"
Jan 27 19:00:29 crc kubenswrapper[4853]: I0127 19:00:29.782254 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b033907b-77e1-47e8-8921-6cb6e40f5f06-config-data\") pod \"keystone-54f5975d7b-jvtmz\" (UID: \"b033907b-77e1-47e8-8921-6cb6e40f5f06\") " pod="openstack/keystone-54f5975d7b-jvtmz"
Jan
27 19:00:29 crc kubenswrapper[4853]: I0127 19:00:29.782294 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b033907b-77e1-47e8-8921-6cb6e40f5f06-public-tls-certs\") pod \"keystone-54f5975d7b-jvtmz\" (UID: \"b033907b-77e1-47e8-8921-6cb6e40f5f06\") " pod="openstack/keystone-54f5975d7b-jvtmz" Jan 27 19:00:29 crc kubenswrapper[4853]: I0127 19:00:29.787193 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b033907b-77e1-47e8-8921-6cb6e40f5f06-combined-ca-bundle\") pod \"keystone-54f5975d7b-jvtmz\" (UID: \"b033907b-77e1-47e8-8921-6cb6e40f5f06\") " pod="openstack/keystone-54f5975d7b-jvtmz" Jan 27 19:00:29 crc kubenswrapper[4853]: I0127 19:00:29.790288 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-c5749dd6f-h76dt"] Jan 27 19:00:29 crc kubenswrapper[4853]: I0127 19:00:29.791899 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-c5749dd6f-h76dt" Jan 27 19:00:29 crc kubenswrapper[4853]: I0127 19:00:29.792227 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/b033907b-77e1-47e8-8921-6cb6e40f5f06-credential-keys\") pod \"keystone-54f5975d7b-jvtmz\" (UID: \"b033907b-77e1-47e8-8921-6cb6e40f5f06\") " pod="openstack/keystone-54f5975d7b-jvtmz" Jan 27 19:00:29 crc kubenswrapper[4853]: I0127 19:00:29.792379 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b033907b-77e1-47e8-8921-6cb6e40f5f06-scripts\") pod \"keystone-54f5975d7b-jvtmz\" (UID: \"b033907b-77e1-47e8-8921-6cb6e40f5f06\") " pod="openstack/keystone-54f5975d7b-jvtmz" Jan 27 19:00:29 crc kubenswrapper[4853]: I0127 19:00:29.795393 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-worker-config-data" Jan 27 19:00:29 crc kubenswrapper[4853]: I0127 19:00:29.801338 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-jl89l" Jan 27 19:00:29 crc kubenswrapper[4853]: I0127 19:00:29.806935 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-f5866f968-d652z" event={"ID":"81ce3654-e156-4fa9-9399-3824ff16a228","Type":"ContainerStarted","Data":"62ae5c8a113bdbe11f623eab9c3733e8de6267c17303fe5f7102dc6899c63648"} Jan 27 19:00:29 crc kubenswrapper[4853]: I0127 19:00:29.806993 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-f5866f968-d652z" event={"ID":"81ce3654-e156-4fa9-9399-3824ff16a228","Type":"ContainerStarted","Data":"2d4230e682f398101a8c9a3f8782c058817b021d601c4997db89375214034350"} Jan 27 19:00:29 crc kubenswrapper[4853]: I0127 19:00:29.807002 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-f5866f968-d652z" event={"ID":"81ce3654-e156-4fa9-9399-3824ff16a228","Type":"ContainerStarted","Data":"1871ef9c0d73c26d5a70aca7d164c1d4e6f9f359b0c6ab157dcb31ad80d3c223"} Jan 27 19:00:29 crc kubenswrapper[4853]: I0127 19:00:29.807988 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-f5866f968-d652z" Jan 27 19:00:29 crc kubenswrapper[4853]: I0127 19:00:29.808026 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-f5866f968-d652z" Jan 27 19:00:29 crc kubenswrapper[4853]: I0127 19:00:29.808446 4853 
reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Jan 27 19:00:29 crc kubenswrapper[4853]: I0127 19:00:29.808641 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b033907b-77e1-47e8-8921-6cb6e40f5f06-config-data\") pod \"keystone-54f5975d7b-jvtmz\" (UID: \"b033907b-77e1-47e8-8921-6cb6e40f5f06\") " pod="openstack/keystone-54f5975d7b-jvtmz" Jan 27 19:00:29 crc kubenswrapper[4853]: I0127 19:00:29.808805 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b033907b-77e1-47e8-8921-6cb6e40f5f06-internal-tls-certs\") pod \"keystone-54f5975d7b-jvtmz\" (UID: \"b033907b-77e1-47e8-8921-6cb6e40f5f06\") " pod="openstack/keystone-54f5975d7b-jvtmz" Jan 27 19:00:29 crc kubenswrapper[4853]: I0127 19:00:29.809618 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/b033907b-77e1-47e8-8921-6cb6e40f5f06-fernet-keys\") pod \"keystone-54f5975d7b-jvtmz\" (UID: \"b033907b-77e1-47e8-8921-6cb6e40f5f06\") " pod="openstack/keystone-54f5975d7b-jvtmz" Jan 27 19:00:29 crc kubenswrapper[4853]: I0127 19:00:29.811935 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b033907b-77e1-47e8-8921-6cb6e40f5f06-public-tls-certs\") pod \"keystone-54f5975d7b-jvtmz\" (UID: \"b033907b-77e1-47e8-8921-6cb6e40f5f06\") " pod="openstack/keystone-54f5975d7b-jvtmz" Jan 27 19:00:29 crc kubenswrapper[4853]: I0127 19:00:29.814853 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-785d8bcb8c-snr4r" event={"ID":"6d63a071-50d0-4387-a817-9d65506ac62b","Type":"ContainerDied","Data":"b2bf4edb243956411b36cace855f295752e45d2d34e9de3674924a640baa6f0c"} Jan 27 19:00:29 crc kubenswrapper[4853]: I0127 19:00:29.814911 4853 scope.go:117] "RemoveContainer" containerID="afc470f3509df01b0e202009191a6d8823df4dba071a2a16b5568dfa46544a66" Jan 27 19:00:29 crc kubenswrapper[4853]: I0127 19:00:29.815047 4853 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-785d8bcb8c-snr4r" Jan 27 19:00:29 crc kubenswrapper[4853]: I0127 19:00:29.842512 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-rfjdk" event={"ID":"b1d33900-476d-4c86-a501-4490c01000ca","Type":"ContainerStarted","Data":"42c545f9f78b908ce08838b23cd42d672651aed1a85a7c0cb36a4907f5cc18d2"} Jan 27 19:00:29 crc kubenswrapper[4853]: I0127 19:00:29.887394 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8tqjr\" (UniqueName: \"kubernetes.io/projected/b033907b-77e1-47e8-8921-6cb6e40f5f06-kube-api-access-8tqjr\") pod \"keystone-54f5975d7b-jvtmz\" (UID: \"b033907b-77e1-47e8-8921-6cb6e40f5f06\") " pod="openstack/keystone-54f5975d7b-jvtmz" Jan 27 19:00:29 crc kubenswrapper[4853]: I0127 19:00:29.887832 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/127380a8-99a3-455d-becd-78835af33867-config-data-custom\") pod \"barbican-worker-c5749dd6f-h76dt\" (UID: \"127380a8-99a3-455d-becd-78835af33867\") " pod="openstack/barbican-worker-c5749dd6f-h76dt" Jan 27 19:00:29 crc kubenswrapper[4853]: I0127 19:00:29.887879 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q7m87\" (UniqueName: \"kubernetes.io/projected/127380a8-99a3-455d-becd-78835af33867-kube-api-access-q7m87\") pod \"barbican-worker-c5749dd6f-h76dt\" (UID: \"127380a8-99a3-455d-becd-78835af33867\") " pod="openstack/barbican-worker-c5749dd6f-h76dt" Jan 27 19:00:29 crc kubenswrapper[4853]: I0127 19:00:29.887977 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/127380a8-99a3-455d-becd-78835af33867-logs\") pod \"barbican-worker-c5749dd6f-h76dt\" (UID: \"127380a8-99a3-455d-becd-78835af33867\") " pod="openstack/barbican-worker-c5749dd6f-h76dt" Jan 27 19:00:29 crc kubenswrapper[4853]: I0127 19:00:29.888004 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/127380a8-99a3-455d-becd-78835af33867-combined-ca-bundle\") pod \"barbican-worker-c5749dd6f-h76dt\" (UID: \"127380a8-99a3-455d-becd-78835af33867\") " pod="openstack/barbican-worker-c5749dd6f-h76dt" Jan 27 19:00:29 crc kubenswrapper[4853]: I0127 19:00:29.888083 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/127380a8-99a3-455d-becd-78835af33867-config-data\") pod \"barbican-worker-c5749dd6f-h76dt\" (UID: \"127380a8-99a3-455d-becd-78835af33867\") " pod="openstack/barbican-worker-c5749dd6f-h76dt" Jan 27 19:00:29 crc kubenswrapper[4853]: I0127 19:00:29.894512 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-c5749dd6f-h76dt"] Jan 27 19:00:29 crc kubenswrapper[4853]: I0127 19:00:29.905534 4853 scope.go:117] "RemoveContainer" containerID="ab90fa3dd5e59b14892093256632cf9e0ed63f6454fc14368dfa96c81e3892e4" Jan 27 19:00:29 crc kubenswrapper[4853]: I0127 19:00:29.925244 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-5b468778c8-dwvbl"] Jan 27 19:00:29 crc kubenswrapper[4853]: I0127 19:00:29.927670 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-keystone-listener-5b468778c8-dwvbl" Jan 27 19:00:29 crc kubenswrapper[4853]: I0127 19:00:29.939778 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-keystone-listener-config-data" Jan 27 19:00:30 crc kubenswrapper[4853]: I0127 19:00:30.023626 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q7m87\" (UniqueName: \"kubernetes.io/projected/127380a8-99a3-455d-becd-78835af33867-kube-api-access-q7m87\") pod \"barbican-worker-c5749dd6f-h76dt\" (UID: \"127380a8-99a3-455d-becd-78835af33867\") " pod="openstack/barbican-worker-c5749dd6f-h76dt" Jan 27 19:00:30 crc kubenswrapper[4853]: I0127 19:00:30.023738 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ba985127-2044-4a64-af56-ac3452f6f939-config-data\") pod \"barbican-keystone-listener-5b468778c8-dwvbl\" (UID: \"ba985127-2044-4a64-af56-ac3452f6f939\") " pod="openstack/barbican-keystone-listener-5b468778c8-dwvbl" Jan 27 19:00:30 crc kubenswrapper[4853]: I0127 19:00:30.023768 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8nfkl\" (UniqueName: \"kubernetes.io/projected/ba985127-2044-4a64-af56-ac3452f6f939-kube-api-access-8nfkl\") pod \"barbican-keystone-listener-5b468778c8-dwvbl\" (UID: \"ba985127-2044-4a64-af56-ac3452f6f939\") " pod="openstack/barbican-keystone-listener-5b468778c8-dwvbl" Jan 27 19:00:30 crc kubenswrapper[4853]: I0127 19:00:30.023806 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/127380a8-99a3-455d-becd-78835af33867-logs\") pod \"barbican-worker-c5749dd6f-h76dt\" (UID: \"127380a8-99a3-455d-becd-78835af33867\") " pod="openstack/barbican-worker-c5749dd6f-h76dt" Jan 27 19:00:30 crc kubenswrapper[4853]: I0127 19:00:30.023842 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/127380a8-99a3-455d-becd-78835af33867-combined-ca-bundle\") pod \"barbican-worker-c5749dd6f-h76dt\" (UID: \"127380a8-99a3-455d-becd-78835af33867\") " pod="openstack/barbican-worker-c5749dd6f-h76dt" Jan 27 19:00:30 crc kubenswrapper[4853]: I0127 19:00:30.023888 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ba985127-2044-4a64-af56-ac3452f6f939-logs\") pod \"barbican-keystone-listener-5b468778c8-dwvbl\" (UID: \"ba985127-2044-4a64-af56-ac3452f6f939\") " pod="openstack/barbican-keystone-listener-5b468778c8-dwvbl" Jan 27 19:00:30 crc kubenswrapper[4853]: I0127 19:00:30.023911 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ba985127-2044-4a64-af56-ac3452f6f939-combined-ca-bundle\") pod \"barbican-keystone-listener-5b468778c8-dwvbl\" (UID: \"ba985127-2044-4a64-af56-ac3452f6f939\") " pod="openstack/barbican-keystone-listener-5b468778c8-dwvbl" Jan 27 19:00:30 crc kubenswrapper[4853]: I0127 19:00:30.023936 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/127380a8-99a3-455d-becd-78835af33867-config-data\") pod \"barbican-worker-c5749dd6f-h76dt\" (UID: \"127380a8-99a3-455d-becd-78835af33867\") " 
pod="openstack/barbican-worker-c5749dd6f-h76dt" Jan 27 19:00:30 crc kubenswrapper[4853]: I0127 19:00:30.023963 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ba985127-2044-4a64-af56-ac3452f6f939-config-data-custom\") pod \"barbican-keystone-listener-5b468778c8-dwvbl\" (UID: \"ba985127-2044-4a64-af56-ac3452f6f939\") " pod="openstack/barbican-keystone-listener-5b468778c8-dwvbl" Jan 27 19:00:30 crc kubenswrapper[4853]: I0127 19:00:30.024016 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/127380a8-99a3-455d-becd-78835af33867-config-data-custom\") pod \"barbican-worker-c5749dd6f-h76dt\" (UID: \"127380a8-99a3-455d-becd-78835af33867\") " pod="openstack/barbican-worker-c5749dd6f-h76dt" Jan 27 19:00:30 crc kubenswrapper[4853]: I0127 19:00:30.030632 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/127380a8-99a3-455d-becd-78835af33867-logs\") pod \"barbican-worker-c5749dd6f-h76dt\" (UID: \"127380a8-99a3-455d-becd-78835af33867\") " pod="openstack/barbican-worker-c5749dd6f-h76dt" Jan 27 19:00:30 crc kubenswrapper[4853]: I0127 19:00:30.033241 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-5b468778c8-dwvbl"] Jan 27 19:00:30 crc kubenswrapper[4853]: I0127 19:00:30.034411 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-sync-rfjdk" podStartSLOduration=4.701155644 podStartE2EDuration="54.034384188s" podCreationTimestamp="2026-01-27 18:59:36 +0000 UTC" firstStartedPulling="2026-01-27 18:59:38.766721698 +0000 UTC m=+1021.229264581" lastFinishedPulling="2026-01-27 19:00:28.099950242 +0000 UTC m=+1070.562493125" observedRunningTime="2026-01-27 19:00:29.966967582 +0000 UTC m=+1072.429510465" watchObservedRunningTime="2026-01-27 19:00:30.034384188 +0000 UTC m=+1072.496927071" Jan 27 19:00:30 crc kubenswrapper[4853]: I0127 19:00:30.038424 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/127380a8-99a3-455d-becd-78835af33867-config-data-custom\") pod \"barbican-worker-c5749dd6f-h76dt\" (UID: \"127380a8-99a3-455d-becd-78835af33867\") " pod="openstack/barbican-worker-c5749dd6f-h76dt" Jan 27 19:00:30 crc kubenswrapper[4853]: I0127 19:00:30.039043 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/127380a8-99a3-455d-becd-78835af33867-combined-ca-bundle\") pod \"barbican-worker-c5749dd6f-h76dt\" (UID: \"127380a8-99a3-455d-becd-78835af33867\") " pod="openstack/barbican-worker-c5749dd6f-h76dt" Jan 27 19:00:30 crc kubenswrapper[4853]: I0127 19:00:30.059645 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q7m87\" (UniqueName: \"kubernetes.io/projected/127380a8-99a3-455d-becd-78835af33867-kube-api-access-q7m87\") pod \"barbican-worker-c5749dd6f-h76dt\" (UID: \"127380a8-99a3-455d-becd-78835af33867\") " pod="openstack/barbican-worker-c5749dd6f-h76dt" Jan 27 19:00:30 crc kubenswrapper[4853]: I0127 19:00:30.062821 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/127380a8-99a3-455d-becd-78835af33867-config-data\") pod \"barbican-worker-c5749dd6f-h76dt\" (UID: 
\"127380a8-99a3-455d-becd-78835af33867\") " pod="openstack/barbican-worker-c5749dd6f-h76dt" Jan 27 19:00:30 crc kubenswrapper[4853]: I0127 19:00:30.086925 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-b9mgv"] Jan 27 19:00:30 crc kubenswrapper[4853]: I0127 19:00:30.088940 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-85ff748b95-b9mgv" Jan 27 19:00:30 crc kubenswrapper[4853]: I0127 19:00:30.126601 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fc5f9a53-3e79-4d10-9c56-18cf2c2a3de0-ovsdbserver-sb\") pod \"dnsmasq-dns-85ff748b95-b9mgv\" (UID: \"fc5f9a53-3e79-4d10-9c56-18cf2c2a3de0\") " pod="openstack/dnsmasq-dns-85ff748b95-b9mgv" Jan 27 19:00:30 crc kubenswrapper[4853]: I0127 19:00:30.126681 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fc5f9a53-3e79-4d10-9c56-18cf2c2a3de0-config\") pod \"dnsmasq-dns-85ff748b95-b9mgv\" (UID: \"fc5f9a53-3e79-4d10-9c56-18cf2c2a3de0\") " pod="openstack/dnsmasq-dns-85ff748b95-b9mgv" Jan 27 19:00:30 crc kubenswrapper[4853]: I0127 19:00:30.126702 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-46mbw\" (UniqueName: \"kubernetes.io/projected/fc5f9a53-3e79-4d10-9c56-18cf2c2a3de0-kube-api-access-46mbw\") pod \"dnsmasq-dns-85ff748b95-b9mgv\" (UID: \"fc5f9a53-3e79-4d10-9c56-18cf2c2a3de0\") " pod="openstack/dnsmasq-dns-85ff748b95-b9mgv" Jan 27 19:00:30 crc kubenswrapper[4853]: I0127 19:00:30.126777 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ba985127-2044-4a64-af56-ac3452f6f939-config-data\") pod \"barbican-keystone-listener-5b468778c8-dwvbl\" (UID: \"ba985127-2044-4a64-af56-ac3452f6f939\") " pod="openstack/barbican-keystone-listener-5b468778c8-dwvbl" Jan 27 19:00:30 crc kubenswrapper[4853]: I0127 19:00:30.126806 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8nfkl\" (UniqueName: \"kubernetes.io/projected/ba985127-2044-4a64-af56-ac3452f6f939-kube-api-access-8nfkl\") pod \"barbican-keystone-listener-5b468778c8-dwvbl\" (UID: \"ba985127-2044-4a64-af56-ac3452f6f939\") " pod="openstack/barbican-keystone-listener-5b468778c8-dwvbl" Jan 27 19:00:30 crc kubenswrapper[4853]: I0127 19:00:30.126832 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fc5f9a53-3e79-4d10-9c56-18cf2c2a3de0-ovsdbserver-nb\") pod \"dnsmasq-dns-85ff748b95-b9mgv\" (UID: \"fc5f9a53-3e79-4d10-9c56-18cf2c2a3de0\") " pod="openstack/dnsmasq-dns-85ff748b95-b9mgv" Jan 27 19:00:30 crc kubenswrapper[4853]: I0127 19:00:30.126867 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fc5f9a53-3e79-4d10-9c56-18cf2c2a3de0-dns-svc\") pod \"dnsmasq-dns-85ff748b95-b9mgv\" (UID: \"fc5f9a53-3e79-4d10-9c56-18cf2c2a3de0\") " pod="openstack/dnsmasq-dns-85ff748b95-b9mgv" Jan 27 19:00:30 crc kubenswrapper[4853]: I0127 19:00:30.126886 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: 
\"kubernetes.io/configmap/fc5f9a53-3e79-4d10-9c56-18cf2c2a3de0-dns-swift-storage-0\") pod \"dnsmasq-dns-85ff748b95-b9mgv\" (UID: \"fc5f9a53-3e79-4d10-9c56-18cf2c2a3de0\") " pod="openstack/dnsmasq-dns-85ff748b95-b9mgv" Jan 27 19:00:30 crc kubenswrapper[4853]: I0127 19:00:30.126920 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ba985127-2044-4a64-af56-ac3452f6f939-logs\") pod \"barbican-keystone-listener-5b468778c8-dwvbl\" (UID: \"ba985127-2044-4a64-af56-ac3452f6f939\") " pod="openstack/barbican-keystone-listener-5b468778c8-dwvbl" Jan 27 19:00:30 crc kubenswrapper[4853]: I0127 19:00:30.126942 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ba985127-2044-4a64-af56-ac3452f6f939-combined-ca-bundle\") pod \"barbican-keystone-listener-5b468778c8-dwvbl\" (UID: \"ba985127-2044-4a64-af56-ac3452f6f939\") " pod="openstack/barbican-keystone-listener-5b468778c8-dwvbl" Jan 27 19:00:30 crc kubenswrapper[4853]: I0127 19:00:30.126976 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ba985127-2044-4a64-af56-ac3452f6f939-config-data-custom\") pod \"barbican-keystone-listener-5b468778c8-dwvbl\" (UID: \"ba985127-2044-4a64-af56-ac3452f6f939\") " pod="openstack/barbican-keystone-listener-5b468778c8-dwvbl" Jan 27 19:00:30 crc kubenswrapper[4853]: I0127 19:00:30.130196 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-f5866f968-d652z" podStartSLOduration=6.13017156 podStartE2EDuration="6.13017156s" podCreationTimestamp="2026-01-27 19:00:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 19:00:30.051553036 +0000 UTC m=+1072.514095919" watchObservedRunningTime="2026-01-27 19:00:30.13017156 +0000 UTC m=+1072.592714443" Jan 27 19:00:30 crc kubenswrapper[4853]: I0127 19:00:30.137370 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ba985127-2044-4a64-af56-ac3452f6f939-config-data-custom\") pod \"barbican-keystone-listener-5b468778c8-dwvbl\" (UID: \"ba985127-2044-4a64-af56-ac3452f6f939\") " pod="openstack/barbican-keystone-listener-5b468778c8-dwvbl" Jan 27 19:00:30 crc kubenswrapper[4853]: I0127 19:00:30.137689 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ba985127-2044-4a64-af56-ac3452f6f939-logs\") pod \"barbican-keystone-listener-5b468778c8-dwvbl\" (UID: \"ba985127-2044-4a64-af56-ac3452f6f939\") " pod="openstack/barbican-keystone-listener-5b468778c8-dwvbl" Jan 27 19:00:30 crc kubenswrapper[4853]: I0127 19:00:30.158549 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-54f5975d7b-jvtmz" Jan 27 19:00:30 crc kubenswrapper[4853]: I0127 19:00:30.178888 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8nfkl\" (UniqueName: \"kubernetes.io/projected/ba985127-2044-4a64-af56-ac3452f6f939-kube-api-access-8nfkl\") pod \"barbican-keystone-listener-5b468778c8-dwvbl\" (UID: \"ba985127-2044-4a64-af56-ac3452f6f939\") " pod="openstack/barbican-keystone-listener-5b468778c8-dwvbl" Jan 27 19:00:30 crc kubenswrapper[4853]: I0127 19:00:30.206154 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-757c6cc6c8-b7v22"] Jan 27 19:00:30 crc kubenswrapper[4853]: I0127 19:00:30.208186 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-b9mgv"] Jan 27 19:00:30 crc kubenswrapper[4853]: I0127 19:00:30.208282 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-757c6cc6c8-b7v22" Jan 27 19:00:30 crc kubenswrapper[4853]: I0127 19:00:30.229163 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fc5f9a53-3e79-4d10-9c56-18cf2c2a3de0-ovsdbserver-nb\") pod \"dnsmasq-dns-85ff748b95-b9mgv\" (UID: \"fc5f9a53-3e79-4d10-9c56-18cf2c2a3de0\") " pod="openstack/dnsmasq-dns-85ff748b95-b9mgv" Jan 27 19:00:30 crc kubenswrapper[4853]: I0127 19:00:30.229238 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fc5f9a53-3e79-4d10-9c56-18cf2c2a3de0-dns-svc\") pod \"dnsmasq-dns-85ff748b95-b9mgv\" (UID: \"fc5f9a53-3e79-4d10-9c56-18cf2c2a3de0\") " pod="openstack/dnsmasq-dns-85ff748b95-b9mgv" Jan 27 19:00:30 crc kubenswrapper[4853]: I0127 19:00:30.229265 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/fc5f9a53-3e79-4d10-9c56-18cf2c2a3de0-dns-swift-storage-0\") pod \"dnsmasq-dns-85ff748b95-b9mgv\" (UID: \"fc5f9a53-3e79-4d10-9c56-18cf2c2a3de0\") " pod="openstack/dnsmasq-dns-85ff748b95-b9mgv" Jan 27 19:00:30 crc kubenswrapper[4853]: I0127 19:00:30.229361 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fc5f9a53-3e79-4d10-9c56-18cf2c2a3de0-ovsdbserver-sb\") pod \"dnsmasq-dns-85ff748b95-b9mgv\" (UID: \"fc5f9a53-3e79-4d10-9c56-18cf2c2a3de0\") " pod="openstack/dnsmasq-dns-85ff748b95-b9mgv" Jan 27 19:00:30 crc kubenswrapper[4853]: I0127 19:00:30.229407 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fc5f9a53-3e79-4d10-9c56-18cf2c2a3de0-config\") pod \"dnsmasq-dns-85ff748b95-b9mgv\" (UID: \"fc5f9a53-3e79-4d10-9c56-18cf2c2a3de0\") " pod="openstack/dnsmasq-dns-85ff748b95-b9mgv" Jan 27 19:00:30 crc kubenswrapper[4853]: I0127 19:00:30.229426 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-46mbw\" (UniqueName: \"kubernetes.io/projected/fc5f9a53-3e79-4d10-9c56-18cf2c2a3de0-kube-api-access-46mbw\") pod \"dnsmasq-dns-85ff748b95-b9mgv\" (UID: \"fc5f9a53-3e79-4d10-9c56-18cf2c2a3de0\") " pod="openstack/dnsmasq-dns-85ff748b95-b9mgv" Jan 27 19:00:30 crc kubenswrapper[4853]: I0127 19:00:30.234763 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/fc5f9a53-3e79-4d10-9c56-18cf2c2a3de0-config\") pod \"dnsmasq-dns-85ff748b95-b9mgv\" (UID: \"fc5f9a53-3e79-4d10-9c56-18cf2c2a3de0\") " pod="openstack/dnsmasq-dns-85ff748b95-b9mgv" Jan 27 19:00:30 crc kubenswrapper[4853]: I0127 19:00:30.255827 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ba985127-2044-4a64-af56-ac3452f6f939-config-data\") pod \"barbican-keystone-listener-5b468778c8-dwvbl\" (UID: \"ba985127-2044-4a64-af56-ac3452f6f939\") " pod="openstack/barbican-keystone-listener-5b468778c8-dwvbl" Jan 27 19:00:30 crc kubenswrapper[4853]: I0127 19:00:30.257814 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-757c6cc6c8-b7v22"] Jan 27 19:00:30 crc kubenswrapper[4853]: I0127 19:00:30.262015 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-c5749dd6f-h76dt" Jan 27 19:00:30 crc kubenswrapper[4853]: I0127 19:00:30.281809 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-snr4r"] Jan 27 19:00:30 crc kubenswrapper[4853]: I0127 19:00:30.295834 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ba985127-2044-4a64-af56-ac3452f6f939-combined-ca-bundle\") pod \"barbican-keystone-listener-5b468778c8-dwvbl\" (UID: \"ba985127-2044-4a64-af56-ac3452f6f939\") " pod="openstack/barbican-keystone-listener-5b468778c8-dwvbl" Jan 27 19:00:30 crc kubenswrapper[4853]: I0127 19:00:30.303286 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fc5f9a53-3e79-4d10-9c56-18cf2c2a3de0-dns-svc\") pod \"dnsmasq-dns-85ff748b95-b9mgv\" (UID: \"fc5f9a53-3e79-4d10-9c56-18cf2c2a3de0\") " pod="openstack/dnsmasq-dns-85ff748b95-b9mgv" Jan 27 19:00:30 crc kubenswrapper[4853]: I0127 19:00:30.304045 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-46mbw\" (UniqueName: \"kubernetes.io/projected/fc5f9a53-3e79-4d10-9c56-18cf2c2a3de0-kube-api-access-46mbw\") pod \"dnsmasq-dns-85ff748b95-b9mgv\" (UID: \"fc5f9a53-3e79-4d10-9c56-18cf2c2a3de0\") " pod="openstack/dnsmasq-dns-85ff748b95-b9mgv" Jan 27 19:00:30 crc kubenswrapper[4853]: I0127 19:00:30.306311 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/fc5f9a53-3e79-4d10-9c56-18cf2c2a3de0-dns-swift-storage-0\") pod \"dnsmasq-dns-85ff748b95-b9mgv\" (UID: \"fc5f9a53-3e79-4d10-9c56-18cf2c2a3de0\") " pod="openstack/dnsmasq-dns-85ff748b95-b9mgv" Jan 27 19:00:30 crc kubenswrapper[4853]: I0127 19:00:30.320142 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-snr4r"] Jan 27 19:00:30 crc kubenswrapper[4853]: I0127 19:00:30.331675 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d2f2b676-e83e-4107-9cce-525426cd6cbc-combined-ca-bundle\") pod \"barbican-keystone-listener-757c6cc6c8-b7v22\" (UID: \"d2f2b676-e83e-4107-9cce-525426cd6cbc\") " pod="openstack/barbican-keystone-listener-757c6cc6c8-b7v22" Jan 27 19:00:30 crc kubenswrapper[4853]: I0127 19:00:30.333535 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/d2f2b676-e83e-4107-9cce-525426cd6cbc-config-data-custom\") pod \"barbican-keystone-listener-757c6cc6c8-b7v22\" (UID: \"d2f2b676-e83e-4107-9cce-525426cd6cbc\") " pod="openstack/barbican-keystone-listener-757c6cc6c8-b7v22" Jan 27 19:00:30 crc kubenswrapper[4853]: I0127 19:00:30.333792 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m548m\" (UniqueName: \"kubernetes.io/projected/d2f2b676-e83e-4107-9cce-525426cd6cbc-kube-api-access-m548m\") pod \"barbican-keystone-listener-757c6cc6c8-b7v22\" (UID: \"d2f2b676-e83e-4107-9cce-525426cd6cbc\") " pod="openstack/barbican-keystone-listener-757c6cc6c8-b7v22" Jan 27 19:00:30 crc kubenswrapper[4853]: I0127 19:00:30.334037 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d2f2b676-e83e-4107-9cce-525426cd6cbc-logs\") pod \"barbican-keystone-listener-757c6cc6c8-b7v22\" (UID: \"d2f2b676-e83e-4107-9cce-525426cd6cbc\") " pod="openstack/barbican-keystone-listener-757c6cc6c8-b7v22" Jan 27 19:00:30 crc kubenswrapper[4853]: I0127 19:00:30.334077 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d2f2b676-e83e-4107-9cce-525426cd6cbc-config-data\") pod \"barbican-keystone-listener-757c6cc6c8-b7v22\" (UID: \"d2f2b676-e83e-4107-9cce-525426cd6cbc\") " pod="openstack/barbican-keystone-listener-757c6cc6c8-b7v22" Jan 27 19:00:30 crc kubenswrapper[4853]: I0127 19:00:30.343434 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fc5f9a53-3e79-4d10-9c56-18cf2c2a3de0-ovsdbserver-nb\") pod \"dnsmasq-dns-85ff748b95-b9mgv\" (UID: \"fc5f9a53-3e79-4d10-9c56-18cf2c2a3de0\") " pod="openstack/dnsmasq-dns-85ff748b95-b9mgv" Jan 27 19:00:30 crc kubenswrapper[4853]: I0127 19:00:30.347103 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fc5f9a53-3e79-4d10-9c56-18cf2c2a3de0-ovsdbserver-sb\") pod \"dnsmasq-dns-85ff748b95-b9mgv\" (UID: \"fc5f9a53-3e79-4d10-9c56-18cf2c2a3de0\") " pod="openstack/dnsmasq-dns-85ff748b95-b9mgv" Jan 27 19:00:30 crc kubenswrapper[4853]: I0127 19:00:30.361014 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-7d5dd7f58c-gdxtv"] Jan 27 19:00:30 crc kubenswrapper[4853]: I0127 19:00:30.362999 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-worker-7d5dd7f58c-gdxtv" Jan 27 19:00:30 crc kubenswrapper[4853]: I0127 19:00:30.414493 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-7d5dd7f58c-gdxtv"] Jan 27 19:00:30 crc kubenswrapper[4853]: I0127 19:00:30.440723 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d2f2b676-e83e-4107-9cce-525426cd6cbc-config-data-custom\") pod \"barbican-keystone-listener-757c6cc6c8-b7v22\" (UID: \"d2f2b676-e83e-4107-9cce-525426cd6cbc\") " pod="openstack/barbican-keystone-listener-757c6cc6c8-b7v22" Jan 27 19:00:30 crc kubenswrapper[4853]: I0127 19:00:30.440859 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d47c94df-0d90-409c-8bd4-2a237d641021-config-data-custom\") pod \"barbican-worker-7d5dd7f58c-gdxtv\" (UID: \"d47c94df-0d90-409c-8bd4-2a237d641021\") " pod="openstack/barbican-worker-7d5dd7f58c-gdxtv" Jan 27 19:00:30 crc kubenswrapper[4853]: I0127 19:00:30.440943 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m548m\" (UniqueName: \"kubernetes.io/projected/d2f2b676-e83e-4107-9cce-525426cd6cbc-kube-api-access-m548m\") pod \"barbican-keystone-listener-757c6cc6c8-b7v22\" (UID: \"d2f2b676-e83e-4107-9cce-525426cd6cbc\") " pod="openstack/barbican-keystone-listener-757c6cc6c8-b7v22" Jan 27 19:00:30 crc kubenswrapper[4853]: I0127 19:00:30.440975 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d47c94df-0d90-409c-8bd4-2a237d641021-combined-ca-bundle\") pod \"barbican-worker-7d5dd7f58c-gdxtv\" (UID: \"d47c94df-0d90-409c-8bd4-2a237d641021\") " pod="openstack/barbican-worker-7d5dd7f58c-gdxtv" Jan 27 19:00:30 crc kubenswrapper[4853]: I0127 19:00:30.441017 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d47c94df-0d90-409c-8bd4-2a237d641021-logs\") pod \"barbican-worker-7d5dd7f58c-gdxtv\" (UID: \"d47c94df-0d90-409c-8bd4-2a237d641021\") " pod="openstack/barbican-worker-7d5dd7f58c-gdxtv" Jan 27 19:00:30 crc kubenswrapper[4853]: I0127 19:00:30.441035 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s9577\" (UniqueName: \"kubernetes.io/projected/d47c94df-0d90-409c-8bd4-2a237d641021-kube-api-access-s9577\") pod \"barbican-worker-7d5dd7f58c-gdxtv\" (UID: \"d47c94df-0d90-409c-8bd4-2a237d641021\") " pod="openstack/barbican-worker-7d5dd7f58c-gdxtv" Jan 27 19:00:30 crc kubenswrapper[4853]: I0127 19:00:30.441133 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d2f2b676-e83e-4107-9cce-525426cd6cbc-logs\") pod \"barbican-keystone-listener-757c6cc6c8-b7v22\" (UID: \"d2f2b676-e83e-4107-9cce-525426cd6cbc\") " pod="openstack/barbican-keystone-listener-757c6cc6c8-b7v22" Jan 27 19:00:30 crc kubenswrapper[4853]: I0127 19:00:30.441160 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d2f2b676-e83e-4107-9cce-525426cd6cbc-config-data\") pod \"barbican-keystone-listener-757c6cc6c8-b7v22\" (UID: \"d2f2b676-e83e-4107-9cce-525426cd6cbc\") " 
pod="openstack/barbican-keystone-listener-757c6cc6c8-b7v22" Jan 27 19:00:30 crc kubenswrapper[4853]: I0127 19:00:30.441188 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d47c94df-0d90-409c-8bd4-2a237d641021-config-data\") pod \"barbican-worker-7d5dd7f58c-gdxtv\" (UID: \"d47c94df-0d90-409c-8bd4-2a237d641021\") " pod="openstack/barbican-worker-7d5dd7f58c-gdxtv" Jan 27 19:00:30 crc kubenswrapper[4853]: I0127 19:00:30.441232 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d2f2b676-e83e-4107-9cce-525426cd6cbc-combined-ca-bundle\") pod \"barbican-keystone-listener-757c6cc6c8-b7v22\" (UID: \"d2f2b676-e83e-4107-9cce-525426cd6cbc\") " pod="openstack/barbican-keystone-listener-757c6cc6c8-b7v22" Jan 27 19:00:30 crc kubenswrapper[4853]: I0127 19:00:30.445866 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d2f2b676-e83e-4107-9cce-525426cd6cbc-logs\") pod \"barbican-keystone-listener-757c6cc6c8-b7v22\" (UID: \"d2f2b676-e83e-4107-9cce-525426cd6cbc\") " pod="openstack/barbican-keystone-listener-757c6cc6c8-b7v22" Jan 27 19:00:30 crc kubenswrapper[4853]: I0127 19:00:30.455107 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d2f2b676-e83e-4107-9cce-525426cd6cbc-config-data\") pod \"barbican-keystone-listener-757c6cc6c8-b7v22\" (UID: \"d2f2b676-e83e-4107-9cce-525426cd6cbc\") " pod="openstack/barbican-keystone-listener-757c6cc6c8-b7v22" Jan 27 19:00:30 crc kubenswrapper[4853]: I0127 19:00:30.455787 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d2f2b676-e83e-4107-9cce-525426cd6cbc-combined-ca-bundle\") pod \"barbican-keystone-listener-757c6cc6c8-b7v22\" (UID: \"d2f2b676-e83e-4107-9cce-525426cd6cbc\") " pod="openstack/barbican-keystone-listener-757c6cc6c8-b7v22" Jan 27 19:00:30 crc kubenswrapper[4853]: I0127 19:00:30.456423 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d2f2b676-e83e-4107-9cce-525426cd6cbc-config-data-custom\") pod \"barbican-keystone-listener-757c6cc6c8-b7v22\" (UID: \"d2f2b676-e83e-4107-9cce-525426cd6cbc\") " pod="openstack/barbican-keystone-listener-757c6cc6c8-b7v22" Jan 27 19:00:30 crc kubenswrapper[4853]: I0127 19:00:30.457110 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-546564f86b-jnwdt"] Jan 27 19:00:30 crc kubenswrapper[4853]: I0127 19:00:30.466345 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-546564f86b-jnwdt" Jan 27 19:00:30 crc kubenswrapper[4853]: I0127 19:00:30.475283 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-546564f86b-jnwdt"] Jan 27 19:00:30 crc kubenswrapper[4853]: I0127 19:00:30.479609 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m548m\" (UniqueName: \"kubernetes.io/projected/d2f2b676-e83e-4107-9cce-525426cd6cbc-kube-api-access-m548m\") pod \"barbican-keystone-listener-757c6cc6c8-b7v22\" (UID: \"d2f2b676-e83e-4107-9cce-525426cd6cbc\") " pod="openstack/barbican-keystone-listener-757c6cc6c8-b7v22" Jan 27 19:00:30 crc kubenswrapper[4853]: I0127 19:00:30.480173 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-api-config-data" Jan 27 19:00:30 crc kubenswrapper[4853]: I0127 19:00:30.557466 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d47c94df-0d90-409c-8bd4-2a237d641021-config-data\") pod \"barbican-worker-7d5dd7f58c-gdxtv\" (UID: \"d47c94df-0d90-409c-8bd4-2a237d641021\") " pod="openstack/barbican-worker-7d5dd7f58c-gdxtv" Jan 27 19:00:30 crc kubenswrapper[4853]: I0127 19:00:30.557608 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8f37e546-2a7e-49b7-9a9c-0191a746c289-config-data\") pod \"barbican-api-546564f86b-jnwdt\" (UID: \"8f37e546-2a7e-49b7-9a9c-0191a746c289\") " pod="openstack/barbican-api-546564f86b-jnwdt" Jan 27 19:00:30 crc kubenswrapper[4853]: I0127 19:00:30.557644 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8f37e546-2a7e-49b7-9a9c-0191a746c289-config-data-custom\") pod \"barbican-api-546564f86b-jnwdt\" (UID: \"8f37e546-2a7e-49b7-9a9c-0191a746c289\") " pod="openstack/barbican-api-546564f86b-jnwdt" Jan 27 19:00:30 crc kubenswrapper[4853]: I0127 19:00:30.557671 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d47c94df-0d90-409c-8bd4-2a237d641021-config-data-custom\") pod \"barbican-worker-7d5dd7f58c-gdxtv\" (UID: \"d47c94df-0d90-409c-8bd4-2a237d641021\") " pod="openstack/barbican-worker-7d5dd7f58c-gdxtv" Jan 27 19:00:30 crc kubenswrapper[4853]: I0127 19:00:30.557698 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wc8k4\" (UniqueName: \"kubernetes.io/projected/8f37e546-2a7e-49b7-9a9c-0191a746c289-kube-api-access-wc8k4\") pod \"barbican-api-546564f86b-jnwdt\" (UID: \"8f37e546-2a7e-49b7-9a9c-0191a746c289\") " pod="openstack/barbican-api-546564f86b-jnwdt" Jan 27 19:00:30 crc kubenswrapper[4853]: I0127 19:00:30.557764 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d47c94df-0d90-409c-8bd4-2a237d641021-combined-ca-bundle\") pod \"barbican-worker-7d5dd7f58c-gdxtv\" (UID: \"d47c94df-0d90-409c-8bd4-2a237d641021\") " pod="openstack/barbican-worker-7d5dd7f58c-gdxtv" Jan 27 19:00:30 crc kubenswrapper[4853]: I0127 19:00:30.557798 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d47c94df-0d90-409c-8bd4-2a237d641021-logs\") pod \"barbican-worker-7d5dd7f58c-gdxtv\" (UID: 
\"d47c94df-0d90-409c-8bd4-2a237d641021\") " pod="openstack/barbican-worker-7d5dd7f58c-gdxtv" Jan 27 19:00:30 crc kubenswrapper[4853]: I0127 19:00:30.557825 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s9577\" (UniqueName: \"kubernetes.io/projected/d47c94df-0d90-409c-8bd4-2a237d641021-kube-api-access-s9577\") pod \"barbican-worker-7d5dd7f58c-gdxtv\" (UID: \"d47c94df-0d90-409c-8bd4-2a237d641021\") " pod="openstack/barbican-worker-7d5dd7f58c-gdxtv" Jan 27 19:00:30 crc kubenswrapper[4853]: I0127 19:00:30.557862 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8f37e546-2a7e-49b7-9a9c-0191a746c289-logs\") pod \"barbican-api-546564f86b-jnwdt\" (UID: \"8f37e546-2a7e-49b7-9a9c-0191a746c289\") " pod="openstack/barbican-api-546564f86b-jnwdt" Jan 27 19:00:30 crc kubenswrapper[4853]: I0127 19:00:30.557902 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f37e546-2a7e-49b7-9a9c-0191a746c289-combined-ca-bundle\") pod \"barbican-api-546564f86b-jnwdt\" (UID: \"8f37e546-2a7e-49b7-9a9c-0191a746c289\") " pod="openstack/barbican-api-546564f86b-jnwdt" Jan 27 19:00:30 crc kubenswrapper[4853]: I0127 19:00:30.558637 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d47c94df-0d90-409c-8bd4-2a237d641021-logs\") pod \"barbican-worker-7d5dd7f58c-gdxtv\" (UID: \"d47c94df-0d90-409c-8bd4-2a237d641021\") " pod="openstack/barbican-worker-7d5dd7f58c-gdxtv" Jan 27 19:00:30 crc kubenswrapper[4853]: I0127 19:00:30.575875 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-5b468778c8-dwvbl" Jan 27 19:00:30 crc kubenswrapper[4853]: I0127 19:00:30.585944 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d47c94df-0d90-409c-8bd4-2a237d641021-combined-ca-bundle\") pod \"barbican-worker-7d5dd7f58c-gdxtv\" (UID: \"d47c94df-0d90-409c-8bd4-2a237d641021\") " pod="openstack/barbican-worker-7d5dd7f58c-gdxtv" Jan 27 19:00:30 crc kubenswrapper[4853]: I0127 19:00:30.587331 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d47c94df-0d90-409c-8bd4-2a237d641021-config-data\") pod \"barbican-worker-7d5dd7f58c-gdxtv\" (UID: \"d47c94df-0d90-409c-8bd4-2a237d641021\") " pod="openstack/barbican-worker-7d5dd7f58c-gdxtv" Jan 27 19:00:30 crc kubenswrapper[4853]: I0127 19:00:30.601267 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d47c94df-0d90-409c-8bd4-2a237d641021-config-data-custom\") pod \"barbican-worker-7d5dd7f58c-gdxtv\" (UID: \"d47c94df-0d90-409c-8bd4-2a237d641021\") " pod="openstack/barbican-worker-7d5dd7f58c-gdxtv" Jan 27 19:00:30 crc kubenswrapper[4853]: I0127 19:00:30.615614 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-85ff748b95-b9mgv" Jan 27 19:00:30 crc kubenswrapper[4853]: I0127 19:00:30.617049 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 27 19:00:30 crc kubenswrapper[4853]: I0127 19:00:30.617204 4853 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 27 19:00:30 crc kubenswrapper[4853]: I0127 19:00:30.618417 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s9577\" (UniqueName: \"kubernetes.io/projected/d47c94df-0d90-409c-8bd4-2a237d641021-kube-api-access-s9577\") pod \"barbican-worker-7d5dd7f58c-gdxtv\" (UID: \"d47c94df-0d90-409c-8bd4-2a237d641021\") " pod="openstack/barbican-worker-7d5dd7f58c-gdxtv" Jan 27 19:00:30 crc kubenswrapper[4853]: I0127 19:00:30.639851 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-757c6cc6c8-b7v22" Jan 27 19:00:30 crc kubenswrapper[4853]: I0127 19:00:30.659566 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8f37e546-2a7e-49b7-9a9c-0191a746c289-logs\") pod \"barbican-api-546564f86b-jnwdt\" (UID: \"8f37e546-2a7e-49b7-9a9c-0191a746c289\") " pod="openstack/barbican-api-546564f86b-jnwdt" Jan 27 19:00:30 crc kubenswrapper[4853]: I0127 19:00:30.660083 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f37e546-2a7e-49b7-9a9c-0191a746c289-combined-ca-bundle\") pod \"barbican-api-546564f86b-jnwdt\" (UID: \"8f37e546-2a7e-49b7-9a9c-0191a746c289\") " pod="openstack/barbican-api-546564f86b-jnwdt" Jan 27 19:00:30 crc kubenswrapper[4853]: I0127 19:00:30.660397 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8f37e546-2a7e-49b7-9a9c-0191a746c289-config-data\") pod \"barbican-api-546564f86b-jnwdt\" (UID: \"8f37e546-2a7e-49b7-9a9c-0191a746c289\") " pod="openstack/barbican-api-546564f86b-jnwdt" Jan 27 19:00:30 crc kubenswrapper[4853]: I0127 19:00:30.660509 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8f37e546-2a7e-49b7-9a9c-0191a746c289-config-data-custom\") pod \"barbican-api-546564f86b-jnwdt\" (UID: \"8f37e546-2a7e-49b7-9a9c-0191a746c289\") " pod="openstack/barbican-api-546564f86b-jnwdt" Jan 27 19:00:30 crc kubenswrapper[4853]: I0127 19:00:30.660637 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wc8k4\" (UniqueName: \"kubernetes.io/projected/8f37e546-2a7e-49b7-9a9c-0191a746c289-kube-api-access-wc8k4\") pod \"barbican-api-546564f86b-jnwdt\" (UID: \"8f37e546-2a7e-49b7-9a9c-0191a746c289\") " pod="openstack/barbican-api-546564f86b-jnwdt" Jan 27 19:00:30 crc kubenswrapper[4853]: I0127 19:00:30.662105 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8f37e546-2a7e-49b7-9a9c-0191a746c289-logs\") pod \"barbican-api-546564f86b-jnwdt\" (UID: \"8f37e546-2a7e-49b7-9a9c-0191a746c289\") " pod="openstack/barbican-api-546564f86b-jnwdt" Jan 27 19:00:30 crc kubenswrapper[4853]: I0127 19:00:30.681231 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8f37e546-2a7e-49b7-9a9c-0191a746c289-config-data\") pod 
\"barbican-api-546564f86b-jnwdt\" (UID: \"8f37e546-2a7e-49b7-9a9c-0191a746c289\") " pod="openstack/barbican-api-546564f86b-jnwdt" Jan 27 19:00:30 crc kubenswrapper[4853]: I0127 19:00:30.700787 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8f37e546-2a7e-49b7-9a9c-0191a746c289-config-data-custom\") pod \"barbican-api-546564f86b-jnwdt\" (UID: \"8f37e546-2a7e-49b7-9a9c-0191a746c289\") " pod="openstack/barbican-api-546564f86b-jnwdt" Jan 27 19:00:30 crc kubenswrapper[4853]: I0127 19:00:30.711827 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wc8k4\" (UniqueName: \"kubernetes.io/projected/8f37e546-2a7e-49b7-9a9c-0191a746c289-kube-api-access-wc8k4\") pod \"barbican-api-546564f86b-jnwdt\" (UID: \"8f37e546-2a7e-49b7-9a9c-0191a746c289\") " pod="openstack/barbican-api-546564f86b-jnwdt" Jan 27 19:00:30 crc kubenswrapper[4853]: I0127 19:00:30.715016 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f37e546-2a7e-49b7-9a9c-0191a746c289-combined-ca-bundle\") pod \"barbican-api-546564f86b-jnwdt\" (UID: \"8f37e546-2a7e-49b7-9a9c-0191a746c289\") " pod="openstack/barbican-api-546564f86b-jnwdt" Jan 27 19:00:30 crc kubenswrapper[4853]: I0127 19:00:30.731324 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 27 19:00:30 crc kubenswrapper[4853]: I0127 19:00:30.746216 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-7d5dd7f58c-gdxtv" Jan 27 19:00:30 crc kubenswrapper[4853]: I0127 19:00:30.855692 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-546564f86b-jnwdt" Jan 27 19:00:31 crc kubenswrapper[4853]: I0127 19:00:31.097702 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-c5749dd6f-h76dt"] Jan 27 19:00:31 crc kubenswrapper[4853]: W0127 19:00:31.225165 4853 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod127380a8_99a3_455d_becd_78835af33867.slice/crio-e1e14b7a13c9ae54b1dcd8dea1b88bb5a6b3c79f7b49af97ab425c77a27e1a94 WatchSource:0}: Error finding container e1e14b7a13c9ae54b1dcd8dea1b88bb5a6b3c79f7b49af97ab425c77a27e1a94: Status 404 returned error can't find the container with id e1e14b7a13c9ae54b1dcd8dea1b88bb5a6b3c79f7b49af97ab425c77a27e1a94 Jan 27 19:00:31 crc kubenswrapper[4853]: I0127 19:00:31.243865 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-54f5975d7b-jvtmz"] Jan 27 19:00:31 crc kubenswrapper[4853]: I0127 19:00:31.413808 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 27 19:00:31 crc kubenswrapper[4853]: I0127 19:00:31.413926 4853 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 27 19:00:31 crc kubenswrapper[4853]: I0127 19:00:31.632745 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 27 19:00:31 crc kubenswrapper[4853]: I0127 19:00:31.744985 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-757c6cc6c8-b7v22"] Jan 27 19:00:31 crc kubenswrapper[4853]: I0127 19:00:31.775679 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-7d5dd7f58c-gdxtv"] Jan 27 19:00:31 crc kubenswrapper[4853]: I0127 19:00:31.791820 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-5b468778c8-dwvbl"] Jan 27 19:00:31 crc kubenswrapper[4853]: I0127 19:00:31.942336 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-546564f86b-jnwdt"] Jan 27 19:00:31 crc kubenswrapper[4853]: I0127 19:00:31.963917 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-c5749dd6f-h76dt" event={"ID":"127380a8-99a3-455d-becd-78835af33867","Type":"ContainerStarted","Data":"e1e14b7a13c9ae54b1dcd8dea1b88bb5a6b3c79f7b49af97ab425c77a27e1a94"} Jan 27 19:00:31 crc kubenswrapper[4853]: I0127 19:00:31.983381 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-54f5975d7b-jvtmz" event={"ID":"b033907b-77e1-47e8-8921-6cb6e40f5f06","Type":"ContainerStarted","Data":"8e7329fa70b8d4edaa4a6d299db3c87e49052ac04c399ff6fa8c5eb5dc889fe4"} Jan 27 19:00:31 crc kubenswrapper[4853]: I0127 19:00:31.983432 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-54f5975d7b-jvtmz" event={"ID":"b033907b-77e1-47e8-8921-6cb6e40f5f06","Type":"ContainerStarted","Data":"3fb61bae2e6e1a3d52cbfca37f2d13d760abe68657b993c65fc8fe804395c346"} Jan 27 19:00:31 crc kubenswrapper[4853]: I0127 19:00:31.983654 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/keystone-54f5975d7b-jvtmz" Jan 27 19:00:31 crc kubenswrapper[4853]: I0127 19:00:31.997748 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-5b468778c8-dwvbl" 
event={"ID":"ba985127-2044-4a64-af56-ac3452f6f939","Type":"ContainerStarted","Data":"ca0f841ed664e931932409058e19c7829708b488a70ae46cc658493542b4d1df"} Jan 27 19:00:32 crc kubenswrapper[4853]: I0127 19:00:32.020315 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-54f5975d7b-jvtmz" podStartSLOduration=3.020295219 podStartE2EDuration="3.020295219s" podCreationTimestamp="2026-01-27 19:00:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 19:00:32.011951121 +0000 UTC m=+1074.474494004" watchObservedRunningTime="2026-01-27 19:00:32.020295219 +0000 UTC m=+1074.482838102" Jan 27 19:00:32 crc kubenswrapper[4853]: I0127 19:00:32.029250 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-7d5dd7f58c-gdxtv" event={"ID":"d47c94df-0d90-409c-8bd4-2a237d641021","Type":"ContainerStarted","Data":"004fb0a746348d000657bfbd9c0da901bd06fd8ff955850ae02737a922031473"} Jan 27 19:00:32 crc kubenswrapper[4853]: I0127 19:00:32.037797 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-757c6cc6c8-b7v22" event={"ID":"d2f2b676-e83e-4107-9cce-525426cd6cbc","Type":"ContainerStarted","Data":"c6dd76afd0c1d507cbaeb399730cea8cfba06dd17f09a83723ab9ebe4b8b3897"} Jan 27 19:00:32 crc kubenswrapper[4853]: I0127 19:00:32.165382 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6d63a071-50d0-4387-a817-9d65506ac62b" path="/var/lib/kubelet/pods/6d63a071-50d0-4387-a817-9d65506ac62b/volumes" Jan 27 19:00:32 crc kubenswrapper[4853]: I0127 19:00:32.166350 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-b9mgv"] Jan 27 19:00:33 crc kubenswrapper[4853]: I0127 19:00:33.113666 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85ff748b95-b9mgv" event={"ID":"fc5f9a53-3e79-4d10-9c56-18cf2c2a3de0","Type":"ContainerStarted","Data":"8a6ef8bdee7def587ba03d38f8b5b894e3c39313a6c11751e423fa57f420bff4"} Jan 27 19:00:33 crc kubenswrapper[4853]: I0127 19:00:33.122917 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-546564f86b-jnwdt" event={"ID":"8f37e546-2a7e-49b7-9a9c-0191a746c289","Type":"ContainerStarted","Data":"d4571628bde20943deff9627c39c470a95c4d0d4a6b0fd3237d4da346de77074"} Jan 27 19:00:33 crc kubenswrapper[4853]: I0127 19:00:33.122990 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-546564f86b-jnwdt" event={"ID":"8f37e546-2a7e-49b7-9a9c-0191a746c289","Type":"ContainerStarted","Data":"06da8aafdb1c2bce7ab0d1fc314eedc6876fba629ede418fcee08847ad135c52"} Jan 27 19:00:33 crc kubenswrapper[4853]: I0127 19:00:33.543851 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-57cbf989c8-gmwvx"] Jan 27 19:00:33 crc kubenswrapper[4853]: I0127 19:00:33.549502 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-57cbf989c8-gmwvx" Jan 27 19:00:33 crc kubenswrapper[4853]: I0127 19:00:33.553965 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-public-svc" Jan 27 19:00:33 crc kubenswrapper[4853]: I0127 19:00:33.554154 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-internal-svc" Jan 27 19:00:33 crc kubenswrapper[4853]: I0127 19:00:33.579264 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-57cbf989c8-gmwvx"] Jan 27 19:00:33 crc kubenswrapper[4853]: I0127 19:00:33.728242 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/10b90707-26fd-41f4-b020-0458facda8ba-public-tls-certs\") pod \"barbican-api-57cbf989c8-gmwvx\" (UID: \"10b90707-26fd-41f4-b020-0458facda8ba\") " pod="openstack/barbican-api-57cbf989c8-gmwvx" Jan 27 19:00:33 crc kubenswrapper[4853]: I0127 19:00:33.728316 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c2fsc\" (UniqueName: \"kubernetes.io/projected/10b90707-26fd-41f4-b020-0458facda8ba-kube-api-access-c2fsc\") pod \"barbican-api-57cbf989c8-gmwvx\" (UID: \"10b90707-26fd-41f4-b020-0458facda8ba\") " pod="openstack/barbican-api-57cbf989c8-gmwvx" Jan 27 19:00:33 crc kubenswrapper[4853]: I0127 19:00:33.728343 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/10b90707-26fd-41f4-b020-0458facda8ba-internal-tls-certs\") pod \"barbican-api-57cbf989c8-gmwvx\" (UID: \"10b90707-26fd-41f4-b020-0458facda8ba\") " pod="openstack/barbican-api-57cbf989c8-gmwvx" Jan 27 19:00:33 crc kubenswrapper[4853]: I0127 19:00:33.728394 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/10b90707-26fd-41f4-b020-0458facda8ba-combined-ca-bundle\") pod \"barbican-api-57cbf989c8-gmwvx\" (UID: \"10b90707-26fd-41f4-b020-0458facda8ba\") " pod="openstack/barbican-api-57cbf989c8-gmwvx" Jan 27 19:00:33 crc kubenswrapper[4853]: I0127 19:00:33.728418 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/10b90707-26fd-41f4-b020-0458facda8ba-config-data-custom\") pod \"barbican-api-57cbf989c8-gmwvx\" (UID: \"10b90707-26fd-41f4-b020-0458facda8ba\") " pod="openstack/barbican-api-57cbf989c8-gmwvx" Jan 27 19:00:33 crc kubenswrapper[4853]: I0127 19:00:33.728462 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/10b90707-26fd-41f4-b020-0458facda8ba-logs\") pod \"barbican-api-57cbf989c8-gmwvx\" (UID: \"10b90707-26fd-41f4-b020-0458facda8ba\") " pod="openstack/barbican-api-57cbf989c8-gmwvx" Jan 27 19:00:33 crc kubenswrapper[4853]: I0127 19:00:33.728489 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/10b90707-26fd-41f4-b020-0458facda8ba-config-data\") pod \"barbican-api-57cbf989c8-gmwvx\" (UID: \"10b90707-26fd-41f4-b020-0458facda8ba\") " pod="openstack/barbican-api-57cbf989c8-gmwvx" Jan 27 19:00:33 crc kubenswrapper[4853]: I0127 19:00:33.841509 4853 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/10b90707-26fd-41f4-b020-0458facda8ba-public-tls-certs\") pod \"barbican-api-57cbf989c8-gmwvx\" (UID: \"10b90707-26fd-41f4-b020-0458facda8ba\") " pod="openstack/barbican-api-57cbf989c8-gmwvx" Jan 27 19:00:33 crc kubenswrapper[4853]: I0127 19:00:33.841571 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c2fsc\" (UniqueName: \"kubernetes.io/projected/10b90707-26fd-41f4-b020-0458facda8ba-kube-api-access-c2fsc\") pod \"barbican-api-57cbf989c8-gmwvx\" (UID: \"10b90707-26fd-41f4-b020-0458facda8ba\") " pod="openstack/barbican-api-57cbf989c8-gmwvx" Jan 27 19:00:33 crc kubenswrapper[4853]: I0127 19:00:33.841591 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/10b90707-26fd-41f4-b020-0458facda8ba-internal-tls-certs\") pod \"barbican-api-57cbf989c8-gmwvx\" (UID: \"10b90707-26fd-41f4-b020-0458facda8ba\") " pod="openstack/barbican-api-57cbf989c8-gmwvx" Jan 27 19:00:33 crc kubenswrapper[4853]: I0127 19:00:33.841639 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/10b90707-26fd-41f4-b020-0458facda8ba-combined-ca-bundle\") pod \"barbican-api-57cbf989c8-gmwvx\" (UID: \"10b90707-26fd-41f4-b020-0458facda8ba\") " pod="openstack/barbican-api-57cbf989c8-gmwvx" Jan 27 19:00:33 crc kubenswrapper[4853]: I0127 19:00:33.841672 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/10b90707-26fd-41f4-b020-0458facda8ba-config-data-custom\") pod \"barbican-api-57cbf989c8-gmwvx\" (UID: \"10b90707-26fd-41f4-b020-0458facda8ba\") " pod="openstack/barbican-api-57cbf989c8-gmwvx" Jan 27 19:00:33 crc kubenswrapper[4853]: I0127 19:00:33.841715 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/10b90707-26fd-41f4-b020-0458facda8ba-logs\") pod \"barbican-api-57cbf989c8-gmwvx\" (UID: \"10b90707-26fd-41f4-b020-0458facda8ba\") " pod="openstack/barbican-api-57cbf989c8-gmwvx" Jan 27 19:00:33 crc kubenswrapper[4853]: I0127 19:00:33.841745 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/10b90707-26fd-41f4-b020-0458facda8ba-config-data\") pod \"barbican-api-57cbf989c8-gmwvx\" (UID: \"10b90707-26fd-41f4-b020-0458facda8ba\") " pod="openstack/barbican-api-57cbf989c8-gmwvx" Jan 27 19:00:33 crc kubenswrapper[4853]: I0127 19:00:33.844441 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/10b90707-26fd-41f4-b020-0458facda8ba-logs\") pod \"barbican-api-57cbf989c8-gmwvx\" (UID: \"10b90707-26fd-41f4-b020-0458facda8ba\") " pod="openstack/barbican-api-57cbf989c8-gmwvx" Jan 27 19:00:33 crc kubenswrapper[4853]: I0127 19:00:33.851638 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/10b90707-26fd-41f4-b020-0458facda8ba-config-data-custom\") pod \"barbican-api-57cbf989c8-gmwvx\" (UID: \"10b90707-26fd-41f4-b020-0458facda8ba\") " pod="openstack/barbican-api-57cbf989c8-gmwvx" Jan 27 19:00:33 crc kubenswrapper[4853]: I0127 19:00:33.853791 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" 
(UniqueName: \"kubernetes.io/secret/10b90707-26fd-41f4-b020-0458facda8ba-public-tls-certs\") pod \"barbican-api-57cbf989c8-gmwvx\" (UID: \"10b90707-26fd-41f4-b020-0458facda8ba\") " pod="openstack/barbican-api-57cbf989c8-gmwvx" Jan 27 19:00:33 crc kubenswrapper[4853]: I0127 19:00:33.874409 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/10b90707-26fd-41f4-b020-0458facda8ba-internal-tls-certs\") pod \"barbican-api-57cbf989c8-gmwvx\" (UID: \"10b90707-26fd-41f4-b020-0458facda8ba\") " pod="openstack/barbican-api-57cbf989c8-gmwvx" Jan 27 19:00:33 crc kubenswrapper[4853]: I0127 19:00:33.890241 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c2fsc\" (UniqueName: \"kubernetes.io/projected/10b90707-26fd-41f4-b020-0458facda8ba-kube-api-access-c2fsc\") pod \"barbican-api-57cbf989c8-gmwvx\" (UID: \"10b90707-26fd-41f4-b020-0458facda8ba\") " pod="openstack/barbican-api-57cbf989c8-gmwvx" Jan 27 19:00:33 crc kubenswrapper[4853]: I0127 19:00:33.890728 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/10b90707-26fd-41f4-b020-0458facda8ba-config-data\") pod \"barbican-api-57cbf989c8-gmwvx\" (UID: \"10b90707-26fd-41f4-b020-0458facda8ba\") " pod="openstack/barbican-api-57cbf989c8-gmwvx" Jan 27 19:00:33 crc kubenswrapper[4853]: I0127 19:00:33.890776 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/10b90707-26fd-41f4-b020-0458facda8ba-combined-ca-bundle\") pod \"barbican-api-57cbf989c8-gmwvx\" (UID: \"10b90707-26fd-41f4-b020-0458facda8ba\") " pod="openstack/barbican-api-57cbf989c8-gmwvx" Jan 27 19:00:33 crc kubenswrapper[4853]: I0127 19:00:33.921022 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-57cbf989c8-gmwvx" Jan 27 19:00:34 crc kubenswrapper[4853]: I0127 19:00:34.175573 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-546564f86b-jnwdt" event={"ID":"8f37e546-2a7e-49b7-9a9c-0191a746c289","Type":"ContainerStarted","Data":"eea9963ff8825503eceafb11cb75e09a8fe51c596e538ff9d19b1feebcece861"} Jan 27 19:00:34 crc kubenswrapper[4853]: I0127 19:00:34.177080 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-546564f86b-jnwdt" Jan 27 19:00:34 crc kubenswrapper[4853]: I0127 19:00:34.177127 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-546564f86b-jnwdt" Jan 27 19:00:34 crc kubenswrapper[4853]: I0127 19:00:34.180805 4853 generic.go:334] "Generic (PLEG): container finished" podID="fc5f9a53-3e79-4d10-9c56-18cf2c2a3de0" containerID="b8b882823d8465959718df6f9bb5e9cf5be2fcc156e1ca6b6e9f9d8e70dbd8bb" exitCode=0 Jan 27 19:00:34 crc kubenswrapper[4853]: I0127 19:00:34.180853 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85ff748b95-b9mgv" event={"ID":"fc5f9a53-3e79-4d10-9c56-18cf2c2a3de0","Type":"ContainerDied","Data":"b8b882823d8465959718df6f9bb5e9cf5be2fcc156e1ca6b6e9f9d8e70dbd8bb"} Jan 27 19:00:34 crc kubenswrapper[4853]: I0127 19:00:34.206598 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-546564f86b-jnwdt" podStartSLOduration=4.206580191 podStartE2EDuration="4.206580191s" podCreationTimestamp="2026-01-27 19:00:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 19:00:34.203065352 +0000 UTC m=+1076.665608235" watchObservedRunningTime="2026-01-27 19:00:34.206580191 +0000 UTC m=+1076.669123074" Jan 27 19:00:34 crc kubenswrapper[4853]: I0127 19:00:34.700873 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-57cbf989c8-gmwvx"] Jan 27 19:00:35 crc kubenswrapper[4853]: I0127 19:00:35.196848 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85ff748b95-b9mgv" event={"ID":"fc5f9a53-3e79-4d10-9c56-18cf2c2a3de0","Type":"ContainerStarted","Data":"ff8ba29018736b8f3fc56d6a908d2e3230f2928a314d3e894421fe0a4803498a"} Jan 27 19:00:35 crc kubenswrapper[4853]: I0127 19:00:35.197402 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-85ff748b95-b9mgv" Jan 27 19:00:35 crc kubenswrapper[4853]: I0127 19:00:35.227886 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-85ff748b95-b9mgv" podStartSLOduration=6.227853551 podStartE2EDuration="6.227853551s" podCreationTimestamp="2026-01-27 19:00:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 19:00:35.222112008 +0000 UTC m=+1077.684654891" watchObservedRunningTime="2026-01-27 19:00:35.227853551 +0000 UTC m=+1077.690396434" Jan 27 19:00:35 crc kubenswrapper[4853]: W0127 19:00:35.987007 4853 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod10b90707_26fd_41f4_b020_0458facda8ba.slice/crio-c4d3a45adecc8c92864bb5cd905ff649101feb503dc519c375ed9391e3d5e407 WatchSource:0}: Error finding container c4d3a45adecc8c92864bb5cd905ff649101feb503dc519c375ed9391e3d5e407: Status 404 returned error 
can't find the container with id c4d3a45adecc8c92864bb5cd905ff649101feb503dc519c375ed9391e3d5e407 Jan 27 19:00:36 crc kubenswrapper[4853]: I0127 19:00:36.235630 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-57cbf989c8-gmwvx" event={"ID":"10b90707-26fd-41f4-b020-0458facda8ba","Type":"ContainerStarted","Data":"c4d3a45adecc8c92864bb5cd905ff649101feb503dc519c375ed9391e3d5e407"} Jan 27 19:00:36 crc kubenswrapper[4853]: I0127 19:00:36.336391 4853 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-c78c8d4f6-bchzm" podUID="28f114cd-daca-4c71-9ecd-64b8008ddbef" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.148:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.148:8443: connect: connection refused" Jan 27 19:00:36 crc kubenswrapper[4853]: I0127 19:00:36.640543 4853 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-69967664fb-pbqhr" podUID="66d621f7-387b-470d-8e42-bebbfada3bbc" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.149:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.149:8443: connect: connection refused" Jan 27 19:00:37 crc kubenswrapper[4853]: I0127 19:00:37.270263 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-5b468778c8-dwvbl" event={"ID":"ba985127-2044-4a64-af56-ac3452f6f939","Type":"ContainerStarted","Data":"1ec240d75c96676d139c74b3fd2c2e73e336a4a534e5a639bb6b4a2f59c9ba2e"} Jan 27 19:00:37 crc kubenswrapper[4853]: I0127 19:00:37.279286 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-57cbf989c8-gmwvx" event={"ID":"10b90707-26fd-41f4-b020-0458facda8ba","Type":"ContainerStarted","Data":"d40f6ca0cd1e16eed9b32dbd62dacd4ded7808aff26a33a8cbfae2c419396f28"} Jan 27 19:00:37 crc kubenswrapper[4853]: I0127 19:00:37.279360 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-57cbf989c8-gmwvx" event={"ID":"10b90707-26fd-41f4-b020-0458facda8ba","Type":"ContainerStarted","Data":"b18380538d7851afbe6ae91db713ae71d242c673fb92bb343a1c2b4e64cdee4c"} Jan 27 19:00:37 crc kubenswrapper[4853]: I0127 19:00:37.280465 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-57cbf989c8-gmwvx" Jan 27 19:00:37 crc kubenswrapper[4853]: I0127 19:00:37.280492 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-57cbf989c8-gmwvx" Jan 27 19:00:37 crc kubenswrapper[4853]: I0127 19:00:37.311229 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-7d5dd7f58c-gdxtv" event={"ID":"d47c94df-0d90-409c-8bd4-2a237d641021","Type":"ContainerStarted","Data":"f05d27a5b0ce468930e8a26c8debbdcd4c30a138ae7240d5ae19efec1f5e68af"} Jan 27 19:00:37 crc kubenswrapper[4853]: I0127 19:00:37.311500 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-7d5dd7f58c-gdxtv" event={"ID":"d47c94df-0d90-409c-8bd4-2a237d641021","Type":"ContainerStarted","Data":"7e236bdf869bf75baf976283f9f5a35221dd4eadf50e3a52e066b415c3c49cae"} Jan 27 19:00:37 crc kubenswrapper[4853]: I0127 19:00:37.315685 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-57cbf989c8-gmwvx" podStartSLOduration=4.315659156 podStartE2EDuration="4.315659156s" podCreationTimestamp="2026-01-27 19:00:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-01-27 19:00:37.310618833 +0000 UTC m=+1079.773161706" watchObservedRunningTime="2026-01-27 19:00:37.315659156 +0000 UTC m=+1079.778202029" Jan 27 19:00:37 crc kubenswrapper[4853]: I0127 19:00:37.329752 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-757c6cc6c8-b7v22" event={"ID":"d2f2b676-e83e-4107-9cce-525426cd6cbc","Type":"ContainerStarted","Data":"fe3b20bce62f0611343b247b1a3999be32b2cca8dcb055a0c76e7650bf662c38"} Jan 27 19:00:37 crc kubenswrapper[4853]: I0127 19:00:37.335052 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-7d5dd7f58c-gdxtv" podStartSLOduration=2.980568775 podStartE2EDuration="7.335030056s" podCreationTimestamp="2026-01-27 19:00:30 +0000 UTC" firstStartedPulling="2026-01-27 19:00:31.861068654 +0000 UTC m=+1074.323611537" lastFinishedPulling="2026-01-27 19:00:36.215529935 +0000 UTC m=+1078.678072818" observedRunningTime="2026-01-27 19:00:37.334566533 +0000 UTC m=+1079.797109416" watchObservedRunningTime="2026-01-27 19:00:37.335030056 +0000 UTC m=+1079.797572939" Jan 27 19:00:37 crc kubenswrapper[4853]: I0127 19:00:37.348397 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-c5749dd6f-h76dt" event={"ID":"127380a8-99a3-455d-becd-78835af33867","Type":"ContainerStarted","Data":"b546be7bfa04ae140e1c6965f057b5ccf9d1ec0aa43d664b20e729d36dbc2e38"} Jan 27 19:00:37 crc kubenswrapper[4853]: I0127 19:00:37.348477 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-c5749dd6f-h76dt" event={"ID":"127380a8-99a3-455d-becd-78835af33867","Type":"ContainerStarted","Data":"258ff3b72c97e030429a1be43134b5a222797dae0fe86149e62c2f6274d32b65"} Jan 27 19:00:37 crc kubenswrapper[4853]: I0127 19:00:37.382491 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-757c6cc6c8-b7v22" podStartSLOduration=2.957353775 podStartE2EDuration="7.382463054s" podCreationTimestamp="2026-01-27 19:00:30 +0000 UTC" firstStartedPulling="2026-01-27 19:00:31.796447078 +0000 UTC m=+1074.258989961" lastFinishedPulling="2026-01-27 19:00:36.221556357 +0000 UTC m=+1078.684099240" observedRunningTime="2026-01-27 19:00:37.353771129 +0000 UTC m=+1079.816314012" watchObservedRunningTime="2026-01-27 19:00:37.382463054 +0000 UTC m=+1079.845005937" Jan 27 19:00:37 crc kubenswrapper[4853]: I0127 19:00:37.409451 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-worker-c5749dd6f-h76dt"] Jan 27 19:00:37 crc kubenswrapper[4853]: I0127 19:00:37.432010 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-keystone-listener-5b468778c8-dwvbl"] Jan 27 19:00:37 crc kubenswrapper[4853]: I0127 19:00:37.446501 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-c5749dd6f-h76dt" podStartSLOduration=3.53556243 podStartE2EDuration="8.446480493s" podCreationTimestamp="2026-01-27 19:00:29 +0000 UTC" firstStartedPulling="2026-01-27 19:00:31.301863224 +0000 UTC m=+1073.764406107" lastFinishedPulling="2026-01-27 19:00:36.212781287 +0000 UTC m=+1078.675324170" observedRunningTime="2026-01-27 19:00:37.398401157 +0000 UTC m=+1079.860944040" watchObservedRunningTime="2026-01-27 19:00:37.446480493 +0000 UTC m=+1079.909023376" Jan 27 19:00:38 crc kubenswrapper[4853]: I0127 19:00:38.367996 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-5b468778c8-dwvbl" 
event={"ID":"ba985127-2044-4a64-af56-ac3452f6f939","Type":"ContainerStarted","Data":"d581c2cf187447e442242d897c2c7b42867b6ccebaac702becb297eb08c60dd9"} Jan 27 19:00:38 crc kubenswrapper[4853]: I0127 19:00:38.375149 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-757c6cc6c8-b7v22" event={"ID":"d2f2b676-e83e-4107-9cce-525426cd6cbc","Type":"ContainerStarted","Data":"b5620492ac7fbc57798eb64b68dd330740e4b263fd4b7068b4f4229763bdef51"} Jan 27 19:00:38 crc kubenswrapper[4853]: I0127 19:00:38.396211 4853 generic.go:334] "Generic (PLEG): container finished" podID="b1d33900-476d-4c86-a501-4490c01000ca" containerID="42c545f9f78b908ce08838b23cd42d672651aed1a85a7c0cb36a4907f5cc18d2" exitCode=0 Jan 27 19:00:38 crc kubenswrapper[4853]: I0127 19:00:38.396489 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-rfjdk" event={"ID":"b1d33900-476d-4c86-a501-4490c01000ca","Type":"ContainerDied","Data":"42c545f9f78b908ce08838b23cd42d672651aed1a85a7c0cb36a4907f5cc18d2"} Jan 27 19:00:38 crc kubenswrapper[4853]: I0127 19:00:38.399448 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-5b468778c8-dwvbl" podStartSLOduration=5.046231377 podStartE2EDuration="9.399423272s" podCreationTimestamp="2026-01-27 19:00:29 +0000 UTC" firstStartedPulling="2026-01-27 19:00:31.860820657 +0000 UTC m=+1074.323363540" lastFinishedPulling="2026-01-27 19:00:36.214012552 +0000 UTC m=+1078.676555435" observedRunningTime="2026-01-27 19:00:38.396640663 +0000 UTC m=+1080.859183546" watchObservedRunningTime="2026-01-27 19:00:38.399423272 +0000 UTC m=+1080.861966155" Jan 27 19:00:39 crc kubenswrapper[4853]: I0127 19:00:39.406400 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-worker-c5749dd6f-h76dt" podUID="127380a8-99a3-455d-becd-78835af33867" containerName="barbican-worker-log" containerID="cri-o://258ff3b72c97e030429a1be43134b5a222797dae0fe86149e62c2f6274d32b65" gracePeriod=30 Jan 27 19:00:39 crc kubenswrapper[4853]: I0127 19:00:39.407019 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-worker-c5749dd6f-h76dt" podUID="127380a8-99a3-455d-becd-78835af33867" containerName="barbican-worker" containerID="cri-o://b546be7bfa04ae140e1c6965f057b5ccf9d1ec0aa43d664b20e729d36dbc2e38" gracePeriod=30 Jan 27 19:00:39 crc kubenswrapper[4853]: I0127 19:00:39.406855 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-keystone-listener-5b468778c8-dwvbl" podUID="ba985127-2044-4a64-af56-ac3452f6f939" containerName="barbican-keystone-listener-log" containerID="cri-o://1ec240d75c96676d139c74b3fd2c2e73e336a4a534e5a639bb6b4a2f59c9ba2e" gracePeriod=30 Jan 27 19:00:39 crc kubenswrapper[4853]: I0127 19:00:39.407066 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-keystone-listener-5b468778c8-dwvbl" podUID="ba985127-2044-4a64-af56-ac3452f6f939" containerName="barbican-keystone-listener" containerID="cri-o://d581c2cf187447e442242d897c2c7b42867b6ccebaac702becb297eb08c60dd9" gracePeriod=30 Jan 27 19:00:40 crc kubenswrapper[4853]: I0127 19:00:40.430787 4853 generic.go:334] "Generic (PLEG): container finished" podID="127380a8-99a3-455d-becd-78835af33867" containerID="b546be7bfa04ae140e1c6965f057b5ccf9d1ec0aa43d664b20e729d36dbc2e38" exitCode=0 Jan 27 19:00:40 crc kubenswrapper[4853]: I0127 19:00:40.431257 4853 generic.go:334] "Generic 
(PLEG): container finished" podID="127380a8-99a3-455d-becd-78835af33867" containerID="258ff3b72c97e030429a1be43134b5a222797dae0fe86149e62c2f6274d32b65" exitCode=143 Jan 27 19:00:40 crc kubenswrapper[4853]: I0127 19:00:40.431314 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-c5749dd6f-h76dt" event={"ID":"127380a8-99a3-455d-becd-78835af33867","Type":"ContainerDied","Data":"b546be7bfa04ae140e1c6965f057b5ccf9d1ec0aa43d664b20e729d36dbc2e38"} Jan 27 19:00:40 crc kubenswrapper[4853]: I0127 19:00:40.432375 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-c5749dd6f-h76dt" event={"ID":"127380a8-99a3-455d-becd-78835af33867","Type":"ContainerDied","Data":"258ff3b72c97e030429a1be43134b5a222797dae0fe86149e62c2f6274d32b65"} Jan 27 19:00:40 crc kubenswrapper[4853]: I0127 19:00:40.435424 4853 generic.go:334] "Generic (PLEG): container finished" podID="ba985127-2044-4a64-af56-ac3452f6f939" containerID="d581c2cf187447e442242d897c2c7b42867b6ccebaac702becb297eb08c60dd9" exitCode=0 Jan 27 19:00:40 crc kubenswrapper[4853]: I0127 19:00:40.435452 4853 generic.go:334] "Generic (PLEG): container finished" podID="ba985127-2044-4a64-af56-ac3452f6f939" containerID="1ec240d75c96676d139c74b3fd2c2e73e336a4a534e5a639bb6b4a2f59c9ba2e" exitCode=143 Jan 27 19:00:40 crc kubenswrapper[4853]: I0127 19:00:40.435450 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-5b468778c8-dwvbl" event={"ID":"ba985127-2044-4a64-af56-ac3452f6f939","Type":"ContainerDied","Data":"d581c2cf187447e442242d897c2c7b42867b6ccebaac702becb297eb08c60dd9"} Jan 27 19:00:40 crc kubenswrapper[4853]: I0127 19:00:40.435484 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-5b468778c8-dwvbl" event={"ID":"ba985127-2044-4a64-af56-ac3452f6f939","Type":"ContainerDied","Data":"1ec240d75c96676d139c74b3fd2c2e73e336a4a534e5a639bb6b4a2f59c9ba2e"} Jan 27 19:00:40 crc kubenswrapper[4853]: I0127 19:00:40.619254 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-85ff748b95-b9mgv" Jan 27 19:00:40 crc kubenswrapper[4853]: I0127 19:00:40.732841 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-82msw"] Jan 27 19:00:40 crc kubenswrapper[4853]: I0127 19:00:40.734314 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-55f844cf75-82msw" podUID="07e83bfd-c7f7-4795-9ae3-81a358092c4e" containerName="dnsmasq-dns" containerID="cri-o://b536d9286a81735ddefe0c7015afc49f34eae1789f194627a8a052ba184b8bd1" gracePeriod=10 Jan 27 19:00:41 crc kubenswrapper[4853]: I0127 19:00:41.453542 4853 generic.go:334] "Generic (PLEG): container finished" podID="07e83bfd-c7f7-4795-9ae3-81a358092c4e" containerID="b536d9286a81735ddefe0c7015afc49f34eae1789f194627a8a052ba184b8bd1" exitCode=0 Jan 27 19:00:41 crc kubenswrapper[4853]: I0127 19:00:41.454035 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55f844cf75-82msw" event={"ID":"07e83bfd-c7f7-4795-9ae3-81a358092c4e","Type":"ContainerDied","Data":"b536d9286a81735ddefe0c7015afc49f34eae1789f194627a8a052ba184b8bd1"} Jan 27 19:00:42 crc kubenswrapper[4853]: I0127 19:00:42.934786 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-546564f86b-jnwdt" Jan 27 19:00:43 crc kubenswrapper[4853]: I0127 19:00:43.414663 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="ready" pod="openstack/barbican-api-546564f86b-jnwdt" Jan 27 19:00:43 crc kubenswrapper[4853]: I0127 19:00:43.494246 4853 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-55f844cf75-82msw" podUID="07e83bfd-c7f7-4795-9ae3-81a358092c4e" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.154:5353: connect: connection refused" Jan 27 19:00:43 crc kubenswrapper[4853]: I0127 19:00:43.559468 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-69cd5c4bb8-2fh98" Jan 27 19:00:43 crc kubenswrapper[4853]: I0127 19:00:43.842231 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-54879c9777-tvw4r"] Jan 27 19:00:43 crc kubenswrapper[4853]: I0127 19:00:43.842495 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-54879c9777-tvw4r" podUID="cd236b6c-6a86-4c6a-8e4a-f2a459943780" containerName="neutron-api" containerID="cri-o://8b2023bd66ff9349a1b712e8efedede50d982e90a208c380d91ce33701b15ba0" gracePeriod=30 Jan 27 19:00:43 crc kubenswrapper[4853]: I0127 19:00:43.843236 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-54879c9777-tvw4r" podUID="cd236b6c-6a86-4c6a-8e4a-f2a459943780" containerName="neutron-httpd" containerID="cri-o://aa91b36a604ab59007d1904b22f3ee35da626ca5979dec847a44d9bd89b48c9a" gracePeriod=30 Jan 27 19:00:43 crc kubenswrapper[4853]: I0127 19:00:43.870913 4853 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/neutron-54879c9777-tvw4r" podUID="cd236b6c-6a86-4c6a-8e4a-f2a459943780" containerName="neutron-httpd" probeResult="failure" output="Get \"https://10.217.0.156:9696/\": EOF" Jan 27 19:00:43 crc kubenswrapper[4853]: I0127 19:00:43.888713 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-64c8bd57d9-g88k8"] Jan 27 19:00:43 crc kubenswrapper[4853]: I0127 19:00:43.892738 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-64c8bd57d9-g88k8" Jan 27 19:00:43 crc kubenswrapper[4853]: I0127 19:00:43.902848 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-64c8bd57d9-g88k8"] Jan 27 19:00:43 crc kubenswrapper[4853]: I0127 19:00:43.967345 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/911dc005-42f8-4086-9ee9-04490f7120f4-config\") pod \"neutron-64c8bd57d9-g88k8\" (UID: \"911dc005-42f8-4086-9ee9-04490f7120f4\") " pod="openstack/neutron-64c8bd57d9-g88k8" Jan 27 19:00:43 crc kubenswrapper[4853]: I0127 19:00:43.967419 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k8dv7\" (UniqueName: \"kubernetes.io/projected/911dc005-42f8-4086-9ee9-04490f7120f4-kube-api-access-k8dv7\") pod \"neutron-64c8bd57d9-g88k8\" (UID: \"911dc005-42f8-4086-9ee9-04490f7120f4\") " pod="openstack/neutron-64c8bd57d9-g88k8" Jan 27 19:00:43 crc kubenswrapper[4853]: I0127 19:00:43.967452 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/911dc005-42f8-4086-9ee9-04490f7120f4-httpd-config\") pod \"neutron-64c8bd57d9-g88k8\" (UID: \"911dc005-42f8-4086-9ee9-04490f7120f4\") " pod="openstack/neutron-64c8bd57d9-g88k8" Jan 27 19:00:43 crc kubenswrapper[4853]: I0127 19:00:43.967508 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/911dc005-42f8-4086-9ee9-04490f7120f4-internal-tls-certs\") pod \"neutron-64c8bd57d9-g88k8\" (UID: \"911dc005-42f8-4086-9ee9-04490f7120f4\") " pod="openstack/neutron-64c8bd57d9-g88k8" Jan 27 19:00:43 crc kubenswrapper[4853]: I0127 19:00:43.967572 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/911dc005-42f8-4086-9ee9-04490f7120f4-ovndb-tls-certs\") pod \"neutron-64c8bd57d9-g88k8\" (UID: \"911dc005-42f8-4086-9ee9-04490f7120f4\") " pod="openstack/neutron-64c8bd57d9-g88k8" Jan 27 19:00:43 crc kubenswrapper[4853]: I0127 19:00:43.967592 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/911dc005-42f8-4086-9ee9-04490f7120f4-public-tls-certs\") pod \"neutron-64c8bd57d9-g88k8\" (UID: \"911dc005-42f8-4086-9ee9-04490f7120f4\") " pod="openstack/neutron-64c8bd57d9-g88k8" Jan 27 19:00:43 crc kubenswrapper[4853]: I0127 19:00:43.967610 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/911dc005-42f8-4086-9ee9-04490f7120f4-combined-ca-bundle\") pod \"neutron-64c8bd57d9-g88k8\" (UID: \"911dc005-42f8-4086-9ee9-04490f7120f4\") " pod="openstack/neutron-64c8bd57d9-g88k8" Jan 27 19:00:44 crc kubenswrapper[4853]: I0127 19:00:44.071190 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/911dc005-42f8-4086-9ee9-04490f7120f4-config\") pod \"neutron-64c8bd57d9-g88k8\" (UID: \"911dc005-42f8-4086-9ee9-04490f7120f4\") " pod="openstack/neutron-64c8bd57d9-g88k8" Jan 27 19:00:44 crc kubenswrapper[4853]: I0127 19:00:44.071308 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-k8dv7\" (UniqueName: \"kubernetes.io/projected/911dc005-42f8-4086-9ee9-04490f7120f4-kube-api-access-k8dv7\") pod \"neutron-64c8bd57d9-g88k8\" (UID: \"911dc005-42f8-4086-9ee9-04490f7120f4\") " pod="openstack/neutron-64c8bd57d9-g88k8" Jan 27 19:00:44 crc kubenswrapper[4853]: I0127 19:00:44.071345 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/911dc005-42f8-4086-9ee9-04490f7120f4-httpd-config\") pod \"neutron-64c8bd57d9-g88k8\" (UID: \"911dc005-42f8-4086-9ee9-04490f7120f4\") " pod="openstack/neutron-64c8bd57d9-g88k8" Jan 27 19:00:44 crc kubenswrapper[4853]: I0127 19:00:44.071406 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/911dc005-42f8-4086-9ee9-04490f7120f4-internal-tls-certs\") pod \"neutron-64c8bd57d9-g88k8\" (UID: \"911dc005-42f8-4086-9ee9-04490f7120f4\") " pod="openstack/neutron-64c8bd57d9-g88k8" Jan 27 19:00:44 crc kubenswrapper[4853]: I0127 19:00:44.071484 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/911dc005-42f8-4086-9ee9-04490f7120f4-ovndb-tls-certs\") pod \"neutron-64c8bd57d9-g88k8\" (UID: \"911dc005-42f8-4086-9ee9-04490f7120f4\") " pod="openstack/neutron-64c8bd57d9-g88k8" Jan 27 19:00:44 crc kubenswrapper[4853]: I0127 19:00:44.071510 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/911dc005-42f8-4086-9ee9-04490f7120f4-public-tls-certs\") pod \"neutron-64c8bd57d9-g88k8\" (UID: \"911dc005-42f8-4086-9ee9-04490f7120f4\") " pod="openstack/neutron-64c8bd57d9-g88k8" Jan 27 19:00:44 crc kubenswrapper[4853]: I0127 19:00:44.071531 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/911dc005-42f8-4086-9ee9-04490f7120f4-combined-ca-bundle\") pod \"neutron-64c8bd57d9-g88k8\" (UID: \"911dc005-42f8-4086-9ee9-04490f7120f4\") " pod="openstack/neutron-64c8bd57d9-g88k8" Jan 27 19:00:44 crc kubenswrapper[4853]: I0127 19:00:44.082460 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/911dc005-42f8-4086-9ee9-04490f7120f4-ovndb-tls-certs\") pod \"neutron-64c8bd57d9-g88k8\" (UID: \"911dc005-42f8-4086-9ee9-04490f7120f4\") " pod="openstack/neutron-64c8bd57d9-g88k8" Jan 27 19:00:44 crc kubenswrapper[4853]: I0127 19:00:44.084109 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/911dc005-42f8-4086-9ee9-04490f7120f4-combined-ca-bundle\") pod \"neutron-64c8bd57d9-g88k8\" (UID: \"911dc005-42f8-4086-9ee9-04490f7120f4\") " pod="openstack/neutron-64c8bd57d9-g88k8" Jan 27 19:00:44 crc kubenswrapper[4853]: I0127 19:00:44.085667 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/911dc005-42f8-4086-9ee9-04490f7120f4-config\") pod \"neutron-64c8bd57d9-g88k8\" (UID: \"911dc005-42f8-4086-9ee9-04490f7120f4\") " pod="openstack/neutron-64c8bd57d9-g88k8" Jan 27 19:00:44 crc kubenswrapper[4853]: I0127 19:00:44.095009 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/911dc005-42f8-4086-9ee9-04490f7120f4-public-tls-certs\") pod \"neutron-64c8bd57d9-g88k8\" (UID: 
\"911dc005-42f8-4086-9ee9-04490f7120f4\") " pod="openstack/neutron-64c8bd57d9-g88k8" Jan 27 19:00:44 crc kubenswrapper[4853]: I0127 19:00:44.098163 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/911dc005-42f8-4086-9ee9-04490f7120f4-internal-tls-certs\") pod \"neutron-64c8bd57d9-g88k8\" (UID: \"911dc005-42f8-4086-9ee9-04490f7120f4\") " pod="openstack/neutron-64c8bd57d9-g88k8" Jan 27 19:00:44 crc kubenswrapper[4853]: I0127 19:00:44.099012 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k8dv7\" (UniqueName: \"kubernetes.io/projected/911dc005-42f8-4086-9ee9-04490f7120f4-kube-api-access-k8dv7\") pod \"neutron-64c8bd57d9-g88k8\" (UID: \"911dc005-42f8-4086-9ee9-04490f7120f4\") " pod="openstack/neutron-64c8bd57d9-g88k8" Jan 27 19:00:44 crc kubenswrapper[4853]: I0127 19:00:44.117266 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/911dc005-42f8-4086-9ee9-04490f7120f4-httpd-config\") pod \"neutron-64c8bd57d9-g88k8\" (UID: \"911dc005-42f8-4086-9ee9-04490f7120f4\") " pod="openstack/neutron-64c8bd57d9-g88k8" Jan 27 19:00:44 crc kubenswrapper[4853]: I0127 19:00:44.235043 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-64c8bd57d9-g88k8" Jan 27 19:00:44 crc kubenswrapper[4853]: I0127 19:00:44.518679 4853 generic.go:334] "Generic (PLEG): container finished" podID="cd236b6c-6a86-4c6a-8e4a-f2a459943780" containerID="aa91b36a604ab59007d1904b22f3ee35da626ca5979dec847a44d9bd89b48c9a" exitCode=0 Jan 27 19:00:44 crc kubenswrapper[4853]: I0127 19:00:44.518739 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-54879c9777-tvw4r" event={"ID":"cd236b6c-6a86-4c6a-8e4a-f2a459943780","Type":"ContainerDied","Data":"aa91b36a604ab59007d1904b22f3ee35da626ca5979dec847a44d9bd89b48c9a"} Jan 27 19:00:46 crc kubenswrapper[4853]: I0127 19:00:46.291756 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-57cbf989c8-gmwvx" Jan 27 19:00:46 crc kubenswrapper[4853]: I0127 19:00:46.335317 4853 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-c78c8d4f6-bchzm" podUID="28f114cd-daca-4c71-9ecd-64b8008ddbef" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.148:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.148:8443: connect: connection refused" Jan 27 19:00:46 crc kubenswrapper[4853]: I0127 19:00:46.335445 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-c78c8d4f6-bchzm" Jan 27 19:00:46 crc kubenswrapper[4853]: I0127 19:00:46.336795 4853 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="horizon" containerStatusID={"Type":"cri-o","ID":"5eb06c79644ed85292d51f529bc88f05f3d36c0c73a7d7ccd7b435ebbe58e251"} pod="openstack/horizon-c78c8d4f6-bchzm" containerMessage="Container horizon failed startup probe, will be restarted" Jan 27 19:00:46 crc kubenswrapper[4853]: I0127 19:00:46.336867 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-c78c8d4f6-bchzm" podUID="28f114cd-daca-4c71-9ecd-64b8008ddbef" containerName="horizon" containerID="cri-o://5eb06c79644ed85292d51f529bc88f05f3d36c0c73a7d7ccd7b435ebbe58e251" gracePeriod=30 Jan 27 19:00:46 crc kubenswrapper[4853]: I0127 19:00:46.361196 4853 prober.go:107] "Probe failed" 
probeType="Readiness" pod="openstack/neutron-54879c9777-tvw4r" podUID="cd236b6c-6a86-4c6a-8e4a-f2a459943780" containerName="neutron-httpd" probeResult="failure" output="Get \"https://10.217.0.156:9696/\": dial tcp 10.217.0.156:9696: connect: connection refused" Jan 27 19:00:46 crc kubenswrapper[4853]: I0127 19:00:46.421862 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-57cbf989c8-gmwvx" Jan 27 19:00:46 crc kubenswrapper[4853]: I0127 19:00:46.543307 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-546564f86b-jnwdt"] Jan 27 19:00:46 crc kubenswrapper[4853]: I0127 19:00:46.543610 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-546564f86b-jnwdt" podUID="8f37e546-2a7e-49b7-9a9c-0191a746c289" containerName="barbican-api-log" containerID="cri-o://d4571628bde20943deff9627c39c470a95c4d0d4a6b0fd3237d4da346de77074" gracePeriod=30 Jan 27 19:00:46 crc kubenswrapper[4853]: I0127 19:00:46.544045 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-546564f86b-jnwdt" podUID="8f37e546-2a7e-49b7-9a9c-0191a746c289" containerName="barbican-api" containerID="cri-o://eea9963ff8825503eceafb11cb75e09a8fe51c596e538ff9d19b1feebcece861" gracePeriod=30 Jan 27 19:00:46 crc kubenswrapper[4853]: I0127 19:00:46.560998 4853 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/barbican-api-546564f86b-jnwdt" podUID="8f37e546-2a7e-49b7-9a9c-0191a746c289" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.164:9311/healthcheck\": EOF" Jan 27 19:00:46 crc kubenswrapper[4853]: I0127 19:00:46.594586 4853 generic.go:334] "Generic (PLEG): container finished" podID="cd236b6c-6a86-4c6a-8e4a-f2a459943780" containerID="8b2023bd66ff9349a1b712e8efedede50d982e90a208c380d91ce33701b15ba0" exitCode=0 Jan 27 19:00:46 crc kubenswrapper[4853]: I0127 19:00:46.594675 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-54879c9777-tvw4r" event={"ID":"cd236b6c-6a86-4c6a-8e4a-f2a459943780","Type":"ContainerDied","Data":"8b2023bd66ff9349a1b712e8efedede50d982e90a208c380d91ce33701b15ba0"} Jan 27 19:00:46 crc kubenswrapper[4853]: I0127 19:00:46.598142 4853 generic.go:334] "Generic (PLEG): container finished" podID="e556ea12-6992-4aba-be03-e6d4a2823b74" containerID="7b9e25dd8951aba8d4aae2d3a6fcc8cb21e0080a8226a4815f40343ffe16e9da" exitCode=137 Jan 27 19:00:46 crc kubenswrapper[4853]: I0127 19:00:46.598195 4853 generic.go:334] "Generic (PLEG): container finished" podID="e556ea12-6992-4aba-be03-e6d4a2823b74" containerID="70a07f68d1290bc4fdd19f8574adc1330f703630dc442e020d19ba65038dbd43" exitCode=137 Jan 27 19:00:46 crc kubenswrapper[4853]: I0127 19:00:46.598207 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5dbbdbb6d9-g899w" event={"ID":"e556ea12-6992-4aba-be03-e6d4a2823b74","Type":"ContainerDied","Data":"7b9e25dd8951aba8d4aae2d3a6fcc8cb21e0080a8226a4815f40343ffe16e9da"} Jan 27 19:00:46 crc kubenswrapper[4853]: I0127 19:00:46.598261 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5dbbdbb6d9-g899w" event={"ID":"e556ea12-6992-4aba-be03-e6d4a2823b74","Type":"ContainerDied","Data":"70a07f68d1290bc4fdd19f8574adc1330f703630dc442e020d19ba65038dbd43"} Jan 27 19:00:46 crc kubenswrapper[4853]: I0127 19:00:46.642049 4853 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-69967664fb-pbqhr" 
podUID="66d621f7-387b-470d-8e42-bebbfada3bbc" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.149:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.149:8443: connect: connection refused" Jan 27 19:00:46 crc kubenswrapper[4853]: I0127 19:00:46.642173 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-69967664fb-pbqhr" Jan 27 19:00:46 crc kubenswrapper[4853]: I0127 19:00:46.643317 4853 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="horizon" containerStatusID={"Type":"cri-o","ID":"3e74acd3091e36c067f9363770b8672147f720192965c341275042fc68c2d916"} pod="openstack/horizon-69967664fb-pbqhr" containerMessage="Container horizon failed startup probe, will be restarted" Jan 27 19:00:46 crc kubenswrapper[4853]: I0127 19:00:46.643383 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-69967664fb-pbqhr" podUID="66d621f7-387b-470d-8e42-bebbfada3bbc" containerName="horizon" containerID="cri-o://3e74acd3091e36c067f9363770b8672147f720192965c341275042fc68c2d916" gracePeriod=30 Jan 27 19:00:47 crc kubenswrapper[4853]: I0127 19:00:47.620288 4853 generic.go:334] "Generic (PLEG): container finished" podID="9c93d763-a677-4df8-9846-5fa96f76e0ab" containerID="5339ebcd43ec79faa6c17b919a8bb5ed81f10743f2d6a8ba151e39c4224c95d5" exitCode=137 Jan 27 19:00:47 crc kubenswrapper[4853]: I0127 19:00:47.620822 4853 generic.go:334] "Generic (PLEG): container finished" podID="9c93d763-a677-4df8-9846-5fa96f76e0ab" containerID="92dff0c5224d3dee5827961bc46f8c7f5d4896f42ebb8d0818620327c2d3f37d" exitCode=137 Jan 27 19:00:47 crc kubenswrapper[4853]: I0127 19:00:47.620343 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-549f7c8989-k8qw5" event={"ID":"9c93d763-a677-4df8-9846-5fa96f76e0ab","Type":"ContainerDied","Data":"5339ebcd43ec79faa6c17b919a8bb5ed81f10743f2d6a8ba151e39c4224c95d5"} Jan 27 19:00:47 crc kubenswrapper[4853]: I0127 19:00:47.620893 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-549f7c8989-k8qw5" event={"ID":"9c93d763-a677-4df8-9846-5fa96f76e0ab","Type":"ContainerDied","Data":"92dff0c5224d3dee5827961bc46f8c7f5d4896f42ebb8d0818620327c2d3f37d"} Jan 27 19:00:47 crc kubenswrapper[4853]: I0127 19:00:47.625712 4853 generic.go:334] "Generic (PLEG): container finished" podID="b62f23c7-d81a-4925-a2c3-10c410912a0f" containerID="bf6e6ce6ee31833377bad084f3c815870cd11b10798c0f3681f38bf6757a0e13" exitCode=137 Jan 27 19:00:47 crc kubenswrapper[4853]: I0127 19:00:47.625751 4853 generic.go:334] "Generic (PLEG): container finished" podID="b62f23c7-d81a-4925-a2c3-10c410912a0f" containerID="d73065e76bd2c3efa6b0c46086cd6fad7f71858b527127f5594a43afa42ab84a" exitCode=137 Jan 27 19:00:47 crc kubenswrapper[4853]: I0127 19:00:47.625780 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6c8b67f5cc-gmbgv" event={"ID":"b62f23c7-d81a-4925-a2c3-10c410912a0f","Type":"ContainerDied","Data":"bf6e6ce6ee31833377bad084f3c815870cd11b10798c0f3681f38bf6757a0e13"} Jan 27 19:00:47 crc kubenswrapper[4853]: I0127 19:00:47.625828 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6c8b67f5cc-gmbgv" event={"ID":"b62f23c7-d81a-4925-a2c3-10c410912a0f","Type":"ContainerDied","Data":"d73065e76bd2c3efa6b0c46086cd6fad7f71858b527127f5594a43afa42ab84a"} Jan 27 19:00:47 crc kubenswrapper[4853]: I0127 19:00:47.628443 4853 generic.go:334] "Generic (PLEG): container finished" 
podID="8f37e546-2a7e-49b7-9a9c-0191a746c289" containerID="d4571628bde20943deff9627c39c470a95c4d0d4a6b0fd3237d4da346de77074" exitCode=143 Jan 27 19:00:47 crc kubenswrapper[4853]: I0127 19:00:47.628484 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-546564f86b-jnwdt" event={"ID":"8f37e546-2a7e-49b7-9a9c-0191a746c289","Type":"ContainerDied","Data":"d4571628bde20943deff9627c39c470a95c4d0d4a6b0fd3237d4da346de77074"} Jan 27 19:00:48 crc kubenswrapper[4853]: I0127 19:00:48.393364 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-rfjdk" Jan 27 19:00:48 crc kubenswrapper[4853]: I0127 19:00:48.742281 4853 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-55f844cf75-82msw" podUID="07e83bfd-c7f7-4795-9ae3-81a358092c4e" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.154:5353: connect: connection refused" Jan 27 19:00:48 crc kubenswrapper[4853]: I0127 19:00:48.743588 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/b1d33900-476d-4c86-a501-4490c01000ca-db-sync-config-data\") pod \"b1d33900-476d-4c86-a501-4490c01000ca\" (UID: \"b1d33900-476d-4c86-a501-4490c01000ca\") " Jan 27 19:00:48 crc kubenswrapper[4853]: I0127 19:00:48.743717 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/b1d33900-476d-4c86-a501-4490c01000ca-etc-machine-id\") pod \"b1d33900-476d-4c86-a501-4490c01000ca\" (UID: \"b1d33900-476d-4c86-a501-4490c01000ca\") " Jan 27 19:00:48 crc kubenswrapper[4853]: I0127 19:00:48.743842 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b1d33900-476d-4c86-a501-4490c01000ca-scripts\") pod \"b1d33900-476d-4c86-a501-4490c01000ca\" (UID: \"b1d33900-476d-4c86-a501-4490c01000ca\") " Jan 27 19:00:48 crc kubenswrapper[4853]: I0127 19:00:48.743947 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t8l6t\" (UniqueName: \"kubernetes.io/projected/b1d33900-476d-4c86-a501-4490c01000ca-kube-api-access-t8l6t\") pod \"b1d33900-476d-4c86-a501-4490c01000ca\" (UID: \"b1d33900-476d-4c86-a501-4490c01000ca\") " Jan 27 19:00:48 crc kubenswrapper[4853]: I0127 19:00:48.744016 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b1d33900-476d-4c86-a501-4490c01000ca-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "b1d33900-476d-4c86-a501-4490c01000ca" (UID: "b1d33900-476d-4c86-a501-4490c01000ca"). InnerVolumeSpecName "etc-machine-id". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 19:00:48 crc kubenswrapper[4853]: I0127 19:00:48.744090 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b1d33900-476d-4c86-a501-4490c01000ca-config-data\") pod \"b1d33900-476d-4c86-a501-4490c01000ca\" (UID: \"b1d33900-476d-4c86-a501-4490c01000ca\") " Jan 27 19:00:48 crc kubenswrapper[4853]: I0127 19:00:48.744155 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b1d33900-476d-4c86-a501-4490c01000ca-combined-ca-bundle\") pod \"b1d33900-476d-4c86-a501-4490c01000ca\" (UID: \"b1d33900-476d-4c86-a501-4490c01000ca\") " Jan 27 19:00:48 crc kubenswrapper[4853]: I0127 19:00:48.744934 4853 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/b1d33900-476d-4c86-a501-4490c01000ca-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 27 19:00:48 crc kubenswrapper[4853]: I0127 19:00:48.757598 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b1d33900-476d-4c86-a501-4490c01000ca-scripts" (OuterVolumeSpecName: "scripts") pod "b1d33900-476d-4c86-a501-4490c01000ca" (UID: "b1d33900-476d-4c86-a501-4490c01000ca"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 19:00:48 crc kubenswrapper[4853]: I0127 19:00:48.760143 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b1d33900-476d-4c86-a501-4490c01000ca-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "b1d33900-476d-4c86-a501-4490c01000ca" (UID: "b1d33900-476d-4c86-a501-4490c01000ca"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 19:00:48 crc kubenswrapper[4853]: I0127 19:00:48.767179 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b1d33900-476d-4c86-a501-4490c01000ca-kube-api-access-t8l6t" (OuterVolumeSpecName: "kube-api-access-t8l6t") pod "b1d33900-476d-4c86-a501-4490c01000ca" (UID: "b1d33900-476d-4c86-a501-4490c01000ca"). InnerVolumeSpecName "kube-api-access-t8l6t". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 19:00:48 crc kubenswrapper[4853]: I0127 19:00:48.783917 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-rfjdk" event={"ID":"b1d33900-476d-4c86-a501-4490c01000ca","Type":"ContainerDied","Data":"08f659cff19f95e0e6be362f184c26b863f7c22df64a15a03265d516a58926fe"} Jan 27 19:00:48 crc kubenswrapper[4853]: I0127 19:00:48.783998 4853 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="08f659cff19f95e0e6be362f184c26b863f7c22df64a15a03265d516a58926fe" Jan 27 19:00:48 crc kubenswrapper[4853]: I0127 19:00:48.784020 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-rfjdk" Jan 27 19:00:48 crc kubenswrapper[4853]: I0127 19:00:48.835254 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b1d33900-476d-4c86-a501-4490c01000ca-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b1d33900-476d-4c86-a501-4490c01000ca" (UID: "b1d33900-476d-4c86-a501-4490c01000ca"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 19:00:48 crc kubenswrapper[4853]: I0127 19:00:48.836558 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-5b468778c8-dwvbl" Jan 27 19:00:48 crc kubenswrapper[4853]: I0127 19:00:48.845894 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ba985127-2044-4a64-af56-ac3452f6f939-logs\") pod \"ba985127-2044-4a64-af56-ac3452f6f939\" (UID: \"ba985127-2044-4a64-af56-ac3452f6f939\") " Jan 27 19:00:48 crc kubenswrapper[4853]: I0127 19:00:48.846036 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ba985127-2044-4a64-af56-ac3452f6f939-config-data-custom\") pod \"ba985127-2044-4a64-af56-ac3452f6f939\" (UID: \"ba985127-2044-4a64-af56-ac3452f6f939\") " Jan 27 19:00:48 crc kubenswrapper[4853]: I0127 19:00:48.846075 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8nfkl\" (UniqueName: \"kubernetes.io/projected/ba985127-2044-4a64-af56-ac3452f6f939-kube-api-access-8nfkl\") pod \"ba985127-2044-4a64-af56-ac3452f6f939\" (UID: \"ba985127-2044-4a64-af56-ac3452f6f939\") " Jan 27 19:00:48 crc kubenswrapper[4853]: I0127 19:00:48.846100 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ba985127-2044-4a64-af56-ac3452f6f939-combined-ca-bundle\") pod \"ba985127-2044-4a64-af56-ac3452f6f939\" (UID: \"ba985127-2044-4a64-af56-ac3452f6f939\") " Jan 27 19:00:48 crc kubenswrapper[4853]: I0127 19:00:48.846139 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ba985127-2044-4a64-af56-ac3452f6f939-config-data\") pod \"ba985127-2044-4a64-af56-ac3452f6f939\" (UID: \"ba985127-2044-4a64-af56-ac3452f6f939\") " Jan 27 19:00:48 crc kubenswrapper[4853]: I0127 19:00:48.846430 4853 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/b1d33900-476d-4c86-a501-4490c01000ca-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 19:00:48 crc kubenswrapper[4853]: I0127 19:00:48.846442 4853 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b1d33900-476d-4c86-a501-4490c01000ca-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 19:00:48 crc kubenswrapper[4853]: I0127 19:00:48.846452 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t8l6t\" (UniqueName: \"kubernetes.io/projected/b1d33900-476d-4c86-a501-4490c01000ca-kube-api-access-t8l6t\") on node \"crc\" DevicePath \"\"" Jan 27 19:00:48 crc kubenswrapper[4853]: I0127 19:00:48.846463 4853 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b1d33900-476d-4c86-a501-4490c01000ca-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 19:00:48 crc kubenswrapper[4853]: I0127 19:00:48.849909 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ba985127-2044-4a64-af56-ac3452f6f939-logs" (OuterVolumeSpecName: "logs") pod "ba985127-2044-4a64-af56-ac3452f6f939" (UID: "ba985127-2044-4a64-af56-ac3452f6f939"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 19:00:48 crc kubenswrapper[4853]: I0127 19:00:48.852851 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ba985127-2044-4a64-af56-ac3452f6f939-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "ba985127-2044-4a64-af56-ac3452f6f939" (UID: "ba985127-2044-4a64-af56-ac3452f6f939"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 19:00:48 crc kubenswrapper[4853]: I0127 19:00:48.864587 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ba985127-2044-4a64-af56-ac3452f6f939-kube-api-access-8nfkl" (OuterVolumeSpecName: "kube-api-access-8nfkl") pod "ba985127-2044-4a64-af56-ac3452f6f939" (UID: "ba985127-2044-4a64-af56-ac3452f6f939"). InnerVolumeSpecName "kube-api-access-8nfkl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 19:00:48 crc kubenswrapper[4853]: I0127 19:00:48.892527 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ba985127-2044-4a64-af56-ac3452f6f939-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ba985127-2044-4a64-af56-ac3452f6f939" (UID: "ba985127-2044-4a64-af56-ac3452f6f939"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 19:00:48 crc kubenswrapper[4853]: I0127 19:00:48.910329 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ba985127-2044-4a64-af56-ac3452f6f939-config-data" (OuterVolumeSpecName: "config-data") pod "ba985127-2044-4a64-af56-ac3452f6f939" (UID: "ba985127-2044-4a64-af56-ac3452f6f939"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 19:00:48 crc kubenswrapper[4853]: I0127 19:00:48.934416 4853 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-worker-c5749dd6f-h76dt" Jan 27 19:00:48 crc kubenswrapper[4853]: I0127 19:00:48.948421 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/127380a8-99a3-455d-becd-78835af33867-config-data-custom\") pod \"127380a8-99a3-455d-becd-78835af33867\" (UID: \"127380a8-99a3-455d-becd-78835af33867\") " Jan 27 19:00:48 crc kubenswrapper[4853]: I0127 19:00:48.948497 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/127380a8-99a3-455d-becd-78835af33867-logs\") pod \"127380a8-99a3-455d-becd-78835af33867\" (UID: \"127380a8-99a3-455d-becd-78835af33867\") " Jan 27 19:00:48 crc kubenswrapper[4853]: I0127 19:00:48.948540 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/127380a8-99a3-455d-becd-78835af33867-config-data\") pod \"127380a8-99a3-455d-becd-78835af33867\" (UID: \"127380a8-99a3-455d-becd-78835af33867\") " Jan 27 19:00:48 crc kubenswrapper[4853]: I0127 19:00:48.948610 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q7m87\" (UniqueName: \"kubernetes.io/projected/127380a8-99a3-455d-becd-78835af33867-kube-api-access-q7m87\") pod \"127380a8-99a3-455d-becd-78835af33867\" (UID: \"127380a8-99a3-455d-becd-78835af33867\") " Jan 27 19:00:48 crc kubenswrapper[4853]: I0127 19:00:48.948634 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/127380a8-99a3-455d-becd-78835af33867-combined-ca-bundle\") pod \"127380a8-99a3-455d-becd-78835af33867\" (UID: \"127380a8-99a3-455d-becd-78835af33867\") " Jan 27 19:00:48 crc kubenswrapper[4853]: I0127 19:00:48.951620 4853 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ba985127-2044-4a64-af56-ac3452f6f939-logs\") on node \"crc\" DevicePath \"\"" Jan 27 19:00:48 crc kubenswrapper[4853]: I0127 19:00:48.951646 4853 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ba985127-2044-4a64-af56-ac3452f6f939-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 27 19:00:48 crc kubenswrapper[4853]: I0127 19:00:48.951660 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8nfkl\" (UniqueName: \"kubernetes.io/projected/ba985127-2044-4a64-af56-ac3452f6f939-kube-api-access-8nfkl\") on node \"crc\" DevicePath \"\"" Jan 27 19:00:48 crc kubenswrapper[4853]: I0127 19:00:48.951670 4853 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ba985127-2044-4a64-af56-ac3452f6f939-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 19:00:48 crc kubenswrapper[4853]: I0127 19:00:48.951679 4853 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ba985127-2044-4a64-af56-ac3452f6f939-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 19:00:48 crc kubenswrapper[4853]: I0127 19:00:48.964573 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/127380a8-99a3-455d-becd-78835af33867-logs" (OuterVolumeSpecName: "logs") pod "127380a8-99a3-455d-becd-78835af33867" (UID: "127380a8-99a3-455d-becd-78835af33867"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 19:00:48 crc kubenswrapper[4853]: I0127 19:00:48.985694 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/127380a8-99a3-455d-becd-78835af33867-kube-api-access-q7m87" (OuterVolumeSpecName: "kube-api-access-q7m87") pod "127380a8-99a3-455d-becd-78835af33867" (UID: "127380a8-99a3-455d-becd-78835af33867"). InnerVolumeSpecName "kube-api-access-q7m87". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 19:00:49 crc kubenswrapper[4853]: I0127 19:00:49.026401 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/127380a8-99a3-455d-becd-78835af33867-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "127380a8-99a3-455d-becd-78835af33867" (UID: "127380a8-99a3-455d-becd-78835af33867"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 19:00:49 crc kubenswrapper[4853]: I0127 19:00:49.037272 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b1d33900-476d-4c86-a501-4490c01000ca-config-data" (OuterVolumeSpecName: "config-data") pod "b1d33900-476d-4c86-a501-4490c01000ca" (UID: "b1d33900-476d-4c86-a501-4490c01000ca"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 19:00:49 crc kubenswrapper[4853]: I0127 19:00:49.054739 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/127380a8-99a3-455d-becd-78835af33867-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "127380a8-99a3-455d-becd-78835af33867" (UID: "127380a8-99a3-455d-becd-78835af33867"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 19:00:49 crc kubenswrapper[4853]: I0127 19:00:49.054911 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q7m87\" (UniqueName: \"kubernetes.io/projected/127380a8-99a3-455d-becd-78835af33867-kube-api-access-q7m87\") on node \"crc\" DevicePath \"\"" Jan 27 19:00:49 crc kubenswrapper[4853]: I0127 19:00:49.054933 4853 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/127380a8-99a3-455d-becd-78835af33867-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 19:00:49 crc kubenswrapper[4853]: I0127 19:00:49.054946 4853 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b1d33900-476d-4c86-a501-4490c01000ca-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 19:00:49 crc kubenswrapper[4853]: I0127 19:00:49.054958 4853 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/127380a8-99a3-455d-becd-78835af33867-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 27 19:00:49 crc kubenswrapper[4853]: I0127 19:00:49.058328 4853 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/127380a8-99a3-455d-becd-78835af33867-logs\") on node \"crc\" DevicePath \"\"" Jan 27 19:00:49 crc kubenswrapper[4853]: E0127 19:00:49.111701 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/ceilometer-0" podUID="afc78a65-bfa6-42ff-a84a-f90dd740ffbf" Jan 27 19:00:49 crc kubenswrapper[4853]: I0127 19:00:49.112266 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/127380a8-99a3-455d-becd-78835af33867-config-data" (OuterVolumeSpecName: "config-data") pod "127380a8-99a3-455d-becd-78835af33867" (UID: "127380a8-99a3-455d-becd-78835af33867"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 19:00:49 crc kubenswrapper[4853]: I0127 19:00:49.160756 4853 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/127380a8-99a3-455d-becd-78835af33867-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 19:00:49 crc kubenswrapper[4853]: I0127 19:00:49.291824 4853 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-55f844cf75-82msw" Jan 27 19:00:49 crc kubenswrapper[4853]: I0127 19:00:49.368876 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/07e83bfd-c7f7-4795-9ae3-81a358092c4e-ovsdbserver-sb\") pod \"07e83bfd-c7f7-4795-9ae3-81a358092c4e\" (UID: \"07e83bfd-c7f7-4795-9ae3-81a358092c4e\") " Jan 27 19:00:49 crc kubenswrapper[4853]: I0127 19:00:49.369063 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/07e83bfd-c7f7-4795-9ae3-81a358092c4e-ovsdbserver-nb\") pod \"07e83bfd-c7f7-4795-9ae3-81a358092c4e\" (UID: \"07e83bfd-c7f7-4795-9ae3-81a358092c4e\") " Jan 27 19:00:49 crc kubenswrapper[4853]: I0127 19:00:49.369135 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/07e83bfd-c7f7-4795-9ae3-81a358092c4e-dns-swift-storage-0\") pod \"07e83bfd-c7f7-4795-9ae3-81a358092c4e\" (UID: \"07e83bfd-c7f7-4795-9ae3-81a358092c4e\") " Jan 27 19:00:49 crc kubenswrapper[4853]: I0127 19:00:49.369239 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/07e83bfd-c7f7-4795-9ae3-81a358092c4e-dns-svc\") pod \"07e83bfd-c7f7-4795-9ae3-81a358092c4e\" (UID: \"07e83bfd-c7f7-4795-9ae3-81a358092c4e\") " Jan 27 19:00:49 crc kubenswrapper[4853]: I0127 19:00:49.369286 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q6sng\" (UniqueName: \"kubernetes.io/projected/07e83bfd-c7f7-4795-9ae3-81a358092c4e-kube-api-access-q6sng\") pod \"07e83bfd-c7f7-4795-9ae3-81a358092c4e\" (UID: \"07e83bfd-c7f7-4795-9ae3-81a358092c4e\") " Jan 27 19:00:49 crc kubenswrapper[4853]: I0127 19:00:49.369403 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/07e83bfd-c7f7-4795-9ae3-81a358092c4e-config\") pod \"07e83bfd-c7f7-4795-9ae3-81a358092c4e\" (UID: \"07e83bfd-c7f7-4795-9ae3-81a358092c4e\") " Jan 27 19:00:49 crc kubenswrapper[4853]: I0127 19:00:49.392383 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/07e83bfd-c7f7-4795-9ae3-81a358092c4e-kube-api-access-q6sng" (OuterVolumeSpecName: "kube-api-access-q6sng") pod "07e83bfd-c7f7-4795-9ae3-81a358092c4e" (UID: "07e83bfd-c7f7-4795-9ae3-81a358092c4e"). InnerVolumeSpecName "kube-api-access-q6sng". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 19:00:49 crc kubenswrapper[4853]: I0127 19:00:49.459857 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/07e83bfd-c7f7-4795-9ae3-81a358092c4e-config" (OuterVolumeSpecName: "config") pod "07e83bfd-c7f7-4795-9ae3-81a358092c4e" (UID: "07e83bfd-c7f7-4795-9ae3-81a358092c4e"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 19:00:49 crc kubenswrapper[4853]: I0127 19:00:49.470691 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/07e83bfd-c7f7-4795-9ae3-81a358092c4e-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "07e83bfd-c7f7-4795-9ae3-81a358092c4e" (UID: "07e83bfd-c7f7-4795-9ae3-81a358092c4e"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 19:00:49 crc kubenswrapper[4853]: I0127 19:00:49.472677 4853 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/07e83bfd-c7f7-4795-9ae3-81a358092c4e-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 27 19:00:49 crc kubenswrapper[4853]: I0127 19:00:49.472703 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q6sng\" (UniqueName: \"kubernetes.io/projected/07e83bfd-c7f7-4795-9ae3-81a358092c4e-kube-api-access-q6sng\") on node \"crc\" DevicePath \"\"" Jan 27 19:00:49 crc kubenswrapper[4853]: I0127 19:00:49.472717 4853 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/07e83bfd-c7f7-4795-9ae3-81a358092c4e-config\") on node \"crc\" DevicePath \"\"" Jan 27 19:00:49 crc kubenswrapper[4853]: I0127 19:00:49.481290 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/07e83bfd-c7f7-4795-9ae3-81a358092c4e-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "07e83bfd-c7f7-4795-9ae3-81a358092c4e" (UID: "07e83bfd-c7f7-4795-9ae3-81a358092c4e"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 19:00:49 crc kubenswrapper[4853]: I0127 19:00:49.523811 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/07e83bfd-c7f7-4795-9ae3-81a358092c4e-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "07e83bfd-c7f7-4795-9ae3-81a358092c4e" (UID: "07e83bfd-c7f7-4795-9ae3-81a358092c4e"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 19:00:49 crc kubenswrapper[4853]: I0127 19:00:49.541905 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/07e83bfd-c7f7-4795-9ae3-81a358092c4e-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "07e83bfd-c7f7-4795-9ae3-81a358092c4e" (UID: "07e83bfd-c7f7-4795-9ae3-81a358092c4e"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 19:00:49 crc kubenswrapper[4853]: I0127 19:00:49.574898 4853 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/07e83bfd-c7f7-4795-9ae3-81a358092c4e-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 27 19:00:49 crc kubenswrapper[4853]: I0127 19:00:49.574955 4853 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/07e83bfd-c7f7-4795-9ae3-81a358092c4e-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 27 19:00:49 crc kubenswrapper[4853]: I0127 19:00:49.574968 4853 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/07e83bfd-c7f7-4795-9ae3-81a358092c4e-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 27 19:00:49 crc kubenswrapper[4853]: I0127 19:00:49.593939 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-5dbbdbb6d9-g899w" Jan 27 19:00:49 crc kubenswrapper[4853]: I0127 19:00:49.606519 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-549f7c8989-k8qw5" Jan 27 19:00:49 crc kubenswrapper[4853]: I0127 19:00:49.621195 4853 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-6c8b67f5cc-gmbgv" Jan 27 19:00:49 crc kubenswrapper[4853]: I0127 19:00:49.626928 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-54879c9777-tvw4r" Jan 27 19:00:49 crc kubenswrapper[4853]: I0127 19:00:49.677072 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b62f23c7-d81a-4925-a2c3-10c410912a0f-config-data\") pod \"b62f23c7-d81a-4925-a2c3-10c410912a0f\" (UID: \"b62f23c7-d81a-4925-a2c3-10c410912a0f\") " Jan 27 19:00:49 crc kubenswrapper[4853]: I0127 19:00:49.677202 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8bjpg\" (UniqueName: \"kubernetes.io/projected/e556ea12-6992-4aba-be03-e6d4a2823b74-kube-api-access-8bjpg\") pod \"e556ea12-6992-4aba-be03-e6d4a2823b74\" (UID: \"e556ea12-6992-4aba-be03-e6d4a2823b74\") " Jan 27 19:00:49 crc kubenswrapper[4853]: I0127 19:00:49.677245 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wn6p5\" (UniqueName: \"kubernetes.io/projected/9c93d763-a677-4df8-9846-5fa96f76e0ab-kube-api-access-wn6p5\") pod \"9c93d763-a677-4df8-9846-5fa96f76e0ab\" (UID: \"9c93d763-a677-4df8-9846-5fa96f76e0ab\") " Jan 27 19:00:49 crc kubenswrapper[4853]: I0127 19:00:49.677283 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b62f23c7-d81a-4925-a2c3-10c410912a0f-logs\") pod \"b62f23c7-d81a-4925-a2c3-10c410912a0f\" (UID: \"b62f23c7-d81a-4925-a2c3-10c410912a0f\") " Jan 27 19:00:49 crc kubenswrapper[4853]: I0127 19:00:49.677331 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/9c93d763-a677-4df8-9846-5fa96f76e0ab-horizon-secret-key\") pod \"9c93d763-a677-4df8-9846-5fa96f76e0ab\" (UID: \"9c93d763-a677-4df8-9846-5fa96f76e0ab\") " Jan 27 19:00:49 crc kubenswrapper[4853]: I0127 19:00:49.677399 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9c93d763-a677-4df8-9846-5fa96f76e0ab-logs\") pod \"9c93d763-a677-4df8-9846-5fa96f76e0ab\" (UID: \"9c93d763-a677-4df8-9846-5fa96f76e0ab\") " Jan 27 19:00:49 crc kubenswrapper[4853]: I0127 19:00:49.677458 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9c93d763-a677-4df8-9846-5fa96f76e0ab-config-data\") pod \"9c93d763-a677-4df8-9846-5fa96f76e0ab\" (UID: \"9c93d763-a677-4df8-9846-5fa96f76e0ab\") " Jan 27 19:00:49 crc kubenswrapper[4853]: I0127 19:00:49.677492 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b62f23c7-d81a-4925-a2c3-10c410912a0f-scripts\") pod \"b62f23c7-d81a-4925-a2c3-10c410912a0f\" (UID: \"b62f23c7-d81a-4925-a2c3-10c410912a0f\") " Jan 27 19:00:49 crc kubenswrapper[4853]: I0127 19:00:49.677521 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/e556ea12-6992-4aba-be03-e6d4a2823b74-horizon-secret-key\") pod \"e556ea12-6992-4aba-be03-e6d4a2823b74\" (UID: \"e556ea12-6992-4aba-be03-e6d4a2823b74\") " Jan 27 19:00:49 crc kubenswrapper[4853]: I0127 19:00:49.677588 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/cd236b6c-6a86-4c6a-8e4a-f2a459943780-httpd-config\") pod \"cd236b6c-6a86-4c6a-8e4a-f2a459943780\" (UID: \"cd236b6c-6a86-4c6a-8e4a-f2a459943780\") " Jan 27 19:00:49 crc kubenswrapper[4853]: I0127 19:00:49.677626 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qc48m\" (UniqueName: \"kubernetes.io/projected/b62f23c7-d81a-4925-a2c3-10c410912a0f-kube-api-access-qc48m\") pod \"b62f23c7-d81a-4925-a2c3-10c410912a0f\" (UID: \"b62f23c7-d81a-4925-a2c3-10c410912a0f\") " Jan 27 19:00:49 crc kubenswrapper[4853]: I0127 19:00:49.677715 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/b62f23c7-d81a-4925-a2c3-10c410912a0f-horizon-secret-key\") pod \"b62f23c7-d81a-4925-a2c3-10c410912a0f\" (UID: \"b62f23c7-d81a-4925-a2c3-10c410912a0f\") " Jan 27 19:00:49 crc kubenswrapper[4853]: I0127 19:00:49.677750 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/cd236b6c-6a86-4c6a-8e4a-f2a459943780-config\") pod \"cd236b6c-6a86-4c6a-8e4a-f2a459943780\" (UID: \"cd236b6c-6a86-4c6a-8e4a-f2a459943780\") " Jan 27 19:00:49 crc kubenswrapper[4853]: I0127 19:00:49.677774 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p5stq\" (UniqueName: \"kubernetes.io/projected/cd236b6c-6a86-4c6a-8e4a-f2a459943780-kube-api-access-p5stq\") pod \"cd236b6c-6a86-4c6a-8e4a-f2a459943780\" (UID: \"cd236b6c-6a86-4c6a-8e4a-f2a459943780\") " Jan 27 19:00:49 crc kubenswrapper[4853]: I0127 19:00:49.677810 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e556ea12-6992-4aba-be03-e6d4a2823b74-logs\") pod \"e556ea12-6992-4aba-be03-e6d4a2823b74\" (UID: \"e556ea12-6992-4aba-be03-e6d4a2823b74\") " Jan 27 19:00:49 crc kubenswrapper[4853]: I0127 19:00:49.677841 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9c93d763-a677-4df8-9846-5fa96f76e0ab-scripts\") pod \"9c93d763-a677-4df8-9846-5fa96f76e0ab\" (UID: \"9c93d763-a677-4df8-9846-5fa96f76e0ab\") " Jan 27 19:00:49 crc kubenswrapper[4853]: I0127 19:00:49.677878 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cd236b6c-6a86-4c6a-8e4a-f2a459943780-combined-ca-bundle\") pod \"cd236b6c-6a86-4c6a-8e4a-f2a459943780\" (UID: \"cd236b6c-6a86-4c6a-8e4a-f2a459943780\") " Jan 27 19:00:49 crc kubenswrapper[4853]: I0127 19:00:49.677907 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/cd236b6c-6a86-4c6a-8e4a-f2a459943780-public-tls-certs\") pod \"cd236b6c-6a86-4c6a-8e4a-f2a459943780\" (UID: \"cd236b6c-6a86-4c6a-8e4a-f2a459943780\") " Jan 27 19:00:49 crc kubenswrapper[4853]: I0127 19:00:49.677939 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/cd236b6c-6a86-4c6a-8e4a-f2a459943780-ovndb-tls-certs\") pod \"cd236b6c-6a86-4c6a-8e4a-f2a459943780\" (UID: \"cd236b6c-6a86-4c6a-8e4a-f2a459943780\") " Jan 27 19:00:49 crc kubenswrapper[4853]: I0127 19:00:49.677961 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" 
(UniqueName: \"kubernetes.io/secret/cd236b6c-6a86-4c6a-8e4a-f2a459943780-internal-tls-certs\") pod \"cd236b6c-6a86-4c6a-8e4a-f2a459943780\" (UID: \"cd236b6c-6a86-4c6a-8e4a-f2a459943780\") " Jan 27 19:00:49 crc kubenswrapper[4853]: I0127 19:00:49.678013 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e556ea12-6992-4aba-be03-e6d4a2823b74-scripts\") pod \"e556ea12-6992-4aba-be03-e6d4a2823b74\" (UID: \"e556ea12-6992-4aba-be03-e6d4a2823b74\") " Jan 27 19:00:49 crc kubenswrapper[4853]: I0127 19:00:49.678039 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e556ea12-6992-4aba-be03-e6d4a2823b74-config-data\") pod \"e556ea12-6992-4aba-be03-e6d4a2823b74\" (UID: \"e556ea12-6992-4aba-be03-e6d4a2823b74\") " Jan 27 19:00:49 crc kubenswrapper[4853]: I0127 19:00:49.683366 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-64c8bd57d9-g88k8"] Jan 27 19:00:49 crc kubenswrapper[4853]: I0127 19:00:49.693273 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b62f23c7-d81a-4925-a2c3-10c410912a0f-logs" (OuterVolumeSpecName: "logs") pod "b62f23c7-d81a-4925-a2c3-10c410912a0f" (UID: "b62f23c7-d81a-4925-a2c3-10c410912a0f"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 19:00:49 crc kubenswrapper[4853]: I0127 19:00:49.702355 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e556ea12-6992-4aba-be03-e6d4a2823b74-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "e556ea12-6992-4aba-be03-e6d4a2823b74" (UID: "e556ea12-6992-4aba-be03-e6d4a2823b74"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 19:00:49 crc kubenswrapper[4853]: I0127 19:00:49.703373 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e556ea12-6992-4aba-be03-e6d4a2823b74-logs" (OuterVolumeSpecName: "logs") pod "e556ea12-6992-4aba-be03-e6d4a2823b74" (UID: "e556ea12-6992-4aba-be03-e6d4a2823b74"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 19:00:49 crc kubenswrapper[4853]: I0127 19:00:49.704938 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9c93d763-a677-4df8-9846-5fa96f76e0ab-logs" (OuterVolumeSpecName: "logs") pod "9c93d763-a677-4df8-9846-5fa96f76e0ab" (UID: "9c93d763-a677-4df8-9846-5fa96f76e0ab"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 19:00:49 crc kubenswrapper[4853]: I0127 19:00:49.714985 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e556ea12-6992-4aba-be03-e6d4a2823b74-kube-api-access-8bjpg" (OuterVolumeSpecName: "kube-api-access-8bjpg") pod "e556ea12-6992-4aba-be03-e6d4a2823b74" (UID: "e556ea12-6992-4aba-be03-e6d4a2823b74"). InnerVolumeSpecName "kube-api-access-8bjpg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 19:00:49 crc kubenswrapper[4853]: I0127 19:00:49.762977 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cd236b6c-6a86-4c6a-8e4a-f2a459943780-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "cd236b6c-6a86-4c6a-8e4a-f2a459943780" (UID: "cd236b6c-6a86-4c6a-8e4a-f2a459943780"). 
InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 19:00:49 crc kubenswrapper[4853]: I0127 19:00:49.765991 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b62f23c7-d81a-4925-a2c3-10c410912a0f-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "b62f23c7-d81a-4925-a2c3-10c410912a0f" (UID: "b62f23c7-d81a-4925-a2c3-10c410912a0f"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 19:00:49 crc kubenswrapper[4853]: I0127 19:00:49.766003 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd236b6c-6a86-4c6a-8e4a-f2a459943780-kube-api-access-p5stq" (OuterVolumeSpecName: "kube-api-access-p5stq") pod "cd236b6c-6a86-4c6a-8e4a-f2a459943780" (UID: "cd236b6c-6a86-4c6a-8e4a-f2a459943780"). InnerVolumeSpecName "kube-api-access-p5stq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 19:00:49 crc kubenswrapper[4853]: I0127 19:00:49.766082 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9c93d763-a677-4df8-9846-5fa96f76e0ab-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "9c93d763-a677-4df8-9846-5fa96f76e0ab" (UID: "9c93d763-a677-4df8-9846-5fa96f76e0ab"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 19:00:49 crc kubenswrapper[4853]: I0127 19:00:49.766947 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b62f23c7-d81a-4925-a2c3-10c410912a0f-scripts" (OuterVolumeSpecName: "scripts") pod "b62f23c7-d81a-4925-a2c3-10c410912a0f" (UID: "b62f23c7-d81a-4925-a2c3-10c410912a0f"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 19:00:49 crc kubenswrapper[4853]: I0127 19:00:49.768024 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b62f23c7-d81a-4925-a2c3-10c410912a0f-kube-api-access-qc48m" (OuterVolumeSpecName: "kube-api-access-qc48m") pod "b62f23c7-d81a-4925-a2c3-10c410912a0f" (UID: "b62f23c7-d81a-4925-a2c3-10c410912a0f"). InnerVolumeSpecName "kube-api-access-qc48m". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 19:00:49 crc kubenswrapper[4853]: I0127 19:00:49.779783 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9c93d763-a677-4df8-9846-5fa96f76e0ab-config-data" (OuterVolumeSpecName: "config-data") pod "9c93d763-a677-4df8-9846-5fa96f76e0ab" (UID: "9c93d763-a677-4df8-9846-5fa96f76e0ab"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 19:00:49 crc kubenswrapper[4853]: I0127 19:00:49.780352 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9c93d763-a677-4df8-9846-5fa96f76e0ab-config-data\") pod \"9c93d763-a677-4df8-9846-5fa96f76e0ab\" (UID: \"9c93d763-a677-4df8-9846-5fa96f76e0ab\") " Jan 27 19:00:49 crc kubenswrapper[4853]: I0127 19:00:49.781133 4853 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b62f23c7-d81a-4925-a2c3-10c410912a0f-logs\") on node \"crc\" DevicePath \"\"" Jan 27 19:00:49 crc kubenswrapper[4853]: I0127 19:00:49.781152 4853 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/9c93d763-a677-4df8-9846-5fa96f76e0ab-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Jan 27 19:00:49 crc kubenswrapper[4853]: I0127 19:00:49.781164 4853 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9c93d763-a677-4df8-9846-5fa96f76e0ab-logs\") on node \"crc\" DevicePath \"\"" Jan 27 19:00:49 crc kubenswrapper[4853]: I0127 19:00:49.781176 4853 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b62f23c7-d81a-4925-a2c3-10c410912a0f-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 19:00:49 crc kubenswrapper[4853]: I0127 19:00:49.781186 4853 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/e556ea12-6992-4aba-be03-e6d4a2823b74-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Jan 27 19:00:49 crc kubenswrapper[4853]: I0127 19:00:49.781195 4853 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/cd236b6c-6a86-4c6a-8e4a-f2a459943780-httpd-config\") on node \"crc\" DevicePath \"\"" Jan 27 19:00:49 crc kubenswrapper[4853]: I0127 19:00:49.781206 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qc48m\" (UniqueName: \"kubernetes.io/projected/b62f23c7-d81a-4925-a2c3-10c410912a0f-kube-api-access-qc48m\") on node \"crc\" DevicePath \"\"" Jan 27 19:00:49 crc kubenswrapper[4853]: I0127 19:00:49.781216 4853 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/b62f23c7-d81a-4925-a2c3-10c410912a0f-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Jan 27 19:00:49 crc kubenswrapper[4853]: I0127 19:00:49.781227 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p5stq\" (UniqueName: \"kubernetes.io/projected/cd236b6c-6a86-4c6a-8e4a-f2a459943780-kube-api-access-p5stq\") on node \"crc\" DevicePath \"\"" Jan 27 19:00:49 crc kubenswrapper[4853]: I0127 19:00:49.781237 4853 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e556ea12-6992-4aba-be03-e6d4a2823b74-logs\") on node \"crc\" DevicePath \"\"" Jan 27 19:00:49 crc kubenswrapper[4853]: I0127 19:00:49.781248 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8bjpg\" (UniqueName: \"kubernetes.io/projected/e556ea12-6992-4aba-be03-e6d4a2823b74-kube-api-access-8bjpg\") on node \"crc\" DevicePath \"\"" Jan 27 19:00:49 crc kubenswrapper[4853]: W0127 19:00:49.781343 4853 empty_dir.go:500] Warning: Unmount skipped because path does not exist: 
/var/lib/kubelet/pods/9c93d763-a677-4df8-9846-5fa96f76e0ab/volumes/kubernetes.io~configmap/config-data Jan 27 19:00:49 crc kubenswrapper[4853]: I0127 19:00:49.781360 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9c93d763-a677-4df8-9846-5fa96f76e0ab-config-data" (OuterVolumeSpecName: "config-data") pod "9c93d763-a677-4df8-9846-5fa96f76e0ab" (UID: "9c93d763-a677-4df8-9846-5fa96f76e0ab"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 19:00:49 crc kubenswrapper[4853]: I0127 19:00:49.787216 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9c93d763-a677-4df8-9846-5fa96f76e0ab-kube-api-access-wn6p5" (OuterVolumeSpecName: "kube-api-access-wn6p5") pod "9c93d763-a677-4df8-9846-5fa96f76e0ab" (UID: "9c93d763-a677-4df8-9846-5fa96f76e0ab"). InnerVolumeSpecName "kube-api-access-wn6p5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 19:00:49 crc kubenswrapper[4853]: I0127 19:00:49.823401 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6c8b67f5cc-gmbgv" event={"ID":"b62f23c7-d81a-4925-a2c3-10c410912a0f","Type":"ContainerDied","Data":"901d9248fb74793e8ef2227c299562487faf838e63445e2e8c0b54a1bb4be7dc"} Jan 27 19:00:49 crc kubenswrapper[4853]: I0127 19:00:49.823482 4853 scope.go:117] "RemoveContainer" containerID="bf6e6ce6ee31833377bad084f3c815870cd11b10798c0f3681f38bf6757a0e13" Jan 27 19:00:49 crc kubenswrapper[4853]: I0127 19:00:49.823665 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-6c8b67f5cc-gmbgv" Jan 27 19:00:49 crc kubenswrapper[4853]: I0127 19:00:49.844829 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-c5749dd6f-h76dt" event={"ID":"127380a8-99a3-455d-becd-78835af33867","Type":"ContainerDied","Data":"e1e14b7a13c9ae54b1dcd8dea1b88bb5a6b3c79f7b49af97ab425c77a27e1a94"} Jan 27 19:00:49 crc kubenswrapper[4853]: I0127 19:00:49.845027 4853 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-worker-c5749dd6f-h76dt" Jan 27 19:00:49 crc kubenswrapper[4853]: I0127 19:00:49.846771 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Jan 27 19:00:49 crc kubenswrapper[4853]: E0127 19:00:49.847394 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="127380a8-99a3-455d-becd-78835af33867" containerName="barbican-worker-log" Jan 27 19:00:49 crc kubenswrapper[4853]: I0127 19:00:49.847412 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="127380a8-99a3-455d-becd-78835af33867" containerName="barbican-worker-log" Jan 27 19:00:49 crc kubenswrapper[4853]: E0127 19:00:49.847426 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ba985127-2044-4a64-af56-ac3452f6f939" containerName="barbican-keystone-listener-log" Jan 27 19:00:49 crc kubenswrapper[4853]: I0127 19:00:49.847435 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="ba985127-2044-4a64-af56-ac3452f6f939" containerName="barbican-keystone-listener-log" Jan 27 19:00:49 crc kubenswrapper[4853]: E0127 19:00:49.847454 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b62f23c7-d81a-4925-a2c3-10c410912a0f" containerName="horizon" Jan 27 19:00:49 crc kubenswrapper[4853]: I0127 19:00:49.847459 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="b62f23c7-d81a-4925-a2c3-10c410912a0f" containerName="horizon" Jan 27 19:00:49 crc kubenswrapper[4853]: E0127 19:00:49.847470 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="07e83bfd-c7f7-4795-9ae3-81a358092c4e" containerName="dnsmasq-dns" Jan 27 19:00:49 crc kubenswrapper[4853]: I0127 19:00:49.847476 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="07e83bfd-c7f7-4795-9ae3-81a358092c4e" containerName="dnsmasq-dns" Jan 27 19:00:49 crc kubenswrapper[4853]: E0127 19:00:49.847489 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b1d33900-476d-4c86-a501-4490c01000ca" containerName="cinder-db-sync" Jan 27 19:00:49 crc kubenswrapper[4853]: I0127 19:00:49.847495 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="b1d33900-476d-4c86-a501-4490c01000ca" containerName="cinder-db-sync" Jan 27 19:00:49 crc kubenswrapper[4853]: E0127 19:00:49.847503 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b62f23c7-d81a-4925-a2c3-10c410912a0f" containerName="horizon-log" Jan 27 19:00:49 crc kubenswrapper[4853]: I0127 19:00:49.847509 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="b62f23c7-d81a-4925-a2c3-10c410912a0f" containerName="horizon-log" Jan 27 19:00:49 crc kubenswrapper[4853]: E0127 19:00:49.847516 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cd236b6c-6a86-4c6a-8e4a-f2a459943780" containerName="neutron-api" Jan 27 19:00:49 crc kubenswrapper[4853]: I0127 19:00:49.847522 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="cd236b6c-6a86-4c6a-8e4a-f2a459943780" containerName="neutron-api" Jan 27 19:00:49 crc kubenswrapper[4853]: E0127 19:00:49.847535 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e556ea12-6992-4aba-be03-e6d4a2823b74" containerName="horizon-log" Jan 27 19:00:49 crc kubenswrapper[4853]: I0127 19:00:49.847541 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="e556ea12-6992-4aba-be03-e6d4a2823b74" containerName="horizon-log" Jan 27 19:00:49 crc kubenswrapper[4853]: E0127 19:00:49.847559 4853 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="cd236b6c-6a86-4c6a-8e4a-f2a459943780" containerName="neutron-httpd" Jan 27 19:00:49 crc kubenswrapper[4853]: I0127 19:00:49.847565 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="cd236b6c-6a86-4c6a-8e4a-f2a459943780" containerName="neutron-httpd" Jan 27 19:00:49 crc kubenswrapper[4853]: E0127 19:00:49.847575 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ba985127-2044-4a64-af56-ac3452f6f939" containerName="barbican-keystone-listener" Jan 27 19:00:49 crc kubenswrapper[4853]: I0127 19:00:49.847582 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="ba985127-2044-4a64-af56-ac3452f6f939" containerName="barbican-keystone-listener" Jan 27 19:00:49 crc kubenswrapper[4853]: E0127 19:00:49.847591 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="127380a8-99a3-455d-becd-78835af33867" containerName="barbican-worker" Jan 27 19:00:49 crc kubenswrapper[4853]: I0127 19:00:49.847597 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="127380a8-99a3-455d-becd-78835af33867" containerName="barbican-worker" Jan 27 19:00:49 crc kubenswrapper[4853]: E0127 19:00:49.847608 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="07e83bfd-c7f7-4795-9ae3-81a358092c4e" containerName="init" Jan 27 19:00:49 crc kubenswrapper[4853]: I0127 19:00:49.847614 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="07e83bfd-c7f7-4795-9ae3-81a358092c4e" containerName="init" Jan 27 19:00:49 crc kubenswrapper[4853]: E0127 19:00:49.847624 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e556ea12-6992-4aba-be03-e6d4a2823b74" containerName="horizon" Jan 27 19:00:49 crc kubenswrapper[4853]: I0127 19:00:49.847630 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="e556ea12-6992-4aba-be03-e6d4a2823b74" containerName="horizon" Jan 27 19:00:49 crc kubenswrapper[4853]: E0127 19:00:49.847640 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9c93d763-a677-4df8-9846-5fa96f76e0ab" containerName="horizon-log" Jan 27 19:00:49 crc kubenswrapper[4853]: I0127 19:00:49.847647 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="9c93d763-a677-4df8-9846-5fa96f76e0ab" containerName="horizon-log" Jan 27 19:00:49 crc kubenswrapper[4853]: E0127 19:00:49.847654 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9c93d763-a677-4df8-9846-5fa96f76e0ab" containerName="horizon" Jan 27 19:00:49 crc kubenswrapper[4853]: I0127 19:00:49.847668 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="9c93d763-a677-4df8-9846-5fa96f76e0ab" containerName="horizon" Jan 27 19:00:49 crc kubenswrapper[4853]: I0127 19:00:49.847848 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="ba985127-2044-4a64-af56-ac3452f6f939" containerName="barbican-keystone-listener" Jan 27 19:00:49 crc kubenswrapper[4853]: I0127 19:00:49.847858 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="127380a8-99a3-455d-becd-78835af33867" containerName="barbican-worker-log" Jan 27 19:00:49 crc kubenswrapper[4853]: I0127 19:00:49.847869 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="b1d33900-476d-4c86-a501-4490c01000ca" containerName="cinder-db-sync" Jan 27 19:00:49 crc kubenswrapper[4853]: I0127 19:00:49.847879 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="e556ea12-6992-4aba-be03-e6d4a2823b74" containerName="horizon" Jan 27 19:00:49 crc kubenswrapper[4853]: I0127 19:00:49.847895 4853 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="e556ea12-6992-4aba-be03-e6d4a2823b74" containerName="horizon-log" Jan 27 19:00:49 crc kubenswrapper[4853]: I0127 19:00:49.847902 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="9c93d763-a677-4df8-9846-5fa96f76e0ab" containerName="horizon-log" Jan 27 19:00:49 crc kubenswrapper[4853]: I0127 19:00:49.847911 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="9c93d763-a677-4df8-9846-5fa96f76e0ab" containerName="horizon" Jan 27 19:00:49 crc kubenswrapper[4853]: I0127 19:00:49.847920 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="b62f23c7-d81a-4925-a2c3-10c410912a0f" containerName="horizon-log" Jan 27 19:00:49 crc kubenswrapper[4853]: I0127 19:00:49.847931 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="07e83bfd-c7f7-4795-9ae3-81a358092c4e" containerName="dnsmasq-dns" Jan 27 19:00:49 crc kubenswrapper[4853]: I0127 19:00:49.847946 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="cd236b6c-6a86-4c6a-8e4a-f2a459943780" containerName="neutron-httpd" Jan 27 19:00:49 crc kubenswrapper[4853]: I0127 19:00:49.847955 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="b62f23c7-d81a-4925-a2c3-10c410912a0f" containerName="horizon" Jan 27 19:00:49 crc kubenswrapper[4853]: I0127 19:00:49.847965 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="cd236b6c-6a86-4c6a-8e4a-f2a459943780" containerName="neutron-api" Jan 27 19:00:49 crc kubenswrapper[4853]: I0127 19:00:49.847976 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="127380a8-99a3-455d-becd-78835af33867" containerName="barbican-worker" Jan 27 19:00:49 crc kubenswrapper[4853]: I0127 19:00:49.847988 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="ba985127-2044-4a64-af56-ac3452f6f939" containerName="barbican-keystone-listener-log" Jan 27 19:00:49 crc kubenswrapper[4853]: I0127 19:00:49.850805 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 27 19:00:49 crc kubenswrapper[4853]: I0127 19:00:49.858990 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-549f7c8989-k8qw5" event={"ID":"9c93d763-a677-4df8-9846-5fa96f76e0ab","Type":"ContainerDied","Data":"8948182430b8915801e47adaaac9fb0d858320eaeafbd9d8cf660c91ecbcd575"} Jan 27 19:00:49 crc kubenswrapper[4853]: I0127 19:00:49.859238 4853 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-549f7c8989-k8qw5" Jan 27 19:00:49 crc kubenswrapper[4853]: I0127 19:00:49.861419 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Jan 27 19:00:49 crc kubenswrapper[4853]: I0127 19:00:49.863076 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 27 19:00:49 crc kubenswrapper[4853]: I0127 19:00:49.863050 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-fjxb2" Jan 27 19:00:49 crc kubenswrapper[4853]: I0127 19:00:49.863263 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Jan 27 19:00:49 crc kubenswrapper[4853]: I0127 19:00:49.863448 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Jan 27 19:00:49 crc kubenswrapper[4853]: I0127 19:00:49.866347 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-5b468778c8-dwvbl" event={"ID":"ba985127-2044-4a64-af56-ac3452f6f939","Type":"ContainerDied","Data":"ca0f841ed664e931932409058e19c7829708b488a70ae46cc658493542b4d1df"} Jan 27 19:00:49 crc kubenswrapper[4853]: I0127 19:00:49.866443 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-5b468778c8-dwvbl" Jan 27 19:00:49 crc kubenswrapper[4853]: I0127 19:00:49.882935 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wn6p5\" (UniqueName: \"kubernetes.io/projected/9c93d763-a677-4df8-9846-5fa96f76e0ab-kube-api-access-wn6p5\") on node \"crc\" DevicePath \"\"" Jan 27 19:00:49 crc kubenswrapper[4853]: I0127 19:00:49.883073 4853 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9c93d763-a677-4df8-9846-5fa96f76e0ab-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 19:00:49 crc kubenswrapper[4853]: I0127 19:00:49.894892 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-pmx5z"] Jan 27 19:00:49 crc kubenswrapper[4853]: I0127 19:00:49.897581 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c9776ccc5-pmx5z" Jan 27 19:00:49 crc kubenswrapper[4853]: I0127 19:00:49.897964 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b62f23c7-d81a-4925-a2c3-10c410912a0f-config-data" (OuterVolumeSpecName: "config-data") pod "b62f23c7-d81a-4925-a2c3-10c410912a0f" (UID: "b62f23c7-d81a-4925-a2c3-10c410912a0f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 19:00:49 crc kubenswrapper[4853]: I0127 19:00:49.908309 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e556ea12-6992-4aba-be03-e6d4a2823b74-config-data" (OuterVolumeSpecName: "config-data") pod "e556ea12-6992-4aba-be03-e6d4a2823b74" (UID: "e556ea12-6992-4aba-be03-e6d4a2823b74"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 19:00:49 crc kubenswrapper[4853]: I0127 19:00:49.912464 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55f844cf75-82msw" event={"ID":"07e83bfd-c7f7-4795-9ae3-81a358092c4e","Type":"ContainerDied","Data":"587eef84ec7d6ca3faccae6a398d572ff90b7f017a84b9ec04824571d58b3722"} Jan 27 19:00:49 crc kubenswrapper[4853]: I0127 19:00:49.912616 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-55f844cf75-82msw" Jan 27 19:00:49 crc kubenswrapper[4853]: I0127 19:00:49.921591 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-pmx5z"] Jan 27 19:00:49 crc kubenswrapper[4853]: I0127 19:00:49.927374 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-54879c9777-tvw4r" event={"ID":"cd236b6c-6a86-4c6a-8e4a-f2a459943780","Type":"ContainerDied","Data":"6d2821afd51a901fc78cd2685c2893d8e03f8f6ef75d9a1a92542b733a0954b5"} Jan 27 19:00:49 crc kubenswrapper[4853]: I0127 19:00:49.927528 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-54879c9777-tvw4r" Jan 27 19:00:49 crc kubenswrapper[4853]: I0127 19:00:49.930686 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"afc78a65-bfa6-42ff-a84a-f90dd740ffbf","Type":"ContainerStarted","Data":"c6933b3f54bf3b2cb983bbe27bdf372710fd86945bce99facf0bb5bc8a73734b"} Jan 27 19:00:49 crc kubenswrapper[4853]: I0127 19:00:49.931064 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 27 19:00:49 crc kubenswrapper[4853]: I0127 19:00:49.931156 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="afc78a65-bfa6-42ff-a84a-f90dd740ffbf" containerName="proxy-httpd" containerID="cri-o://c6933b3f54bf3b2cb983bbe27bdf372710fd86945bce99facf0bb5bc8a73734b" gracePeriod=30 Jan 27 19:00:49 crc kubenswrapper[4853]: I0127 19:00:49.931202 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="afc78a65-bfa6-42ff-a84a-f90dd740ffbf" containerName="ceilometer-notification-agent" containerID="cri-o://0c718b1ee29e92b02c677d336336d3b0e5835da55281777ff39e4f1a10cd46ef" gracePeriod=30 Jan 27 19:00:49 crc kubenswrapper[4853]: I0127 19:00:49.931296 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="afc78a65-bfa6-42ff-a84a-f90dd740ffbf" containerName="sg-core" containerID="cri-o://07c01fa323e543321032a669c321880305b06ed4ab384d27ea8a800bdbcdc348" gracePeriod=30 Jan 27 19:00:49 crc kubenswrapper[4853]: I0127 19:00:49.942349 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5dbbdbb6d9-g899w" event={"ID":"e556ea12-6992-4aba-be03-e6d4a2823b74","Type":"ContainerDied","Data":"87003f46385ca18c520ee4054c0e8c556667e16034f4a15d97f739722da61cab"} Jan 27 19:00:49 crc kubenswrapper[4853]: I0127 19:00:49.942487 4853 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-5dbbdbb6d9-g899w" Jan 27 19:00:49 crc kubenswrapper[4853]: I0127 19:00:49.949986 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-64c8bd57d9-g88k8" event={"ID":"911dc005-42f8-4086-9ee9-04490f7120f4","Type":"ContainerStarted","Data":"454d2f88c9640b259321d0571722bb54819e5858b9fa3df3ebe8ffd0747dd85a"} Jan 27 19:00:49 crc kubenswrapper[4853]: I0127 19:00:49.973654 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cd236b6c-6a86-4c6a-8e4a-f2a459943780-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "cd236b6c-6a86-4c6a-8e4a-f2a459943780" (UID: "cd236b6c-6a86-4c6a-8e4a-f2a459943780"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 19:00:49 crc kubenswrapper[4853]: I0127 19:00:49.986293 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ngblg\" (UniqueName: \"kubernetes.io/projected/54c6dba3-18e8-4a4a-9437-a413c96cbcbb-kube-api-access-ngblg\") pod \"cinder-scheduler-0\" (UID: \"54c6dba3-18e8-4a4a-9437-a413c96cbcbb\") " pod="openstack/cinder-scheduler-0" Jan 27 19:00:49 crc kubenswrapper[4853]: I0127 19:00:49.986373 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a521c7e1-86b3-4e0e-88f0-ac4ab1ce0b57-ovsdbserver-nb\") pod \"dnsmasq-dns-5c9776ccc5-pmx5z\" (UID: \"a521c7e1-86b3-4e0e-88f0-ac4ab1ce0b57\") " pod="openstack/dnsmasq-dns-5c9776ccc5-pmx5z" Jan 27 19:00:49 crc kubenswrapper[4853]: I0127 19:00:49.986400 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/54c6dba3-18e8-4a4a-9437-a413c96cbcbb-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"54c6dba3-18e8-4a4a-9437-a413c96cbcbb\") " pod="openstack/cinder-scheduler-0" Jan 27 19:00:49 crc kubenswrapper[4853]: I0127 19:00:49.986461 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a521c7e1-86b3-4e0e-88f0-ac4ab1ce0b57-dns-swift-storage-0\") pod \"dnsmasq-dns-5c9776ccc5-pmx5z\" (UID: \"a521c7e1-86b3-4e0e-88f0-ac4ab1ce0b57\") " pod="openstack/dnsmasq-dns-5c9776ccc5-pmx5z" Jan 27 19:00:49 crc kubenswrapper[4853]: I0127 19:00:49.986505 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a521c7e1-86b3-4e0e-88f0-ac4ab1ce0b57-config\") pod \"dnsmasq-dns-5c9776ccc5-pmx5z\" (UID: \"a521c7e1-86b3-4e0e-88f0-ac4ab1ce0b57\") " pod="openstack/dnsmasq-dns-5c9776ccc5-pmx5z" Jan 27 19:00:49 crc kubenswrapper[4853]: I0127 19:00:49.986523 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/54c6dba3-18e8-4a4a-9437-a413c96cbcbb-config-data\") pod \"cinder-scheduler-0\" (UID: \"54c6dba3-18e8-4a4a-9437-a413c96cbcbb\") " pod="openstack/cinder-scheduler-0" Jan 27 19:00:49 crc kubenswrapper[4853]: I0127 19:00:49.986550 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a521c7e1-86b3-4e0e-88f0-ac4ab1ce0b57-dns-svc\") pod \"dnsmasq-dns-5c9776ccc5-pmx5z\" (UID: 
\"a521c7e1-86b3-4e0e-88f0-ac4ab1ce0b57\") " pod="openstack/dnsmasq-dns-5c9776ccc5-pmx5z" Jan 27 19:00:49 crc kubenswrapper[4853]: I0127 19:00:49.986765 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/54c6dba3-18e8-4a4a-9437-a413c96cbcbb-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"54c6dba3-18e8-4a4a-9437-a413c96cbcbb\") " pod="openstack/cinder-scheduler-0" Jan 27 19:00:49 crc kubenswrapper[4853]: I0127 19:00:49.986813 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/54c6dba3-18e8-4a4a-9437-a413c96cbcbb-scripts\") pod \"cinder-scheduler-0\" (UID: \"54c6dba3-18e8-4a4a-9437-a413c96cbcbb\") " pod="openstack/cinder-scheduler-0" Jan 27 19:00:49 crc kubenswrapper[4853]: I0127 19:00:49.986877 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xwwpt\" (UniqueName: \"kubernetes.io/projected/a521c7e1-86b3-4e0e-88f0-ac4ab1ce0b57-kube-api-access-xwwpt\") pod \"dnsmasq-dns-5c9776ccc5-pmx5z\" (UID: \"a521c7e1-86b3-4e0e-88f0-ac4ab1ce0b57\") " pod="openstack/dnsmasq-dns-5c9776ccc5-pmx5z" Jan 27 19:00:49 crc kubenswrapper[4853]: I0127 19:00:49.986917 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/54c6dba3-18e8-4a4a-9437-a413c96cbcbb-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"54c6dba3-18e8-4a4a-9437-a413c96cbcbb\") " pod="openstack/cinder-scheduler-0" Jan 27 19:00:49 crc kubenswrapper[4853]: I0127 19:00:49.986954 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a521c7e1-86b3-4e0e-88f0-ac4ab1ce0b57-ovsdbserver-sb\") pod \"dnsmasq-dns-5c9776ccc5-pmx5z\" (UID: \"a521c7e1-86b3-4e0e-88f0-ac4ab1ce0b57\") " pod="openstack/dnsmasq-dns-5c9776ccc5-pmx5z" Jan 27 19:00:49 crc kubenswrapper[4853]: I0127 19:00:49.987014 4853 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cd236b6c-6a86-4c6a-8e4a-f2a459943780-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 19:00:49 crc kubenswrapper[4853]: I0127 19:00:49.987025 4853 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e556ea12-6992-4aba-be03-e6d4a2823b74-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 19:00:49 crc kubenswrapper[4853]: I0127 19:00:49.987035 4853 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b62f23c7-d81a-4925-a2c3-10c410912a0f-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 19:00:49 crc kubenswrapper[4853]: I0127 19:00:49.987959 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e556ea12-6992-4aba-be03-e6d4a2823b74-scripts" (OuterVolumeSpecName: "scripts") pod "e556ea12-6992-4aba-be03-e6d4a2823b74" (UID: "e556ea12-6992-4aba-be03-e6d4a2823b74"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 19:00:50 crc kubenswrapper[4853]: I0127 19:00:50.019369 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cd236b6c-6a86-4c6a-8e4a-f2a459943780-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "cd236b6c-6a86-4c6a-8e4a-f2a459943780" (UID: "cd236b6c-6a86-4c6a-8e4a-f2a459943780"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 19:00:50 crc kubenswrapper[4853]: I0127 19:00:50.019866 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9c93d763-a677-4df8-9846-5fa96f76e0ab-scripts" (OuterVolumeSpecName: "scripts") pod "9c93d763-a677-4df8-9846-5fa96f76e0ab" (UID: "9c93d763-a677-4df8-9846-5fa96f76e0ab"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 19:00:50 crc kubenswrapper[4853]: I0127 19:00:50.036216 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cd236b6c-6a86-4c6a-8e4a-f2a459943780-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "cd236b6c-6a86-4c6a-8e4a-f2a459943780" (UID: "cd236b6c-6a86-4c6a-8e4a-f2a459943780"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 19:00:50 crc kubenswrapper[4853]: I0127 19:00:50.074977 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cd236b6c-6a86-4c6a-8e4a-f2a459943780-config" (OuterVolumeSpecName: "config") pod "cd236b6c-6a86-4c6a-8e4a-f2a459943780" (UID: "cd236b6c-6a86-4c6a-8e4a-f2a459943780"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 19:00:50 crc kubenswrapper[4853]: I0127 19:00:50.089132 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/54c6dba3-18e8-4a4a-9437-a413c96cbcbb-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"54c6dba3-18e8-4a4a-9437-a413c96cbcbb\") " pod="openstack/cinder-scheduler-0" Jan 27 19:00:50 crc kubenswrapper[4853]: I0127 19:00:50.089187 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a521c7e1-86b3-4e0e-88f0-ac4ab1ce0b57-ovsdbserver-sb\") pod \"dnsmasq-dns-5c9776ccc5-pmx5z\" (UID: \"a521c7e1-86b3-4e0e-88f0-ac4ab1ce0b57\") " pod="openstack/dnsmasq-dns-5c9776ccc5-pmx5z" Jan 27 19:00:50 crc kubenswrapper[4853]: I0127 19:00:50.089226 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ngblg\" (UniqueName: \"kubernetes.io/projected/54c6dba3-18e8-4a4a-9437-a413c96cbcbb-kube-api-access-ngblg\") pod \"cinder-scheduler-0\" (UID: \"54c6dba3-18e8-4a4a-9437-a413c96cbcbb\") " pod="openstack/cinder-scheduler-0" Jan 27 19:00:50 crc kubenswrapper[4853]: I0127 19:00:50.089253 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a521c7e1-86b3-4e0e-88f0-ac4ab1ce0b57-ovsdbserver-nb\") pod \"dnsmasq-dns-5c9776ccc5-pmx5z\" (UID: \"a521c7e1-86b3-4e0e-88f0-ac4ab1ce0b57\") " pod="openstack/dnsmasq-dns-5c9776ccc5-pmx5z" Jan 27 19:00:50 crc kubenswrapper[4853]: I0127 19:00:50.089277 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/54c6dba3-18e8-4a4a-9437-a413c96cbcbb-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"54c6dba3-18e8-4a4a-9437-a413c96cbcbb\") " pod="openstack/cinder-scheduler-0" Jan 27 19:00:50 crc kubenswrapper[4853]: I0127 19:00:50.089322 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/54c6dba3-18e8-4a4a-9437-a413c96cbcbb-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"54c6dba3-18e8-4a4a-9437-a413c96cbcbb\") " pod="openstack/cinder-scheduler-0" Jan 27 19:00:50 crc kubenswrapper[4853]: I0127 19:00:50.089351 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a521c7e1-86b3-4e0e-88f0-ac4ab1ce0b57-dns-swift-storage-0\") pod \"dnsmasq-dns-5c9776ccc5-pmx5z\" (UID: \"a521c7e1-86b3-4e0e-88f0-ac4ab1ce0b57\") " pod="openstack/dnsmasq-dns-5c9776ccc5-pmx5z" Jan 27 19:00:50 crc kubenswrapper[4853]: I0127 19:00:50.089493 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a521c7e1-86b3-4e0e-88f0-ac4ab1ce0b57-config\") pod \"dnsmasq-dns-5c9776ccc5-pmx5z\" (UID: \"a521c7e1-86b3-4e0e-88f0-ac4ab1ce0b57\") " pod="openstack/dnsmasq-dns-5c9776ccc5-pmx5z" Jan 27 19:00:50 crc kubenswrapper[4853]: I0127 19:00:50.089518 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/54c6dba3-18e8-4a4a-9437-a413c96cbcbb-config-data\") pod \"cinder-scheduler-0\" (UID: \"54c6dba3-18e8-4a4a-9437-a413c96cbcbb\") " pod="openstack/cinder-scheduler-0" Jan 27 19:00:50 crc kubenswrapper[4853]: I0127 19:00:50.089544 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a521c7e1-86b3-4e0e-88f0-ac4ab1ce0b57-dns-svc\") pod \"dnsmasq-dns-5c9776ccc5-pmx5z\" (UID: \"a521c7e1-86b3-4e0e-88f0-ac4ab1ce0b57\") " pod="openstack/dnsmasq-dns-5c9776ccc5-pmx5z" Jan 27 19:00:50 crc kubenswrapper[4853]: I0127 19:00:50.089859 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/54c6dba3-18e8-4a4a-9437-a413c96cbcbb-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"54c6dba3-18e8-4a4a-9437-a413c96cbcbb\") " pod="openstack/cinder-scheduler-0" Jan 27 19:00:50 crc kubenswrapper[4853]: I0127 19:00:50.089901 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/54c6dba3-18e8-4a4a-9437-a413c96cbcbb-scripts\") pod \"cinder-scheduler-0\" (UID: \"54c6dba3-18e8-4a4a-9437-a413c96cbcbb\") " pod="openstack/cinder-scheduler-0" Jan 27 19:00:50 crc kubenswrapper[4853]: I0127 19:00:50.090017 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xwwpt\" (UniqueName: \"kubernetes.io/projected/a521c7e1-86b3-4e0e-88f0-ac4ab1ce0b57-kube-api-access-xwwpt\") pod \"dnsmasq-dns-5c9776ccc5-pmx5z\" (UID: \"a521c7e1-86b3-4e0e-88f0-ac4ab1ce0b57\") " pod="openstack/dnsmasq-dns-5c9776ccc5-pmx5z" Jan 27 19:00:50 crc kubenswrapper[4853]: I0127 19:00:50.090189 4853 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/cd236b6c-6a86-4c6a-8e4a-f2a459943780-config\") on node \"crc\" DevicePath \"\"" Jan 27 19:00:50 crc kubenswrapper[4853]: I0127 19:00:50.090202 4853 reconciler_common.go:293] "Volume detached for 
volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9c93d763-a677-4df8-9846-5fa96f76e0ab-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 19:00:50 crc kubenswrapper[4853]: I0127 19:00:50.090213 4853 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/cd236b6c-6a86-4c6a-8e4a-f2a459943780-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 27 19:00:50 crc kubenswrapper[4853]: I0127 19:00:50.090226 4853 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/cd236b6c-6a86-4c6a-8e4a-f2a459943780-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 27 19:00:50 crc kubenswrapper[4853]: I0127 19:00:50.090239 4853 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e556ea12-6992-4aba-be03-e6d4a2823b74-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 19:00:50 crc kubenswrapper[4853]: I0127 19:00:50.092332 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a521c7e1-86b3-4e0e-88f0-ac4ab1ce0b57-dns-swift-storage-0\") pod \"dnsmasq-dns-5c9776ccc5-pmx5z\" (UID: \"a521c7e1-86b3-4e0e-88f0-ac4ab1ce0b57\") " pod="openstack/dnsmasq-dns-5c9776ccc5-pmx5z" Jan 27 19:00:50 crc kubenswrapper[4853]: I0127 19:00:50.095199 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a521c7e1-86b3-4e0e-88f0-ac4ab1ce0b57-config\") pod \"dnsmasq-dns-5c9776ccc5-pmx5z\" (UID: \"a521c7e1-86b3-4e0e-88f0-ac4ab1ce0b57\") " pod="openstack/dnsmasq-dns-5c9776ccc5-pmx5z" Jan 27 19:00:50 crc kubenswrapper[4853]: I0127 19:00:50.095881 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a521c7e1-86b3-4e0e-88f0-ac4ab1ce0b57-dns-svc\") pod \"dnsmasq-dns-5c9776ccc5-pmx5z\" (UID: \"a521c7e1-86b3-4e0e-88f0-ac4ab1ce0b57\") " pod="openstack/dnsmasq-dns-5c9776ccc5-pmx5z" Jan 27 19:00:50 crc kubenswrapper[4853]: I0127 19:00:50.095988 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a521c7e1-86b3-4e0e-88f0-ac4ab1ce0b57-ovsdbserver-sb\") pod \"dnsmasq-dns-5c9776ccc5-pmx5z\" (UID: \"a521c7e1-86b3-4e0e-88f0-ac4ab1ce0b57\") " pod="openstack/dnsmasq-dns-5c9776ccc5-pmx5z" Jan 27 19:00:50 crc kubenswrapper[4853]: I0127 19:00:50.096667 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cd236b6c-6a86-4c6a-8e4a-f2a459943780-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "cd236b6c-6a86-4c6a-8e4a-f2a459943780" (UID: "cd236b6c-6a86-4c6a-8e4a-f2a459943780"). InnerVolumeSpecName "ovndb-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 19:00:50 crc kubenswrapper[4853]: I0127 19:00:50.097613 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a521c7e1-86b3-4e0e-88f0-ac4ab1ce0b57-ovsdbserver-nb\") pod \"dnsmasq-dns-5c9776ccc5-pmx5z\" (UID: \"a521c7e1-86b3-4e0e-88f0-ac4ab1ce0b57\") " pod="openstack/dnsmasq-dns-5c9776ccc5-pmx5z" Jan 27 19:00:50 crc kubenswrapper[4853]: I0127 19:00:50.102427 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/54c6dba3-18e8-4a4a-9437-a413c96cbcbb-scripts\") pod \"cinder-scheduler-0\" (UID: \"54c6dba3-18e8-4a4a-9437-a413c96cbcbb\") " pod="openstack/cinder-scheduler-0" Jan 27 19:00:50 crc kubenswrapper[4853]: I0127 19:00:50.103487 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/54c6dba3-18e8-4a4a-9437-a413c96cbcbb-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"54c6dba3-18e8-4a4a-9437-a413c96cbcbb\") " pod="openstack/cinder-scheduler-0" Jan 27 19:00:50 crc kubenswrapper[4853]: I0127 19:00:50.105051 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/54c6dba3-18e8-4a4a-9437-a413c96cbcbb-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"54c6dba3-18e8-4a4a-9437-a413c96cbcbb\") " pod="openstack/cinder-scheduler-0" Jan 27 19:00:50 crc kubenswrapper[4853]: I0127 19:00:50.106227 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Jan 27 19:00:50 crc kubenswrapper[4853]: I0127 19:00:50.108249 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 27 19:00:50 crc kubenswrapper[4853]: I0127 19:00:50.108257 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/54c6dba3-18e8-4a4a-9437-a413c96cbcbb-config-data\") pod \"cinder-scheduler-0\" (UID: \"54c6dba3-18e8-4a4a-9437-a413c96cbcbb\") " pod="openstack/cinder-scheduler-0" Jan 27 19:00:50 crc kubenswrapper[4853]: I0127 19:00:50.108591 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ngblg\" (UniqueName: \"kubernetes.io/projected/54c6dba3-18e8-4a4a-9437-a413c96cbcbb-kube-api-access-ngblg\") pod \"cinder-scheduler-0\" (UID: \"54c6dba3-18e8-4a4a-9437-a413c96cbcbb\") " pod="openstack/cinder-scheduler-0" Jan 27 19:00:50 crc kubenswrapper[4853]: I0127 19:00:50.110390 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Jan 27 19:00:50 crc kubenswrapper[4853]: I0127 19:00:50.111370 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xwwpt\" (UniqueName: \"kubernetes.io/projected/a521c7e1-86b3-4e0e-88f0-ac4ab1ce0b57-kube-api-access-xwwpt\") pod \"dnsmasq-dns-5c9776ccc5-pmx5z\" (UID: \"a521c7e1-86b3-4e0e-88f0-ac4ab1ce0b57\") " pod="openstack/dnsmasq-dns-5c9776ccc5-pmx5z" Jan 27 19:00:50 crc kubenswrapper[4853]: I0127 19:00:50.172780 4853 scope.go:117] "RemoveContainer" containerID="d73065e76bd2c3efa6b0c46086cd6fad7f71858b527127f5594a43afa42ab84a" Jan 27 19:00:50 crc kubenswrapper[4853]: I0127 19:00:50.186967 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 27 19:00:50 crc kubenswrapper[4853]: I0127 19:00:50.191952 4853 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/dcdd7aa2-c7bb-468a-9690-e2e505e394d3-etc-machine-id\") pod \"cinder-api-0\" (UID: \"dcdd7aa2-c7bb-468a-9690-e2e505e394d3\") " pod="openstack/cinder-api-0" Jan 27 19:00:50 crc kubenswrapper[4853]: I0127 19:00:50.191999 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dcdd7aa2-c7bb-468a-9690-e2e505e394d3-scripts\") pod \"cinder-api-0\" (UID: \"dcdd7aa2-c7bb-468a-9690-e2e505e394d3\") " pod="openstack/cinder-api-0" Jan 27 19:00:50 crc kubenswrapper[4853]: I0127 19:00:50.192037 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/dcdd7aa2-c7bb-468a-9690-e2e505e394d3-config-data-custom\") pod \"cinder-api-0\" (UID: \"dcdd7aa2-c7bb-468a-9690-e2e505e394d3\") " pod="openstack/cinder-api-0" Jan 27 19:00:50 crc kubenswrapper[4853]: I0127 19:00:50.192073 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dcdd7aa2-c7bb-468a-9690-e2e505e394d3-logs\") pod \"cinder-api-0\" (UID: \"dcdd7aa2-c7bb-468a-9690-e2e505e394d3\") " pod="openstack/cinder-api-0" Jan 27 19:00:50 crc kubenswrapper[4853]: I0127 19:00:50.192147 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rkx99\" (UniqueName: \"kubernetes.io/projected/dcdd7aa2-c7bb-468a-9690-e2e505e394d3-kube-api-access-rkx99\") pod \"cinder-api-0\" (UID: \"dcdd7aa2-c7bb-468a-9690-e2e505e394d3\") " pod="openstack/cinder-api-0" Jan 27 19:00:50 crc kubenswrapper[4853]: I0127 19:00:50.192216 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dcdd7aa2-c7bb-468a-9690-e2e505e394d3-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"dcdd7aa2-c7bb-468a-9690-e2e505e394d3\") " pod="openstack/cinder-api-0" Jan 27 19:00:50 crc kubenswrapper[4853]: I0127 19:00:50.192245 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dcdd7aa2-c7bb-468a-9690-e2e505e394d3-config-data\") pod \"cinder-api-0\" (UID: \"dcdd7aa2-c7bb-468a-9690-e2e505e394d3\") " pod="openstack/cinder-api-0" Jan 27 19:00:50 crc kubenswrapper[4853]: I0127 19:00:50.192294 4853 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/cd236b6c-6a86-4c6a-8e4a-f2a459943780-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 27 19:00:50 crc kubenswrapper[4853]: I0127 19:00:50.294000 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dcdd7aa2-c7bb-468a-9690-e2e505e394d3-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"dcdd7aa2-c7bb-468a-9690-e2e505e394d3\") " pod="openstack/cinder-api-0" Jan 27 19:00:50 crc kubenswrapper[4853]: I0127 19:00:50.294386 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dcdd7aa2-c7bb-468a-9690-e2e505e394d3-config-data\") pod \"cinder-api-0\" (UID: \"dcdd7aa2-c7bb-468a-9690-e2e505e394d3\") " pod="openstack/cinder-api-0" Jan 27 19:00:50 crc kubenswrapper[4853]: I0127 
19:00:50.294430 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/dcdd7aa2-c7bb-468a-9690-e2e505e394d3-etc-machine-id\") pod \"cinder-api-0\" (UID: \"dcdd7aa2-c7bb-468a-9690-e2e505e394d3\") " pod="openstack/cinder-api-0" Jan 27 19:00:50 crc kubenswrapper[4853]: I0127 19:00:50.294457 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dcdd7aa2-c7bb-468a-9690-e2e505e394d3-scripts\") pod \"cinder-api-0\" (UID: \"dcdd7aa2-c7bb-468a-9690-e2e505e394d3\") " pod="openstack/cinder-api-0" Jan 27 19:00:50 crc kubenswrapper[4853]: I0127 19:00:50.294498 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/dcdd7aa2-c7bb-468a-9690-e2e505e394d3-config-data-custom\") pod \"cinder-api-0\" (UID: \"dcdd7aa2-c7bb-468a-9690-e2e505e394d3\") " pod="openstack/cinder-api-0" Jan 27 19:00:50 crc kubenswrapper[4853]: I0127 19:00:50.294540 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dcdd7aa2-c7bb-468a-9690-e2e505e394d3-logs\") pod \"cinder-api-0\" (UID: \"dcdd7aa2-c7bb-468a-9690-e2e505e394d3\") " pod="openstack/cinder-api-0" Jan 27 19:00:50 crc kubenswrapper[4853]: I0127 19:00:50.294600 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rkx99\" (UniqueName: \"kubernetes.io/projected/dcdd7aa2-c7bb-468a-9690-e2e505e394d3-kube-api-access-rkx99\") pod \"cinder-api-0\" (UID: \"dcdd7aa2-c7bb-468a-9690-e2e505e394d3\") " pod="openstack/cinder-api-0" Jan 27 19:00:50 crc kubenswrapper[4853]: I0127 19:00:50.295020 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/dcdd7aa2-c7bb-468a-9690-e2e505e394d3-etc-machine-id\") pod \"cinder-api-0\" (UID: \"dcdd7aa2-c7bb-468a-9690-e2e505e394d3\") " pod="openstack/cinder-api-0" Jan 27 19:00:50 crc kubenswrapper[4853]: I0127 19:00:50.295484 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dcdd7aa2-c7bb-468a-9690-e2e505e394d3-logs\") pod \"cinder-api-0\" (UID: \"dcdd7aa2-c7bb-468a-9690-e2e505e394d3\") " pod="openstack/cinder-api-0" Jan 27 19:00:50 crc kubenswrapper[4853]: I0127 19:00:50.300875 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/dcdd7aa2-c7bb-468a-9690-e2e505e394d3-config-data-custom\") pod \"cinder-api-0\" (UID: \"dcdd7aa2-c7bb-468a-9690-e2e505e394d3\") " pod="openstack/cinder-api-0" Jan 27 19:00:50 crc kubenswrapper[4853]: I0127 19:00:50.300969 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dcdd7aa2-c7bb-468a-9690-e2e505e394d3-config-data\") pod \"cinder-api-0\" (UID: \"dcdd7aa2-c7bb-468a-9690-e2e505e394d3\") " pod="openstack/cinder-api-0" Jan 27 19:00:50 crc kubenswrapper[4853]: I0127 19:00:50.301655 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dcdd7aa2-c7bb-468a-9690-e2e505e394d3-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"dcdd7aa2-c7bb-468a-9690-e2e505e394d3\") " pod="openstack/cinder-api-0" Jan 27 19:00:50 crc kubenswrapper[4853]: I0127 19:00:50.307874 4853 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dcdd7aa2-c7bb-468a-9690-e2e505e394d3-scripts\") pod \"cinder-api-0\" (UID: \"dcdd7aa2-c7bb-468a-9690-e2e505e394d3\") " pod="openstack/cinder-api-0" Jan 27 19:00:50 crc kubenswrapper[4853]: I0127 19:00:50.317475 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rkx99\" (UniqueName: \"kubernetes.io/projected/dcdd7aa2-c7bb-468a-9690-e2e505e394d3-kube-api-access-rkx99\") pod \"cinder-api-0\" (UID: \"dcdd7aa2-c7bb-468a-9690-e2e505e394d3\") " pod="openstack/cinder-api-0" Jan 27 19:00:50 crc kubenswrapper[4853]: I0127 19:00:50.376648 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 27 19:00:50 crc kubenswrapper[4853]: I0127 19:00:50.392666 4853 scope.go:117] "RemoveContainer" containerID="b546be7bfa04ae140e1c6965f057b5ccf9d1ec0aa43d664b20e729d36dbc2e38" Jan 27 19:00:50 crc kubenswrapper[4853]: I0127 19:00:50.521158 4853 scope.go:117] "RemoveContainer" containerID="258ff3b72c97e030429a1be43134b5a222797dae0fe86149e62c2f6274d32b65" Jan 27 19:00:50 crc kubenswrapper[4853]: I0127 19:00:50.567242 4853 scope.go:117] "RemoveContainer" containerID="5339ebcd43ec79faa6c17b919a8bb5ed81f10743f2d6a8ba151e39c4224c95d5" Jan 27 19:00:50 crc kubenswrapper[4853]: I0127 19:00:50.629035 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c9776ccc5-pmx5z" Jan 27 19:00:50 crc kubenswrapper[4853]: I0127 19:00:50.635678 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 27 19:00:50 crc kubenswrapper[4853]: I0127 19:00:50.691888 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-5dbbdbb6d9-g899w"] Jan 27 19:00:50 crc kubenswrapper[4853]: I0127 19:00:50.758615 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-5dbbdbb6d9-g899w"] Jan 27 19:00:50 crc kubenswrapper[4853]: I0127 19:00:50.805252 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-549f7c8989-k8qw5"] Jan 27 19:00:50 crc kubenswrapper[4853]: I0127 19:00:50.819204 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-549f7c8989-k8qw5"] Jan 27 19:00:50 crc kubenswrapper[4853]: I0127 19:00:50.827527 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-worker-c5749dd6f-h76dt"] Jan 27 19:00:50 crc kubenswrapper[4853]: I0127 19:00:50.835999 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-worker-c5749dd6f-h76dt"] Jan 27 19:00:50 crc kubenswrapper[4853]: I0127 19:00:50.850455 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-82msw"] Jan 27 19:00:50 crc kubenswrapper[4853]: I0127 19:00:50.863289 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-82msw"] Jan 27 19:00:50 crc kubenswrapper[4853]: I0127 19:00:50.877854 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-6c8b67f5cc-gmbgv"] Jan 27 19:00:50 crc kubenswrapper[4853]: I0127 19:00:50.882967 4853 scope.go:117] "RemoveContainer" containerID="92dff0c5224d3dee5827961bc46f8c7f5d4896f42ebb8d0818620327c2d3f37d" Jan 27 19:00:50 crc kubenswrapper[4853]: I0127 19:00:50.900480 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-6c8b67f5cc-gmbgv"] Jan 27 19:00:50 crc kubenswrapper[4853]: I0127 19:00:50.911553 4853 kubelet.go:2437] 
"SyncLoop DELETE" source="api" pods=["openstack/neutron-54879c9777-tvw4r"] Jan 27 19:00:50 crc kubenswrapper[4853]: I0127 19:00:50.924333 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-54879c9777-tvw4r"] Jan 27 19:00:50 crc kubenswrapper[4853]: I0127 19:00:50.940025 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-keystone-listener-5b468778c8-dwvbl"] Jan 27 19:00:50 crc kubenswrapper[4853]: I0127 19:00:50.956577 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-keystone-listener-5b468778c8-dwvbl"] Jan 27 19:00:50 crc kubenswrapper[4853]: I0127 19:00:50.967796 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"54c6dba3-18e8-4a4a-9437-a413c96cbcbb","Type":"ContainerStarted","Data":"a8bfd4b546eb4e4c9c431ec68ca4f73b2578bd7f1a96a9b098ed2e15a61439a8"} Jan 27 19:00:50 crc kubenswrapper[4853]: I0127 19:00:50.970585 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 27 19:00:50 crc kubenswrapper[4853]: I0127 19:00:50.982628 4853 generic.go:334] "Generic (PLEG): container finished" podID="afc78a65-bfa6-42ff-a84a-f90dd740ffbf" containerID="c6933b3f54bf3b2cb983bbe27bdf372710fd86945bce99facf0bb5bc8a73734b" exitCode=0 Jan 27 19:00:50 crc kubenswrapper[4853]: I0127 19:00:50.982670 4853 generic.go:334] "Generic (PLEG): container finished" podID="afc78a65-bfa6-42ff-a84a-f90dd740ffbf" containerID="07c01fa323e543321032a669c321880305b06ed4ab384d27ea8a800bdbcdc348" exitCode=2 Jan 27 19:00:50 crc kubenswrapper[4853]: I0127 19:00:50.982680 4853 generic.go:334] "Generic (PLEG): container finished" podID="afc78a65-bfa6-42ff-a84a-f90dd740ffbf" containerID="0c718b1ee29e92b02c677d336336d3b0e5835da55281777ff39e4f1a10cd46ef" exitCode=0 Jan 27 19:00:50 crc kubenswrapper[4853]: I0127 19:00:50.982735 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"afc78a65-bfa6-42ff-a84a-f90dd740ffbf","Type":"ContainerDied","Data":"c6933b3f54bf3b2cb983bbe27bdf372710fd86945bce99facf0bb5bc8a73734b"} Jan 27 19:00:50 crc kubenswrapper[4853]: I0127 19:00:50.982771 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"afc78a65-bfa6-42ff-a84a-f90dd740ffbf","Type":"ContainerDied","Data":"07c01fa323e543321032a669c321880305b06ed4ab384d27ea8a800bdbcdc348"} Jan 27 19:00:50 crc kubenswrapper[4853]: I0127 19:00:50.982781 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"afc78a65-bfa6-42ff-a84a-f90dd740ffbf","Type":"ContainerDied","Data":"0c718b1ee29e92b02c677d336336d3b0e5835da55281777ff39e4f1a10cd46ef"} Jan 27 19:00:50 crc kubenswrapper[4853]: I0127 19:00:50.985171 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-64c8bd57d9-g88k8" event={"ID":"911dc005-42f8-4086-9ee9-04490f7120f4","Type":"ContainerStarted","Data":"a8f67435e76558c7bb5d48dff123094e7bb799489311a0f0f873f86a2babe6f1"} Jan 27 19:00:50 crc kubenswrapper[4853]: I0127 19:00:50.989518 4853 scope.go:117] "RemoveContainer" containerID="d581c2cf187447e442242d897c2c7b42867b6ccebaac702becb297eb08c60dd9" Jan 27 19:00:51 crc kubenswrapper[4853]: I0127 19:00:51.017518 4853 scope.go:117] "RemoveContainer" containerID="1ec240d75c96676d139c74b3fd2c2e73e336a4a534e5a639bb6b4a2f59c9ba2e" Jan 27 19:00:51 crc kubenswrapper[4853]: I0127 19:00:51.051450 4853 scope.go:117] "RemoveContainer" 
containerID="b536d9286a81735ddefe0c7015afc49f34eae1789f194627a8a052ba184b8bd1" Jan 27 19:00:51 crc kubenswrapper[4853]: I0127 19:00:51.092022 4853 scope.go:117] "RemoveContainer" containerID="2430fcf03b8a43a49f359548627efbb5f0eb110307f1bf47ac68a1185e82e3bc" Jan 27 19:00:51 crc kubenswrapper[4853]: I0127 19:00:51.158383 4853 scope.go:117] "RemoveContainer" containerID="aa91b36a604ab59007d1904b22f3ee35da626ca5979dec847a44d9bd89b48c9a" Jan 27 19:00:51 crc kubenswrapper[4853]: I0127 19:00:51.261029 4853 scope.go:117] "RemoveContainer" containerID="8b2023bd66ff9349a1b712e8efedede50d982e90a208c380d91ce33701b15ba0" Jan 27 19:00:51 crc kubenswrapper[4853]: I0127 19:00:51.347497 4853 scope.go:117] "RemoveContainer" containerID="7b9e25dd8951aba8d4aae2d3a6fcc8cb21e0080a8226a4815f40343ffe16e9da" Jan 27 19:00:51 crc kubenswrapper[4853]: I0127 19:00:51.465635 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 27 19:00:51 crc kubenswrapper[4853]: I0127 19:00:51.526510 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/afc78a65-bfa6-42ff-a84a-f90dd740ffbf-sg-core-conf-yaml\") pod \"afc78a65-bfa6-42ff-a84a-f90dd740ffbf\" (UID: \"afc78a65-bfa6-42ff-a84a-f90dd740ffbf\") " Jan 27 19:00:51 crc kubenswrapper[4853]: I0127 19:00:51.526657 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/afc78a65-bfa6-42ff-a84a-f90dd740ffbf-scripts\") pod \"afc78a65-bfa6-42ff-a84a-f90dd740ffbf\" (UID: \"afc78a65-bfa6-42ff-a84a-f90dd740ffbf\") " Jan 27 19:00:51 crc kubenswrapper[4853]: I0127 19:00:51.526745 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/afc78a65-bfa6-42ff-a84a-f90dd740ffbf-run-httpd\") pod \"afc78a65-bfa6-42ff-a84a-f90dd740ffbf\" (UID: \"afc78a65-bfa6-42ff-a84a-f90dd740ffbf\") " Jan 27 19:00:51 crc kubenswrapper[4853]: I0127 19:00:51.526779 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-slfzv\" (UniqueName: \"kubernetes.io/projected/afc78a65-bfa6-42ff-a84a-f90dd740ffbf-kube-api-access-slfzv\") pod \"afc78a65-bfa6-42ff-a84a-f90dd740ffbf\" (UID: \"afc78a65-bfa6-42ff-a84a-f90dd740ffbf\") " Jan 27 19:00:51 crc kubenswrapper[4853]: I0127 19:00:51.526841 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/afc78a65-bfa6-42ff-a84a-f90dd740ffbf-config-data\") pod \"afc78a65-bfa6-42ff-a84a-f90dd740ffbf\" (UID: \"afc78a65-bfa6-42ff-a84a-f90dd740ffbf\") " Jan 27 19:00:51 crc kubenswrapper[4853]: I0127 19:00:51.526889 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/afc78a65-bfa6-42ff-a84a-f90dd740ffbf-combined-ca-bundle\") pod \"afc78a65-bfa6-42ff-a84a-f90dd740ffbf\" (UID: \"afc78a65-bfa6-42ff-a84a-f90dd740ffbf\") " Jan 27 19:00:51 crc kubenswrapper[4853]: I0127 19:00:51.526942 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/afc78a65-bfa6-42ff-a84a-f90dd740ffbf-log-httpd\") pod \"afc78a65-bfa6-42ff-a84a-f90dd740ffbf\" (UID: \"afc78a65-bfa6-42ff-a84a-f90dd740ffbf\") " Jan 27 19:00:51 crc kubenswrapper[4853]: I0127 19:00:51.529038 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded 
for volume "kubernetes.io/empty-dir/afc78a65-bfa6-42ff-a84a-f90dd740ffbf-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "afc78a65-bfa6-42ff-a84a-f90dd740ffbf" (UID: "afc78a65-bfa6-42ff-a84a-f90dd740ffbf"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 19:00:51 crc kubenswrapper[4853]: I0127 19:00:51.529327 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/afc78a65-bfa6-42ff-a84a-f90dd740ffbf-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "afc78a65-bfa6-42ff-a84a-f90dd740ffbf" (UID: "afc78a65-bfa6-42ff-a84a-f90dd740ffbf"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 19:00:51 crc kubenswrapper[4853]: I0127 19:00:51.537217 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/afc78a65-bfa6-42ff-a84a-f90dd740ffbf-kube-api-access-slfzv" (OuterVolumeSpecName: "kube-api-access-slfzv") pod "afc78a65-bfa6-42ff-a84a-f90dd740ffbf" (UID: "afc78a65-bfa6-42ff-a84a-f90dd740ffbf"). InnerVolumeSpecName "kube-api-access-slfzv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 19:00:51 crc kubenswrapper[4853]: I0127 19:00:51.546331 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/afc78a65-bfa6-42ff-a84a-f90dd740ffbf-scripts" (OuterVolumeSpecName: "scripts") pod "afc78a65-bfa6-42ff-a84a-f90dd740ffbf" (UID: "afc78a65-bfa6-42ff-a84a-f90dd740ffbf"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 19:00:51 crc kubenswrapper[4853]: I0127 19:00:51.588899 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/afc78a65-bfa6-42ff-a84a-f90dd740ffbf-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "afc78a65-bfa6-42ff-a84a-f90dd740ffbf" (UID: "afc78a65-bfa6-42ff-a84a-f90dd740ffbf"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 19:00:51 crc kubenswrapper[4853]: I0127 19:00:51.598591 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/afc78a65-bfa6-42ff-a84a-f90dd740ffbf-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "afc78a65-bfa6-42ff-a84a-f90dd740ffbf" (UID: "afc78a65-bfa6-42ff-a84a-f90dd740ffbf"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 19:00:51 crc kubenswrapper[4853]: I0127 19:00:51.631400 4853 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/afc78a65-bfa6-42ff-a84a-f90dd740ffbf-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 27 19:00:51 crc kubenswrapper[4853]: I0127 19:00:51.631452 4853 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/afc78a65-bfa6-42ff-a84a-f90dd740ffbf-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 27 19:00:51 crc kubenswrapper[4853]: I0127 19:00:51.631467 4853 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/afc78a65-bfa6-42ff-a84a-f90dd740ffbf-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 19:00:51 crc kubenswrapper[4853]: I0127 19:00:51.631481 4853 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/afc78a65-bfa6-42ff-a84a-f90dd740ffbf-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 27 19:00:51 crc kubenswrapper[4853]: I0127 19:00:51.631493 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-slfzv\" (UniqueName: \"kubernetes.io/projected/afc78a65-bfa6-42ff-a84a-f90dd740ffbf-kube-api-access-slfzv\") on node \"crc\" DevicePath \"\"" Jan 27 19:00:51 crc kubenswrapper[4853]: I0127 19:00:51.631505 4853 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/afc78a65-bfa6-42ff-a84a-f90dd740ffbf-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 19:00:51 crc kubenswrapper[4853]: I0127 19:00:51.655272 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/afc78a65-bfa6-42ff-a84a-f90dd740ffbf-config-data" (OuterVolumeSpecName: "config-data") pod "afc78a65-bfa6-42ff-a84a-f90dd740ffbf" (UID: "afc78a65-bfa6-42ff-a84a-f90dd740ffbf"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 19:00:51 crc kubenswrapper[4853]: I0127 19:00:51.659621 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 27 19:00:51 crc kubenswrapper[4853]: I0127 19:00:51.666445 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-pmx5z"] Jan 27 19:00:51 crc kubenswrapper[4853]: I0127 19:00:51.690766 4853 scope.go:117] "RemoveContainer" containerID="70a07f68d1290bc4fdd19f8574adc1330f703630dc442e020d19ba65038dbd43" Jan 27 19:00:51 crc kubenswrapper[4853]: W0127 19:00:51.712478 4853 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda521c7e1_86b3_4e0e_88f0_ac4ab1ce0b57.slice/crio-94123f69954076c48b8f9ee38a57a76be76ef68864cd4876747a58bba21ae374 WatchSource:0}: Error finding container 94123f69954076c48b8f9ee38a57a76be76ef68864cd4876747a58bba21ae374: Status 404 returned error can't find the container with id 94123f69954076c48b8f9ee38a57a76be76ef68864cd4876747a58bba21ae374 Jan 27 19:00:51 crc kubenswrapper[4853]: I0127 19:00:51.732780 4853 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/afc78a65-bfa6-42ff-a84a-f90dd740ffbf-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 19:00:51 crc kubenswrapper[4853]: I0127 19:00:51.827085 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Jan 27 19:00:52 crc kubenswrapper[4853]: I0127 19:00:52.045872 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-pmx5z" event={"ID":"a521c7e1-86b3-4e0e-88f0-ac4ab1ce0b57","Type":"ContainerStarted","Data":"94123f69954076c48b8f9ee38a57a76be76ef68864cd4876747a58bba21ae374"} Jan 27 19:00:52 crc kubenswrapper[4853]: I0127 19:00:52.051008 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-64c8bd57d9-g88k8" event={"ID":"911dc005-42f8-4086-9ee9-04490f7120f4","Type":"ContainerStarted","Data":"3f2e00f2da4e874eceb1ae3ab1e39e3d6ee4df547c1a7e609664452820152d7c"} Jan 27 19:00:52 crc kubenswrapper[4853]: I0127 19:00:52.051260 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-64c8bd57d9-g88k8" Jan 27 19:00:52 crc kubenswrapper[4853]: I0127 19:00:52.055593 4853 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-546564f86b-jnwdt" podUID="8f37e546-2a7e-49b7-9a9c-0191a746c289" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.164:9311/healthcheck\": read tcp 10.217.0.2:60798->10.217.0.164:9311: read: connection reset by peer" Jan 27 19:00:52 crc kubenswrapper[4853]: I0127 19:00:52.055705 4853 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-546564f86b-jnwdt" podUID="8f37e546-2a7e-49b7-9a9c-0191a746c289" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.164:9311/healthcheck\": read tcp 10.217.0.2:60808->10.217.0.164:9311: read: connection reset by peer" Jan 27 19:00:52 crc kubenswrapper[4853]: I0127 19:00:52.060938 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"dcdd7aa2-c7bb-468a-9690-e2e505e394d3","Type":"ContainerStarted","Data":"4a0967e1d20ab879d09feb0676c7e035ee9a9a850b9bb3b0ab2e3aae84ed9c24"} Jan 27 19:00:52 crc kubenswrapper[4853]: I0127 19:00:52.066204 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"afc78a65-bfa6-42ff-a84a-f90dd740ffbf","Type":"ContainerDied","Data":"e32ee15220f7d891f3df54753a53f9fb6b061f6089387558882f9812b6e18926"} Jan 27 19:00:52 crc kubenswrapper[4853]: I0127 19:00:52.066288 4853 scope.go:117] "RemoveContainer" containerID="c6933b3f54bf3b2cb983bbe27bdf372710fd86945bce99facf0bb5bc8a73734b" Jan 27 19:00:52 crc kubenswrapper[4853]: I0127 19:00:52.066326 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 27 19:00:52 crc kubenswrapper[4853]: I0127 19:00:52.091968 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-64c8bd57d9-g88k8" podStartSLOduration=9.091939988 podStartE2EDuration="9.091939988s" podCreationTimestamp="2026-01-27 19:00:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 19:00:52.082636264 +0000 UTC m=+1094.545179137" watchObservedRunningTime="2026-01-27 19:00:52.091939988 +0000 UTC m=+1094.554482871" Jan 27 19:00:52 crc kubenswrapper[4853]: I0127 19:00:52.163368 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="07e83bfd-c7f7-4795-9ae3-81a358092c4e" path="/var/lib/kubelet/pods/07e83bfd-c7f7-4795-9ae3-81a358092c4e/volumes" Jan 27 19:00:52 crc kubenswrapper[4853]: I0127 19:00:52.164587 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="127380a8-99a3-455d-becd-78835af33867" path="/var/lib/kubelet/pods/127380a8-99a3-455d-becd-78835af33867/volumes" Jan 27 19:00:52 crc kubenswrapper[4853]: I0127 19:00:52.167479 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9c93d763-a677-4df8-9846-5fa96f76e0ab" path="/var/lib/kubelet/pods/9c93d763-a677-4df8-9846-5fa96f76e0ab/volumes" Jan 27 19:00:52 crc kubenswrapper[4853]: I0127 19:00:52.168774 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b62f23c7-d81a-4925-a2c3-10c410912a0f" path="/var/lib/kubelet/pods/b62f23c7-d81a-4925-a2c3-10c410912a0f/volumes" Jan 27 19:00:52 crc kubenswrapper[4853]: I0127 19:00:52.171907 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ba985127-2044-4a64-af56-ac3452f6f939" path="/var/lib/kubelet/pods/ba985127-2044-4a64-af56-ac3452f6f939/volumes" Jan 27 19:00:52 crc kubenswrapper[4853]: I0127 19:00:52.172856 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd236b6c-6a86-4c6a-8e4a-f2a459943780" path="/var/lib/kubelet/pods/cd236b6c-6a86-4c6a-8e4a-f2a459943780/volumes" Jan 27 19:00:52 crc kubenswrapper[4853]: I0127 19:00:52.173567 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e556ea12-6992-4aba-be03-e6d4a2823b74" path="/var/lib/kubelet/pods/e556ea12-6992-4aba-be03-e6d4a2823b74/volumes" Jan 27 19:00:52 crc kubenswrapper[4853]: I0127 19:00:52.192048 4853 scope.go:117] "RemoveContainer" containerID="07c01fa323e543321032a669c321880305b06ed4ab384d27ea8a800bdbcdc348" Jan 27 19:00:52 crc kubenswrapper[4853]: I0127 19:00:52.209931 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 27 19:00:52 crc kubenswrapper[4853]: I0127 19:00:52.245258 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 27 19:00:52 crc kubenswrapper[4853]: I0127 19:00:52.255823 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 27 19:00:52 crc kubenswrapper[4853]: E0127 19:00:52.256574 4853 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="afc78a65-bfa6-42ff-a84a-f90dd740ffbf" containerName="sg-core" Jan 27 19:00:52 crc kubenswrapper[4853]: I0127 19:00:52.256612 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="afc78a65-bfa6-42ff-a84a-f90dd740ffbf" containerName="sg-core" Jan 27 19:00:52 crc kubenswrapper[4853]: E0127 19:00:52.256632 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="afc78a65-bfa6-42ff-a84a-f90dd740ffbf" containerName="ceilometer-notification-agent" Jan 27 19:00:52 crc kubenswrapper[4853]: I0127 19:00:52.256639 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="afc78a65-bfa6-42ff-a84a-f90dd740ffbf" containerName="ceilometer-notification-agent" Jan 27 19:00:52 crc kubenswrapper[4853]: E0127 19:00:52.256692 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="afc78a65-bfa6-42ff-a84a-f90dd740ffbf" containerName="proxy-httpd" Jan 27 19:00:52 crc kubenswrapper[4853]: I0127 19:00:52.256702 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="afc78a65-bfa6-42ff-a84a-f90dd740ffbf" containerName="proxy-httpd" Jan 27 19:00:52 crc kubenswrapper[4853]: I0127 19:00:52.256970 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="afc78a65-bfa6-42ff-a84a-f90dd740ffbf" containerName="ceilometer-notification-agent" Jan 27 19:00:52 crc kubenswrapper[4853]: I0127 19:00:52.257011 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="afc78a65-bfa6-42ff-a84a-f90dd740ffbf" containerName="proxy-httpd" Jan 27 19:00:52 crc kubenswrapper[4853]: I0127 19:00:52.257032 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="afc78a65-bfa6-42ff-a84a-f90dd740ffbf" containerName="sg-core" Jan 27 19:00:52 crc kubenswrapper[4853]: I0127 19:00:52.259921 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 27 19:00:52 crc kubenswrapper[4853]: I0127 19:00:52.262848 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 27 19:00:52 crc kubenswrapper[4853]: I0127 19:00:52.263756 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 27 19:00:52 crc kubenswrapper[4853]: I0127 19:00:52.273400 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 27 19:00:52 crc kubenswrapper[4853]: I0127 19:00:52.286410 4853 scope.go:117] "RemoveContainer" containerID="0c718b1ee29e92b02c677d336336d3b0e5835da55281777ff39e4f1a10cd46ef" Jan 27 19:00:52 crc kubenswrapper[4853]: I0127 19:00:52.357554 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a2562d28-8e26-44ed-84ec-92cd8b1fd1a4-config-data\") pod \"ceilometer-0\" (UID: \"a2562d28-8e26-44ed-84ec-92cd8b1fd1a4\") " pod="openstack/ceilometer-0" Jan 27 19:00:52 crc kubenswrapper[4853]: I0127 19:00:52.357624 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a2562d28-8e26-44ed-84ec-92cd8b1fd1a4-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"a2562d28-8e26-44ed-84ec-92cd8b1fd1a4\") " pod="openstack/ceilometer-0" Jan 27 19:00:52 crc kubenswrapper[4853]: I0127 19:00:52.357705 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a2562d28-8e26-44ed-84ec-92cd8b1fd1a4-log-httpd\") pod \"ceilometer-0\" (UID: \"a2562d28-8e26-44ed-84ec-92cd8b1fd1a4\") " pod="openstack/ceilometer-0" Jan 27 19:00:52 crc kubenswrapper[4853]: I0127 19:00:52.357744 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pbfc7\" (UniqueName: \"kubernetes.io/projected/a2562d28-8e26-44ed-84ec-92cd8b1fd1a4-kube-api-access-pbfc7\") pod \"ceilometer-0\" (UID: \"a2562d28-8e26-44ed-84ec-92cd8b1fd1a4\") " pod="openstack/ceilometer-0" Jan 27 19:00:52 crc kubenswrapper[4853]: I0127 19:00:52.357790 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a2562d28-8e26-44ed-84ec-92cd8b1fd1a4-run-httpd\") pod \"ceilometer-0\" (UID: \"a2562d28-8e26-44ed-84ec-92cd8b1fd1a4\") " pod="openstack/ceilometer-0" Jan 27 19:00:52 crc kubenswrapper[4853]: I0127 19:00:52.357823 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a2562d28-8e26-44ed-84ec-92cd8b1fd1a4-scripts\") pod \"ceilometer-0\" (UID: \"a2562d28-8e26-44ed-84ec-92cd8b1fd1a4\") " pod="openstack/ceilometer-0" Jan 27 19:00:52 crc kubenswrapper[4853]: I0127 19:00:52.357856 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a2562d28-8e26-44ed-84ec-92cd8b1fd1a4-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"a2562d28-8e26-44ed-84ec-92cd8b1fd1a4\") " pod="openstack/ceilometer-0" Jan 27 19:00:52 crc kubenswrapper[4853]: I0127 19:00:52.461728 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/a2562d28-8e26-44ed-84ec-92cd8b1fd1a4-config-data\") pod \"ceilometer-0\" (UID: \"a2562d28-8e26-44ed-84ec-92cd8b1fd1a4\") " pod="openstack/ceilometer-0" Jan 27 19:00:52 crc kubenswrapper[4853]: I0127 19:00:52.462331 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a2562d28-8e26-44ed-84ec-92cd8b1fd1a4-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"a2562d28-8e26-44ed-84ec-92cd8b1fd1a4\") " pod="openstack/ceilometer-0" Jan 27 19:00:52 crc kubenswrapper[4853]: I0127 19:00:52.462399 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a2562d28-8e26-44ed-84ec-92cd8b1fd1a4-log-httpd\") pod \"ceilometer-0\" (UID: \"a2562d28-8e26-44ed-84ec-92cd8b1fd1a4\") " pod="openstack/ceilometer-0" Jan 27 19:00:52 crc kubenswrapper[4853]: I0127 19:00:52.462437 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pbfc7\" (UniqueName: \"kubernetes.io/projected/a2562d28-8e26-44ed-84ec-92cd8b1fd1a4-kube-api-access-pbfc7\") pod \"ceilometer-0\" (UID: \"a2562d28-8e26-44ed-84ec-92cd8b1fd1a4\") " pod="openstack/ceilometer-0" Jan 27 19:00:52 crc kubenswrapper[4853]: I0127 19:00:52.462490 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a2562d28-8e26-44ed-84ec-92cd8b1fd1a4-run-httpd\") pod \"ceilometer-0\" (UID: \"a2562d28-8e26-44ed-84ec-92cd8b1fd1a4\") " pod="openstack/ceilometer-0" Jan 27 19:00:52 crc kubenswrapper[4853]: I0127 19:00:52.462570 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a2562d28-8e26-44ed-84ec-92cd8b1fd1a4-scripts\") pod \"ceilometer-0\" (UID: \"a2562d28-8e26-44ed-84ec-92cd8b1fd1a4\") " pod="openstack/ceilometer-0" Jan 27 19:00:52 crc kubenswrapper[4853]: I0127 19:00:52.462604 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a2562d28-8e26-44ed-84ec-92cd8b1fd1a4-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"a2562d28-8e26-44ed-84ec-92cd8b1fd1a4\") " pod="openstack/ceilometer-0" Jan 27 19:00:52 crc kubenswrapper[4853]: I0127 19:00:52.466147 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a2562d28-8e26-44ed-84ec-92cd8b1fd1a4-run-httpd\") pod \"ceilometer-0\" (UID: \"a2562d28-8e26-44ed-84ec-92cd8b1fd1a4\") " pod="openstack/ceilometer-0" Jan 27 19:00:52 crc kubenswrapper[4853]: I0127 19:00:52.466410 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a2562d28-8e26-44ed-84ec-92cd8b1fd1a4-log-httpd\") pod \"ceilometer-0\" (UID: \"a2562d28-8e26-44ed-84ec-92cd8b1fd1a4\") " pod="openstack/ceilometer-0" Jan 27 19:00:52 crc kubenswrapper[4853]: I0127 19:00:52.470676 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a2562d28-8e26-44ed-84ec-92cd8b1fd1a4-scripts\") pod \"ceilometer-0\" (UID: \"a2562d28-8e26-44ed-84ec-92cd8b1fd1a4\") " pod="openstack/ceilometer-0" Jan 27 19:00:52 crc kubenswrapper[4853]: I0127 19:00:52.481868 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/a2562d28-8e26-44ed-84ec-92cd8b1fd1a4-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"a2562d28-8e26-44ed-84ec-92cd8b1fd1a4\") " pod="openstack/ceilometer-0" Jan 27 19:00:52 crc kubenswrapper[4853]: I0127 19:00:52.482493 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pbfc7\" (UniqueName: \"kubernetes.io/projected/a2562d28-8e26-44ed-84ec-92cd8b1fd1a4-kube-api-access-pbfc7\") pod \"ceilometer-0\" (UID: \"a2562d28-8e26-44ed-84ec-92cd8b1fd1a4\") " pod="openstack/ceilometer-0" Jan 27 19:00:52 crc kubenswrapper[4853]: I0127 19:00:52.482804 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a2562d28-8e26-44ed-84ec-92cd8b1fd1a4-config-data\") pod \"ceilometer-0\" (UID: \"a2562d28-8e26-44ed-84ec-92cd8b1fd1a4\") " pod="openstack/ceilometer-0" Jan 27 19:00:52 crc kubenswrapper[4853]: I0127 19:00:52.487036 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a2562d28-8e26-44ed-84ec-92cd8b1fd1a4-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"a2562d28-8e26-44ed-84ec-92cd8b1fd1a4\") " pod="openstack/ceilometer-0" Jan 27 19:00:52 crc kubenswrapper[4853]: I0127 19:00:52.596917 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 27 19:00:52 crc kubenswrapper[4853]: I0127 19:00:52.740270 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-546564f86b-jnwdt" Jan 27 19:00:52 crc kubenswrapper[4853]: I0127 19:00:52.881386 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8f37e546-2a7e-49b7-9a9c-0191a746c289-logs\") pod \"8f37e546-2a7e-49b7-9a9c-0191a746c289\" (UID: \"8f37e546-2a7e-49b7-9a9c-0191a746c289\") " Jan 27 19:00:52 crc kubenswrapper[4853]: I0127 19:00:52.881569 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8f37e546-2a7e-49b7-9a9c-0191a746c289-config-data-custom\") pod \"8f37e546-2a7e-49b7-9a9c-0191a746c289\" (UID: \"8f37e546-2a7e-49b7-9a9c-0191a746c289\") " Jan 27 19:00:52 crc kubenswrapper[4853]: I0127 19:00:52.881928 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f37e546-2a7e-49b7-9a9c-0191a746c289-combined-ca-bundle\") pod \"8f37e546-2a7e-49b7-9a9c-0191a746c289\" (UID: \"8f37e546-2a7e-49b7-9a9c-0191a746c289\") " Jan 27 19:00:52 crc kubenswrapper[4853]: I0127 19:00:52.882011 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wc8k4\" (UniqueName: \"kubernetes.io/projected/8f37e546-2a7e-49b7-9a9c-0191a746c289-kube-api-access-wc8k4\") pod \"8f37e546-2a7e-49b7-9a9c-0191a746c289\" (UID: \"8f37e546-2a7e-49b7-9a9c-0191a746c289\") " Jan 27 19:00:52 crc kubenswrapper[4853]: I0127 19:00:52.882038 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8f37e546-2a7e-49b7-9a9c-0191a746c289-config-data\") pod \"8f37e546-2a7e-49b7-9a9c-0191a746c289\" (UID: \"8f37e546-2a7e-49b7-9a9c-0191a746c289\") " Jan 27 19:00:52 crc kubenswrapper[4853]: I0127 19:00:52.882314 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/8f37e546-2a7e-49b7-9a9c-0191a746c289-logs" (OuterVolumeSpecName: "logs") pod "8f37e546-2a7e-49b7-9a9c-0191a746c289" (UID: "8f37e546-2a7e-49b7-9a9c-0191a746c289"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 19:00:52 crc kubenswrapper[4853]: I0127 19:00:52.882708 4853 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8f37e546-2a7e-49b7-9a9c-0191a746c289-logs\") on node \"crc\" DevicePath \"\"" Jan 27 19:00:52 crc kubenswrapper[4853]: I0127 19:00:52.892557 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f37e546-2a7e-49b7-9a9c-0191a746c289-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "8f37e546-2a7e-49b7-9a9c-0191a746c289" (UID: "8f37e546-2a7e-49b7-9a9c-0191a746c289"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 19:00:52 crc kubenswrapper[4853]: I0127 19:00:52.893560 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f37e546-2a7e-49b7-9a9c-0191a746c289-kube-api-access-wc8k4" (OuterVolumeSpecName: "kube-api-access-wc8k4") pod "8f37e546-2a7e-49b7-9a9c-0191a746c289" (UID: "8f37e546-2a7e-49b7-9a9c-0191a746c289"). InnerVolumeSpecName "kube-api-access-wc8k4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 19:00:52 crc kubenswrapper[4853]: I0127 19:00:52.914333 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f37e546-2a7e-49b7-9a9c-0191a746c289-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8f37e546-2a7e-49b7-9a9c-0191a746c289" (UID: "8f37e546-2a7e-49b7-9a9c-0191a746c289"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 19:00:52 crc kubenswrapper[4853]: I0127 19:00:52.939374 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f37e546-2a7e-49b7-9a9c-0191a746c289-config-data" (OuterVolumeSpecName: "config-data") pod "8f37e546-2a7e-49b7-9a9c-0191a746c289" (UID: "8f37e546-2a7e-49b7-9a9c-0191a746c289"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 19:00:52 crc kubenswrapper[4853]: I0127 19:00:52.984812 4853 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8f37e546-2a7e-49b7-9a9c-0191a746c289-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 27 19:00:52 crc kubenswrapper[4853]: I0127 19:00:52.984861 4853 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f37e546-2a7e-49b7-9a9c-0191a746c289-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 19:00:52 crc kubenswrapper[4853]: I0127 19:00:52.984875 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wc8k4\" (UniqueName: \"kubernetes.io/projected/8f37e546-2a7e-49b7-9a9c-0191a746c289-kube-api-access-wc8k4\") on node \"crc\" DevicePath \"\"" Jan 27 19:00:52 crc kubenswrapper[4853]: I0127 19:00:52.984890 4853 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8f37e546-2a7e-49b7-9a9c-0191a746c289-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 19:00:53 crc kubenswrapper[4853]: I0127 19:00:53.099410 4853 generic.go:334] "Generic (PLEG): container finished" podID="8f37e546-2a7e-49b7-9a9c-0191a746c289" containerID="eea9963ff8825503eceafb11cb75e09a8fe51c596e538ff9d19b1feebcece861" exitCode=0 Jan 27 19:00:53 crc kubenswrapper[4853]: I0127 19:00:53.099521 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-546564f86b-jnwdt" Jan 27 19:00:53 crc kubenswrapper[4853]: I0127 19:00:53.101864 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-546564f86b-jnwdt" event={"ID":"8f37e546-2a7e-49b7-9a9c-0191a746c289","Type":"ContainerDied","Data":"eea9963ff8825503eceafb11cb75e09a8fe51c596e538ff9d19b1feebcece861"} Jan 27 19:00:53 crc kubenswrapper[4853]: I0127 19:00:53.101961 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-546564f86b-jnwdt" event={"ID":"8f37e546-2a7e-49b7-9a9c-0191a746c289","Type":"ContainerDied","Data":"06da8aafdb1c2bce7ab0d1fc314eedc6876fba629ede418fcee08847ad135c52"} Jan 27 19:00:53 crc kubenswrapper[4853]: I0127 19:00:53.102013 4853 scope.go:117] "RemoveContainer" containerID="eea9963ff8825503eceafb11cb75e09a8fe51c596e538ff9d19b1feebcece861" Jan 27 19:00:53 crc kubenswrapper[4853]: I0127 19:00:53.111682 4853 generic.go:334] "Generic (PLEG): container finished" podID="a521c7e1-86b3-4e0e-88f0-ac4ab1ce0b57" containerID="90536b57141c0d90e63ef3353094cda0744cdf31e36462da054fefa4255395f7" exitCode=0 Jan 27 19:00:53 crc kubenswrapper[4853]: I0127 19:00:53.111833 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-pmx5z" event={"ID":"a521c7e1-86b3-4e0e-88f0-ac4ab1ce0b57","Type":"ContainerDied","Data":"90536b57141c0d90e63ef3353094cda0744cdf31e36462da054fefa4255395f7"} Jan 27 19:00:53 crc kubenswrapper[4853]: I0127 19:00:53.118101 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"dcdd7aa2-c7bb-468a-9690-e2e505e394d3","Type":"ContainerStarted","Data":"ba8c86cd749df47f4d7df239771508105230a0c7ff99b16b3f4d7da24ba39d37"} Jan 27 19:00:53 crc kubenswrapper[4853]: I0127 19:00:53.123838 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 27 19:00:53 crc kubenswrapper[4853]: I0127 19:00:53.130819 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/cinder-scheduler-0" event={"ID":"54c6dba3-18e8-4a4a-9437-a413c96cbcbb","Type":"ContainerStarted","Data":"77f9a80f0585318b6a15cb0869ecdd418e768516c431ba82121fbb6e090cd8ed"} Jan 27 19:00:53 crc kubenswrapper[4853]: I0127 19:00:53.183210 4853 scope.go:117] "RemoveContainer" containerID="d4571628bde20943deff9627c39c470a95c4d0d4a6b0fd3237d4da346de77074" Jan 27 19:00:53 crc kubenswrapper[4853]: I0127 19:00:53.190800 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-546564f86b-jnwdt"] Jan 27 19:00:53 crc kubenswrapper[4853]: I0127 19:00:53.210343 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-546564f86b-jnwdt"] Jan 27 19:00:53 crc kubenswrapper[4853]: I0127 19:00:53.299711 4853 scope.go:117] "RemoveContainer" containerID="eea9963ff8825503eceafb11cb75e09a8fe51c596e538ff9d19b1feebcece861" Jan 27 19:00:53 crc kubenswrapper[4853]: E0127 19:00:53.300835 4853 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"eea9963ff8825503eceafb11cb75e09a8fe51c596e538ff9d19b1feebcece861\": container with ID starting with eea9963ff8825503eceafb11cb75e09a8fe51c596e538ff9d19b1feebcece861 not found: ID does not exist" containerID="eea9963ff8825503eceafb11cb75e09a8fe51c596e538ff9d19b1feebcece861" Jan 27 19:00:53 crc kubenswrapper[4853]: I0127 19:00:53.300886 4853 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eea9963ff8825503eceafb11cb75e09a8fe51c596e538ff9d19b1feebcece861"} err="failed to get container status \"eea9963ff8825503eceafb11cb75e09a8fe51c596e538ff9d19b1feebcece861\": rpc error: code = NotFound desc = could not find container \"eea9963ff8825503eceafb11cb75e09a8fe51c596e538ff9d19b1feebcece861\": container with ID starting with eea9963ff8825503eceafb11cb75e09a8fe51c596e538ff9d19b1feebcece861 not found: ID does not exist" Jan 27 19:00:53 crc kubenswrapper[4853]: I0127 19:00:53.300921 4853 scope.go:117] "RemoveContainer" containerID="d4571628bde20943deff9627c39c470a95c4d0d4a6b0fd3237d4da346de77074" Jan 27 19:00:53 crc kubenswrapper[4853]: E0127 19:00:53.301419 4853 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d4571628bde20943deff9627c39c470a95c4d0d4a6b0fd3237d4da346de77074\": container with ID starting with d4571628bde20943deff9627c39c470a95c4d0d4a6b0fd3237d4da346de77074 not found: ID does not exist" containerID="d4571628bde20943deff9627c39c470a95c4d0d4a6b0fd3237d4da346de77074" Jan 27 19:00:53 crc kubenswrapper[4853]: I0127 19:00:53.301467 4853 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d4571628bde20943deff9627c39c470a95c4d0d4a6b0fd3237d4da346de77074"} err="failed to get container status \"d4571628bde20943deff9627c39c470a95c4d0d4a6b0fd3237d4da346de77074\": rpc error: code = NotFound desc = could not find container \"d4571628bde20943deff9627c39c470a95c4d0d4a6b0fd3237d4da346de77074\": container with ID starting with d4571628bde20943deff9627c39c470a95c4d0d4a6b0fd3237d4da346de77074 not found: ID does not exist" Jan 27 19:00:54 crc kubenswrapper[4853]: I0127 19:00:54.147806 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f37e546-2a7e-49b7-9a9c-0191a746c289" path="/var/lib/kubelet/pods/8f37e546-2a7e-49b7-9a9c-0191a746c289/volumes" Jan 27 19:00:54 crc kubenswrapper[4853]: I0127 19:00:54.149211 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="afc78a65-bfa6-42ff-a84a-f90dd740ffbf" path="/var/lib/kubelet/pods/afc78a65-bfa6-42ff-a84a-f90dd740ffbf/volumes" Jan 27 19:00:54 crc kubenswrapper[4853]: I0127 19:00:54.200022 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-pmx5z" event={"ID":"a521c7e1-86b3-4e0e-88f0-ac4ab1ce0b57","Type":"ContainerStarted","Data":"08c4c8ae80259e015c6b73521eb30be7f6dd14be3ead040f4185ea34a3a0fb47"} Jan 27 19:00:54 crc kubenswrapper[4853]: I0127 19:00:54.200997 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5c9776ccc5-pmx5z" Jan 27 19:00:54 crc kubenswrapper[4853]: I0127 19:00:54.237004 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a2562d28-8e26-44ed-84ec-92cd8b1fd1a4","Type":"ContainerStarted","Data":"b0f1de49477da0960f573688da9426d94eea5b774548deb1b4e6dc26418736a6"} Jan 27 19:00:54 crc kubenswrapper[4853]: I0127 19:00:54.237405 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a2562d28-8e26-44ed-84ec-92cd8b1fd1a4","Type":"ContainerStarted","Data":"9095ed05b09890d69ff720528603146c7a8ee6a30498185f11a2090a63da22ef"} Jan 27 19:00:54 crc kubenswrapper[4853]: I0127 19:00:54.258894 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5c9776ccc5-pmx5z" podStartSLOduration=5.258864921 podStartE2EDuration="5.258864921s" podCreationTimestamp="2026-01-27 19:00:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 19:00:54.254560629 +0000 UTC m=+1096.717103512" watchObservedRunningTime="2026-01-27 19:00:54.258864921 +0000 UTC m=+1096.721407804" Jan 27 19:00:54 crc kubenswrapper[4853]: I0127 19:00:54.285305 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"dcdd7aa2-c7bb-468a-9690-e2e505e394d3","Type":"ContainerStarted","Data":"8b931722f1a06bdae51c100e578ae6ddb26691f74056b0dd996d592d578ec3eb"} Jan 27 19:00:54 crc kubenswrapper[4853]: I0127 19:00:54.285528 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="dcdd7aa2-c7bb-468a-9690-e2e505e394d3" containerName="cinder-api-log" containerID="cri-o://ba8c86cd749df47f4d7df239771508105230a0c7ff99b16b3f4d7da24ba39d37" gracePeriod=30 Jan 27 19:00:54 crc kubenswrapper[4853]: I0127 19:00:54.285899 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Jan 27 19:00:54 crc kubenswrapper[4853]: I0127 19:00:54.285940 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="dcdd7aa2-c7bb-468a-9690-e2e505e394d3" containerName="cinder-api" containerID="cri-o://8b931722f1a06bdae51c100e578ae6ddb26691f74056b0dd996d592d578ec3eb" gracePeriod=30 Jan 27 19:00:54 crc kubenswrapper[4853]: I0127 19:00:54.295092 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"54c6dba3-18e8-4a4a-9437-a413c96cbcbb","Type":"ContainerStarted","Data":"5c77ed1ae233815f0f8a4e56da08081c38785a3043250c36376141162a2495a2"} Jan 27 19:00:54 crc kubenswrapper[4853]: I0127 19:00:54.374514 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=4.374480456 podStartE2EDuration="4.374480456s" podCreationTimestamp="2026-01-27 19:00:50 +0000 UTC" firstStartedPulling="0001-01-01 
00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 19:00:54.329589301 +0000 UTC m=+1096.792132184" watchObservedRunningTime="2026-01-27 19:00:54.374480456 +0000 UTC m=+1096.837023339" Jan 27 19:00:54 crc kubenswrapper[4853]: I0127 19:00:54.380204 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=4.442861584 podStartE2EDuration="5.380189169s" podCreationTimestamp="2026-01-27 19:00:49 +0000 UTC" firstStartedPulling="2026-01-27 19:00:50.945529282 +0000 UTC m=+1093.408072165" lastFinishedPulling="2026-01-27 19:00:51.882856867 +0000 UTC m=+1094.345399750" observedRunningTime="2026-01-27 19:00:54.359796829 +0000 UTC m=+1096.822339712" watchObservedRunningTime="2026-01-27 19:00:54.380189169 +0000 UTC m=+1096.842732052" Jan 27 19:00:55 crc kubenswrapper[4853]: I0127 19:00:55.255815 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 27 19:00:55 crc kubenswrapper[4853]: I0127 19:00:55.325928 4853 generic.go:334] "Generic (PLEG): container finished" podID="dcdd7aa2-c7bb-468a-9690-e2e505e394d3" containerID="8b931722f1a06bdae51c100e578ae6ddb26691f74056b0dd996d592d578ec3eb" exitCode=0 Jan 27 19:00:55 crc kubenswrapper[4853]: I0127 19:00:55.325964 4853 generic.go:334] "Generic (PLEG): container finished" podID="dcdd7aa2-c7bb-468a-9690-e2e505e394d3" containerID="ba8c86cd749df47f4d7df239771508105230a0c7ff99b16b3f4d7da24ba39d37" exitCode=143 Jan 27 19:00:55 crc kubenswrapper[4853]: I0127 19:00:55.326023 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"dcdd7aa2-c7bb-468a-9690-e2e505e394d3","Type":"ContainerDied","Data":"8b931722f1a06bdae51c100e578ae6ddb26691f74056b0dd996d592d578ec3eb"} Jan 27 19:00:55 crc kubenswrapper[4853]: I0127 19:00:55.326057 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"dcdd7aa2-c7bb-468a-9690-e2e505e394d3","Type":"ContainerDied","Data":"ba8c86cd749df47f4d7df239771508105230a0c7ff99b16b3f4d7da24ba39d37"} Jan 27 19:00:55 crc kubenswrapper[4853]: I0127 19:00:55.326067 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"dcdd7aa2-c7bb-468a-9690-e2e505e394d3","Type":"ContainerDied","Data":"4a0967e1d20ab879d09feb0676c7e035ee9a9a850b9bb3b0ab2e3aae84ed9c24"} Jan 27 19:00:55 crc kubenswrapper[4853]: I0127 19:00:55.326084 4853 scope.go:117] "RemoveContainer" containerID="8b931722f1a06bdae51c100e578ae6ddb26691f74056b0dd996d592d578ec3eb" Jan 27 19:00:55 crc kubenswrapper[4853]: I0127 19:00:55.326405 4853 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Jan 27 19:00:55 crc kubenswrapper[4853]: I0127 19:00:55.338483 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a2562d28-8e26-44ed-84ec-92cd8b1fd1a4","Type":"ContainerStarted","Data":"dd4430f67c0376398093c9ae151b0cacf9cb3c26d11448e2269eb00d1ee4deb3"} Jan 27 19:00:55 crc kubenswrapper[4853]: I0127 19:00:55.377683 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Jan 27 19:00:55 crc kubenswrapper[4853]: I0127 19:00:55.380690 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dcdd7aa2-c7bb-468a-9690-e2e505e394d3-logs\") pod \"dcdd7aa2-c7bb-468a-9690-e2e505e394d3\" (UID: \"dcdd7aa2-c7bb-468a-9690-e2e505e394d3\") " Jan 27 19:00:55 crc kubenswrapper[4853]: I0127 19:00:55.380847 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/dcdd7aa2-c7bb-468a-9690-e2e505e394d3-etc-machine-id\") pod \"dcdd7aa2-c7bb-468a-9690-e2e505e394d3\" (UID: \"dcdd7aa2-c7bb-468a-9690-e2e505e394d3\") " Jan 27 19:00:55 crc kubenswrapper[4853]: I0127 19:00:55.380888 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dcdd7aa2-c7bb-468a-9690-e2e505e394d3-config-data\") pod \"dcdd7aa2-c7bb-468a-9690-e2e505e394d3\" (UID: \"dcdd7aa2-c7bb-468a-9690-e2e505e394d3\") " Jan 27 19:00:55 crc kubenswrapper[4853]: I0127 19:00:55.380908 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/dcdd7aa2-c7bb-468a-9690-e2e505e394d3-config-data-custom\") pod \"dcdd7aa2-c7bb-468a-9690-e2e505e394d3\" (UID: \"dcdd7aa2-c7bb-468a-9690-e2e505e394d3\") " Jan 27 19:00:55 crc kubenswrapper[4853]: I0127 19:00:55.381015 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rkx99\" (UniqueName: \"kubernetes.io/projected/dcdd7aa2-c7bb-468a-9690-e2e505e394d3-kube-api-access-rkx99\") pod \"dcdd7aa2-c7bb-468a-9690-e2e505e394d3\" (UID: \"dcdd7aa2-c7bb-468a-9690-e2e505e394d3\") " Jan 27 19:00:55 crc kubenswrapper[4853]: I0127 19:00:55.381054 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dcdd7aa2-c7bb-468a-9690-e2e505e394d3-combined-ca-bundle\") pod \"dcdd7aa2-c7bb-468a-9690-e2e505e394d3\" (UID: \"dcdd7aa2-c7bb-468a-9690-e2e505e394d3\") " Jan 27 19:00:55 crc kubenswrapper[4853]: I0127 19:00:55.381101 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dcdd7aa2-c7bb-468a-9690-e2e505e394d3-scripts\") pod \"dcdd7aa2-c7bb-468a-9690-e2e505e394d3\" (UID: \"dcdd7aa2-c7bb-468a-9690-e2e505e394d3\") " Jan 27 19:00:55 crc kubenswrapper[4853]: I0127 19:00:55.381906 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dcdd7aa2-c7bb-468a-9690-e2e505e394d3-logs" (OuterVolumeSpecName: "logs") pod "dcdd7aa2-c7bb-468a-9690-e2e505e394d3" (UID: "dcdd7aa2-c7bb-468a-9690-e2e505e394d3"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 19:00:55 crc kubenswrapper[4853]: I0127 19:00:55.381377 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dcdd7aa2-c7bb-468a-9690-e2e505e394d3-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "dcdd7aa2-c7bb-468a-9690-e2e505e394d3" (UID: "dcdd7aa2-c7bb-468a-9690-e2e505e394d3"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 19:00:55 crc kubenswrapper[4853]: I0127 19:00:55.382703 4853 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dcdd7aa2-c7bb-468a-9690-e2e505e394d3-logs\") on node \"crc\" DevicePath \"\"" Jan 27 19:00:55 crc kubenswrapper[4853]: I0127 19:00:55.382728 4853 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/dcdd7aa2-c7bb-468a-9690-e2e505e394d3-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 27 19:00:55 crc kubenswrapper[4853]: I0127 19:00:55.389036 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dcdd7aa2-c7bb-468a-9690-e2e505e394d3-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "dcdd7aa2-c7bb-468a-9690-e2e505e394d3" (UID: "dcdd7aa2-c7bb-468a-9690-e2e505e394d3"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 19:00:55 crc kubenswrapper[4853]: I0127 19:00:55.393334 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dcdd7aa2-c7bb-468a-9690-e2e505e394d3-scripts" (OuterVolumeSpecName: "scripts") pod "dcdd7aa2-c7bb-468a-9690-e2e505e394d3" (UID: "dcdd7aa2-c7bb-468a-9690-e2e505e394d3"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 19:00:55 crc kubenswrapper[4853]: I0127 19:00:55.401441 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dcdd7aa2-c7bb-468a-9690-e2e505e394d3-kube-api-access-rkx99" (OuterVolumeSpecName: "kube-api-access-rkx99") pod "dcdd7aa2-c7bb-468a-9690-e2e505e394d3" (UID: "dcdd7aa2-c7bb-468a-9690-e2e505e394d3"). InnerVolumeSpecName "kube-api-access-rkx99". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 19:00:55 crc kubenswrapper[4853]: I0127 19:00:55.451683 4853 scope.go:117] "RemoveContainer" containerID="ba8c86cd749df47f4d7df239771508105230a0c7ff99b16b3f4d7da24ba39d37" Jan 27 19:00:55 crc kubenswrapper[4853]: I0127 19:00:55.459282 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dcdd7aa2-c7bb-468a-9690-e2e505e394d3-config-data" (OuterVolumeSpecName: "config-data") pod "dcdd7aa2-c7bb-468a-9690-e2e505e394d3" (UID: "dcdd7aa2-c7bb-468a-9690-e2e505e394d3"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 19:00:55 crc kubenswrapper[4853]: I0127 19:00:55.467815 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dcdd7aa2-c7bb-468a-9690-e2e505e394d3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "dcdd7aa2-c7bb-468a-9690-e2e505e394d3" (UID: "dcdd7aa2-c7bb-468a-9690-e2e505e394d3"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 19:00:55 crc kubenswrapper[4853]: I0127 19:00:55.485098 4853 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dcdd7aa2-c7bb-468a-9690-e2e505e394d3-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 19:00:55 crc kubenswrapper[4853]: I0127 19:00:55.485217 4853 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/dcdd7aa2-c7bb-468a-9690-e2e505e394d3-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 27 19:00:55 crc kubenswrapper[4853]: I0127 19:00:55.485231 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rkx99\" (UniqueName: \"kubernetes.io/projected/dcdd7aa2-c7bb-468a-9690-e2e505e394d3-kube-api-access-rkx99\") on node \"crc\" DevicePath \"\"" Jan 27 19:00:55 crc kubenswrapper[4853]: I0127 19:00:55.485242 4853 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dcdd7aa2-c7bb-468a-9690-e2e505e394d3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 19:00:55 crc kubenswrapper[4853]: I0127 19:00:55.485252 4853 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dcdd7aa2-c7bb-468a-9690-e2e505e394d3-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 19:00:55 crc kubenswrapper[4853]: I0127 19:00:55.492218 4853 scope.go:117] "RemoveContainer" containerID="8b931722f1a06bdae51c100e578ae6ddb26691f74056b0dd996d592d578ec3eb" Jan 27 19:00:55 crc kubenswrapper[4853]: E0127 19:00:55.492979 4853 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8b931722f1a06bdae51c100e578ae6ddb26691f74056b0dd996d592d578ec3eb\": container with ID starting with 8b931722f1a06bdae51c100e578ae6ddb26691f74056b0dd996d592d578ec3eb not found: ID does not exist" containerID="8b931722f1a06bdae51c100e578ae6ddb26691f74056b0dd996d592d578ec3eb" Jan 27 19:00:55 crc kubenswrapper[4853]: I0127 19:00:55.493037 4853 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8b931722f1a06bdae51c100e578ae6ddb26691f74056b0dd996d592d578ec3eb"} err="failed to get container status \"8b931722f1a06bdae51c100e578ae6ddb26691f74056b0dd996d592d578ec3eb\": rpc error: code = NotFound desc = could not find container \"8b931722f1a06bdae51c100e578ae6ddb26691f74056b0dd996d592d578ec3eb\": container with ID starting with 8b931722f1a06bdae51c100e578ae6ddb26691f74056b0dd996d592d578ec3eb not found: ID does not exist" Jan 27 19:00:55 crc kubenswrapper[4853]: I0127 19:00:55.493078 4853 scope.go:117] "RemoveContainer" containerID="ba8c86cd749df47f4d7df239771508105230a0c7ff99b16b3f4d7da24ba39d37" Jan 27 19:00:55 crc kubenswrapper[4853]: E0127 19:00:55.493585 4853 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ba8c86cd749df47f4d7df239771508105230a0c7ff99b16b3f4d7da24ba39d37\": container with ID starting with ba8c86cd749df47f4d7df239771508105230a0c7ff99b16b3f4d7da24ba39d37 not found: ID does not exist" containerID="ba8c86cd749df47f4d7df239771508105230a0c7ff99b16b3f4d7da24ba39d37" Jan 27 19:00:55 crc kubenswrapper[4853]: I0127 19:00:55.493615 4853 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ba8c86cd749df47f4d7df239771508105230a0c7ff99b16b3f4d7da24ba39d37"} err="failed to get container status 
\"ba8c86cd749df47f4d7df239771508105230a0c7ff99b16b3f4d7da24ba39d37\": rpc error: code = NotFound desc = could not find container \"ba8c86cd749df47f4d7df239771508105230a0c7ff99b16b3f4d7da24ba39d37\": container with ID starting with ba8c86cd749df47f4d7df239771508105230a0c7ff99b16b3f4d7da24ba39d37 not found: ID does not exist" Jan 27 19:00:55 crc kubenswrapper[4853]: I0127 19:00:55.493639 4853 scope.go:117] "RemoveContainer" containerID="8b931722f1a06bdae51c100e578ae6ddb26691f74056b0dd996d592d578ec3eb" Jan 27 19:00:55 crc kubenswrapper[4853]: I0127 19:00:55.494027 4853 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8b931722f1a06bdae51c100e578ae6ddb26691f74056b0dd996d592d578ec3eb"} err="failed to get container status \"8b931722f1a06bdae51c100e578ae6ddb26691f74056b0dd996d592d578ec3eb\": rpc error: code = NotFound desc = could not find container \"8b931722f1a06bdae51c100e578ae6ddb26691f74056b0dd996d592d578ec3eb\": container with ID starting with 8b931722f1a06bdae51c100e578ae6ddb26691f74056b0dd996d592d578ec3eb not found: ID does not exist" Jan 27 19:00:55 crc kubenswrapper[4853]: I0127 19:00:55.494048 4853 scope.go:117] "RemoveContainer" containerID="ba8c86cd749df47f4d7df239771508105230a0c7ff99b16b3f4d7da24ba39d37" Jan 27 19:00:55 crc kubenswrapper[4853]: I0127 19:00:55.494378 4853 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ba8c86cd749df47f4d7df239771508105230a0c7ff99b16b3f4d7da24ba39d37"} err="failed to get container status \"ba8c86cd749df47f4d7df239771508105230a0c7ff99b16b3f4d7da24ba39d37\": rpc error: code = NotFound desc = could not find container \"ba8c86cd749df47f4d7df239771508105230a0c7ff99b16b3f4d7da24ba39d37\": container with ID starting with ba8c86cd749df47f4d7df239771508105230a0c7ff99b16b3f4d7da24ba39d37 not found: ID does not exist" Jan 27 19:00:55 crc kubenswrapper[4853]: I0127 19:00:55.691205 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Jan 27 19:00:55 crc kubenswrapper[4853]: I0127 19:00:55.737925 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"] Jan 27 19:00:55 crc kubenswrapper[4853]: I0127 19:00:55.761642 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Jan 27 19:00:55 crc kubenswrapper[4853]: E0127 19:00:55.762378 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8f37e546-2a7e-49b7-9a9c-0191a746c289" containerName="barbican-api-log" Jan 27 19:00:55 crc kubenswrapper[4853]: I0127 19:00:55.762401 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="8f37e546-2a7e-49b7-9a9c-0191a746c289" containerName="barbican-api-log" Jan 27 19:00:55 crc kubenswrapper[4853]: E0127 19:00:55.762442 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dcdd7aa2-c7bb-468a-9690-e2e505e394d3" containerName="cinder-api-log" Jan 27 19:00:55 crc kubenswrapper[4853]: I0127 19:00:55.762451 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="dcdd7aa2-c7bb-468a-9690-e2e505e394d3" containerName="cinder-api-log" Jan 27 19:00:55 crc kubenswrapper[4853]: E0127 19:00:55.762472 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dcdd7aa2-c7bb-468a-9690-e2e505e394d3" containerName="cinder-api" Jan 27 19:00:55 crc kubenswrapper[4853]: I0127 19:00:55.762479 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="dcdd7aa2-c7bb-468a-9690-e2e505e394d3" containerName="cinder-api" Jan 27 19:00:55 crc kubenswrapper[4853]: E0127 19:00:55.762509 
4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8f37e546-2a7e-49b7-9a9c-0191a746c289" containerName="barbican-api" Jan 27 19:00:55 crc kubenswrapper[4853]: I0127 19:00:55.762518 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="8f37e546-2a7e-49b7-9a9c-0191a746c289" containerName="barbican-api" Jan 27 19:00:55 crc kubenswrapper[4853]: I0127 19:00:55.762752 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="dcdd7aa2-c7bb-468a-9690-e2e505e394d3" containerName="cinder-api" Jan 27 19:00:55 crc kubenswrapper[4853]: I0127 19:00:55.762776 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="8f37e546-2a7e-49b7-9a9c-0191a746c289" containerName="barbican-api" Jan 27 19:00:55 crc kubenswrapper[4853]: I0127 19:00:55.762795 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="dcdd7aa2-c7bb-468a-9690-e2e505e394d3" containerName="cinder-api-log" Jan 27 19:00:55 crc kubenswrapper[4853]: I0127 19:00:55.762824 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="8f37e546-2a7e-49b7-9a9c-0191a746c289" containerName="barbican-api-log" Jan 27 19:00:55 crc kubenswrapper[4853]: I0127 19:00:55.766028 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 27 19:00:55 crc kubenswrapper[4853]: I0127 19:00:55.770638 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc" Jan 27 19:00:55 crc kubenswrapper[4853]: I0127 19:00:55.770974 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Jan 27 19:00:55 crc kubenswrapper[4853]: I0127 19:00:55.775680 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-internal-svc" Jan 27 19:00:55 crc kubenswrapper[4853]: I0127 19:00:55.794462 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 27 19:00:55 crc kubenswrapper[4853]: I0127 19:00:55.923886 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/bdc98336-c980-4a4c-b453-fb72f6d34185-config-data-custom\") pod \"cinder-api-0\" (UID: \"bdc98336-c980-4a4c-b453-fb72f6d34185\") " pod="openstack/cinder-api-0" Jan 27 19:00:55 crc kubenswrapper[4853]: I0127 19:00:55.923939 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/bdc98336-c980-4a4c-b453-fb72f6d34185-etc-machine-id\") pod \"cinder-api-0\" (UID: \"bdc98336-c980-4a4c-b453-fb72f6d34185\") " pod="openstack/cinder-api-0" Jan 27 19:00:55 crc kubenswrapper[4853]: I0127 19:00:55.923975 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bdc98336-c980-4a4c-b453-fb72f6d34185-logs\") pod \"cinder-api-0\" (UID: \"bdc98336-c980-4a4c-b453-fb72f6d34185\") " pod="openstack/cinder-api-0" Jan 27 19:00:55 crc kubenswrapper[4853]: I0127 19:00:55.924005 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/bdc98336-c980-4a4c-b453-fb72f6d34185-public-tls-certs\") pod \"cinder-api-0\" (UID: \"bdc98336-c980-4a4c-b453-fb72f6d34185\") " pod="openstack/cinder-api-0" Jan 27 19:00:55 crc kubenswrapper[4853]: I0127 19:00:55.924066 4853 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bdc98336-c980-4a4c-b453-fb72f6d34185-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"bdc98336-c980-4a4c-b453-fb72f6d34185\") " pod="openstack/cinder-api-0" Jan 27 19:00:55 crc kubenswrapper[4853]: I0127 19:00:55.924093 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xhnrn\" (UniqueName: \"kubernetes.io/projected/bdc98336-c980-4a4c-b453-fb72f6d34185-kube-api-access-xhnrn\") pod \"cinder-api-0\" (UID: \"bdc98336-c980-4a4c-b453-fb72f6d34185\") " pod="openstack/cinder-api-0" Jan 27 19:00:55 crc kubenswrapper[4853]: I0127 19:00:55.924110 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bdc98336-c980-4a4c-b453-fb72f6d34185-config-data\") pod \"cinder-api-0\" (UID: \"bdc98336-c980-4a4c-b453-fb72f6d34185\") " pod="openstack/cinder-api-0" Jan 27 19:00:55 crc kubenswrapper[4853]: I0127 19:00:55.924272 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bdc98336-c980-4a4c-b453-fb72f6d34185-scripts\") pod \"cinder-api-0\" (UID: \"bdc98336-c980-4a4c-b453-fb72f6d34185\") " pod="openstack/cinder-api-0" Jan 27 19:00:55 crc kubenswrapper[4853]: I0127 19:00:55.924315 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/bdc98336-c980-4a4c-b453-fb72f6d34185-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"bdc98336-c980-4a4c-b453-fb72f6d34185\") " pod="openstack/cinder-api-0" Jan 27 19:00:56 crc kubenswrapper[4853]: I0127 19:00:56.026305 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/bdc98336-c980-4a4c-b453-fb72f6d34185-etc-machine-id\") pod \"cinder-api-0\" (UID: \"bdc98336-c980-4a4c-b453-fb72f6d34185\") " pod="openstack/cinder-api-0" Jan 27 19:00:56 crc kubenswrapper[4853]: I0127 19:00:56.026375 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/bdc98336-c980-4a4c-b453-fb72f6d34185-config-data-custom\") pod \"cinder-api-0\" (UID: \"bdc98336-c980-4a4c-b453-fb72f6d34185\") " pod="openstack/cinder-api-0" Jan 27 19:00:56 crc kubenswrapper[4853]: I0127 19:00:56.026415 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bdc98336-c980-4a4c-b453-fb72f6d34185-logs\") pod \"cinder-api-0\" (UID: \"bdc98336-c980-4a4c-b453-fb72f6d34185\") " pod="openstack/cinder-api-0" Jan 27 19:00:56 crc kubenswrapper[4853]: I0127 19:00:56.026449 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/bdc98336-c980-4a4c-b453-fb72f6d34185-public-tls-certs\") pod \"cinder-api-0\" (UID: \"bdc98336-c980-4a4c-b453-fb72f6d34185\") " pod="openstack/cinder-api-0" Jan 27 19:00:56 crc kubenswrapper[4853]: I0127 19:00:56.026494 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bdc98336-c980-4a4c-b453-fb72f6d34185-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"bdc98336-c980-4a4c-b453-fb72f6d34185\") " 
pod="openstack/cinder-api-0" Jan 27 19:00:56 crc kubenswrapper[4853]: I0127 19:00:56.026549 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/bdc98336-c980-4a4c-b453-fb72f6d34185-etc-machine-id\") pod \"cinder-api-0\" (UID: \"bdc98336-c980-4a4c-b453-fb72f6d34185\") " pod="openstack/cinder-api-0" Jan 27 19:00:56 crc kubenswrapper[4853]: I0127 19:00:56.026666 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xhnrn\" (UniqueName: \"kubernetes.io/projected/bdc98336-c980-4a4c-b453-fb72f6d34185-kube-api-access-xhnrn\") pod \"cinder-api-0\" (UID: \"bdc98336-c980-4a4c-b453-fb72f6d34185\") " pod="openstack/cinder-api-0" Jan 27 19:00:56 crc kubenswrapper[4853]: I0127 19:00:56.026710 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bdc98336-c980-4a4c-b453-fb72f6d34185-config-data\") pod \"cinder-api-0\" (UID: \"bdc98336-c980-4a4c-b453-fb72f6d34185\") " pod="openstack/cinder-api-0" Jan 27 19:00:56 crc kubenswrapper[4853]: I0127 19:00:56.026965 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bdc98336-c980-4a4c-b453-fb72f6d34185-logs\") pod \"cinder-api-0\" (UID: \"bdc98336-c980-4a4c-b453-fb72f6d34185\") " pod="openstack/cinder-api-0" Jan 27 19:00:56 crc kubenswrapper[4853]: I0127 19:00:56.027535 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bdc98336-c980-4a4c-b453-fb72f6d34185-scripts\") pod \"cinder-api-0\" (UID: \"bdc98336-c980-4a4c-b453-fb72f6d34185\") " pod="openstack/cinder-api-0" Jan 27 19:00:56 crc kubenswrapper[4853]: I0127 19:00:56.027645 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/bdc98336-c980-4a4c-b453-fb72f6d34185-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"bdc98336-c980-4a4c-b453-fb72f6d34185\") " pod="openstack/cinder-api-0" Jan 27 19:00:56 crc kubenswrapper[4853]: I0127 19:00:56.038027 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/bdc98336-c980-4a4c-b453-fb72f6d34185-config-data-custom\") pod \"cinder-api-0\" (UID: \"bdc98336-c980-4a4c-b453-fb72f6d34185\") " pod="openstack/cinder-api-0" Jan 27 19:00:56 crc kubenswrapper[4853]: I0127 19:00:56.038917 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/bdc98336-c980-4a4c-b453-fb72f6d34185-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"bdc98336-c980-4a4c-b453-fb72f6d34185\") " pod="openstack/cinder-api-0" Jan 27 19:00:56 crc kubenswrapper[4853]: I0127 19:00:56.039196 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bdc98336-c980-4a4c-b453-fb72f6d34185-scripts\") pod \"cinder-api-0\" (UID: \"bdc98336-c980-4a4c-b453-fb72f6d34185\") " pod="openstack/cinder-api-0" Jan 27 19:00:56 crc kubenswrapper[4853]: I0127 19:00:56.039343 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/bdc98336-c980-4a4c-b453-fb72f6d34185-public-tls-certs\") pod \"cinder-api-0\" (UID: \"bdc98336-c980-4a4c-b453-fb72f6d34185\") " pod="openstack/cinder-api-0" Jan 27 19:00:56 crc 
kubenswrapper[4853]: I0127 19:00:56.040959 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bdc98336-c980-4a4c-b453-fb72f6d34185-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"bdc98336-c980-4a4c-b453-fb72f6d34185\") " pod="openstack/cinder-api-0" Jan 27 19:00:56 crc kubenswrapper[4853]: I0127 19:00:56.041500 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bdc98336-c980-4a4c-b453-fb72f6d34185-config-data\") pod \"cinder-api-0\" (UID: \"bdc98336-c980-4a4c-b453-fb72f6d34185\") " pod="openstack/cinder-api-0" Jan 27 19:00:56 crc kubenswrapper[4853]: I0127 19:00:56.053933 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xhnrn\" (UniqueName: \"kubernetes.io/projected/bdc98336-c980-4a4c-b453-fb72f6d34185-kube-api-access-xhnrn\") pod \"cinder-api-0\" (UID: \"bdc98336-c980-4a4c-b453-fb72f6d34185\") " pod="openstack/cinder-api-0" Jan 27 19:00:56 crc kubenswrapper[4853]: I0127 19:00:56.112442 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 27 19:00:56 crc kubenswrapper[4853]: I0127 19:00:56.126308 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dcdd7aa2-c7bb-468a-9690-e2e505e394d3" path="/var/lib/kubelet/pods/dcdd7aa2-c7bb-468a-9690-e2e505e394d3/volumes" Jan 27 19:00:56 crc kubenswrapper[4853]: I0127 19:00:56.357808 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a2562d28-8e26-44ed-84ec-92cd8b1fd1a4","Type":"ContainerStarted","Data":"ce0518c029fc26b466712e2e0d8f7bd50197b42ba8445e3cecedce480a649392"} Jan 27 19:00:56 crc kubenswrapper[4853]: I0127 19:00:56.434446 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-f5866f968-d652z" Jan 27 19:00:56 crc kubenswrapper[4853]: I0127 19:00:56.457588 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-f5866f968-d652z" Jan 27 19:00:56 crc kubenswrapper[4853]: I0127 19:00:56.636478 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 27 19:00:57 crc kubenswrapper[4853]: I0127 19:00:57.374568 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a2562d28-8e26-44ed-84ec-92cd8b1fd1a4","Type":"ContainerStarted","Data":"f4b0e607bc7835ac373b9eee476042469422aba86dda33483c92e8fb0cc941a5"} Jan 27 19:00:57 crc kubenswrapper[4853]: I0127 19:00:57.375088 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 27 19:00:57 crc kubenswrapper[4853]: I0127 19:00:57.377284 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"bdc98336-c980-4a4c-b453-fb72f6d34185","Type":"ContainerStarted","Data":"6d0c94b1042369bed7badc8ebb3c3fd171df9153d36a0a10a69ae6b15b415a5d"} Jan 27 19:00:58 crc kubenswrapper[4853]: I0127 19:00:58.182979 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.675241442 podStartE2EDuration="6.182958804s" podCreationTimestamp="2026-01-27 19:00:52 +0000 UTC" firstStartedPulling="2026-01-27 19:00:53.182738944 +0000 UTC m=+1095.645281827" lastFinishedPulling="2026-01-27 19:00:56.690456306 +0000 UTC m=+1099.152999189" observedRunningTime="2026-01-27 19:00:57.411889474 +0000 UTC m=+1099.874432367" 
watchObservedRunningTime="2026-01-27 19:00:58.182958804 +0000 UTC m=+1100.645501687" Jan 27 19:00:58 crc kubenswrapper[4853]: I0127 19:00:58.395388 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"bdc98336-c980-4a4c-b453-fb72f6d34185","Type":"ContainerStarted","Data":"b13505464d754f897f9127bedf8c2e9847a5ce2c64f5523904d4f41828c19d51"} Jan 27 19:00:58 crc kubenswrapper[4853]: I0127 19:00:58.395445 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"bdc98336-c980-4a4c-b453-fb72f6d34185","Type":"ContainerStarted","Data":"82ad4e804c0e0ecaa1bb92bce0fbbc354915b7f21b14ba66c6cc318c2003cdf5"} Jan 27 19:00:58 crc kubenswrapper[4853]: I0127 19:00:58.395799 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Jan 27 19:00:58 crc kubenswrapper[4853]: I0127 19:00:58.435769 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=3.435733847 podStartE2EDuration="3.435733847s" podCreationTimestamp="2026-01-27 19:00:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 19:00:58.420060892 +0000 UTC m=+1100.882603775" watchObservedRunningTime="2026-01-27 19:00:58.435733847 +0000 UTC m=+1100.898276730" Jan 27 19:01:00 crc kubenswrapper[4853]: I0127 19:01:00.144026 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29492341-snj9s"] Jan 27 19:01:00 crc kubenswrapper[4853]: I0127 19:01:00.150776 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29492341-snj9s" Jan 27 19:01:00 crc kubenswrapper[4853]: I0127 19:01:00.174865 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29492341-snj9s"] Jan 27 19:01:00 crc kubenswrapper[4853]: I0127 19:01:00.252783 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7d4283a6-8ac9-4d5d-9a33-c753064f6930-combined-ca-bundle\") pod \"keystone-cron-29492341-snj9s\" (UID: \"7d4283a6-8ac9-4d5d-9a33-c753064f6930\") " pod="openstack/keystone-cron-29492341-snj9s" Jan 27 19:01:00 crc kubenswrapper[4853]: I0127 19:01:00.252906 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8wzpm\" (UniqueName: \"kubernetes.io/projected/7d4283a6-8ac9-4d5d-9a33-c753064f6930-kube-api-access-8wzpm\") pod \"keystone-cron-29492341-snj9s\" (UID: \"7d4283a6-8ac9-4d5d-9a33-c753064f6930\") " pod="openstack/keystone-cron-29492341-snj9s" Jan 27 19:01:00 crc kubenswrapper[4853]: I0127 19:01:00.252998 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/7d4283a6-8ac9-4d5d-9a33-c753064f6930-fernet-keys\") pod \"keystone-cron-29492341-snj9s\" (UID: \"7d4283a6-8ac9-4d5d-9a33-c753064f6930\") " pod="openstack/keystone-cron-29492341-snj9s" Jan 27 19:01:00 crc kubenswrapper[4853]: I0127 19:01:00.253047 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7d4283a6-8ac9-4d5d-9a33-c753064f6930-config-data\") pod \"keystone-cron-29492341-snj9s\" (UID: \"7d4283a6-8ac9-4d5d-9a33-c753064f6930\") " pod="openstack/keystone-cron-29492341-snj9s" Jan 27 
19:01:00 crc kubenswrapper[4853]: I0127 19:01:00.356024 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7d4283a6-8ac9-4d5d-9a33-c753064f6930-combined-ca-bundle\") pod \"keystone-cron-29492341-snj9s\" (UID: \"7d4283a6-8ac9-4d5d-9a33-c753064f6930\") " pod="openstack/keystone-cron-29492341-snj9s" Jan 27 19:01:00 crc kubenswrapper[4853]: I0127 19:01:00.356185 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8wzpm\" (UniqueName: \"kubernetes.io/projected/7d4283a6-8ac9-4d5d-9a33-c753064f6930-kube-api-access-8wzpm\") pod \"keystone-cron-29492341-snj9s\" (UID: \"7d4283a6-8ac9-4d5d-9a33-c753064f6930\") " pod="openstack/keystone-cron-29492341-snj9s" Jan 27 19:01:00 crc kubenswrapper[4853]: I0127 19:01:00.356681 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/7d4283a6-8ac9-4d5d-9a33-c753064f6930-fernet-keys\") pod \"keystone-cron-29492341-snj9s\" (UID: \"7d4283a6-8ac9-4d5d-9a33-c753064f6930\") " pod="openstack/keystone-cron-29492341-snj9s" Jan 27 19:01:00 crc kubenswrapper[4853]: I0127 19:01:00.357489 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7d4283a6-8ac9-4d5d-9a33-c753064f6930-config-data\") pod \"keystone-cron-29492341-snj9s\" (UID: \"7d4283a6-8ac9-4d5d-9a33-c753064f6930\") " pod="openstack/keystone-cron-29492341-snj9s" Jan 27 19:01:00 crc kubenswrapper[4853]: I0127 19:01:00.368108 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7d4283a6-8ac9-4d5d-9a33-c753064f6930-combined-ca-bundle\") pod \"keystone-cron-29492341-snj9s\" (UID: \"7d4283a6-8ac9-4d5d-9a33-c753064f6930\") " pod="openstack/keystone-cron-29492341-snj9s" Jan 27 19:01:00 crc kubenswrapper[4853]: I0127 19:01:00.376138 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7d4283a6-8ac9-4d5d-9a33-c753064f6930-config-data\") pod \"keystone-cron-29492341-snj9s\" (UID: \"7d4283a6-8ac9-4d5d-9a33-c753064f6930\") " pod="openstack/keystone-cron-29492341-snj9s" Jan 27 19:01:00 crc kubenswrapper[4853]: I0127 19:01:00.376883 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8wzpm\" (UniqueName: \"kubernetes.io/projected/7d4283a6-8ac9-4d5d-9a33-c753064f6930-kube-api-access-8wzpm\") pod \"keystone-cron-29492341-snj9s\" (UID: \"7d4283a6-8ac9-4d5d-9a33-c753064f6930\") " pod="openstack/keystone-cron-29492341-snj9s" Jan 27 19:01:00 crc kubenswrapper[4853]: I0127 19:01:00.384913 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/7d4283a6-8ac9-4d5d-9a33-c753064f6930-fernet-keys\") pod \"keystone-cron-29492341-snj9s\" (UID: \"7d4283a6-8ac9-4d5d-9a33-c753064f6930\") " pod="openstack/keystone-cron-29492341-snj9s" Jan 27 19:01:00 crc kubenswrapper[4853]: I0127 19:01:00.474726 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29492341-snj9s" Jan 27 19:01:00 crc kubenswrapper[4853]: I0127 19:01:00.631674 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5c9776ccc5-pmx5z" Jan 27 19:01:00 crc kubenswrapper[4853]: I0127 19:01:00.701106 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-b9mgv"] Jan 27 19:01:00 crc kubenswrapper[4853]: I0127 19:01:00.701675 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-85ff748b95-b9mgv" podUID="fc5f9a53-3e79-4d10-9c56-18cf2c2a3de0" containerName="dnsmasq-dns" containerID="cri-o://ff8ba29018736b8f3fc56d6a908d2e3230f2928a314d3e894421fe0a4803498a" gracePeriod=10 Jan 27 19:01:00 crc kubenswrapper[4853]: I0127 19:01:00.728095 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Jan 27 19:01:00 crc kubenswrapper[4853]: I0127 19:01:00.815862 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 27 19:01:01 crc kubenswrapper[4853]: I0127 19:01:01.055987 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29492341-snj9s"] Jan 27 19:01:01 crc kubenswrapper[4853]: W0127 19:01:01.059890 4853 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7d4283a6_8ac9_4d5d_9a33_c753064f6930.slice/crio-97c15f7360ea1d5919ae8a477aa2de4f59ccacf587b45f698465806b0acd81b5 WatchSource:0}: Error finding container 97c15f7360ea1d5919ae8a477aa2de4f59ccacf587b45f698465806b0acd81b5: Status 404 returned error can't find the container with id 97c15f7360ea1d5919ae8a477aa2de4f59ccacf587b45f698465806b0acd81b5 Jan 27 19:01:01 crc kubenswrapper[4853]: I0127 19:01:01.267136 4853 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-85ff748b95-b9mgv" Jan 27 19:01:01 crc kubenswrapper[4853]: I0127 19:01:01.398341 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/fc5f9a53-3e79-4d10-9c56-18cf2c2a3de0-dns-swift-storage-0\") pod \"fc5f9a53-3e79-4d10-9c56-18cf2c2a3de0\" (UID: \"fc5f9a53-3e79-4d10-9c56-18cf2c2a3de0\") " Jan 27 19:01:01 crc kubenswrapper[4853]: I0127 19:01:01.398408 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fc5f9a53-3e79-4d10-9c56-18cf2c2a3de0-ovsdbserver-nb\") pod \"fc5f9a53-3e79-4d10-9c56-18cf2c2a3de0\" (UID: \"fc5f9a53-3e79-4d10-9c56-18cf2c2a3de0\") " Jan 27 19:01:01 crc kubenswrapper[4853]: I0127 19:01:01.398608 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fc5f9a53-3e79-4d10-9c56-18cf2c2a3de0-ovsdbserver-sb\") pod \"fc5f9a53-3e79-4d10-9c56-18cf2c2a3de0\" (UID: \"fc5f9a53-3e79-4d10-9c56-18cf2c2a3de0\") " Jan 27 19:01:01 crc kubenswrapper[4853]: I0127 19:01:01.398670 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fc5f9a53-3e79-4d10-9c56-18cf2c2a3de0-dns-svc\") pod \"fc5f9a53-3e79-4d10-9c56-18cf2c2a3de0\" (UID: \"fc5f9a53-3e79-4d10-9c56-18cf2c2a3de0\") " Jan 27 19:01:01 crc kubenswrapper[4853]: I0127 19:01:01.398732 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fc5f9a53-3e79-4d10-9c56-18cf2c2a3de0-config\") pod \"fc5f9a53-3e79-4d10-9c56-18cf2c2a3de0\" (UID: \"fc5f9a53-3e79-4d10-9c56-18cf2c2a3de0\") " Jan 27 19:01:01 crc kubenswrapper[4853]: I0127 19:01:01.398781 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-46mbw\" (UniqueName: \"kubernetes.io/projected/fc5f9a53-3e79-4d10-9c56-18cf2c2a3de0-kube-api-access-46mbw\") pod \"fc5f9a53-3e79-4d10-9c56-18cf2c2a3de0\" (UID: \"fc5f9a53-3e79-4d10-9c56-18cf2c2a3de0\") " Jan 27 19:01:01 crc kubenswrapper[4853]: I0127 19:01:01.406088 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fc5f9a53-3e79-4d10-9c56-18cf2c2a3de0-kube-api-access-46mbw" (OuterVolumeSpecName: "kube-api-access-46mbw") pod "fc5f9a53-3e79-4d10-9c56-18cf2c2a3de0" (UID: "fc5f9a53-3e79-4d10-9c56-18cf2c2a3de0"). InnerVolumeSpecName "kube-api-access-46mbw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 19:01:01 crc kubenswrapper[4853]: I0127 19:01:01.455002 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29492341-snj9s" event={"ID":"7d4283a6-8ac9-4d5d-9a33-c753064f6930","Type":"ContainerStarted","Data":"3d4b37f07544779c9abb0020637acf6a4f63f29e6f5e49153fdbba7ae8922eaa"} Jan 27 19:01:01 crc kubenswrapper[4853]: I0127 19:01:01.455067 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29492341-snj9s" event={"ID":"7d4283a6-8ac9-4d5d-9a33-c753064f6930","Type":"ContainerStarted","Data":"97c15f7360ea1d5919ae8a477aa2de4f59ccacf587b45f698465806b0acd81b5"} Jan 27 19:01:01 crc kubenswrapper[4853]: I0127 19:01:01.460723 4853 generic.go:334] "Generic (PLEG): container finished" podID="fc5f9a53-3e79-4d10-9c56-18cf2c2a3de0" containerID="ff8ba29018736b8f3fc56d6a908d2e3230f2928a314d3e894421fe0a4803498a" exitCode=0 Jan 27 19:01:01 crc kubenswrapper[4853]: I0127 19:01:01.461002 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="54c6dba3-18e8-4a4a-9437-a413c96cbcbb" containerName="cinder-scheduler" containerID="cri-o://77f9a80f0585318b6a15cb0869ecdd418e768516c431ba82121fbb6e090cd8ed" gracePeriod=30 Jan 27 19:01:01 crc kubenswrapper[4853]: I0127 19:01:01.461870 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="54c6dba3-18e8-4a4a-9437-a413c96cbcbb" containerName="probe" containerID="cri-o://5c77ed1ae233815f0f8a4e56da08081c38785a3043250c36376141162a2495a2" gracePeriod=30 Jan 27 19:01:01 crc kubenswrapper[4853]: I0127 19:01:01.462026 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-85ff748b95-b9mgv" Jan 27 19:01:01 crc kubenswrapper[4853]: I0127 19:01:01.462074 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85ff748b95-b9mgv" event={"ID":"fc5f9a53-3e79-4d10-9c56-18cf2c2a3de0","Type":"ContainerDied","Data":"ff8ba29018736b8f3fc56d6a908d2e3230f2928a314d3e894421fe0a4803498a"} Jan 27 19:01:01 crc kubenswrapper[4853]: I0127 19:01:01.462189 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85ff748b95-b9mgv" event={"ID":"fc5f9a53-3e79-4d10-9c56-18cf2c2a3de0","Type":"ContainerDied","Data":"8a6ef8bdee7def587ba03d38f8b5b894e3c39313a6c11751e423fa57f420bff4"} Jan 27 19:01:01 crc kubenswrapper[4853]: I0127 19:01:01.462216 4853 scope.go:117] "RemoveContainer" containerID="ff8ba29018736b8f3fc56d6a908d2e3230f2928a314d3e894421fe0a4803498a" Jan 27 19:01:01 crc kubenswrapper[4853]: I0127 19:01:01.467291 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fc5f9a53-3e79-4d10-9c56-18cf2c2a3de0-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "fc5f9a53-3e79-4d10-9c56-18cf2c2a3de0" (UID: "fc5f9a53-3e79-4d10-9c56-18cf2c2a3de0"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 19:01:01 crc kubenswrapper[4853]: I0127 19:01:01.496271 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fc5f9a53-3e79-4d10-9c56-18cf2c2a3de0-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "fc5f9a53-3e79-4d10-9c56-18cf2c2a3de0" (UID: "fc5f9a53-3e79-4d10-9c56-18cf2c2a3de0"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 19:01:01 crc kubenswrapper[4853]: I0127 19:01:01.499378 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fc5f9a53-3e79-4d10-9c56-18cf2c2a3de0-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "fc5f9a53-3e79-4d10-9c56-18cf2c2a3de0" (UID: "fc5f9a53-3e79-4d10-9c56-18cf2c2a3de0"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 19:01:01 crc kubenswrapper[4853]: I0127 19:01:01.502612 4853 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fc5f9a53-3e79-4d10-9c56-18cf2c2a3de0-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 27 19:01:01 crc kubenswrapper[4853]: I0127 19:01:01.502685 4853 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fc5f9a53-3e79-4d10-9c56-18cf2c2a3de0-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 27 19:01:01 crc kubenswrapper[4853]: I0127 19:01:01.502701 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-46mbw\" (UniqueName: \"kubernetes.io/projected/fc5f9a53-3e79-4d10-9c56-18cf2c2a3de0-kube-api-access-46mbw\") on node \"crc\" DevicePath \"\"" Jan 27 19:01:01 crc kubenswrapper[4853]: I0127 19:01:01.502715 4853 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fc5f9a53-3e79-4d10-9c56-18cf2c2a3de0-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 27 19:01:01 crc kubenswrapper[4853]: I0127 19:01:01.505384 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-cron-29492341-snj9s" podStartSLOduration=1.5053524299999999 podStartE2EDuration="1.50535243s" podCreationTimestamp="2026-01-27 19:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 19:01:01.480030921 +0000 UTC m=+1103.942573804" watchObservedRunningTime="2026-01-27 19:01:01.50535243 +0000 UTC m=+1103.967895313" Jan 27 19:01:01 crc kubenswrapper[4853]: I0127 19:01:01.519014 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fc5f9a53-3e79-4d10-9c56-18cf2c2a3de0-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "fc5f9a53-3e79-4d10-9c56-18cf2c2a3de0" (UID: "fc5f9a53-3e79-4d10-9c56-18cf2c2a3de0"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 19:01:01 crc kubenswrapper[4853]: I0127 19:01:01.528086 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fc5f9a53-3e79-4d10-9c56-18cf2c2a3de0-config" (OuterVolumeSpecName: "config") pod "fc5f9a53-3e79-4d10-9c56-18cf2c2a3de0" (UID: "fc5f9a53-3e79-4d10-9c56-18cf2c2a3de0"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 19:01:01 crc kubenswrapper[4853]: I0127 19:01:01.605080 4853 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fc5f9a53-3e79-4d10-9c56-18cf2c2a3de0-config\") on node \"crc\" DevicePath \"\"" Jan 27 19:01:01 crc kubenswrapper[4853]: I0127 19:01:01.605113 4853 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/fc5f9a53-3e79-4d10-9c56-18cf2c2a3de0-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 27 19:01:01 crc kubenswrapper[4853]: I0127 19:01:01.640744 4853 scope.go:117] "RemoveContainer" containerID="b8b882823d8465959718df6f9bb5e9cf5be2fcc156e1ca6b6e9f9d8e70dbd8bb" Jan 27 19:01:01 crc kubenswrapper[4853]: I0127 19:01:01.662482 4853 scope.go:117] "RemoveContainer" containerID="ff8ba29018736b8f3fc56d6a908d2e3230f2928a314d3e894421fe0a4803498a" Jan 27 19:01:01 crc kubenswrapper[4853]: E0127 19:01:01.663205 4853 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ff8ba29018736b8f3fc56d6a908d2e3230f2928a314d3e894421fe0a4803498a\": container with ID starting with ff8ba29018736b8f3fc56d6a908d2e3230f2928a314d3e894421fe0a4803498a not found: ID does not exist" containerID="ff8ba29018736b8f3fc56d6a908d2e3230f2928a314d3e894421fe0a4803498a" Jan 27 19:01:01 crc kubenswrapper[4853]: I0127 19:01:01.663269 4853 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ff8ba29018736b8f3fc56d6a908d2e3230f2928a314d3e894421fe0a4803498a"} err="failed to get container status \"ff8ba29018736b8f3fc56d6a908d2e3230f2928a314d3e894421fe0a4803498a\": rpc error: code = NotFound desc = could not find container \"ff8ba29018736b8f3fc56d6a908d2e3230f2928a314d3e894421fe0a4803498a\": container with ID starting with ff8ba29018736b8f3fc56d6a908d2e3230f2928a314d3e894421fe0a4803498a not found: ID does not exist" Jan 27 19:01:01 crc kubenswrapper[4853]: I0127 19:01:01.663319 4853 scope.go:117] "RemoveContainer" containerID="b8b882823d8465959718df6f9bb5e9cf5be2fcc156e1ca6b6e9f9d8e70dbd8bb" Jan 27 19:01:01 crc kubenswrapper[4853]: E0127 19:01:01.664535 4853 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b8b882823d8465959718df6f9bb5e9cf5be2fcc156e1ca6b6e9f9d8e70dbd8bb\": container with ID starting with b8b882823d8465959718df6f9bb5e9cf5be2fcc156e1ca6b6e9f9d8e70dbd8bb not found: ID does not exist" containerID="b8b882823d8465959718df6f9bb5e9cf5be2fcc156e1ca6b6e9f9d8e70dbd8bb" Jan 27 19:01:01 crc kubenswrapper[4853]: I0127 19:01:01.664568 4853 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b8b882823d8465959718df6f9bb5e9cf5be2fcc156e1ca6b6e9f9d8e70dbd8bb"} err="failed to get container status \"b8b882823d8465959718df6f9bb5e9cf5be2fcc156e1ca6b6e9f9d8e70dbd8bb\": rpc error: code = NotFound desc = could not find container \"b8b882823d8465959718df6f9bb5e9cf5be2fcc156e1ca6b6e9f9d8e70dbd8bb\": container with ID starting with b8b882823d8465959718df6f9bb5e9cf5be2fcc156e1ca6b6e9f9d8e70dbd8bb not found: ID does not exist" Jan 27 19:01:01 crc kubenswrapper[4853]: I0127 19:01:01.807570 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-b9mgv"] Jan 27 19:01:01 crc kubenswrapper[4853]: I0127 19:01:01.815589 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openstack/dnsmasq-dns-85ff748b95-b9mgv"] Jan 27 19:01:02 crc kubenswrapper[4853]: I0127 19:01:02.034964 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/keystone-54f5975d7b-jvtmz" Jan 27 19:01:02 crc kubenswrapper[4853]: I0127 19:01:02.129938 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fc5f9a53-3e79-4d10-9c56-18cf2c2a3de0" path="/var/lib/kubelet/pods/fc5f9a53-3e79-4d10-9c56-18cf2c2a3de0/volumes" Jan 27 19:01:02 crc kubenswrapper[4853]: I0127 19:01:02.477472 4853 generic.go:334] "Generic (PLEG): container finished" podID="54c6dba3-18e8-4a4a-9437-a413c96cbcbb" containerID="5c77ed1ae233815f0f8a4e56da08081c38785a3043250c36376141162a2495a2" exitCode=0 Jan 27 19:01:02 crc kubenswrapper[4853]: I0127 19:01:02.477533 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"54c6dba3-18e8-4a4a-9437-a413c96cbcbb","Type":"ContainerDied","Data":"5c77ed1ae233815f0f8a4e56da08081c38785a3043250c36376141162a2495a2"} Jan 27 19:01:03 crc kubenswrapper[4853]: I0127 19:01:03.488481 4853 generic.go:334] "Generic (PLEG): container finished" podID="7d4283a6-8ac9-4d5d-9a33-c753064f6930" containerID="3d4b37f07544779c9abb0020637acf6a4f63f29e6f5e49153fdbba7ae8922eaa" exitCode=0 Jan 27 19:01:03 crc kubenswrapper[4853]: I0127 19:01:03.488673 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29492341-snj9s" event={"ID":"7d4283a6-8ac9-4d5d-9a33-c753064f6930","Type":"ContainerDied","Data":"3d4b37f07544779c9abb0020637acf6a4f63f29e6f5e49153fdbba7ae8922eaa"} Jan 27 19:01:03 crc kubenswrapper[4853]: I0127 19:01:03.652775 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Jan 27 19:01:03 crc kubenswrapper[4853]: E0127 19:01:03.653602 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fc5f9a53-3e79-4d10-9c56-18cf2c2a3de0" containerName="init" Jan 27 19:01:03 crc kubenswrapper[4853]: I0127 19:01:03.653707 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="fc5f9a53-3e79-4d10-9c56-18cf2c2a3de0" containerName="init" Jan 27 19:01:03 crc kubenswrapper[4853]: E0127 19:01:03.653832 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fc5f9a53-3e79-4d10-9c56-18cf2c2a3de0" containerName="dnsmasq-dns" Jan 27 19:01:03 crc kubenswrapper[4853]: I0127 19:01:03.653905 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="fc5f9a53-3e79-4d10-9c56-18cf2c2a3de0" containerName="dnsmasq-dns" Jan 27 19:01:03 crc kubenswrapper[4853]: I0127 19:01:03.654587 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="fc5f9a53-3e79-4d10-9c56-18cf2c2a3de0" containerName="dnsmasq-dns" Jan 27 19:01:03 crc kubenswrapper[4853]: I0127 19:01:03.655324 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Jan 27 19:01:03 crc kubenswrapper[4853]: I0127 19:01:03.657916 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config" Jan 27 19:01:03 crc kubenswrapper[4853]: I0127 19:01:03.658065 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret" Jan 27 19:01:03 crc kubenswrapper[4853]: I0127 19:01:03.658368 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstackclient-openstackclient-dockercfg-m2jcj" Jan 27 19:01:03 crc kubenswrapper[4853]: I0127 19:01:03.663343 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Jan 27 19:01:03 crc kubenswrapper[4853]: I0127 19:01:03.755685 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/57e7a062-e8a4-457a-909c-7f7922327a1e-openstack-config\") pod \"openstackclient\" (UID: \"57e7a062-e8a4-457a-909c-7f7922327a1e\") " pod="openstack/openstackclient" Jan 27 19:01:03 crc kubenswrapper[4853]: I0127 19:01:03.755801 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mftt9\" (UniqueName: \"kubernetes.io/projected/57e7a062-e8a4-457a-909c-7f7922327a1e-kube-api-access-mftt9\") pod \"openstackclient\" (UID: \"57e7a062-e8a4-457a-909c-7f7922327a1e\") " pod="openstack/openstackclient" Jan 27 19:01:03 crc kubenswrapper[4853]: I0127 19:01:03.755841 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/57e7a062-e8a4-457a-909c-7f7922327a1e-combined-ca-bundle\") pod \"openstackclient\" (UID: \"57e7a062-e8a4-457a-909c-7f7922327a1e\") " pod="openstack/openstackclient" Jan 27 19:01:03 crc kubenswrapper[4853]: I0127 19:01:03.755871 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/57e7a062-e8a4-457a-909c-7f7922327a1e-openstack-config-secret\") pod \"openstackclient\" (UID: \"57e7a062-e8a4-457a-909c-7f7922327a1e\") " pod="openstack/openstackclient" Jan 27 19:01:03 crc kubenswrapper[4853]: I0127 19:01:03.857534 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/57e7a062-e8a4-457a-909c-7f7922327a1e-openstack-config\") pod \"openstackclient\" (UID: \"57e7a062-e8a4-457a-909c-7f7922327a1e\") " pod="openstack/openstackclient" Jan 27 19:01:03 crc kubenswrapper[4853]: I0127 19:01:03.858062 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mftt9\" (UniqueName: \"kubernetes.io/projected/57e7a062-e8a4-457a-909c-7f7922327a1e-kube-api-access-mftt9\") pod \"openstackclient\" (UID: \"57e7a062-e8a4-457a-909c-7f7922327a1e\") " pod="openstack/openstackclient" Jan 27 19:01:03 crc kubenswrapper[4853]: I0127 19:01:03.858195 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/57e7a062-e8a4-457a-909c-7f7922327a1e-combined-ca-bundle\") pod \"openstackclient\" (UID: \"57e7a062-e8a4-457a-909c-7f7922327a1e\") " pod="openstack/openstackclient" Jan 27 19:01:03 crc kubenswrapper[4853]: I0127 19:01:03.858285 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/57e7a062-e8a4-457a-909c-7f7922327a1e-openstack-config-secret\") pod \"openstackclient\" (UID: \"57e7a062-e8a4-457a-909c-7f7922327a1e\") " pod="openstack/openstackclient" Jan 27 19:01:03 crc kubenswrapper[4853]: I0127 19:01:03.858824 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/57e7a062-e8a4-457a-909c-7f7922327a1e-openstack-config\") pod \"openstackclient\" (UID: \"57e7a062-e8a4-457a-909c-7f7922327a1e\") " pod="openstack/openstackclient" Jan 27 19:01:03 crc kubenswrapper[4853]: I0127 19:01:03.866503 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/57e7a062-e8a4-457a-909c-7f7922327a1e-openstack-config-secret\") pod \"openstackclient\" (UID: \"57e7a062-e8a4-457a-909c-7f7922327a1e\") " pod="openstack/openstackclient" Jan 27 19:01:03 crc kubenswrapper[4853]: I0127 19:01:03.867411 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/57e7a062-e8a4-457a-909c-7f7922327a1e-combined-ca-bundle\") pod \"openstackclient\" (UID: \"57e7a062-e8a4-457a-909c-7f7922327a1e\") " pod="openstack/openstackclient" Jan 27 19:01:03 crc kubenswrapper[4853]: I0127 19:01:03.877229 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mftt9\" (UniqueName: \"kubernetes.io/projected/57e7a062-e8a4-457a-909c-7f7922327a1e-kube-api-access-mftt9\") pod \"openstackclient\" (UID: \"57e7a062-e8a4-457a-909c-7f7922327a1e\") " pod="openstack/openstackclient" Jan 27 19:01:03 crc kubenswrapper[4853]: I0127 19:01:03.975384 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Jan 27 19:01:04 crc kubenswrapper[4853]: I0127 19:01:04.495317 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Jan 27 19:01:04 crc kubenswrapper[4853]: I0127 19:01:04.521699 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"57e7a062-e8a4-457a-909c-7f7922327a1e","Type":"ContainerStarted","Data":"4e1608a2260973cfb30d91f282dc8d6cc573c41c60eb45337f4604836bdebb4e"} Jan 27 19:01:04 crc kubenswrapper[4853]: I0127 19:01:04.853700 4853 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29492341-snj9s" Jan 27 19:01:04 crc kubenswrapper[4853]: I0127 19:01:04.880889 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7d4283a6-8ac9-4d5d-9a33-c753064f6930-config-data\") pod \"7d4283a6-8ac9-4d5d-9a33-c753064f6930\" (UID: \"7d4283a6-8ac9-4d5d-9a33-c753064f6930\") " Jan 27 19:01:04 crc kubenswrapper[4853]: I0127 19:01:04.880993 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7d4283a6-8ac9-4d5d-9a33-c753064f6930-combined-ca-bundle\") pod \"7d4283a6-8ac9-4d5d-9a33-c753064f6930\" (UID: \"7d4283a6-8ac9-4d5d-9a33-c753064f6930\") " Jan 27 19:01:04 crc kubenswrapper[4853]: I0127 19:01:04.881291 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8wzpm\" (UniqueName: \"kubernetes.io/projected/7d4283a6-8ac9-4d5d-9a33-c753064f6930-kube-api-access-8wzpm\") pod \"7d4283a6-8ac9-4d5d-9a33-c753064f6930\" (UID: \"7d4283a6-8ac9-4d5d-9a33-c753064f6930\") " Jan 27 19:01:04 crc kubenswrapper[4853]: I0127 19:01:04.881372 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/7d4283a6-8ac9-4d5d-9a33-c753064f6930-fernet-keys\") pod \"7d4283a6-8ac9-4d5d-9a33-c753064f6930\" (UID: \"7d4283a6-8ac9-4d5d-9a33-c753064f6930\") " Jan 27 19:01:04 crc kubenswrapper[4853]: I0127 19:01:04.906598 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7d4283a6-8ac9-4d5d-9a33-c753064f6930-kube-api-access-8wzpm" (OuterVolumeSpecName: "kube-api-access-8wzpm") pod "7d4283a6-8ac9-4d5d-9a33-c753064f6930" (UID: "7d4283a6-8ac9-4d5d-9a33-c753064f6930"). InnerVolumeSpecName "kube-api-access-8wzpm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 19:01:04 crc kubenswrapper[4853]: I0127 19:01:04.922442 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7d4283a6-8ac9-4d5d-9a33-c753064f6930-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "7d4283a6-8ac9-4d5d-9a33-c753064f6930" (UID: "7d4283a6-8ac9-4d5d-9a33-c753064f6930"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 19:01:04 crc kubenswrapper[4853]: I0127 19:01:04.932305 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7d4283a6-8ac9-4d5d-9a33-c753064f6930-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7d4283a6-8ac9-4d5d-9a33-c753064f6930" (UID: "7d4283a6-8ac9-4d5d-9a33-c753064f6930"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 19:01:04 crc kubenswrapper[4853]: I0127 19:01:04.997369 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8wzpm\" (UniqueName: \"kubernetes.io/projected/7d4283a6-8ac9-4d5d-9a33-c753064f6930-kube-api-access-8wzpm\") on node \"crc\" DevicePath \"\"" Jan 27 19:01:04 crc kubenswrapper[4853]: I0127 19:01:04.997407 4853 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/7d4283a6-8ac9-4d5d-9a33-c753064f6930-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 27 19:01:04 crc kubenswrapper[4853]: I0127 19:01:04.997422 4853 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7d4283a6-8ac9-4d5d-9a33-c753064f6930-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 19:01:05 crc kubenswrapper[4853]: I0127 19:01:04.999743 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7d4283a6-8ac9-4d5d-9a33-c753064f6930-config-data" (OuterVolumeSpecName: "config-data") pod "7d4283a6-8ac9-4d5d-9a33-c753064f6930" (UID: "7d4283a6-8ac9-4d5d-9a33-c753064f6930"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 19:01:05 crc kubenswrapper[4853]: I0127 19:01:05.099914 4853 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7d4283a6-8ac9-4d5d-9a33-c753064f6930-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 19:01:05 crc kubenswrapper[4853]: I0127 19:01:05.109132 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 27 19:01:05 crc kubenswrapper[4853]: I0127 19:01:05.201829 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/54c6dba3-18e8-4a4a-9437-a413c96cbcbb-combined-ca-bundle\") pod \"54c6dba3-18e8-4a4a-9437-a413c96cbcbb\" (UID: \"54c6dba3-18e8-4a4a-9437-a413c96cbcbb\") " Jan 27 19:01:05 crc kubenswrapper[4853]: I0127 19:01:05.202102 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngblg\" (UniqueName: \"kubernetes.io/projected/54c6dba3-18e8-4a4a-9437-a413c96cbcbb-kube-api-access-ngblg\") pod \"54c6dba3-18e8-4a4a-9437-a413c96cbcbb\" (UID: \"54c6dba3-18e8-4a4a-9437-a413c96cbcbb\") " Jan 27 19:01:05 crc kubenswrapper[4853]: I0127 19:01:05.202172 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/54c6dba3-18e8-4a4a-9437-a413c96cbcbb-etc-machine-id\") pod \"54c6dba3-18e8-4a4a-9437-a413c96cbcbb\" (UID: \"54c6dba3-18e8-4a4a-9437-a413c96cbcbb\") " Jan 27 19:01:05 crc kubenswrapper[4853]: I0127 19:01:05.202230 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/54c6dba3-18e8-4a4a-9437-a413c96cbcbb-config-data\") pod \"54c6dba3-18e8-4a4a-9437-a413c96cbcbb\" (UID: \"54c6dba3-18e8-4a4a-9437-a413c96cbcbb\") " Jan 27 19:01:05 crc kubenswrapper[4853]: I0127 19:01:05.202282 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/54c6dba3-18e8-4a4a-9437-a413c96cbcbb-config-data-custom\") pod \"54c6dba3-18e8-4a4a-9437-a413c96cbcbb\" (UID: \"54c6dba3-18e8-4a4a-9437-a413c96cbcbb\") " Jan 27 19:01:05 crc 
kubenswrapper[4853]: I0127 19:01:05.202370 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/54c6dba3-18e8-4a4a-9437-a413c96cbcbb-scripts\") pod \"54c6dba3-18e8-4a4a-9437-a413c96cbcbb\" (UID: \"54c6dba3-18e8-4a4a-9437-a413c96cbcbb\") " Jan 27 19:01:05 crc kubenswrapper[4853]: I0127 19:01:05.204426 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/54c6dba3-18e8-4a4a-9437-a413c96cbcbb-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "54c6dba3-18e8-4a4a-9437-a413c96cbcbb" (UID: "54c6dba3-18e8-4a4a-9437-a413c96cbcbb"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 19:01:05 crc kubenswrapper[4853]: I0127 19:01:05.210383 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/54c6dba3-18e8-4a4a-9437-a413c96cbcbb-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "54c6dba3-18e8-4a4a-9437-a413c96cbcbb" (UID: "54c6dba3-18e8-4a4a-9437-a413c96cbcbb"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 19:01:05 crc kubenswrapper[4853]: I0127 19:01:05.214305 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/54c6dba3-18e8-4a4a-9437-a413c96cbcbb-kube-api-access-ngblg" (OuterVolumeSpecName: "kube-api-access-ngblg") pod "54c6dba3-18e8-4a4a-9437-a413c96cbcbb" (UID: "54c6dba3-18e8-4a4a-9437-a413c96cbcbb"). InnerVolumeSpecName "kube-api-access-ngblg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 19:01:05 crc kubenswrapper[4853]: I0127 19:01:05.223471 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/54c6dba3-18e8-4a4a-9437-a413c96cbcbb-scripts" (OuterVolumeSpecName: "scripts") pod "54c6dba3-18e8-4a4a-9437-a413c96cbcbb" (UID: "54c6dba3-18e8-4a4a-9437-a413c96cbcbb"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 19:01:05 crc kubenswrapper[4853]: I0127 19:01:05.285752 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/54c6dba3-18e8-4a4a-9437-a413c96cbcbb-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "54c6dba3-18e8-4a4a-9437-a413c96cbcbb" (UID: "54c6dba3-18e8-4a4a-9437-a413c96cbcbb"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 19:01:05 crc kubenswrapper[4853]: I0127 19:01:05.305796 4853 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/54c6dba3-18e8-4a4a-9437-a413c96cbcbb-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 19:01:05 crc kubenswrapper[4853]: I0127 19:01:05.305827 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngblg\" (UniqueName: \"kubernetes.io/projected/54c6dba3-18e8-4a4a-9437-a413c96cbcbb-kube-api-access-ngblg\") on node \"crc\" DevicePath \"\"" Jan 27 19:01:05 crc kubenswrapper[4853]: I0127 19:01:05.305839 4853 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/54c6dba3-18e8-4a4a-9437-a413c96cbcbb-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 27 19:01:05 crc kubenswrapper[4853]: I0127 19:01:05.305848 4853 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/54c6dba3-18e8-4a4a-9437-a413c96cbcbb-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 27 19:01:05 crc kubenswrapper[4853]: I0127 19:01:05.305857 4853 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/54c6dba3-18e8-4a4a-9437-a413c96cbcbb-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 19:01:05 crc kubenswrapper[4853]: I0127 19:01:05.307214 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/54c6dba3-18e8-4a4a-9437-a413c96cbcbb-config-data" (OuterVolumeSpecName: "config-data") pod "54c6dba3-18e8-4a4a-9437-a413c96cbcbb" (UID: "54c6dba3-18e8-4a4a-9437-a413c96cbcbb"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 19:01:05 crc kubenswrapper[4853]: I0127 19:01:05.408495 4853 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/54c6dba3-18e8-4a4a-9437-a413c96cbcbb-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 19:01:05 crc kubenswrapper[4853]: I0127 19:01:05.541261 4853 patch_prober.go:28] interesting pod/machine-config-daemon-6gqj2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 19:01:05 crc kubenswrapper[4853]: I0127 19:01:05.541347 4853 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6gqj2" podUID="b8a89b1e-bef8-4cb7-930c-480d3125778c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 19:01:05 crc kubenswrapper[4853]: I0127 19:01:05.544000 4853 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29492341-snj9s" Jan 27 19:01:05 crc kubenswrapper[4853]: I0127 19:01:05.544053 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29492341-snj9s" event={"ID":"7d4283a6-8ac9-4d5d-9a33-c753064f6930","Type":"ContainerDied","Data":"97c15f7360ea1d5919ae8a477aa2de4f59ccacf587b45f698465806b0acd81b5"} Jan 27 19:01:05 crc kubenswrapper[4853]: I0127 19:01:05.544131 4853 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="97c15f7360ea1d5919ae8a477aa2de4f59ccacf587b45f698465806b0acd81b5" Jan 27 19:01:05 crc kubenswrapper[4853]: I0127 19:01:05.554533 4853 generic.go:334] "Generic (PLEG): container finished" podID="54c6dba3-18e8-4a4a-9437-a413c96cbcbb" containerID="77f9a80f0585318b6a15cb0869ecdd418e768516c431ba82121fbb6e090cd8ed" exitCode=0 Jan 27 19:01:05 crc kubenswrapper[4853]: I0127 19:01:05.554589 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"54c6dba3-18e8-4a4a-9437-a413c96cbcbb","Type":"ContainerDied","Data":"77f9a80f0585318b6a15cb0869ecdd418e768516c431ba82121fbb6e090cd8ed"} Jan 27 19:01:05 crc kubenswrapper[4853]: I0127 19:01:05.554627 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"54c6dba3-18e8-4a4a-9437-a413c96cbcbb","Type":"ContainerDied","Data":"a8bfd4b546eb4e4c9c431ec68ca4f73b2578bd7f1a96a9b098ed2e15a61439a8"} Jan 27 19:01:05 crc kubenswrapper[4853]: I0127 19:01:05.554656 4853 scope.go:117] "RemoveContainer" containerID="5c77ed1ae233815f0f8a4e56da08081c38785a3043250c36376141162a2495a2" Jan 27 19:01:05 crc kubenswrapper[4853]: I0127 19:01:05.554817 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 27 19:01:05 crc kubenswrapper[4853]: I0127 19:01:05.653869 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 27 19:01:05 crc kubenswrapper[4853]: I0127 19:01:05.654319 4853 scope.go:117] "RemoveContainer" containerID="77f9a80f0585318b6a15cb0869ecdd418e768516c431ba82121fbb6e090cd8ed" Jan 27 19:01:05 crc kubenswrapper[4853]: I0127 19:01:05.670417 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 27 19:01:05 crc kubenswrapper[4853]: I0127 19:01:05.680400 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Jan 27 19:01:05 crc kubenswrapper[4853]: E0127 19:01:05.680809 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="54c6dba3-18e8-4a4a-9437-a413c96cbcbb" containerName="probe" Jan 27 19:01:05 crc kubenswrapper[4853]: I0127 19:01:05.680828 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="54c6dba3-18e8-4a4a-9437-a413c96cbcbb" containerName="probe" Jan 27 19:01:05 crc kubenswrapper[4853]: E0127 19:01:05.680847 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7d4283a6-8ac9-4d5d-9a33-c753064f6930" containerName="keystone-cron" Jan 27 19:01:05 crc kubenswrapper[4853]: I0127 19:01:05.680853 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="7d4283a6-8ac9-4d5d-9a33-c753064f6930" containerName="keystone-cron" Jan 27 19:01:05 crc kubenswrapper[4853]: E0127 19:01:05.680868 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="54c6dba3-18e8-4a4a-9437-a413c96cbcbb" containerName="cinder-scheduler" Jan 27 19:01:05 crc kubenswrapper[4853]: I0127 19:01:05.680873 4853 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="54c6dba3-18e8-4a4a-9437-a413c96cbcbb" containerName="cinder-scheduler" Jan 27 19:01:05 crc kubenswrapper[4853]: I0127 19:01:05.681067 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="54c6dba3-18e8-4a4a-9437-a413c96cbcbb" containerName="cinder-scheduler" Jan 27 19:01:05 crc kubenswrapper[4853]: I0127 19:01:05.681089 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="7d4283a6-8ac9-4d5d-9a33-c753064f6930" containerName="keystone-cron" Jan 27 19:01:05 crc kubenswrapper[4853]: I0127 19:01:05.681102 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="54c6dba3-18e8-4a4a-9437-a413c96cbcbb" containerName="probe" Jan 27 19:01:05 crc kubenswrapper[4853]: I0127 19:01:05.683701 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 27 19:01:05 crc kubenswrapper[4853]: I0127 19:01:05.690780 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Jan 27 19:01:05 crc kubenswrapper[4853]: I0127 19:01:05.714703 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/13916d35-368a-417b-bfea-4f82d71797c3-scripts\") pod \"cinder-scheduler-0\" (UID: \"13916d35-368a-417b-bfea-4f82d71797c3\") " pod="openstack/cinder-scheduler-0" Jan 27 19:01:05 crc kubenswrapper[4853]: I0127 19:01:05.714804 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/13916d35-368a-417b-bfea-4f82d71797c3-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"13916d35-368a-417b-bfea-4f82d71797c3\") " pod="openstack/cinder-scheduler-0" Jan 27 19:01:05 crc kubenswrapper[4853]: I0127 19:01:05.714854 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/13916d35-368a-417b-bfea-4f82d71797c3-config-data\") pod \"cinder-scheduler-0\" (UID: \"13916d35-368a-417b-bfea-4f82d71797c3\") " pod="openstack/cinder-scheduler-0" Jan 27 19:01:05 crc kubenswrapper[4853]: I0127 19:01:05.714887 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c46c9\" (UniqueName: \"kubernetes.io/projected/13916d35-368a-417b-bfea-4f82d71797c3-kube-api-access-c46c9\") pod \"cinder-scheduler-0\" (UID: \"13916d35-368a-417b-bfea-4f82d71797c3\") " pod="openstack/cinder-scheduler-0" Jan 27 19:01:05 crc kubenswrapper[4853]: I0127 19:01:05.714941 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/13916d35-368a-417b-bfea-4f82d71797c3-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"13916d35-368a-417b-bfea-4f82d71797c3\") " pod="openstack/cinder-scheduler-0" Jan 27 19:01:05 crc kubenswrapper[4853]: I0127 19:01:05.715017 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/13916d35-368a-417b-bfea-4f82d71797c3-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"13916d35-368a-417b-bfea-4f82d71797c3\") " pod="openstack/cinder-scheduler-0" Jan 27 19:01:05 crc kubenswrapper[4853]: I0127 19:01:05.725052 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 27 19:01:05 crc kubenswrapper[4853]: I0127 
19:01:05.765081 4853 scope.go:117] "RemoveContainer" containerID="5c77ed1ae233815f0f8a4e56da08081c38785a3043250c36376141162a2495a2" Jan 27 19:01:05 crc kubenswrapper[4853]: E0127 19:01:05.765574 4853 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5c77ed1ae233815f0f8a4e56da08081c38785a3043250c36376141162a2495a2\": container with ID starting with 5c77ed1ae233815f0f8a4e56da08081c38785a3043250c36376141162a2495a2 not found: ID does not exist" containerID="5c77ed1ae233815f0f8a4e56da08081c38785a3043250c36376141162a2495a2" Jan 27 19:01:05 crc kubenswrapper[4853]: I0127 19:01:05.765599 4853 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5c77ed1ae233815f0f8a4e56da08081c38785a3043250c36376141162a2495a2"} err="failed to get container status \"5c77ed1ae233815f0f8a4e56da08081c38785a3043250c36376141162a2495a2\": rpc error: code = NotFound desc = could not find container \"5c77ed1ae233815f0f8a4e56da08081c38785a3043250c36376141162a2495a2\": container with ID starting with 5c77ed1ae233815f0f8a4e56da08081c38785a3043250c36376141162a2495a2 not found: ID does not exist" Jan 27 19:01:05 crc kubenswrapper[4853]: I0127 19:01:05.765624 4853 scope.go:117] "RemoveContainer" containerID="77f9a80f0585318b6a15cb0869ecdd418e768516c431ba82121fbb6e090cd8ed" Jan 27 19:01:05 crc kubenswrapper[4853]: E0127 19:01:05.765802 4853 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"77f9a80f0585318b6a15cb0869ecdd418e768516c431ba82121fbb6e090cd8ed\": container with ID starting with 77f9a80f0585318b6a15cb0869ecdd418e768516c431ba82121fbb6e090cd8ed not found: ID does not exist" containerID="77f9a80f0585318b6a15cb0869ecdd418e768516c431ba82121fbb6e090cd8ed" Jan 27 19:01:05 crc kubenswrapper[4853]: I0127 19:01:05.765820 4853 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"77f9a80f0585318b6a15cb0869ecdd418e768516c431ba82121fbb6e090cd8ed"} err="failed to get container status \"77f9a80f0585318b6a15cb0869ecdd418e768516c431ba82121fbb6e090cd8ed\": rpc error: code = NotFound desc = could not find container \"77f9a80f0585318b6a15cb0869ecdd418e768516c431ba82121fbb6e090cd8ed\": container with ID starting with 77f9a80f0585318b6a15cb0869ecdd418e768516c431ba82121fbb6e090cd8ed not found: ID does not exist" Jan 27 19:01:05 crc kubenswrapper[4853]: I0127 19:01:05.819221 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/13916d35-368a-417b-bfea-4f82d71797c3-config-data\") pod \"cinder-scheduler-0\" (UID: \"13916d35-368a-417b-bfea-4f82d71797c3\") " pod="openstack/cinder-scheduler-0" Jan 27 19:01:05 crc kubenswrapper[4853]: I0127 19:01:05.819279 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c46c9\" (UniqueName: \"kubernetes.io/projected/13916d35-368a-417b-bfea-4f82d71797c3-kube-api-access-c46c9\") pod \"cinder-scheduler-0\" (UID: \"13916d35-368a-417b-bfea-4f82d71797c3\") " pod="openstack/cinder-scheduler-0" Jan 27 19:01:05 crc kubenswrapper[4853]: I0127 19:01:05.819336 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/13916d35-368a-417b-bfea-4f82d71797c3-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"13916d35-368a-417b-bfea-4f82d71797c3\") " pod="openstack/cinder-scheduler-0" Jan 27 19:01:05 
crc kubenswrapper[4853]: I0127 19:01:05.819426 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/13916d35-368a-417b-bfea-4f82d71797c3-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"13916d35-368a-417b-bfea-4f82d71797c3\") " pod="openstack/cinder-scheduler-0" Jan 27 19:01:05 crc kubenswrapper[4853]: I0127 19:01:05.819455 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/13916d35-368a-417b-bfea-4f82d71797c3-scripts\") pod \"cinder-scheduler-0\" (UID: \"13916d35-368a-417b-bfea-4f82d71797c3\") " pod="openstack/cinder-scheduler-0" Jan 27 19:01:05 crc kubenswrapper[4853]: I0127 19:01:05.819487 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/13916d35-368a-417b-bfea-4f82d71797c3-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"13916d35-368a-417b-bfea-4f82d71797c3\") " pod="openstack/cinder-scheduler-0" Jan 27 19:01:05 crc kubenswrapper[4853]: I0127 19:01:05.823234 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/13916d35-368a-417b-bfea-4f82d71797c3-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"13916d35-368a-417b-bfea-4f82d71797c3\") " pod="openstack/cinder-scheduler-0" Jan 27 19:01:05 crc kubenswrapper[4853]: I0127 19:01:05.827418 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/13916d35-368a-417b-bfea-4f82d71797c3-config-data\") pod \"cinder-scheduler-0\" (UID: \"13916d35-368a-417b-bfea-4f82d71797c3\") " pod="openstack/cinder-scheduler-0" Jan 27 19:01:05 crc kubenswrapper[4853]: I0127 19:01:05.836215 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/13916d35-368a-417b-bfea-4f82d71797c3-scripts\") pod \"cinder-scheduler-0\" (UID: \"13916d35-368a-417b-bfea-4f82d71797c3\") " pod="openstack/cinder-scheduler-0" Jan 27 19:01:05 crc kubenswrapper[4853]: I0127 19:01:05.836781 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/13916d35-368a-417b-bfea-4f82d71797c3-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"13916d35-368a-417b-bfea-4f82d71797c3\") " pod="openstack/cinder-scheduler-0" Jan 27 19:01:05 crc kubenswrapper[4853]: I0127 19:01:05.838038 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/13916d35-368a-417b-bfea-4f82d71797c3-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"13916d35-368a-417b-bfea-4f82d71797c3\") " pod="openstack/cinder-scheduler-0" Jan 27 19:01:05 crc kubenswrapper[4853]: I0127 19:01:05.854648 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c46c9\" (UniqueName: \"kubernetes.io/projected/13916d35-368a-417b-bfea-4f82d71797c3-kube-api-access-c46c9\") pod \"cinder-scheduler-0\" (UID: \"13916d35-368a-417b-bfea-4f82d71797c3\") " pod="openstack/cinder-scheduler-0" Jan 27 19:01:06 crc kubenswrapper[4853]: I0127 19:01:06.020226 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 27 19:01:06 crc kubenswrapper[4853]: I0127 19:01:06.126084 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="54c6dba3-18e8-4a4a-9437-a413c96cbcbb" path="/var/lib/kubelet/pods/54c6dba3-18e8-4a4a-9437-a413c96cbcbb/volumes" Jan 27 19:01:06 crc kubenswrapper[4853]: I0127 19:01:06.529592 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 27 19:01:06 crc kubenswrapper[4853]: I0127 19:01:06.573728 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"13916d35-368a-417b-bfea-4f82d71797c3","Type":"ContainerStarted","Data":"b719bca6b093ea11361a95c8cbf88b7e678406b00664313a1c02b4a7c3ab077c"} Jan 27 19:01:07 crc kubenswrapper[4853]: I0127 19:01:07.651587 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"13916d35-368a-417b-bfea-4f82d71797c3","Type":"ContainerStarted","Data":"fee1480555428d8dc27f78f90ade72ad4ad92a0d0140271495c2e2da5e7f5900"} Jan 27 19:01:08 crc kubenswrapper[4853]: I0127 19:01:08.160897 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Jan 27 19:01:08 crc kubenswrapper[4853]: I0127 19:01:08.668823 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"13916d35-368a-417b-bfea-4f82d71797c3","Type":"ContainerStarted","Data":"e3f7d9343042e4face7b4925da3558a0c6b0f9132a04cf065195d987a6aed301"} Jan 27 19:01:08 crc kubenswrapper[4853]: I0127 19:01:08.692394 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=3.69237851 podStartE2EDuration="3.69237851s" podCreationTimestamp="2026-01-27 19:01:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 19:01:08.688262463 +0000 UTC m=+1111.150805346" watchObservedRunningTime="2026-01-27 19:01:08.69237851 +0000 UTC m=+1111.154921393" Jan 27 19:01:09 crc kubenswrapper[4853]: I0127 19:01:09.805022 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-proxy-6dff6d999f-xr8nv"] Jan 27 19:01:09 crc kubenswrapper[4853]: I0127 19:01:09.807911 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-proxy-6dff6d999f-xr8nv" Jan 27 19:01:09 crc kubenswrapper[4853]: I0127 19:01:09.810732 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-internal-svc" Jan 27 19:01:09 crc kubenswrapper[4853]: I0127 19:01:09.811505 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Jan 27 19:01:09 crc kubenswrapper[4853]: I0127 19:01:09.814928 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-public-svc" Jan 27 19:01:09 crc kubenswrapper[4853]: I0127 19:01:09.822886 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-6dff6d999f-xr8nv"] Jan 27 19:01:09 crc kubenswrapper[4853]: I0127 19:01:09.837730 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c029593d-ff63-4033-8bc5-39cf7e0457bd-config-data\") pod \"swift-proxy-6dff6d999f-xr8nv\" (UID: \"c029593d-ff63-4033-8bc5-39cf7e0457bd\") " pod="openstack/swift-proxy-6dff6d999f-xr8nv" Jan 27 19:01:09 crc kubenswrapper[4853]: I0127 19:01:09.837819 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c029593d-ff63-4033-8bc5-39cf7e0457bd-combined-ca-bundle\") pod \"swift-proxy-6dff6d999f-xr8nv\" (UID: \"c029593d-ff63-4033-8bc5-39cf7e0457bd\") " pod="openstack/swift-proxy-6dff6d999f-xr8nv" Jan 27 19:01:09 crc kubenswrapper[4853]: I0127 19:01:09.837851 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mmfdg\" (UniqueName: \"kubernetes.io/projected/c029593d-ff63-4033-8bc5-39cf7e0457bd-kube-api-access-mmfdg\") pod \"swift-proxy-6dff6d999f-xr8nv\" (UID: \"c029593d-ff63-4033-8bc5-39cf7e0457bd\") " pod="openstack/swift-proxy-6dff6d999f-xr8nv" Jan 27 19:01:09 crc kubenswrapper[4853]: I0127 19:01:09.837908 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c029593d-ff63-4033-8bc5-39cf7e0457bd-log-httpd\") pod \"swift-proxy-6dff6d999f-xr8nv\" (UID: \"c029593d-ff63-4033-8bc5-39cf7e0457bd\") " pod="openstack/swift-proxy-6dff6d999f-xr8nv" Jan 27 19:01:09 crc kubenswrapper[4853]: I0127 19:01:09.837979 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c029593d-ff63-4033-8bc5-39cf7e0457bd-public-tls-certs\") pod \"swift-proxy-6dff6d999f-xr8nv\" (UID: \"c029593d-ff63-4033-8bc5-39cf7e0457bd\") " pod="openstack/swift-proxy-6dff6d999f-xr8nv" Jan 27 19:01:09 crc kubenswrapper[4853]: I0127 19:01:09.838027 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/c029593d-ff63-4033-8bc5-39cf7e0457bd-etc-swift\") pod \"swift-proxy-6dff6d999f-xr8nv\" (UID: \"c029593d-ff63-4033-8bc5-39cf7e0457bd\") " pod="openstack/swift-proxy-6dff6d999f-xr8nv" Jan 27 19:01:09 crc kubenswrapper[4853]: I0127 19:01:09.838053 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c029593d-ff63-4033-8bc5-39cf7e0457bd-run-httpd\") pod \"swift-proxy-6dff6d999f-xr8nv\" (UID: \"c029593d-ff63-4033-8bc5-39cf7e0457bd\") " 
pod="openstack/swift-proxy-6dff6d999f-xr8nv" Jan 27 19:01:09 crc kubenswrapper[4853]: I0127 19:01:09.838092 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c029593d-ff63-4033-8bc5-39cf7e0457bd-internal-tls-certs\") pod \"swift-proxy-6dff6d999f-xr8nv\" (UID: \"c029593d-ff63-4033-8bc5-39cf7e0457bd\") " pod="openstack/swift-proxy-6dff6d999f-xr8nv" Jan 27 19:01:09 crc kubenswrapper[4853]: I0127 19:01:09.939602 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c029593d-ff63-4033-8bc5-39cf7e0457bd-internal-tls-certs\") pod \"swift-proxy-6dff6d999f-xr8nv\" (UID: \"c029593d-ff63-4033-8bc5-39cf7e0457bd\") " pod="openstack/swift-proxy-6dff6d999f-xr8nv" Jan 27 19:01:09 crc kubenswrapper[4853]: I0127 19:01:09.939709 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c029593d-ff63-4033-8bc5-39cf7e0457bd-config-data\") pod \"swift-proxy-6dff6d999f-xr8nv\" (UID: \"c029593d-ff63-4033-8bc5-39cf7e0457bd\") " pod="openstack/swift-proxy-6dff6d999f-xr8nv" Jan 27 19:01:09 crc kubenswrapper[4853]: I0127 19:01:09.939745 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c029593d-ff63-4033-8bc5-39cf7e0457bd-combined-ca-bundle\") pod \"swift-proxy-6dff6d999f-xr8nv\" (UID: \"c029593d-ff63-4033-8bc5-39cf7e0457bd\") " pod="openstack/swift-proxy-6dff6d999f-xr8nv" Jan 27 19:01:09 crc kubenswrapper[4853]: I0127 19:01:09.939769 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mmfdg\" (UniqueName: \"kubernetes.io/projected/c029593d-ff63-4033-8bc5-39cf7e0457bd-kube-api-access-mmfdg\") pod \"swift-proxy-6dff6d999f-xr8nv\" (UID: \"c029593d-ff63-4033-8bc5-39cf7e0457bd\") " pod="openstack/swift-proxy-6dff6d999f-xr8nv" Jan 27 19:01:09 crc kubenswrapper[4853]: I0127 19:01:09.939839 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c029593d-ff63-4033-8bc5-39cf7e0457bd-log-httpd\") pod \"swift-proxy-6dff6d999f-xr8nv\" (UID: \"c029593d-ff63-4033-8bc5-39cf7e0457bd\") " pod="openstack/swift-proxy-6dff6d999f-xr8nv" Jan 27 19:01:09 crc kubenswrapper[4853]: I0127 19:01:09.939890 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c029593d-ff63-4033-8bc5-39cf7e0457bd-public-tls-certs\") pod \"swift-proxy-6dff6d999f-xr8nv\" (UID: \"c029593d-ff63-4033-8bc5-39cf7e0457bd\") " pod="openstack/swift-proxy-6dff6d999f-xr8nv" Jan 27 19:01:09 crc kubenswrapper[4853]: I0127 19:01:09.939927 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/c029593d-ff63-4033-8bc5-39cf7e0457bd-etc-swift\") pod \"swift-proxy-6dff6d999f-xr8nv\" (UID: \"c029593d-ff63-4033-8bc5-39cf7e0457bd\") " pod="openstack/swift-proxy-6dff6d999f-xr8nv" Jan 27 19:01:09 crc kubenswrapper[4853]: I0127 19:01:09.939962 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c029593d-ff63-4033-8bc5-39cf7e0457bd-run-httpd\") pod \"swift-proxy-6dff6d999f-xr8nv\" (UID: \"c029593d-ff63-4033-8bc5-39cf7e0457bd\") " 
pod="openstack/swift-proxy-6dff6d999f-xr8nv" Jan 27 19:01:09 crc kubenswrapper[4853]: I0127 19:01:09.947203 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c029593d-ff63-4033-8bc5-39cf7e0457bd-run-httpd\") pod \"swift-proxy-6dff6d999f-xr8nv\" (UID: \"c029593d-ff63-4033-8bc5-39cf7e0457bd\") " pod="openstack/swift-proxy-6dff6d999f-xr8nv" Jan 27 19:01:09 crc kubenswrapper[4853]: I0127 19:01:09.949887 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c029593d-ff63-4033-8bc5-39cf7e0457bd-log-httpd\") pod \"swift-proxy-6dff6d999f-xr8nv\" (UID: \"c029593d-ff63-4033-8bc5-39cf7e0457bd\") " pod="openstack/swift-proxy-6dff6d999f-xr8nv" Jan 27 19:01:09 crc kubenswrapper[4853]: I0127 19:01:09.951091 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c029593d-ff63-4033-8bc5-39cf7e0457bd-config-data\") pod \"swift-proxy-6dff6d999f-xr8nv\" (UID: \"c029593d-ff63-4033-8bc5-39cf7e0457bd\") " pod="openstack/swift-proxy-6dff6d999f-xr8nv" Jan 27 19:01:09 crc kubenswrapper[4853]: I0127 19:01:09.952643 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/c029593d-ff63-4033-8bc5-39cf7e0457bd-etc-swift\") pod \"swift-proxy-6dff6d999f-xr8nv\" (UID: \"c029593d-ff63-4033-8bc5-39cf7e0457bd\") " pod="openstack/swift-proxy-6dff6d999f-xr8nv" Jan 27 19:01:09 crc kubenswrapper[4853]: I0127 19:01:09.953272 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c029593d-ff63-4033-8bc5-39cf7e0457bd-internal-tls-certs\") pod \"swift-proxy-6dff6d999f-xr8nv\" (UID: \"c029593d-ff63-4033-8bc5-39cf7e0457bd\") " pod="openstack/swift-proxy-6dff6d999f-xr8nv" Jan 27 19:01:09 crc kubenswrapper[4853]: I0127 19:01:09.954142 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c029593d-ff63-4033-8bc5-39cf7e0457bd-public-tls-certs\") pod \"swift-proxy-6dff6d999f-xr8nv\" (UID: \"c029593d-ff63-4033-8bc5-39cf7e0457bd\") " pod="openstack/swift-proxy-6dff6d999f-xr8nv" Jan 27 19:01:09 crc kubenswrapper[4853]: I0127 19:01:09.963912 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c029593d-ff63-4033-8bc5-39cf7e0457bd-combined-ca-bundle\") pod \"swift-proxy-6dff6d999f-xr8nv\" (UID: \"c029593d-ff63-4033-8bc5-39cf7e0457bd\") " pod="openstack/swift-proxy-6dff6d999f-xr8nv" Jan 27 19:01:09 crc kubenswrapper[4853]: I0127 19:01:09.964710 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mmfdg\" (UniqueName: \"kubernetes.io/projected/c029593d-ff63-4033-8bc5-39cf7e0457bd-kube-api-access-mmfdg\") pod \"swift-proxy-6dff6d999f-xr8nv\" (UID: \"c029593d-ff63-4033-8bc5-39cf7e0457bd\") " pod="openstack/swift-proxy-6dff6d999f-xr8nv" Jan 27 19:01:09 crc kubenswrapper[4853]: I0127 19:01:09.986898 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 27 19:01:09 crc kubenswrapper[4853]: I0127 19:01:09.987253 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="383a8cff-14ac-4c26-a428-302b30622b4b" containerName="glance-log" 
containerID="cri-o://9f9c738e714425b0c66056ff127e63e1640ef6cefb1e2f509c89157dea2e92d3" gracePeriod=30 Jan 27 19:01:09 crc kubenswrapper[4853]: I0127 19:01:09.987563 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="383a8cff-14ac-4c26-a428-302b30622b4b" containerName="glance-httpd" containerID="cri-o://341c1171cf83f1fd83885a8f5b5ac25bcd09d3f0b8fc03543e699f18bf59ad2e" gracePeriod=30 Jan 27 19:01:10 crc kubenswrapper[4853]: I0127 19:01:10.133360 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-6dff6d999f-xr8nv" Jan 27 19:01:10 crc kubenswrapper[4853]: I0127 19:01:10.704970 4853 generic.go:334] "Generic (PLEG): container finished" podID="383a8cff-14ac-4c26-a428-302b30622b4b" containerID="9f9c738e714425b0c66056ff127e63e1640ef6cefb1e2f509c89157dea2e92d3" exitCode=143 Jan 27 19:01:10 crc kubenswrapper[4853]: I0127 19:01:10.705202 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"383a8cff-14ac-4c26-a428-302b30622b4b","Type":"ContainerDied","Data":"9f9c738e714425b0c66056ff127e63e1640ef6cefb1e2f509c89157dea2e92d3"} Jan 27 19:01:10 crc kubenswrapper[4853]: I0127 19:01:10.742082 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-6dff6d999f-xr8nv"] Jan 27 19:01:10 crc kubenswrapper[4853]: W0127 19:01:10.750178 4853 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc029593d_ff63_4033_8bc5_39cf7e0457bd.slice/crio-2d0702ae866ecf70f7c3358517c50d851c4183e835e8155de2d9b9d02d8b7bab WatchSource:0}: Error finding container 2d0702ae866ecf70f7c3358517c50d851c4183e835e8155de2d9b9d02d8b7bab: Status 404 returned error can't find the container with id 2d0702ae866ecf70f7c3358517c50d851c4183e835e8155de2d9b9d02d8b7bab Jan 27 19:01:11 crc kubenswrapper[4853]: I0127 19:01:11.021186 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Jan 27 19:01:11 crc kubenswrapper[4853]: I0127 19:01:11.723720 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-6dff6d999f-xr8nv" event={"ID":"c029593d-ff63-4033-8bc5-39cf7e0457bd","Type":"ContainerStarted","Data":"465e55f8968316cc44fcac4002234a70df77ed13f291aa2b313b2880a7b15af5"} Jan 27 19:01:11 crc kubenswrapper[4853]: I0127 19:01:11.724312 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-6dff6d999f-xr8nv" Jan 27 19:01:11 crc kubenswrapper[4853]: I0127 19:01:11.724332 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-6dff6d999f-xr8nv" event={"ID":"c029593d-ff63-4033-8bc5-39cf7e0457bd","Type":"ContainerStarted","Data":"cf7d9c88b9a700c07f8c6da8979b9dcffc798c09d966351dfae30ba03bc0ea56"} Jan 27 19:01:11 crc kubenswrapper[4853]: I0127 19:01:11.724344 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-6dff6d999f-xr8nv" event={"ID":"c029593d-ff63-4033-8bc5-39cf7e0457bd","Type":"ContainerStarted","Data":"2d0702ae866ecf70f7c3358517c50d851c4183e835e8155de2d9b9d02d8b7bab"} Jan 27 19:01:11 crc kubenswrapper[4853]: I0127 19:01:11.724357 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-6dff6d999f-xr8nv" Jan 27 19:01:11 crc kubenswrapper[4853]: I0127 19:01:11.747842 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/swift-proxy-6dff6d999f-xr8nv" podStartSLOduration=2.7478202510000003 podStartE2EDuration="2.747820251s" podCreationTimestamp="2026-01-27 19:01:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 19:01:11.745776653 +0000 UTC m=+1114.208319546" watchObservedRunningTime="2026-01-27 19:01:11.747820251 +0000 UTC m=+1114.210363134" Jan 27 19:01:12 crc kubenswrapper[4853]: I0127 19:01:12.050802 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 27 19:01:12 crc kubenswrapper[4853]: I0127 19:01:12.052536 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="a2562d28-8e26-44ed-84ec-92cd8b1fd1a4" containerName="ceilometer-central-agent" containerID="cri-o://b0f1de49477da0960f573688da9426d94eea5b774548deb1b4e6dc26418736a6" gracePeriod=30 Jan 27 19:01:12 crc kubenswrapper[4853]: I0127 19:01:12.053470 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="a2562d28-8e26-44ed-84ec-92cd8b1fd1a4" containerName="proxy-httpd" containerID="cri-o://f4b0e607bc7835ac373b9eee476042469422aba86dda33483c92e8fb0cc941a5" gracePeriod=30 Jan 27 19:01:12 crc kubenswrapper[4853]: I0127 19:01:12.053530 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="a2562d28-8e26-44ed-84ec-92cd8b1fd1a4" containerName="ceilometer-notification-agent" containerID="cri-o://dd4430f67c0376398093c9ae151b0cacf9cb3c26d11448e2269eb00d1ee4deb3" gracePeriod=30 Jan 27 19:01:12 crc kubenswrapper[4853]: I0127 19:01:12.053597 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="a2562d28-8e26-44ed-84ec-92cd8b1fd1a4" containerName="sg-core" containerID="cri-o://ce0518c029fc26b466712e2e0d8f7bd50197b42ba8445e3cecedce480a649392" gracePeriod=30 Jan 27 19:01:12 crc kubenswrapper[4853]: I0127 19:01:12.059491 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Jan 27 19:01:12 crc kubenswrapper[4853]: I0127 19:01:12.769406 4853 generic.go:334] "Generic (PLEG): container finished" podID="a2562d28-8e26-44ed-84ec-92cd8b1fd1a4" containerID="ce0518c029fc26b466712e2e0d8f7bd50197b42ba8445e3cecedce480a649392" exitCode=2 Jan 27 19:01:12 crc kubenswrapper[4853]: I0127 19:01:12.769732 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a2562d28-8e26-44ed-84ec-92cd8b1fd1a4","Type":"ContainerDied","Data":"ce0518c029fc26b466712e2e0d8f7bd50197b42ba8445e3cecedce480a649392"} Jan 27 19:01:13 crc kubenswrapper[4853]: I0127 19:01:13.785989 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"383a8cff-14ac-4c26-a428-302b30622b4b","Type":"ContainerDied","Data":"341c1171cf83f1fd83885a8f5b5ac25bcd09d3f0b8fc03543e699f18bf59ad2e"} Jan 27 19:01:13 crc kubenswrapper[4853]: I0127 19:01:13.786079 4853 generic.go:334] "Generic (PLEG): container finished" podID="383a8cff-14ac-4c26-a428-302b30622b4b" containerID="341c1171cf83f1fd83885a8f5b5ac25bcd09d3f0b8fc03543e699f18bf59ad2e" exitCode=0 Jan 27 19:01:13 crc kubenswrapper[4853]: I0127 19:01:13.791160 4853 generic.go:334] "Generic (PLEG): container finished" podID="a2562d28-8e26-44ed-84ec-92cd8b1fd1a4" containerID="f4b0e607bc7835ac373b9eee476042469422aba86dda33483c92e8fb0cc941a5" exitCode=0 Jan 27 19:01:13 crc 
Jan 27 19:01:13 crc kubenswrapper[4853]: I0127 19:01:13.791221 4853 generic.go:334] "Generic (PLEG): container finished" podID="a2562d28-8e26-44ed-84ec-92cd8b1fd1a4" containerID="b0f1de49477da0960f573688da9426d94eea5b774548deb1b4e6dc26418736a6" exitCode=0 Jan 27 19:01:13 crc kubenswrapper[4853]: I0127 19:01:13.791267 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a2562d28-8e26-44ed-84ec-92cd8b1fd1a4","Type":"ContainerDied","Data":"f4b0e607bc7835ac373b9eee476042469422aba86dda33483c92e8fb0cc941a5"} Jan 27 19:01:13 crc kubenswrapper[4853]: I0127 19:01:13.791343 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a2562d28-8e26-44ed-84ec-92cd8b1fd1a4","Type":"ContainerDied","Data":"b0f1de49477da0960f573688da9426d94eea5b774548deb1b4e6dc26418736a6"} Jan 27 19:01:14 crc kubenswrapper[4853]: I0127 19:01:14.252342 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-64c8bd57d9-g88k8" Jan 27 19:01:14 crc kubenswrapper[4853]: I0127 19:01:14.320063 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-69cd5c4bb8-2fh98"] Jan 27 19:01:14 crc kubenswrapper[4853]: I0127 19:01:14.320726 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-69cd5c4bb8-2fh98" podUID="394c98c3-7f2f-49d7-8f1a-c860eeaffb7e" containerName="neutron-api" containerID="cri-o://f4050b05966aa33c4c1aec74cb852056999cd917161316ddc643ab32e9bf7a45" gracePeriod=30 Jan 27 19:01:14 crc kubenswrapper[4853]: I0127 19:01:14.321318 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-69cd5c4bb8-2fh98" podUID="394c98c3-7f2f-49d7-8f1a-c860eeaffb7e" containerName="neutron-httpd" containerID="cri-o://e12c891a4c8bae88241f3b7296bf145e29b7ea5714735b855be25c79bc090b80" gracePeriod=30 Jan 27 19:01:14 crc kubenswrapper[4853]: I0127 19:01:14.697239 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-db-create-xgbwz"] Jan 27 19:01:14 crc kubenswrapper[4853]: I0127 19:01:14.698885 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-xgbwz" Jan 27 19:01:14 crc kubenswrapper[4853]: I0127 19:01:14.713484 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-xgbwz"] Jan 27 19:01:14 crc kubenswrapper[4853]: I0127 19:01:14.788681 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-db-create-cj6jf"] Jan 27 19:01:14 crc kubenswrapper[4853]: I0127 19:01:14.790852 4853 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openstack/nova-cell0-db-create-cj6jf" Jan 27 19:01:14 crc kubenswrapper[4853]: I0127 19:01:14.801550 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-cj6jf"] Jan 27 19:01:14 crc kubenswrapper[4853]: I0127 19:01:14.814325 4853 generic.go:334] "Generic (PLEG): container finished" podID="394c98c3-7f2f-49d7-8f1a-c860eeaffb7e" containerID="e12c891a4c8bae88241f3b7296bf145e29b7ea5714735b855be25c79bc090b80" exitCode=0 Jan 27 19:01:14 crc kubenswrapper[4853]: I0127 19:01:14.814589 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-69cd5c4bb8-2fh98" event={"ID":"394c98c3-7f2f-49d7-8f1a-c860eeaffb7e","Type":"ContainerDied","Data":"e12c891a4c8bae88241f3b7296bf145e29b7ea5714735b855be25c79bc090b80"} Jan 27 19:01:14 crc kubenswrapper[4853]: I0127 19:01:14.858579 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pgfwq\" (UniqueName: \"kubernetes.io/projected/b989c118-b790-4364-8452-a6f3e2fa75d5-kube-api-access-pgfwq\") pod \"nova-api-db-create-xgbwz\" (UID: \"b989c118-b790-4364-8452-a6f3e2fa75d5\") " pod="openstack/nova-api-db-create-xgbwz" Jan 27 19:01:14 crc kubenswrapper[4853]: I0127 19:01:14.858681 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b989c118-b790-4364-8452-a6f3e2fa75d5-operator-scripts\") pod \"nova-api-db-create-xgbwz\" (UID: \"b989c118-b790-4364-8452-a6f3e2fa75d5\") " pod="openstack/nova-api-db-create-xgbwz" Jan 27 19:01:14 crc kubenswrapper[4853]: I0127 19:01:14.880677 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-db-create-h4s4v"] Jan 27 19:01:14 crc kubenswrapper[4853]: I0127 19:01:14.882102 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-h4s4v" Jan 27 19:01:14 crc kubenswrapper[4853]: I0127 19:01:14.897111 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-h4s4v"] Jan 27 19:01:14 crc kubenswrapper[4853]: I0127 19:01:14.913135 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-4157-account-create-update-nqmcn"] Jan 27 19:01:14 crc kubenswrapper[4853]: I0127 19:01:14.914667 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-4157-account-create-update-nqmcn" Jan 27 19:01:14 crc kubenswrapper[4853]: I0127 19:01:14.919328 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret" Jan 27 19:01:14 crc kubenswrapper[4853]: I0127 19:01:14.933715 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-4157-account-create-update-nqmcn"] Jan 27 19:01:14 crc kubenswrapper[4853]: I0127 19:01:14.963203 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-htmq6\" (UniqueName: \"kubernetes.io/projected/63cae008-ec5c-4e56-907b-84e3dfa274e2-kube-api-access-htmq6\") pod \"nova-cell0-db-create-cj6jf\" (UID: \"63cae008-ec5c-4e56-907b-84e3dfa274e2\") " pod="openstack/nova-cell0-db-create-cj6jf" Jan 27 19:01:14 crc kubenswrapper[4853]: I0127 19:01:14.963615 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pgfwq\" (UniqueName: \"kubernetes.io/projected/b989c118-b790-4364-8452-a6f3e2fa75d5-kube-api-access-pgfwq\") pod \"nova-api-db-create-xgbwz\" (UID: \"b989c118-b790-4364-8452-a6f3e2fa75d5\") " pod="openstack/nova-api-db-create-xgbwz" Jan 27 19:01:14 crc kubenswrapper[4853]: I0127 19:01:14.963747 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/63cae008-ec5c-4e56-907b-84e3dfa274e2-operator-scripts\") pod \"nova-cell0-db-create-cj6jf\" (UID: \"63cae008-ec5c-4e56-907b-84e3dfa274e2\") " pod="openstack/nova-cell0-db-create-cj6jf" Jan 27 19:01:14 crc kubenswrapper[4853]: I0127 19:01:14.964500 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b989c118-b790-4364-8452-a6f3e2fa75d5-operator-scripts\") pod \"nova-api-db-create-xgbwz\" (UID: \"b989c118-b790-4364-8452-a6f3e2fa75d5\") " pod="openstack/nova-api-db-create-xgbwz" Jan 27 19:01:14 crc kubenswrapper[4853]: I0127 19:01:14.965673 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b989c118-b790-4364-8452-a6f3e2fa75d5-operator-scripts\") pod \"nova-api-db-create-xgbwz\" (UID: \"b989c118-b790-4364-8452-a6f3e2fa75d5\") " pod="openstack/nova-api-db-create-xgbwz" Jan 27 19:01:14 crc kubenswrapper[4853]: I0127 19:01:14.991951 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pgfwq\" (UniqueName: \"kubernetes.io/projected/b989c118-b790-4364-8452-a6f3e2fa75d5-kube-api-access-pgfwq\") pod \"nova-api-db-create-xgbwz\" (UID: \"b989c118-b790-4364-8452-a6f3e2fa75d5\") " pod="openstack/nova-api-db-create-xgbwz" Jan 27 19:01:15 crc kubenswrapper[4853]: I0127 19:01:15.028655 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-xgbwz" Jan 27 19:01:15 crc kubenswrapper[4853]: I0127 19:01:15.065939 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2xhhm\" (UniqueName: \"kubernetes.io/projected/15ceb016-348f-4b14-9f21-11d533ad51ee-kube-api-access-2xhhm\") pod \"nova-cell1-db-create-h4s4v\" (UID: \"15ceb016-348f-4b14-9f21-11d533ad51ee\") " pod="openstack/nova-cell1-db-create-h4s4v" Jan 27 19:01:15 crc kubenswrapper[4853]: I0127 19:01:15.066031 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1169617c-cfd9-438b-ac93-a636384abe7c-operator-scripts\") pod \"nova-api-4157-account-create-update-nqmcn\" (UID: \"1169617c-cfd9-438b-ac93-a636384abe7c\") " pod="openstack/nova-api-4157-account-create-update-nqmcn" Jan 27 19:01:15 crc kubenswrapper[4853]: I0127 19:01:15.066069 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/63cae008-ec5c-4e56-907b-84e3dfa274e2-operator-scripts\") pod \"nova-cell0-db-create-cj6jf\" (UID: \"63cae008-ec5c-4e56-907b-84e3dfa274e2\") " pod="openstack/nova-cell0-db-create-cj6jf" Jan 27 19:01:15 crc kubenswrapper[4853]: I0127 19:01:15.066101 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c5xkk\" (UniqueName: \"kubernetes.io/projected/1169617c-cfd9-438b-ac93-a636384abe7c-kube-api-access-c5xkk\") pod \"nova-api-4157-account-create-update-nqmcn\" (UID: \"1169617c-cfd9-438b-ac93-a636384abe7c\") " pod="openstack/nova-api-4157-account-create-update-nqmcn" Jan 27 19:01:15 crc kubenswrapper[4853]: I0127 19:01:15.066177 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-htmq6\" (UniqueName: \"kubernetes.io/projected/63cae008-ec5c-4e56-907b-84e3dfa274e2-kube-api-access-htmq6\") pod \"nova-cell0-db-create-cj6jf\" (UID: \"63cae008-ec5c-4e56-907b-84e3dfa274e2\") " pod="openstack/nova-cell0-db-create-cj6jf" Jan 27 19:01:15 crc kubenswrapper[4853]: I0127 19:01:15.066257 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/15ceb016-348f-4b14-9f21-11d533ad51ee-operator-scripts\") pod \"nova-cell1-db-create-h4s4v\" (UID: \"15ceb016-348f-4b14-9f21-11d533ad51ee\") " pod="openstack/nova-cell1-db-create-h4s4v" Jan 27 19:01:15 crc kubenswrapper[4853]: I0127 19:01:15.066941 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/63cae008-ec5c-4e56-907b-84e3dfa274e2-operator-scripts\") pod \"nova-cell0-db-create-cj6jf\" (UID: \"63cae008-ec5c-4e56-907b-84e3dfa274e2\") " pod="openstack/nova-cell0-db-create-cj6jf" Jan 27 19:01:15 crc kubenswrapper[4853]: I0127 19:01:15.092022 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-htmq6\" (UniqueName: \"kubernetes.io/projected/63cae008-ec5c-4e56-907b-84e3dfa274e2-kube-api-access-htmq6\") pod \"nova-cell0-db-create-cj6jf\" (UID: \"63cae008-ec5c-4e56-907b-84e3dfa274e2\") " pod="openstack/nova-cell0-db-create-cj6jf" Jan 27 19:01:15 crc kubenswrapper[4853]: I0127 19:01:15.097746 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-b463-account-create-update-4drwb"] Jan 27 19:01:15 crc kubenswrapper[4853]: 
I0127 19:01:15.099096 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-b463-account-create-update-4drwb" Jan 27 19:01:15 crc kubenswrapper[4853]: I0127 19:01:15.101740 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret" Jan 27 19:01:15 crc kubenswrapper[4853]: I0127 19:01:15.107645 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-b463-account-create-update-4drwb"] Jan 27 19:01:15 crc kubenswrapper[4853]: I0127 19:01:15.114798 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-cj6jf" Jan 27 19:01:15 crc kubenswrapper[4853]: I0127 19:01:15.170325 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/15ceb016-348f-4b14-9f21-11d533ad51ee-operator-scripts\") pod \"nova-cell1-db-create-h4s4v\" (UID: \"15ceb016-348f-4b14-9f21-11d533ad51ee\") " pod="openstack/nova-cell1-db-create-h4s4v" Jan 27 19:01:15 crc kubenswrapper[4853]: I0127 19:01:15.170408 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2xhhm\" (UniqueName: \"kubernetes.io/projected/15ceb016-348f-4b14-9f21-11d533ad51ee-kube-api-access-2xhhm\") pod \"nova-cell1-db-create-h4s4v\" (UID: \"15ceb016-348f-4b14-9f21-11d533ad51ee\") " pod="openstack/nova-cell1-db-create-h4s4v" Jan 27 19:01:15 crc kubenswrapper[4853]: I0127 19:01:15.170449 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1169617c-cfd9-438b-ac93-a636384abe7c-operator-scripts\") pod \"nova-api-4157-account-create-update-nqmcn\" (UID: \"1169617c-cfd9-438b-ac93-a636384abe7c\") " pod="openstack/nova-api-4157-account-create-update-nqmcn" Jan 27 19:01:15 crc kubenswrapper[4853]: I0127 19:01:15.170480 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c5xkk\" (UniqueName: \"kubernetes.io/projected/1169617c-cfd9-438b-ac93-a636384abe7c-kube-api-access-c5xkk\") pod \"nova-api-4157-account-create-update-nqmcn\" (UID: \"1169617c-cfd9-438b-ac93-a636384abe7c\") " pod="openstack/nova-api-4157-account-create-update-nqmcn" Jan 27 19:01:15 crc kubenswrapper[4853]: I0127 19:01:15.171513 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/15ceb016-348f-4b14-9f21-11d533ad51ee-operator-scripts\") pod \"nova-cell1-db-create-h4s4v\" (UID: \"15ceb016-348f-4b14-9f21-11d533ad51ee\") " pod="openstack/nova-cell1-db-create-h4s4v" Jan 27 19:01:15 crc kubenswrapper[4853]: I0127 19:01:15.171690 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1169617c-cfd9-438b-ac93-a636384abe7c-operator-scripts\") pod \"nova-api-4157-account-create-update-nqmcn\" (UID: \"1169617c-cfd9-438b-ac93-a636384abe7c\") " pod="openstack/nova-api-4157-account-create-update-nqmcn" Jan 27 19:01:15 crc kubenswrapper[4853]: I0127 19:01:15.222806 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c5xkk\" (UniqueName: \"kubernetes.io/projected/1169617c-cfd9-438b-ac93-a636384abe7c-kube-api-access-c5xkk\") pod \"nova-api-4157-account-create-update-nqmcn\" (UID: \"1169617c-cfd9-438b-ac93-a636384abe7c\") " pod="openstack/nova-api-4157-account-create-update-nqmcn" Jan 27 19:01:15 crc 
kubenswrapper[4853]: I0127 19:01:15.225840 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2xhhm\" (UniqueName: \"kubernetes.io/projected/15ceb016-348f-4b14-9f21-11d533ad51ee-kube-api-access-2xhhm\") pod \"nova-cell1-db-create-h4s4v\" (UID: \"15ceb016-348f-4b14-9f21-11d533ad51ee\") " pod="openstack/nova-cell1-db-create-h4s4v" Jan 27 19:01:15 crc kubenswrapper[4853]: I0127 19:01:15.237462 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-4157-account-create-update-nqmcn" Jan 27 19:01:15 crc kubenswrapper[4853]: I0127 19:01:15.276382 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cbee157b-ef42-498f-97a0-e8159be13fef-operator-scripts\") pod \"nova-cell0-b463-account-create-update-4drwb\" (UID: \"cbee157b-ef42-498f-97a0-e8159be13fef\") " pod="openstack/nova-cell0-b463-account-create-update-4drwb" Jan 27 19:01:15 crc kubenswrapper[4853]: I0127 19:01:15.276472 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vgp7j\" (UniqueName: \"kubernetes.io/projected/cbee157b-ef42-498f-97a0-e8159be13fef-kube-api-access-vgp7j\") pod \"nova-cell0-b463-account-create-update-4drwb\" (UID: \"cbee157b-ef42-498f-97a0-e8159be13fef\") " pod="openstack/nova-cell0-b463-account-create-update-4drwb" Jan 27 19:01:15 crc kubenswrapper[4853]: I0127 19:01:15.353028 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-27f9-account-create-update-wt8ts"] Jan 27 19:01:15 crc kubenswrapper[4853]: I0127 19:01:15.354598 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-27f9-account-create-update-wt8ts" Jan 27 19:01:15 crc kubenswrapper[4853]: I0127 19:01:15.361526 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-6dff6d999f-xr8nv" Jan 27 19:01:15 crc kubenswrapper[4853]: I0127 19:01:15.363822 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-27f9-account-create-update-wt8ts"] Jan 27 19:01:15 crc kubenswrapper[4853]: I0127 19:01:15.366574 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret" Jan 27 19:01:15 crc kubenswrapper[4853]: I0127 19:01:15.381575 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vgp7j\" (UniqueName: \"kubernetes.io/projected/cbee157b-ef42-498f-97a0-e8159be13fef-kube-api-access-vgp7j\") pod \"nova-cell0-b463-account-create-update-4drwb\" (UID: \"cbee157b-ef42-498f-97a0-e8159be13fef\") " pod="openstack/nova-cell0-b463-account-create-update-4drwb" Jan 27 19:01:15 crc kubenswrapper[4853]: I0127 19:01:15.381768 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cbee157b-ef42-498f-97a0-e8159be13fef-operator-scripts\") pod \"nova-cell0-b463-account-create-update-4drwb\" (UID: \"cbee157b-ef42-498f-97a0-e8159be13fef\") " pod="openstack/nova-cell0-b463-account-create-update-4drwb" Jan 27 19:01:15 crc kubenswrapper[4853]: I0127 19:01:15.382582 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cbee157b-ef42-498f-97a0-e8159be13fef-operator-scripts\") pod \"nova-cell0-b463-account-create-update-4drwb\" (UID: 
\"cbee157b-ef42-498f-97a0-e8159be13fef\") " pod="openstack/nova-cell0-b463-account-create-update-4drwb" Jan 27 19:01:15 crc kubenswrapper[4853]: I0127 19:01:15.415464 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vgp7j\" (UniqueName: \"kubernetes.io/projected/cbee157b-ef42-498f-97a0-e8159be13fef-kube-api-access-vgp7j\") pod \"nova-cell0-b463-account-create-update-4drwb\" (UID: \"cbee157b-ef42-498f-97a0-e8159be13fef\") " pod="openstack/nova-cell0-b463-account-create-update-4drwb" Jan 27 19:01:15 crc kubenswrapper[4853]: I0127 19:01:15.479300 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-b463-account-create-update-4drwb" Jan 27 19:01:15 crc kubenswrapper[4853]: I0127 19:01:15.483845 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fvnh2\" (UniqueName: \"kubernetes.io/projected/6edf7e9f-ea9b-4044-8aaf-ad9de78dcab6-kube-api-access-fvnh2\") pod \"nova-cell1-27f9-account-create-update-wt8ts\" (UID: \"6edf7e9f-ea9b-4044-8aaf-ad9de78dcab6\") " pod="openstack/nova-cell1-27f9-account-create-update-wt8ts" Jan 27 19:01:15 crc kubenswrapper[4853]: I0127 19:01:15.484162 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6edf7e9f-ea9b-4044-8aaf-ad9de78dcab6-operator-scripts\") pod \"nova-cell1-27f9-account-create-update-wt8ts\" (UID: \"6edf7e9f-ea9b-4044-8aaf-ad9de78dcab6\") " pod="openstack/nova-cell1-27f9-account-create-update-wt8ts" Jan 27 19:01:15 crc kubenswrapper[4853]: I0127 19:01:15.515922 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-h4s4v" Jan 27 19:01:15 crc kubenswrapper[4853]: I0127 19:01:15.589774 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6edf7e9f-ea9b-4044-8aaf-ad9de78dcab6-operator-scripts\") pod \"nova-cell1-27f9-account-create-update-wt8ts\" (UID: \"6edf7e9f-ea9b-4044-8aaf-ad9de78dcab6\") " pod="openstack/nova-cell1-27f9-account-create-update-wt8ts" Jan 27 19:01:15 crc kubenswrapper[4853]: I0127 19:01:15.590204 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fvnh2\" (UniqueName: \"kubernetes.io/projected/6edf7e9f-ea9b-4044-8aaf-ad9de78dcab6-kube-api-access-fvnh2\") pod \"nova-cell1-27f9-account-create-update-wt8ts\" (UID: \"6edf7e9f-ea9b-4044-8aaf-ad9de78dcab6\") " pod="openstack/nova-cell1-27f9-account-create-update-wt8ts" Jan 27 19:01:15 crc kubenswrapper[4853]: I0127 19:01:15.595240 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6edf7e9f-ea9b-4044-8aaf-ad9de78dcab6-operator-scripts\") pod \"nova-cell1-27f9-account-create-update-wt8ts\" (UID: \"6edf7e9f-ea9b-4044-8aaf-ad9de78dcab6\") " pod="openstack/nova-cell1-27f9-account-create-update-wt8ts" Jan 27 19:01:15 crc kubenswrapper[4853]: I0127 19:01:15.609328 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fvnh2\" (UniqueName: \"kubernetes.io/projected/6edf7e9f-ea9b-4044-8aaf-ad9de78dcab6-kube-api-access-fvnh2\") pod \"nova-cell1-27f9-account-create-update-wt8ts\" (UID: \"6edf7e9f-ea9b-4044-8aaf-ad9de78dcab6\") " pod="openstack/nova-cell1-27f9-account-create-update-wt8ts" Jan 27 19:01:15 crc kubenswrapper[4853]: I0127 
Jan 27 19:01:15 crc kubenswrapper[4853]: I0127 19:01:15.683651 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-27f9-account-create-update-wt8ts" Jan 27 19:01:16 crc kubenswrapper[4853]: I0127 19:01:16.034465 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 27 19:01:16 crc kubenswrapper[4853]: I0127 19:01:16.036491 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="1447491c-5e5b-412d-9cbd-b7bdc9a87797" containerName="glance-log" containerID="cri-o://8a4cb28c4954303191103cf0fab536e482dde2773b7cbbd9a5dd9878f87ab734" gracePeriod=30 Jan 27 19:01:16 crc kubenswrapper[4853]: I0127 19:01:16.036650 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="1447491c-5e5b-412d-9cbd-b7bdc9a87797" containerName="glance-httpd" containerID="cri-o://f6bdaca7c5c803233f482527f300d355910196b8505907012cce304a5b68d07b" gracePeriod=30 Jan 27 19:01:16 crc kubenswrapper[4853]: I0127 19:01:16.437907 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Jan 27 19:01:16 crc kubenswrapper[4853]: I0127 19:01:16.849851 4853 generic.go:334] "Generic (PLEG): container finished" podID="28f114cd-daca-4c71-9ecd-64b8008ddbef" containerID="5eb06c79644ed85292d51f529bc88f05f3d36c0c73a7d7ccd7b435ebbe58e251" exitCode=137 Jan 27 19:01:16 crc kubenswrapper[4853]: I0127 19:01:16.849914 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-c78c8d4f6-bchzm" event={"ID":"28f114cd-daca-4c71-9ecd-64b8008ddbef","Type":"ContainerDied","Data":"5eb06c79644ed85292d51f529bc88f05f3d36c0c73a7d7ccd7b435ebbe58e251"} Jan 27 19:01:16 crc kubenswrapper[4853]: I0127 19:01:16.853277 4853 generic.go:334] "Generic (PLEG): container finished" podID="1447491c-5e5b-412d-9cbd-b7bdc9a87797" containerID="8a4cb28c4954303191103cf0fab536e482dde2773b7cbbd9a5dd9878f87ab734" exitCode=143 Jan 27 19:01:16 crc kubenswrapper[4853]: I0127 19:01:16.853369 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"1447491c-5e5b-412d-9cbd-b7bdc9a87797","Type":"ContainerDied","Data":"8a4cb28c4954303191103cf0fab536e482dde2773b7cbbd9a5dd9878f87ab734"} Jan 27 19:01:16 crc kubenswrapper[4853]: I0127 19:01:16.861372 4853 generic.go:334] "Generic (PLEG): container finished" podID="66d621f7-387b-470d-8e42-bebbfada3bbc" containerID="3e74acd3091e36c067f9363770b8672147f720192965c341275042fc68c2d916" exitCode=137 Jan 27 19:01:16 crc kubenswrapper[4853]: I0127 19:01:16.861435 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-69967664fb-pbqhr" event={"ID":"66d621f7-387b-470d-8e42-bebbfada3bbc","Type":"ContainerDied","Data":"3e74acd3091e36c067f9363770b8672147f720192965c341275042fc68c2d916"} Jan 27 19:01:17 crc kubenswrapper[4853]: I0127 19:01:17.878680 4853 generic.go:334] "Generic (PLEG): container finished" podID="a2562d28-8e26-44ed-84ec-92cd8b1fd1a4" containerID="dd4430f67c0376398093c9ae151b0cacf9cb3c26d11448e2269eb00d1ee4deb3" exitCode=0 Jan 27 19:01:17 crc kubenswrapper[4853]: I0127 19:01:17.878778 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a2562d28-8e26-44ed-84ec-92cd8b1fd1a4","Type":"ContainerDied","Data":"dd4430f67c0376398093c9ae151b0cacf9cb3c26d11448e2269eb00d1ee4deb3"} Jan 27 19:01:18 crc kubenswrapper[4853]: I0127 19:01:18.888324 4853 generic.go:334]
"Generic (PLEG): container finished" podID="394c98c3-7f2f-49d7-8f1a-c860eeaffb7e" containerID="f4050b05966aa33c4c1aec74cb852056999cd917161316ddc643ab32e9bf7a45" exitCode=0 Jan 27 19:01:18 crc kubenswrapper[4853]: I0127 19:01:18.888418 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-69cd5c4bb8-2fh98" event={"ID":"394c98c3-7f2f-49d7-8f1a-c860eeaffb7e","Type":"ContainerDied","Data":"f4050b05966aa33c4c1aec74cb852056999cd917161316ddc643ab32e9bf7a45"} Jan 27 19:01:19 crc kubenswrapper[4853]: I0127 19:01:19.803880 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 27 19:01:19 crc kubenswrapper[4853]: I0127 19:01:19.962106 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-69967664fb-pbqhr" event={"ID":"66d621f7-387b-470d-8e42-bebbfada3bbc","Type":"ContainerStarted","Data":"86aa95c5d2cd9108dc90829cfe5e10a4c5e683a2cee56c3d3ea8c333f9fbb272"} Jan 27 19:01:20 crc kubenswrapper[4853]: I0127 19:01:19.995356 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a2562d28-8e26-44ed-84ec-92cd8b1fd1a4-log-httpd\") pod \"a2562d28-8e26-44ed-84ec-92cd8b1fd1a4\" (UID: \"a2562d28-8e26-44ed-84ec-92cd8b1fd1a4\") " Jan 27 19:01:20 crc kubenswrapper[4853]: I0127 19:01:19.996483 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a2562d28-8e26-44ed-84ec-92cd8b1fd1a4-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "a2562d28-8e26-44ed-84ec-92cd8b1fd1a4" (UID: "a2562d28-8e26-44ed-84ec-92cd8b1fd1a4"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 19:01:20 crc kubenswrapper[4853]: I0127 19:01:19.997561 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a2562d28-8e26-44ed-84ec-92cd8b1fd1a4-combined-ca-bundle\") pod \"a2562d28-8e26-44ed-84ec-92cd8b1fd1a4\" (UID: \"a2562d28-8e26-44ed-84ec-92cd8b1fd1a4\") " Jan 27 19:01:20 crc kubenswrapper[4853]: I0127 19:01:19.997780 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a2562d28-8e26-44ed-84ec-92cd8b1fd1a4-run-httpd\") pod \"a2562d28-8e26-44ed-84ec-92cd8b1fd1a4\" (UID: \"a2562d28-8e26-44ed-84ec-92cd8b1fd1a4\") " Jan 27 19:01:20 crc kubenswrapper[4853]: I0127 19:01:19.997890 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a2562d28-8e26-44ed-84ec-92cd8b1fd1a4-scripts\") pod \"a2562d28-8e26-44ed-84ec-92cd8b1fd1a4\" (UID: \"a2562d28-8e26-44ed-84ec-92cd8b1fd1a4\") " Jan 27 19:01:20 crc kubenswrapper[4853]: I0127 19:01:19.997977 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a2562d28-8e26-44ed-84ec-92cd8b1fd1a4-config-data\") pod \"a2562d28-8e26-44ed-84ec-92cd8b1fd1a4\" (UID: \"a2562d28-8e26-44ed-84ec-92cd8b1fd1a4\") " Jan 27 19:01:20 crc kubenswrapper[4853]: I0127 19:01:19.998095 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a2562d28-8e26-44ed-84ec-92cd8b1fd1a4-sg-core-conf-yaml\") pod \"a2562d28-8e26-44ed-84ec-92cd8b1fd1a4\" (UID: \"a2562d28-8e26-44ed-84ec-92cd8b1fd1a4\") " Jan 27 19:01:20 crc kubenswrapper[4853]: I0127 19:01:19.998368 4853 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pbfc7\" (UniqueName: \"kubernetes.io/projected/a2562d28-8e26-44ed-84ec-92cd8b1fd1a4-kube-api-access-pbfc7\") pod \"a2562d28-8e26-44ed-84ec-92cd8b1fd1a4\" (UID: \"a2562d28-8e26-44ed-84ec-92cd8b1fd1a4\") " Jan 27 19:01:20 crc kubenswrapper[4853]: I0127 19:01:20.002440 4853 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a2562d28-8e26-44ed-84ec-92cd8b1fd1a4-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 27 19:01:20 crc kubenswrapper[4853]: I0127 19:01:20.012133 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a2562d28-8e26-44ed-84ec-92cd8b1fd1a4","Type":"ContainerDied","Data":"9095ed05b09890d69ff720528603146c7a8ee6a30498185f11a2090a63da22ef"} Jan 27 19:01:20 crc kubenswrapper[4853]: I0127 19:01:20.012216 4853 scope.go:117] "RemoveContainer" containerID="f4b0e607bc7835ac373b9eee476042469422aba86dda33483c92e8fb0cc941a5" Jan 27 19:01:20 crc kubenswrapper[4853]: I0127 19:01:20.012416 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 27 19:01:20 crc kubenswrapper[4853]: I0127 19:01:20.013352 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a2562d28-8e26-44ed-84ec-92cd8b1fd1a4-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "a2562d28-8e26-44ed-84ec-92cd8b1fd1a4" (UID: "a2562d28-8e26-44ed-84ec-92cd8b1fd1a4"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 19:01:20 crc kubenswrapper[4853]: I0127 19:01:20.017062 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a2562d28-8e26-44ed-84ec-92cd8b1fd1a4-scripts" (OuterVolumeSpecName: "scripts") pod "a2562d28-8e26-44ed-84ec-92cd8b1fd1a4" (UID: "a2562d28-8e26-44ed-84ec-92cd8b1fd1a4"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 19:01:20 crc kubenswrapper[4853]: E0127 19:01:20.021655 4853 kuberuntime_gc.go:389] "Failed to remove container log dead symlink" err="remove /var/log/containers/ceilometer-0_openstack_proxy-httpd-f4b0e607bc7835ac373b9eee476042469422aba86dda33483c92e8fb0cc941a5.log: no such file or directory" path="/var/log/containers/ceilometer-0_openstack_proxy-httpd-f4b0e607bc7835ac373b9eee476042469422aba86dda33483c92e8fb0cc941a5.log" Jan 27 19:01:20 crc kubenswrapper[4853]: I0127 19:01:20.028784 4853 generic.go:334] "Generic (PLEG): container finished" podID="1447491c-5e5b-412d-9cbd-b7bdc9a87797" containerID="f6bdaca7c5c803233f482527f300d355910196b8505907012cce304a5b68d07b" exitCode=0 Jan 27 19:01:20 crc kubenswrapper[4853]: I0127 19:01:20.028850 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"1447491c-5e5b-412d-9cbd-b7bdc9a87797","Type":"ContainerDied","Data":"f6bdaca7c5c803233f482527f300d355910196b8505907012cce304a5b68d07b"} Jan 27 19:01:20 crc kubenswrapper[4853]: I0127 19:01:20.040695 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-c78c8d4f6-bchzm" event={"ID":"28f114cd-daca-4c71-9ecd-64b8008ddbef","Type":"ContainerStarted","Data":"3f45ef7c9031b5bacc230d61643ec9c587047e9e52878a4b6f37d775f2bdb5a2"} Jan 27 19:01:20 crc kubenswrapper[4853]: I0127 19:01:20.041835 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a2562d28-8e26-44ed-84ec-92cd8b1fd1a4-kube-api-access-pbfc7" (OuterVolumeSpecName: "kube-api-access-pbfc7") pod "a2562d28-8e26-44ed-84ec-92cd8b1fd1a4" (UID: "a2562d28-8e26-44ed-84ec-92cd8b1fd1a4"). InnerVolumeSpecName "kube-api-access-pbfc7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 19:01:20 crc kubenswrapper[4853]: I0127 19:01:20.053871 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"57e7a062-e8a4-457a-909c-7f7922327a1e","Type":"ContainerStarted","Data":"d8192fb7d816fe1d521df902b5cc9d56407b1f2cf49890abe5c1932a912c1b58"} Jan 27 19:01:20 crc kubenswrapper[4853]: I0127 19:01:20.096278 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=2.322702881 podStartE2EDuration="17.096258474s" podCreationTimestamp="2026-01-27 19:01:03 +0000 UTC" firstStartedPulling="2026-01-27 19:01:04.504532772 +0000 UTC m=+1106.967075655" lastFinishedPulling="2026-01-27 19:01:19.278088365 +0000 UTC m=+1121.740631248" observedRunningTime="2026-01-27 19:01:20.093727762 +0000 UTC m=+1122.556270645" watchObservedRunningTime="2026-01-27 19:01:20.096258474 +0000 UTC m=+1122.558801357" Jan 27 19:01:20 crc kubenswrapper[4853]: I0127 19:01:20.102859 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a2562d28-8e26-44ed-84ec-92cd8b1fd1a4-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "a2562d28-8e26-44ed-84ec-92cd8b1fd1a4" (UID: "a2562d28-8e26-44ed-84ec-92cd8b1fd1a4"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 19:01:20 crc kubenswrapper[4853]: I0127 19:01:20.109250 4853 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a2562d28-8e26-44ed-84ec-92cd8b1fd1a4-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 27 19:01:20 crc kubenswrapper[4853]: I0127 19:01:20.109277 4853 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a2562d28-8e26-44ed-84ec-92cd8b1fd1a4-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 19:01:20 crc kubenswrapper[4853]: I0127 19:01:20.109288 4853 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a2562d28-8e26-44ed-84ec-92cd8b1fd1a4-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 27 19:01:20 crc kubenswrapper[4853]: I0127 19:01:20.109300 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pbfc7\" (UniqueName: \"kubernetes.io/projected/a2562d28-8e26-44ed-84ec-92cd8b1fd1a4-kube-api-access-pbfc7\") on node \"crc\" DevicePath \"\"" Jan 27 19:01:20 crc kubenswrapper[4853]: I0127 19:01:20.154510 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a2562d28-8e26-44ed-84ec-92cd8b1fd1a4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a2562d28-8e26-44ed-84ec-92cd8b1fd1a4" (UID: "a2562d28-8e26-44ed-84ec-92cd8b1fd1a4"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 19:01:20 crc kubenswrapper[4853]: I0127 19:01:20.196253 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a2562d28-8e26-44ed-84ec-92cd8b1fd1a4-config-data" (OuterVolumeSpecName: "config-data") pod "a2562d28-8e26-44ed-84ec-92cd8b1fd1a4" (UID: "a2562d28-8e26-44ed-84ec-92cd8b1fd1a4"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 19:01:20 crc kubenswrapper[4853]: I0127 19:01:20.211761 4853 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a2562d28-8e26-44ed-84ec-92cd8b1fd1a4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 19:01:20 crc kubenswrapper[4853]: I0127 19:01:20.211804 4853 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a2562d28-8e26-44ed-84ec-92cd8b1fd1a4-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 19:01:20 crc kubenswrapper[4853]: I0127 19:01:20.246602 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-6dff6d999f-xr8nv" Jan 27 19:01:20 crc kubenswrapper[4853]: I0127 19:01:20.257817 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 27 19:01:20 crc kubenswrapper[4853]: I0127 19:01:20.263726 4853 scope.go:117] "RemoveContainer" containerID="ce0518c029fc26b466712e2e0d8f7bd50197b42ba8445e3cecedce480a649392" Jan 27 19:01:20 crc kubenswrapper[4853]: I0127 19:01:20.264679 4853 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-69cd5c4bb8-2fh98" Jan 27 19:01:20 crc kubenswrapper[4853]: I0127 19:01:20.301461 4853 scope.go:117] "RemoveContainer" containerID="dd4430f67c0376398093c9ae151b0cacf9cb3c26d11448e2269eb00d1ee4deb3" Jan 27 19:01:20 crc kubenswrapper[4853]: I0127 19:01:20.359100 4853 scope.go:117] "RemoveContainer" containerID="b0f1de49477da0960f573688da9426d94eea5b774548deb1b4e6dc26418736a6" Jan 27 19:01:20 crc kubenswrapper[4853]: I0127 19:01:20.376111 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-h4s4v"] Jan 27 19:01:20 crc kubenswrapper[4853]: I0127 19:01:20.387783 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-cj6jf"] Jan 27 19:01:20 crc kubenswrapper[4853]: I0127 19:01:20.417237 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"1447491c-5e5b-412d-9cbd-b7bdc9a87797\" (UID: \"1447491c-5e5b-412d-9cbd-b7bdc9a87797\") " Jan 27 19:01:20 crc kubenswrapper[4853]: I0127 19:01:20.417405 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/394c98c3-7f2f-49d7-8f1a-c860eeaffb7e-httpd-config\") pod \"394c98c3-7f2f-49d7-8f1a-c860eeaffb7e\" (UID: \"394c98c3-7f2f-49d7-8f1a-c860eeaffb7e\") " Jan 27 19:01:20 crc kubenswrapper[4853]: I0127 19:01:20.417453 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1447491c-5e5b-412d-9cbd-b7bdc9a87797-scripts\") pod \"1447491c-5e5b-412d-9cbd-b7bdc9a87797\" (UID: \"1447491c-5e5b-412d-9cbd-b7bdc9a87797\") " Jan 27 19:01:20 crc kubenswrapper[4853]: I0127 19:01:20.417523 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nk6t9\" (UniqueName: \"kubernetes.io/projected/1447491c-5e5b-412d-9cbd-b7bdc9a87797-kube-api-access-nk6t9\") pod \"1447491c-5e5b-412d-9cbd-b7bdc9a87797\" (UID: \"1447491c-5e5b-412d-9cbd-b7bdc9a87797\") " Jan 27 19:01:20 crc kubenswrapper[4853]: I0127 19:01:20.417551 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/394c98c3-7f2f-49d7-8f1a-c860eeaffb7e-config\") pod \"394c98c3-7f2f-49d7-8f1a-c860eeaffb7e\" (UID: \"394c98c3-7f2f-49d7-8f1a-c860eeaffb7e\") " Jan 27 19:01:20 crc kubenswrapper[4853]: I0127 19:01:20.417573 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1447491c-5e5b-412d-9cbd-b7bdc9a87797-internal-tls-certs\") pod \"1447491c-5e5b-412d-9cbd-b7bdc9a87797\" (UID: \"1447491c-5e5b-412d-9cbd-b7bdc9a87797\") " Jan 27 19:01:20 crc kubenswrapper[4853]: I0127 19:01:20.417636 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1447491c-5e5b-412d-9cbd-b7bdc9a87797-combined-ca-bundle\") pod \"1447491c-5e5b-412d-9cbd-b7bdc9a87797\" (UID: \"1447491c-5e5b-412d-9cbd-b7bdc9a87797\") " Jan 27 19:01:20 crc kubenswrapper[4853]: I0127 19:01:20.417712 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/394c98c3-7f2f-49d7-8f1a-c860eeaffb7e-ovndb-tls-certs\") pod \"394c98c3-7f2f-49d7-8f1a-c860eeaffb7e\" (UID: \"394c98c3-7f2f-49d7-8f1a-c860eeaffb7e\") " Jan 27 19:01:20 crc 
kubenswrapper[4853]: I0127 19:01:20.417763 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1447491c-5e5b-412d-9cbd-b7bdc9a87797-config-data\") pod \"1447491c-5e5b-412d-9cbd-b7bdc9a87797\" (UID: \"1447491c-5e5b-412d-9cbd-b7bdc9a87797\") " Jan 27 19:01:20 crc kubenswrapper[4853]: I0127 19:01:20.417785 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/394c98c3-7f2f-49d7-8f1a-c860eeaffb7e-combined-ca-bundle\") pod \"394c98c3-7f2f-49d7-8f1a-c860eeaffb7e\" (UID: \"394c98c3-7f2f-49d7-8f1a-c860eeaffb7e\") " Jan 27 19:01:20 crc kubenswrapper[4853]: I0127 19:01:20.417884 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1447491c-5e5b-412d-9cbd-b7bdc9a87797-logs\") pod \"1447491c-5e5b-412d-9cbd-b7bdc9a87797\" (UID: \"1447491c-5e5b-412d-9cbd-b7bdc9a87797\") " Jan 27 19:01:20 crc kubenswrapper[4853]: I0127 19:01:20.417946 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gzccl\" (UniqueName: \"kubernetes.io/projected/394c98c3-7f2f-49d7-8f1a-c860eeaffb7e-kube-api-access-gzccl\") pod \"394c98c3-7f2f-49d7-8f1a-c860eeaffb7e\" (UID: \"394c98c3-7f2f-49d7-8f1a-c860eeaffb7e\") " Jan 27 19:01:20 crc kubenswrapper[4853]: I0127 19:01:20.419047 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/1447491c-5e5b-412d-9cbd-b7bdc9a87797-httpd-run\") pod \"1447491c-5e5b-412d-9cbd-b7bdc9a87797\" (UID: \"1447491c-5e5b-412d-9cbd-b7bdc9a87797\") " Jan 27 19:01:20 crc kubenswrapper[4853]: I0127 19:01:20.420167 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1447491c-5e5b-412d-9cbd-b7bdc9a87797-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "1447491c-5e5b-412d-9cbd-b7bdc9a87797" (UID: "1447491c-5e5b-412d-9cbd-b7bdc9a87797"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 19:01:20 crc kubenswrapper[4853]: I0127 19:01:20.423645 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1447491c-5e5b-412d-9cbd-b7bdc9a87797-logs" (OuterVolumeSpecName: "logs") pod "1447491c-5e5b-412d-9cbd-b7bdc9a87797" (UID: "1447491c-5e5b-412d-9cbd-b7bdc9a87797"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 19:01:20 crc kubenswrapper[4853]: I0127 19:01:20.430817 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage01-crc" (OuterVolumeSpecName: "glance") pod "1447491c-5e5b-412d-9cbd-b7bdc9a87797" (UID: "1447491c-5e5b-412d-9cbd-b7bdc9a87797"). InnerVolumeSpecName "local-storage01-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 27 19:01:20 crc kubenswrapper[4853]: I0127 19:01:20.431879 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/394c98c3-7f2f-49d7-8f1a-c860eeaffb7e-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "394c98c3-7f2f-49d7-8f1a-c860eeaffb7e" (UID: "394c98c3-7f2f-49d7-8f1a-c860eeaffb7e"). InnerVolumeSpecName "httpd-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 19:01:20 crc kubenswrapper[4853]: I0127 19:01:20.432791 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1447491c-5e5b-412d-9cbd-b7bdc9a87797-kube-api-access-nk6t9" (OuterVolumeSpecName: "kube-api-access-nk6t9") pod "1447491c-5e5b-412d-9cbd-b7bdc9a87797" (UID: "1447491c-5e5b-412d-9cbd-b7bdc9a87797"). InnerVolumeSpecName "kube-api-access-nk6t9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 19:01:20 crc kubenswrapper[4853]: I0127 19:01:20.434603 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-4157-account-create-update-nqmcn"] Jan 27 19:01:20 crc kubenswrapper[4853]: I0127 19:01:20.435896 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1447491c-5e5b-412d-9cbd-b7bdc9a87797-scripts" (OuterVolumeSpecName: "scripts") pod "1447491c-5e5b-412d-9cbd-b7bdc9a87797" (UID: "1447491c-5e5b-412d-9cbd-b7bdc9a87797"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 19:01:20 crc kubenswrapper[4853]: I0127 19:01:20.437400 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/394c98c3-7f2f-49d7-8f1a-c860eeaffb7e-kube-api-access-gzccl" (OuterVolumeSpecName: "kube-api-access-gzccl") pod "394c98c3-7f2f-49d7-8f1a-c860eeaffb7e" (UID: "394c98c3-7f2f-49d7-8f1a-c860eeaffb7e"). InnerVolumeSpecName "kube-api-access-gzccl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 19:01:20 crc kubenswrapper[4853]: I0127 19:01:20.482082 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 27 19:01:20 crc kubenswrapper[4853]: I0127 19:01:20.504319 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1447491c-5e5b-412d-9cbd-b7bdc9a87797-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "1447491c-5e5b-412d-9cbd-b7bdc9a87797" (UID: "1447491c-5e5b-412d-9cbd-b7bdc9a87797"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 19:01:20 crc kubenswrapper[4853]: I0127 19:01:20.514292 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 27 19:01:20 crc kubenswrapper[4853]: I0127 19:01:20.521727 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nk6t9\" (UniqueName: \"kubernetes.io/projected/1447491c-5e5b-412d-9cbd-b7bdc9a87797-kube-api-access-nk6t9\") on node \"crc\" DevicePath \"\"" Jan 27 19:01:20 crc kubenswrapper[4853]: I0127 19:01:20.521757 4853 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1447491c-5e5b-412d-9cbd-b7bdc9a87797-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 27 19:01:20 crc kubenswrapper[4853]: I0127 19:01:20.521768 4853 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1447491c-5e5b-412d-9cbd-b7bdc9a87797-logs\") on node \"crc\" DevicePath \"\"" Jan 27 19:01:20 crc kubenswrapper[4853]: I0127 19:01:20.521780 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gzccl\" (UniqueName: \"kubernetes.io/projected/394c98c3-7f2f-49d7-8f1a-c860eeaffb7e-kube-api-access-gzccl\") on node \"crc\" DevicePath \"\"" Jan 27 19:01:20 crc kubenswrapper[4853]: I0127 19:01:20.521788 4853 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/1447491c-5e5b-412d-9cbd-b7bdc9a87797-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 27 19:01:20 crc kubenswrapper[4853]: I0127 19:01:20.521816 4853 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" " Jan 27 19:01:20 crc kubenswrapper[4853]: I0127 19:01:20.521825 4853 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/394c98c3-7f2f-49d7-8f1a-c860eeaffb7e-httpd-config\") on node \"crc\" DevicePath \"\"" Jan 27 19:01:20 crc kubenswrapper[4853]: I0127 19:01:20.521833 4853 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1447491c-5e5b-412d-9cbd-b7bdc9a87797-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 19:01:20 crc kubenswrapper[4853]: I0127 19:01:20.526412 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/394c98c3-7f2f-49d7-8f1a-c860eeaffb7e-config" (OuterVolumeSpecName: "config") pod "394c98c3-7f2f-49d7-8f1a-c860eeaffb7e" (UID: "394c98c3-7f2f-49d7-8f1a-c860eeaffb7e"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 19:01:20 crc kubenswrapper[4853]: I0127 19:01:20.533486 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-27f9-account-create-update-wt8ts"] Jan 27 19:01:20 crc kubenswrapper[4853]: I0127 19:01:20.557767 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 27 19:01:20 crc kubenswrapper[4853]: E0127 19:01:20.563824 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a2562d28-8e26-44ed-84ec-92cd8b1fd1a4" containerName="sg-core" Jan 27 19:01:20 crc kubenswrapper[4853]: I0127 19:01:20.563860 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="a2562d28-8e26-44ed-84ec-92cd8b1fd1a4" containerName="sg-core" Jan 27 19:01:20 crc kubenswrapper[4853]: E0127 19:01:20.563912 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="394c98c3-7f2f-49d7-8f1a-c860eeaffb7e" containerName="neutron-httpd" Jan 27 19:01:20 crc kubenswrapper[4853]: I0127 19:01:20.563925 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="394c98c3-7f2f-49d7-8f1a-c860eeaffb7e" containerName="neutron-httpd" Jan 27 19:01:20 crc kubenswrapper[4853]: E0127 19:01:20.563945 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1447491c-5e5b-412d-9cbd-b7bdc9a87797" containerName="glance-httpd" Jan 27 19:01:20 crc kubenswrapper[4853]: I0127 19:01:20.563954 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="1447491c-5e5b-412d-9cbd-b7bdc9a87797" containerName="glance-httpd" Jan 27 19:01:20 crc kubenswrapper[4853]: E0127 19:01:20.563981 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a2562d28-8e26-44ed-84ec-92cd8b1fd1a4" containerName="ceilometer-central-agent" Jan 27 19:01:20 crc kubenswrapper[4853]: I0127 19:01:20.564003 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="a2562d28-8e26-44ed-84ec-92cd8b1fd1a4" containerName="ceilometer-central-agent" Jan 27 19:01:20 crc kubenswrapper[4853]: E0127 19:01:20.564021 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="394c98c3-7f2f-49d7-8f1a-c860eeaffb7e" containerName="neutron-api" Jan 27 19:01:20 crc kubenswrapper[4853]: I0127 19:01:20.564030 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="394c98c3-7f2f-49d7-8f1a-c860eeaffb7e" containerName="neutron-api" Jan 27 19:01:20 crc kubenswrapper[4853]: E0127 19:01:20.564051 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a2562d28-8e26-44ed-84ec-92cd8b1fd1a4" containerName="proxy-httpd" Jan 27 19:01:20 crc kubenswrapper[4853]: I0127 19:01:20.564059 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="a2562d28-8e26-44ed-84ec-92cd8b1fd1a4" containerName="proxy-httpd" Jan 27 19:01:20 crc kubenswrapper[4853]: E0127 19:01:20.564077 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a2562d28-8e26-44ed-84ec-92cd8b1fd1a4" containerName="ceilometer-notification-agent" Jan 27 19:01:20 crc kubenswrapper[4853]: I0127 19:01:20.564084 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="a2562d28-8e26-44ed-84ec-92cd8b1fd1a4" containerName="ceilometer-notification-agent" Jan 27 19:01:20 crc kubenswrapper[4853]: E0127 19:01:20.564107 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1447491c-5e5b-412d-9cbd-b7bdc9a87797" containerName="glance-log" Jan 27 19:01:20 crc kubenswrapper[4853]: I0127 19:01:20.564113 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="1447491c-5e5b-412d-9cbd-b7bdc9a87797" containerName="glance-log" Jan 27 
Jan 27 19:01:20 crc kubenswrapper[4853]: I0127 19:01:20.564634 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="a2562d28-8e26-44ed-84ec-92cd8b1fd1a4" containerName="sg-core" Jan 27 19:01:20 crc kubenswrapper[4853]: I0127 19:01:20.564652 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="a2562d28-8e26-44ed-84ec-92cd8b1fd1a4" containerName="ceilometer-notification-agent" Jan 27 19:01:20 crc kubenswrapper[4853]: I0127 19:01:20.564668 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="a2562d28-8e26-44ed-84ec-92cd8b1fd1a4" containerName="proxy-httpd" Jan 27 19:01:20 crc kubenswrapper[4853]: I0127 19:01:20.564689 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="394c98c3-7f2f-49d7-8f1a-c860eeaffb7e" containerName="neutron-httpd" Jan 27 19:01:20 crc kubenswrapper[4853]: I0127 19:01:20.564709 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="1447491c-5e5b-412d-9cbd-b7bdc9a87797" containerName="glance-httpd" Jan 27 19:01:20 crc kubenswrapper[4853]: I0127 19:01:20.564722 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="a2562d28-8e26-44ed-84ec-92cd8b1fd1a4" containerName="ceilometer-central-agent" Jan 27 19:01:20 crc kubenswrapper[4853]: I0127 19:01:20.564734 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="394c98c3-7f2f-49d7-8f1a-c860eeaffb7e" containerName="neutron-api" Jan 27 19:01:20 crc kubenswrapper[4853]: I0127 19:01:20.564759 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="1447491c-5e5b-412d-9cbd-b7bdc9a87797" containerName="glance-log" Jan 27 19:01:20 crc kubenswrapper[4853]: I0127 19:01:20.568519 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 27 19:01:20 crc kubenswrapper[4853]: I0127 19:01:20.573205 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 27 19:01:20 crc kubenswrapper[4853]: I0127 19:01:20.573643 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 27 19:01:20 crc kubenswrapper[4853]: I0127 19:01:20.625245 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1447491c-5e5b-412d-9cbd-b7bdc9a87797-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1447491c-5e5b-412d-9cbd-b7bdc9a87797" (UID: "1447491c-5e5b-412d-9cbd-b7bdc9a87797"). InnerVolumeSpecName "combined-ca-bundle".
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 19:01:20 crc kubenswrapper[4853]: I0127 19:01:20.626033 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 27 19:01:20 crc kubenswrapper[4853]: I0127 19:01:20.632237 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2f449f82-ea65-4bcf-9cac-ffc1da4e0d14-run-httpd\") pod \"ceilometer-0\" (UID: \"2f449f82-ea65-4bcf-9cac-ffc1da4e0d14\") " pod="openstack/ceilometer-0" Jan 27 19:01:20 crc kubenswrapper[4853]: I0127 19:01:20.632301 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2f449f82-ea65-4bcf-9cac-ffc1da4e0d14-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"2f449f82-ea65-4bcf-9cac-ffc1da4e0d14\") " pod="openstack/ceilometer-0" Jan 27 19:01:20 crc kubenswrapper[4853]: I0127 19:01:20.632331 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f449f82-ea65-4bcf-9cac-ffc1da4e0d14-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"2f449f82-ea65-4bcf-9cac-ffc1da4e0d14\") " pod="openstack/ceilometer-0" Jan 27 19:01:20 crc kubenswrapper[4853]: I0127 19:01:20.632369 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2f449f82-ea65-4bcf-9cac-ffc1da4e0d14-log-httpd\") pod \"ceilometer-0\" (UID: \"2f449f82-ea65-4bcf-9cac-ffc1da4e0d14\") " pod="openstack/ceilometer-0" Jan 27 19:01:20 crc kubenswrapper[4853]: I0127 19:01:20.632395 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2f449f82-ea65-4bcf-9cac-ffc1da4e0d14-config-data\") pod \"ceilometer-0\" (UID: \"2f449f82-ea65-4bcf-9cac-ffc1da4e0d14\") " pod="openstack/ceilometer-0" Jan 27 19:01:20 crc kubenswrapper[4853]: I0127 19:01:20.632453 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mc8qd\" (UniqueName: \"kubernetes.io/projected/2f449f82-ea65-4bcf-9cac-ffc1da4e0d14-kube-api-access-mc8qd\") pod \"ceilometer-0\" (UID: \"2f449f82-ea65-4bcf-9cac-ffc1da4e0d14\") " pod="openstack/ceilometer-0" Jan 27 19:01:20 crc kubenswrapper[4853]: I0127 19:01:20.632473 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2f449f82-ea65-4bcf-9cac-ffc1da4e0d14-scripts\") pod \"ceilometer-0\" (UID: \"2f449f82-ea65-4bcf-9cac-ffc1da4e0d14\") " pod="openstack/ceilometer-0" Jan 27 19:01:20 crc kubenswrapper[4853]: I0127 19:01:20.632526 4853 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1447491c-5e5b-412d-9cbd-b7bdc9a87797-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 19:01:20 crc kubenswrapper[4853]: I0127 19:01:20.632541 4853 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/394c98c3-7f2f-49d7-8f1a-c860eeaffb7e-config\") on node \"crc\" DevicePath \"\"" Jan 27 19:01:20 crc kubenswrapper[4853]: I0127 19:01:20.659782 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1447491c-5e5b-412d-9cbd-b7bdc9a87797-config-data" 
(OuterVolumeSpecName: "config-data") pod "1447491c-5e5b-412d-9cbd-b7bdc9a87797" (UID: "1447491c-5e5b-412d-9cbd-b7bdc9a87797"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 19:01:20 crc kubenswrapper[4853]: I0127 19:01:20.661349 4853 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage01-crc" (UniqueName: "kubernetes.io/local-volume/local-storage01-crc") on node "crc" Jan 27 19:01:20 crc kubenswrapper[4853]: I0127 19:01:20.665862 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 27 19:01:20 crc kubenswrapper[4853]: I0127 19:01:20.696543 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/394c98c3-7f2f-49d7-8f1a-c860eeaffb7e-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "394c98c3-7f2f-49d7-8f1a-c860eeaffb7e" (UID: "394c98c3-7f2f-49d7-8f1a-c860eeaffb7e"). InnerVolumeSpecName "ovndb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 19:01:20 crc kubenswrapper[4853]: I0127 19:01:20.725899 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/394c98c3-7f2f-49d7-8f1a-c860eeaffb7e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "394c98c3-7f2f-49d7-8f1a-c860eeaffb7e" (UID: "394c98c3-7f2f-49d7-8f1a-c860eeaffb7e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 19:01:20 crc kubenswrapper[4853]: I0127 19:01:20.736586 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/383a8cff-14ac-4c26-a428-302b30622b4b-config-data\") pod \"383a8cff-14ac-4c26-a428-302b30622b4b\" (UID: \"383a8cff-14ac-4c26-a428-302b30622b4b\") " Jan 27 19:01:20 crc kubenswrapper[4853]: I0127 19:01:20.736646 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/383a8cff-14ac-4c26-a428-302b30622b4b-public-tls-certs\") pod \"383a8cff-14ac-4c26-a428-302b30622b4b\" (UID: \"383a8cff-14ac-4c26-a428-302b30622b4b\") " Jan 27 19:01:20 crc kubenswrapper[4853]: I0127 19:01:20.736680 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/383a8cff-14ac-4c26-a428-302b30622b4b-httpd-run\") pod \"383a8cff-14ac-4c26-a428-302b30622b4b\" (UID: \"383a8cff-14ac-4c26-a428-302b30622b4b\") " Jan 27 19:01:20 crc kubenswrapper[4853]: I0127 19:01:20.736700 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/383a8cff-14ac-4c26-a428-302b30622b4b-logs\") pod \"383a8cff-14ac-4c26-a428-302b30622b4b\" (UID: \"383a8cff-14ac-4c26-a428-302b30622b4b\") " Jan 27 19:01:20 crc kubenswrapper[4853]: I0127 19:01:20.736746 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/383a8cff-14ac-4c26-a428-302b30622b4b-scripts\") pod \"383a8cff-14ac-4c26-a428-302b30622b4b\" (UID: \"383a8cff-14ac-4c26-a428-302b30622b4b\") " Jan 27 19:01:20 crc kubenswrapper[4853]: I0127 19:01:20.736794 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/383a8cff-14ac-4c26-a428-302b30622b4b-combined-ca-bundle\") pod \"383a8cff-14ac-4c26-a428-302b30622b4b\" (UID: 
\"383a8cff-14ac-4c26-a428-302b30622b4b\") " Jan 27 19:01:20 crc kubenswrapper[4853]: I0127 19:01:20.736814 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7rpzk\" (UniqueName: \"kubernetes.io/projected/383a8cff-14ac-4c26-a428-302b30622b4b-kube-api-access-7rpzk\") pod \"383a8cff-14ac-4c26-a428-302b30622b4b\" (UID: \"383a8cff-14ac-4c26-a428-302b30622b4b\") " Jan 27 19:01:20 crc kubenswrapper[4853]: I0127 19:01:20.736836 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"383a8cff-14ac-4c26-a428-302b30622b4b\" (UID: \"383a8cff-14ac-4c26-a428-302b30622b4b\") " Jan 27 19:01:20 crc kubenswrapper[4853]: I0127 19:01:20.736932 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2f449f82-ea65-4bcf-9cac-ffc1da4e0d14-config-data\") pod \"ceilometer-0\" (UID: \"2f449f82-ea65-4bcf-9cac-ffc1da4e0d14\") " pod="openstack/ceilometer-0" Jan 27 19:01:20 crc kubenswrapper[4853]: I0127 19:01:20.736971 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mc8qd\" (UniqueName: \"kubernetes.io/projected/2f449f82-ea65-4bcf-9cac-ffc1da4e0d14-kube-api-access-mc8qd\") pod \"ceilometer-0\" (UID: \"2f449f82-ea65-4bcf-9cac-ffc1da4e0d14\") " pod="openstack/ceilometer-0" Jan 27 19:01:20 crc kubenswrapper[4853]: I0127 19:01:20.736998 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2f449f82-ea65-4bcf-9cac-ffc1da4e0d14-scripts\") pod \"ceilometer-0\" (UID: \"2f449f82-ea65-4bcf-9cac-ffc1da4e0d14\") " pod="openstack/ceilometer-0" Jan 27 19:01:20 crc kubenswrapper[4853]: I0127 19:01:20.737068 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2f449f82-ea65-4bcf-9cac-ffc1da4e0d14-run-httpd\") pod \"ceilometer-0\" (UID: \"2f449f82-ea65-4bcf-9cac-ffc1da4e0d14\") " pod="openstack/ceilometer-0" Jan 27 19:01:20 crc kubenswrapper[4853]: I0127 19:01:20.737107 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2f449f82-ea65-4bcf-9cac-ffc1da4e0d14-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"2f449f82-ea65-4bcf-9cac-ffc1da4e0d14\") " pod="openstack/ceilometer-0" Jan 27 19:01:20 crc kubenswrapper[4853]: I0127 19:01:20.737154 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f449f82-ea65-4bcf-9cac-ffc1da4e0d14-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"2f449f82-ea65-4bcf-9cac-ffc1da4e0d14\") " pod="openstack/ceilometer-0" Jan 27 19:01:20 crc kubenswrapper[4853]: I0127 19:01:20.737201 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2f449f82-ea65-4bcf-9cac-ffc1da4e0d14-log-httpd\") pod \"ceilometer-0\" (UID: \"2f449f82-ea65-4bcf-9cac-ffc1da4e0d14\") " pod="openstack/ceilometer-0" Jan 27 19:01:20 crc kubenswrapper[4853]: I0127 19:01:20.737269 4853 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/394c98c3-7f2f-49d7-8f1a-c860eeaffb7e-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 27 19:01:20 crc kubenswrapper[4853]: I0127 19:01:20.737283 4853 
reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1447491c-5e5b-412d-9cbd-b7bdc9a87797-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 19:01:20 crc kubenswrapper[4853]: I0127 19:01:20.737294 4853 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/394c98c3-7f2f-49d7-8f1a-c860eeaffb7e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 19:01:20 crc kubenswrapper[4853]: I0127 19:01:20.737306 4853 reconciler_common.go:293] "Volume detached for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" DevicePath \"\"" Jan 27 19:01:20 crc kubenswrapper[4853]: I0127 19:01:20.737733 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2f449f82-ea65-4bcf-9cac-ffc1da4e0d14-log-httpd\") pod \"ceilometer-0\" (UID: \"2f449f82-ea65-4bcf-9cac-ffc1da4e0d14\") " pod="openstack/ceilometer-0" Jan 27 19:01:20 crc kubenswrapper[4853]: I0127 19:01:20.740010 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/383a8cff-14ac-4c26-a428-302b30622b4b-logs" (OuterVolumeSpecName: "logs") pod "383a8cff-14ac-4c26-a428-302b30622b4b" (UID: "383a8cff-14ac-4c26-a428-302b30622b4b"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 19:01:20 crc kubenswrapper[4853]: I0127 19:01:20.741346 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/383a8cff-14ac-4c26-a428-302b30622b4b-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "383a8cff-14ac-4c26-a428-302b30622b4b" (UID: "383a8cff-14ac-4c26-a428-302b30622b4b"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 19:01:20 crc kubenswrapper[4853]: I0127 19:01:20.754709 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2f449f82-ea65-4bcf-9cac-ffc1da4e0d14-run-httpd\") pod \"ceilometer-0\" (UID: \"2f449f82-ea65-4bcf-9cac-ffc1da4e0d14\") " pod="openstack/ceilometer-0" Jan 27 19:01:20 crc kubenswrapper[4853]: I0127 19:01:20.755239 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage06-crc" (OuterVolumeSpecName: "glance") pod "383a8cff-14ac-4c26-a428-302b30622b4b" (UID: "383a8cff-14ac-4c26-a428-302b30622b4b"). InnerVolumeSpecName "local-storage06-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 27 19:01:20 crc kubenswrapper[4853]: I0127 19:01:20.769342 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2f449f82-ea65-4bcf-9cac-ffc1da4e0d14-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"2f449f82-ea65-4bcf-9cac-ffc1da4e0d14\") " pod="openstack/ceilometer-0" Jan 27 19:01:20 crc kubenswrapper[4853]: I0127 19:01:20.776635 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2f449f82-ea65-4bcf-9cac-ffc1da4e0d14-scripts\") pod \"ceilometer-0\" (UID: \"2f449f82-ea65-4bcf-9cac-ffc1da4e0d14\") " pod="openstack/ceilometer-0" Jan 27 19:01:20 crc kubenswrapper[4853]: I0127 19:01:20.776756 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f449f82-ea65-4bcf-9cac-ffc1da4e0d14-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"2f449f82-ea65-4bcf-9cac-ffc1da4e0d14\") " pod="openstack/ceilometer-0" Jan 27 19:01:20 crc kubenswrapper[4853]: I0127 19:01:20.777796 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2f449f82-ea65-4bcf-9cac-ffc1da4e0d14-config-data\") pod \"ceilometer-0\" (UID: \"2f449f82-ea65-4bcf-9cac-ffc1da4e0d14\") " pod="openstack/ceilometer-0" Jan 27 19:01:20 crc kubenswrapper[4853]: I0127 19:01:20.783027 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/383a8cff-14ac-4c26-a428-302b30622b4b-kube-api-access-7rpzk" (OuterVolumeSpecName: "kube-api-access-7rpzk") pod "383a8cff-14ac-4c26-a428-302b30622b4b" (UID: "383a8cff-14ac-4c26-a428-302b30622b4b"). InnerVolumeSpecName "kube-api-access-7rpzk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 19:01:20 crc kubenswrapper[4853]: I0127 19:01:20.792422 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/383a8cff-14ac-4c26-a428-302b30622b4b-scripts" (OuterVolumeSpecName: "scripts") pod "383a8cff-14ac-4c26-a428-302b30622b4b" (UID: "383a8cff-14ac-4c26-a428-302b30622b4b"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 19:01:20 crc kubenswrapper[4853]: I0127 19:01:20.792990 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mc8qd\" (UniqueName: \"kubernetes.io/projected/2f449f82-ea65-4bcf-9cac-ffc1da4e0d14-kube-api-access-mc8qd\") pod \"ceilometer-0\" (UID: \"2f449f82-ea65-4bcf-9cac-ffc1da4e0d14\") " pod="openstack/ceilometer-0" Jan 27 19:01:20 crc kubenswrapper[4853]: I0127 19:01:20.850460 4853 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") on node \"crc\" " Jan 27 19:01:20 crc kubenswrapper[4853]: I0127 19:01:20.850502 4853 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/383a8cff-14ac-4c26-a428-302b30622b4b-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 27 19:01:20 crc kubenswrapper[4853]: I0127 19:01:20.850513 4853 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/383a8cff-14ac-4c26-a428-302b30622b4b-logs\") on node \"crc\" DevicePath \"\"" Jan 27 19:01:20 crc kubenswrapper[4853]: I0127 19:01:20.850524 4853 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/383a8cff-14ac-4c26-a428-302b30622b4b-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 19:01:20 crc kubenswrapper[4853]: I0127 19:01:20.850536 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7rpzk\" (UniqueName: \"kubernetes.io/projected/383a8cff-14ac-4c26-a428-302b30622b4b-kube-api-access-7rpzk\") on node \"crc\" DevicePath \"\"" Jan 27 19:01:20 crc kubenswrapper[4853]: I0127 19:01:20.909911 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-xgbwz"] Jan 27 19:01:20 crc kubenswrapper[4853]: I0127 19:01:20.917440 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-b463-account-create-update-4drwb"] Jan 27 19:01:20 crc kubenswrapper[4853]: I0127 19:01:20.948177 4853 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage06-crc" (UniqueName: "kubernetes.io/local-volume/local-storage06-crc") on node "crc" Jan 27 19:01:20 crc kubenswrapper[4853]: I0127 19:01:20.954165 4853 reconciler_common.go:293] "Volume detached for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") on node \"crc\" DevicePath \"\"" Jan 27 19:01:20 crc kubenswrapper[4853]: I0127 19:01:20.958909 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 27 19:01:21 crc kubenswrapper[4853]: I0127 19:01:21.030779 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/383a8cff-14ac-4c26-a428-302b30622b4b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "383a8cff-14ac-4c26-a428-302b30622b4b" (UID: "383a8cff-14ac-4c26-a428-302b30622b4b"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 19:01:21 crc kubenswrapper[4853]: I0127 19:01:21.057565 4853 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/383a8cff-14ac-4c26-a428-302b30622b4b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 19:01:21 crc kubenswrapper[4853]: I0127 19:01:21.121327 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-h4s4v" event={"ID":"15ceb016-348f-4b14-9f21-11d533ad51ee","Type":"ContainerStarted","Data":"781e6770921c04691dfc5dcee9c4ded83169edfe2b71a6aa6e7c0413eb09d83d"} Jan 27 19:01:21 crc kubenswrapper[4853]: I0127 19:01:21.121386 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-h4s4v" event={"ID":"15ceb016-348f-4b14-9f21-11d533ad51ee","Type":"ContainerStarted","Data":"5378edad81ae18293b0621cd11ec51eda8b4861b944efb49f22bd830979b47d1"} Jan 27 19:01:21 crc kubenswrapper[4853]: I0127 19:01:21.130599 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-xgbwz" event={"ID":"b989c118-b790-4364-8452-a6f3e2fa75d5","Type":"ContainerStarted","Data":"a3e37dce53f981aea214e59ef1ae18bf6ce93bab1fcbaabbfa052c3a73d02b73"} Jan 27 19:01:21 crc kubenswrapper[4853]: I0127 19:01:21.137302 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/383a8cff-14ac-4c26-a428-302b30622b4b-config-data" (OuterVolumeSpecName: "config-data") pod "383a8cff-14ac-4c26-a428-302b30622b4b" (UID: "383a8cff-14ac-4c26-a428-302b30622b4b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 19:01:21 crc kubenswrapper[4853]: I0127 19:01:21.138292 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/383a8cff-14ac-4c26-a428-302b30622b4b-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "383a8cff-14ac-4c26-a428-302b30622b4b" (UID: "383a8cff-14ac-4c26-a428-302b30622b4b"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 19:01:21 crc kubenswrapper[4853]: I0127 19:01:21.159380 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-db-create-h4s4v" podStartSLOduration=7.159357392 podStartE2EDuration="7.159357392s" podCreationTimestamp="2026-01-27 19:01:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 19:01:21.143114641 +0000 UTC m=+1123.605657524" watchObservedRunningTime="2026-01-27 19:01:21.159357392 +0000 UTC m=+1123.621900275" Jan 27 19:01:21 crc kubenswrapper[4853]: I0127 19:01:21.166462 4853 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/383a8cff-14ac-4c26-a428-302b30622b4b-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 19:01:21 crc kubenswrapper[4853]: I0127 19:01:21.166496 4853 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/383a8cff-14ac-4c26-a428-302b30622b4b-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 27 19:01:21 crc kubenswrapper[4853]: I0127 19:01:21.184719 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"1447491c-5e5b-412d-9cbd-b7bdc9a87797","Type":"ContainerDied","Data":"dc6ed92a4b05405666e0620e223a526c65a4c3711a55ee5a1c1405958007fd56"} Jan 27 19:01:21 crc kubenswrapper[4853]: I0127 19:01:21.184776 4853 scope.go:117] "RemoveContainer" containerID="f6bdaca7c5c803233f482527f300d355910196b8505907012cce304a5b68d07b" Jan 27 19:01:21 crc kubenswrapper[4853]: I0127 19:01:21.184922 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 27 19:01:21 crc kubenswrapper[4853]: I0127 19:01:21.203171 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-69cd5c4bb8-2fh98" Jan 27 19:01:21 crc kubenswrapper[4853]: I0127 19:01:21.203627 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-69cd5c4bb8-2fh98" event={"ID":"394c98c3-7f2f-49d7-8f1a-c860eeaffb7e","Type":"ContainerDied","Data":"ba7d764f9c81bdf6b5c692b9f74ffee844d4bd5782a91a6d3db4fa24a1556773"} Jan 27 19:01:21 crc kubenswrapper[4853]: I0127 19:01:21.216055 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"383a8cff-14ac-4c26-a428-302b30622b4b","Type":"ContainerDied","Data":"555f621cc34e0d6a8e0dd7fbcc487b80a691655790a0d273a8588d1355d9f9fe"} Jan 27 19:01:21 crc kubenswrapper[4853]: I0127 19:01:21.216200 4853 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 27 19:01:21 crc kubenswrapper[4853]: I0127 19:01:21.239651 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-27f9-account-create-update-wt8ts" event={"ID":"6edf7e9f-ea9b-4044-8aaf-ad9de78dcab6","Type":"ContainerStarted","Data":"c043a0af451f860cf0ca77c0823672d681071eeb24750ce3c3f2567c6b848037"} Jan 27 19:01:21 crc kubenswrapper[4853]: I0127 19:01:21.243761 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-b463-account-create-update-4drwb" event={"ID":"cbee157b-ef42-498f-97a0-e8159be13fef","Type":"ContainerStarted","Data":"cf4dfe114dbded0ca80f209d96d20231b1b9b324e4b1d91a9517d919a0ce0a54"} Jan 27 19:01:21 crc kubenswrapper[4853]: I0127 19:01:21.246213 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-cj6jf" event={"ID":"63cae008-ec5c-4e56-907b-84e3dfa274e2","Type":"ContainerStarted","Data":"a98bd289b32697a16dcaded12814feb0a0d8d55feea0b3374af72e06573f88d1"} Jan 27 19:01:21 crc kubenswrapper[4853]: I0127 19:01:21.246258 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-cj6jf" event={"ID":"63cae008-ec5c-4e56-907b-84e3dfa274e2","Type":"ContainerStarted","Data":"0ec0f8d073b3ebbf4edf914f9b12dacfd3ba4fac9b016a8bf9bef168cfa6072e"} Jan 27 19:01:21 crc kubenswrapper[4853]: I0127 19:01:21.266258 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-4157-account-create-update-nqmcn" event={"ID":"1169617c-cfd9-438b-ac93-a636384abe7c","Type":"ContainerStarted","Data":"859b82117b3e0c722a80c5823aaa9a5e09579b4f7d3db5fa9a9e3ef29e92e982"} Jan 27 19:01:21 crc kubenswrapper[4853]: I0127 19:01:21.266323 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-4157-account-create-update-nqmcn" event={"ID":"1169617c-cfd9-438b-ac93-a636384abe7c","Type":"ContainerStarted","Data":"7d2b1db2f6c1f405d2d8f68b709ea9e8b3ae57c1a0ba4957652e1338c13a7b33"} Jan 27 19:01:21 crc kubenswrapper[4853]: I0127 19:01:21.279250 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-27f9-account-create-update-wt8ts" podStartSLOduration=6.279219808 podStartE2EDuration="6.279219808s" podCreationTimestamp="2026-01-27 19:01:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 19:01:21.255844384 +0000 UTC m=+1123.718387287" watchObservedRunningTime="2026-01-27 19:01:21.279219808 +0000 UTC m=+1123.741762691" Jan 27 19:01:21 crc kubenswrapper[4853]: I0127 19:01:21.302422 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-db-create-cj6jf" podStartSLOduration=7.302394137 podStartE2EDuration="7.302394137s" podCreationTimestamp="2026-01-27 19:01:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 19:01:21.273257189 +0000 UTC m=+1123.735800072" watchObservedRunningTime="2026-01-27 19:01:21.302394137 +0000 UTC m=+1123.764937010" Jan 27 19:01:21 crc kubenswrapper[4853]: I0127 19:01:21.329013 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-4157-account-create-update-nqmcn" podStartSLOduration=7.328989743 podStartE2EDuration="7.328989743s" podCreationTimestamp="2026-01-27 19:01:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 19:01:21.287943246 +0000 UTC m=+1123.750486149" watchObservedRunningTime="2026-01-27 19:01:21.328989743 +0000 UTC m=+1123.791532626" Jan 27 19:01:21 crc kubenswrapper[4853]: I0127 19:01:21.399522 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 27 19:01:21 crc kubenswrapper[4853]: I0127 19:01:21.601000 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 27 19:01:21 crc kubenswrapper[4853]: I0127 19:01:21.613214 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 27 19:01:21 crc kubenswrapper[4853]: I0127 19:01:21.621395 4853 scope.go:117] "RemoveContainer" containerID="8a4cb28c4954303191103cf0fab536e482dde2773b7cbbd9a5dd9878f87ab734" Jan 27 19:01:21 crc kubenswrapper[4853]: I0127 19:01:21.638752 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 27 19:01:21 crc kubenswrapper[4853]: I0127 19:01:21.653864 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 27 19:01:21 crc kubenswrapper[4853]: E0127 19:01:21.654365 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="383a8cff-14ac-4c26-a428-302b30622b4b" containerName="glance-log" Jan 27 19:01:21 crc kubenswrapper[4853]: I0127 19:01:21.654388 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="383a8cff-14ac-4c26-a428-302b30622b4b" containerName="glance-log" Jan 27 19:01:21 crc kubenswrapper[4853]: E0127 19:01:21.654428 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="383a8cff-14ac-4c26-a428-302b30622b4b" containerName="glance-httpd" Jan 27 19:01:21 crc kubenswrapper[4853]: I0127 19:01:21.654437 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="383a8cff-14ac-4c26-a428-302b30622b4b" containerName="glance-httpd" Jan 27 19:01:21 crc kubenswrapper[4853]: I0127 19:01:21.654655 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="383a8cff-14ac-4c26-a428-302b30622b4b" containerName="glance-httpd" Jan 27 19:01:21 crc kubenswrapper[4853]: I0127 19:01:21.654700 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="383a8cff-14ac-4c26-a428-302b30622b4b" containerName="glance-log" Jan 27 19:01:21 crc kubenswrapper[4853]: I0127 19:01:21.655834 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 27 19:01:21 crc kubenswrapper[4853]: I0127 19:01:21.660883 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Jan 27 19:01:21 crc kubenswrapper[4853]: I0127 19:01:21.661041 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-84jgd" Jan 27 19:01:21 crc kubenswrapper[4853]: I0127 19:01:21.661225 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Jan 27 19:01:21 crc kubenswrapper[4853]: I0127 19:01:21.661248 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Jan 27 19:01:21 crc kubenswrapper[4853]: I0127 19:01:21.670900 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 27 19:01:21 crc kubenswrapper[4853]: I0127 19:01:21.691353 4853 scope.go:117] "RemoveContainer" containerID="e12c891a4c8bae88241f3b7296bf145e29b7ea5714735b855be25c79bc090b80" Jan 27 19:01:21 crc kubenswrapper[4853]: I0127 19:01:21.716582 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 27 19:01:21 crc kubenswrapper[4853]: I0127 19:01:21.748088 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 27 19:01:21 crc kubenswrapper[4853]: I0127 19:01:21.775229 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-69cd5c4bb8-2fh98"] Jan 27 19:01:21 crc kubenswrapper[4853]: I0127 19:01:21.779686 4853 scope.go:117] "RemoveContainer" containerID="f4050b05966aa33c4c1aec74cb852056999cd917161316ddc643ab32e9bf7a45" Jan 27 19:01:21 crc kubenswrapper[4853]: I0127 19:01:21.787250 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-69cd5c4bb8-2fh98"] Jan 27 19:01:21 crc kubenswrapper[4853]: I0127 19:01:21.797732 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Jan 27 19:01:21 crc kubenswrapper[4853]: I0127 19:01:21.799483 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 27 19:01:21 crc kubenswrapper[4853]: I0127 19:01:21.799942 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/40f9ab82-cf2e-4b60-bcfc-a41137752ef7-logs\") pod \"glance-default-internal-api-0\" (UID: \"40f9ab82-cf2e-4b60-bcfc-a41137752ef7\") " pod="openstack/glance-default-internal-api-0" Jan 27 19:01:21 crc kubenswrapper[4853]: I0127 19:01:21.800013 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qbwnf\" (UniqueName: \"kubernetes.io/projected/40f9ab82-cf2e-4b60-bcfc-a41137752ef7-kube-api-access-qbwnf\") pod \"glance-default-internal-api-0\" (UID: \"40f9ab82-cf2e-4b60-bcfc-a41137752ef7\") " pod="openstack/glance-default-internal-api-0" Jan 27 19:01:21 crc kubenswrapper[4853]: I0127 19:01:21.800183 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/40f9ab82-cf2e-4b60-bcfc-a41137752ef7-config-data\") pod \"glance-default-internal-api-0\" (UID: \"40f9ab82-cf2e-4b60-bcfc-a41137752ef7\") " pod="openstack/glance-default-internal-api-0" Jan 27 19:01:21 crc kubenswrapper[4853]: I0127 19:01:21.800332 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/40f9ab82-cf2e-4b60-bcfc-a41137752ef7-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"40f9ab82-cf2e-4b60-bcfc-a41137752ef7\") " pod="openstack/glance-default-internal-api-0" Jan 27 19:01:21 crc kubenswrapper[4853]: I0127 19:01:21.800388 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-internal-api-0\" (UID: \"40f9ab82-cf2e-4b60-bcfc-a41137752ef7\") " pod="openstack/glance-default-internal-api-0" Jan 27 19:01:21 crc kubenswrapper[4853]: I0127 19:01:21.800411 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/40f9ab82-cf2e-4b60-bcfc-a41137752ef7-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"40f9ab82-cf2e-4b60-bcfc-a41137752ef7\") " pod="openstack/glance-default-internal-api-0" Jan 27 19:01:21 crc kubenswrapper[4853]: I0127 19:01:21.800468 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/40f9ab82-cf2e-4b60-bcfc-a41137752ef7-scripts\") pod \"glance-default-internal-api-0\" (UID: \"40f9ab82-cf2e-4b60-bcfc-a41137752ef7\") " pod="openstack/glance-default-internal-api-0" Jan 27 19:01:21 crc kubenswrapper[4853]: I0127 19:01:21.800550 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/40f9ab82-cf2e-4b60-bcfc-a41137752ef7-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"40f9ab82-cf2e-4b60-bcfc-a41137752ef7\") " pod="openstack/glance-default-internal-api-0" Jan 27 19:01:21 crc kubenswrapper[4853]: I0127 19:01:21.804299 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Jan 27 19:01:21 crc kubenswrapper[4853]: I0127 19:01:21.804410 4853 
reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Jan 27 19:01:21 crc kubenswrapper[4853]: I0127 19:01:21.816450 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 27 19:01:21 crc kubenswrapper[4853]: I0127 19:01:21.838587 4853 scope.go:117] "RemoveContainer" containerID="341c1171cf83f1fd83885a8f5b5ac25bcd09d3f0b8fc03543e699f18bf59ad2e" Jan 27 19:01:21 crc kubenswrapper[4853]: I0127 19:01:21.886348 4853 scope.go:117] "RemoveContainer" containerID="9f9c738e714425b0c66056ff127e63e1640ef6cefb1e2f509c89157dea2e92d3" Jan 27 19:01:21 crc kubenswrapper[4853]: I0127 19:01:21.902567 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/40f9ab82-cf2e-4b60-bcfc-a41137752ef7-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"40f9ab82-cf2e-4b60-bcfc-a41137752ef7\") " pod="openstack/glance-default-internal-api-0" Jan 27 19:01:21 crc kubenswrapper[4853]: I0127 19:01:21.902887 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1ea8e822-c78e-4fc2-8afe-09c0ef609d47-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"1ea8e822-c78e-4fc2-8afe-09c0ef609d47\") " pod="openstack/glance-default-external-api-0" Jan 27 19:01:21 crc kubenswrapper[4853]: I0127 19:01:21.903035 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1ea8e822-c78e-4fc2-8afe-09c0ef609d47-scripts\") pod \"glance-default-external-api-0\" (UID: \"1ea8e822-c78e-4fc2-8afe-09c0ef609d47\") " pod="openstack/glance-default-external-api-0" Jan 27 19:01:21 crc kubenswrapper[4853]: I0127 19:01:21.903196 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bwl5q\" (UniqueName: \"kubernetes.io/projected/1ea8e822-c78e-4fc2-8afe-09c0ef609d47-kube-api-access-bwl5q\") pod \"glance-default-external-api-0\" (UID: \"1ea8e822-c78e-4fc2-8afe-09c0ef609d47\") " pod="openstack/glance-default-external-api-0" Jan 27 19:01:21 crc kubenswrapper[4853]: I0127 19:01:21.903362 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/40f9ab82-cf2e-4b60-bcfc-a41137752ef7-logs\") pod \"glance-default-internal-api-0\" (UID: \"40f9ab82-cf2e-4b60-bcfc-a41137752ef7\") " pod="openstack/glance-default-internal-api-0" Jan 27 19:01:21 crc kubenswrapper[4853]: I0127 19:01:21.903516 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qbwnf\" (UniqueName: \"kubernetes.io/projected/40f9ab82-cf2e-4b60-bcfc-a41137752ef7-kube-api-access-qbwnf\") pod \"glance-default-internal-api-0\" (UID: \"40f9ab82-cf2e-4b60-bcfc-a41137752ef7\") " pod="openstack/glance-default-internal-api-0" Jan 27 19:01:21 crc kubenswrapper[4853]: I0127 19:01:21.903676 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1ea8e822-c78e-4fc2-8afe-09c0ef609d47-logs\") pod \"glance-default-external-api-0\" (UID: \"1ea8e822-c78e-4fc2-8afe-09c0ef609d47\") " pod="openstack/glance-default-external-api-0" Jan 27 19:01:21 crc kubenswrapper[4853]: I0127 19:01:21.907668 4853 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/1ea8e822-c78e-4fc2-8afe-09c0ef609d47-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"1ea8e822-c78e-4fc2-8afe-09c0ef609d47\") " pod="openstack/glance-default-external-api-0" Jan 27 19:01:21 crc kubenswrapper[4853]: I0127 19:01:21.907815 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/40f9ab82-cf2e-4b60-bcfc-a41137752ef7-config-data\") pod \"glance-default-internal-api-0\" (UID: \"40f9ab82-cf2e-4b60-bcfc-a41137752ef7\") " pod="openstack/glance-default-internal-api-0" Jan 27 19:01:21 crc kubenswrapper[4853]: I0127 19:01:21.907960 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1ea8e822-c78e-4fc2-8afe-09c0ef609d47-config-data\") pod \"glance-default-external-api-0\" (UID: \"1ea8e822-c78e-4fc2-8afe-09c0ef609d47\") " pod="openstack/glance-default-external-api-0" Jan 27 19:01:21 crc kubenswrapper[4853]: I0127 19:01:21.908070 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1ea8e822-c78e-4fc2-8afe-09c0ef609d47-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"1ea8e822-c78e-4fc2-8afe-09c0ef609d47\") " pod="openstack/glance-default-external-api-0" Jan 27 19:01:21 crc kubenswrapper[4853]: I0127 19:01:21.908212 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/40f9ab82-cf2e-4b60-bcfc-a41137752ef7-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"40f9ab82-cf2e-4b60-bcfc-a41137752ef7\") " pod="openstack/glance-default-internal-api-0" Jan 27 19:01:21 crc kubenswrapper[4853]: I0127 19:01:21.908340 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-internal-api-0\" (UID: \"40f9ab82-cf2e-4b60-bcfc-a41137752ef7\") " pod="openstack/glance-default-internal-api-0" Jan 27 19:01:21 crc kubenswrapper[4853]: I0127 19:01:21.908453 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/40f9ab82-cf2e-4b60-bcfc-a41137752ef7-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"40f9ab82-cf2e-4b60-bcfc-a41137752ef7\") " pod="openstack/glance-default-internal-api-0" Jan 27 19:01:21 crc kubenswrapper[4853]: I0127 19:01:21.908585 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/40f9ab82-cf2e-4b60-bcfc-a41137752ef7-scripts\") pod \"glance-default-internal-api-0\" (UID: \"40f9ab82-cf2e-4b60-bcfc-a41137752ef7\") " pod="openstack/glance-default-internal-api-0" Jan 27 19:01:21 crc kubenswrapper[4853]: I0127 19:01:21.908745 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-external-api-0\" (UID: \"1ea8e822-c78e-4fc2-8afe-09c0ef609d47\") " pod="openstack/glance-default-external-api-0" Jan 27 19:01:21 crc kubenswrapper[4853]: I0127 19:01:21.904022 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" 
(UniqueName: \"kubernetes.io/empty-dir/40f9ab82-cf2e-4b60-bcfc-a41137752ef7-logs\") pod \"glance-default-internal-api-0\" (UID: \"40f9ab82-cf2e-4b60-bcfc-a41137752ef7\") " pod="openstack/glance-default-internal-api-0" Jan 27 19:01:21 crc kubenswrapper[4853]: I0127 19:01:21.912020 4853 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-internal-api-0\" (UID: \"40f9ab82-cf2e-4b60-bcfc-a41137752ef7\") device mount path \"/mnt/openstack/pv01\"" pod="openstack/glance-default-internal-api-0" Jan 27 19:01:21 crc kubenswrapper[4853]: I0127 19:01:21.912776 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/40f9ab82-cf2e-4b60-bcfc-a41137752ef7-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"40f9ab82-cf2e-4b60-bcfc-a41137752ef7\") " pod="openstack/glance-default-internal-api-0" Jan 27 19:01:21 crc kubenswrapper[4853]: I0127 19:01:21.913594 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/40f9ab82-cf2e-4b60-bcfc-a41137752ef7-config-data\") pod \"glance-default-internal-api-0\" (UID: \"40f9ab82-cf2e-4b60-bcfc-a41137752ef7\") " pod="openstack/glance-default-internal-api-0" Jan 27 19:01:21 crc kubenswrapper[4853]: I0127 19:01:21.915208 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/40f9ab82-cf2e-4b60-bcfc-a41137752ef7-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"40f9ab82-cf2e-4b60-bcfc-a41137752ef7\") " pod="openstack/glance-default-internal-api-0" Jan 27 19:01:21 crc kubenswrapper[4853]: I0127 19:01:21.919839 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/40f9ab82-cf2e-4b60-bcfc-a41137752ef7-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"40f9ab82-cf2e-4b60-bcfc-a41137752ef7\") " pod="openstack/glance-default-internal-api-0" Jan 27 19:01:21 crc kubenswrapper[4853]: I0127 19:01:21.922811 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/40f9ab82-cf2e-4b60-bcfc-a41137752ef7-scripts\") pod \"glance-default-internal-api-0\" (UID: \"40f9ab82-cf2e-4b60-bcfc-a41137752ef7\") " pod="openstack/glance-default-internal-api-0" Jan 27 19:01:21 crc kubenswrapper[4853]: I0127 19:01:21.930665 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qbwnf\" (UniqueName: \"kubernetes.io/projected/40f9ab82-cf2e-4b60-bcfc-a41137752ef7-kube-api-access-qbwnf\") pod \"glance-default-internal-api-0\" (UID: \"40f9ab82-cf2e-4b60-bcfc-a41137752ef7\") " pod="openstack/glance-default-internal-api-0" Jan 27 19:01:21 crc kubenswrapper[4853]: I0127 19:01:21.959594 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-internal-api-0\" (UID: \"40f9ab82-cf2e-4b60-bcfc-a41137752ef7\") " pod="openstack/glance-default-internal-api-0" Jan 27 19:01:21 crc kubenswrapper[4853]: I0127 19:01:21.998203 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 27 19:01:22 crc kubenswrapper[4853]: I0127 19:01:22.011294 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1ea8e822-c78e-4fc2-8afe-09c0ef609d47-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"1ea8e822-c78e-4fc2-8afe-09c0ef609d47\") " pod="openstack/glance-default-external-api-0" Jan 27 19:01:22 crc kubenswrapper[4853]: I0127 19:01:22.012005 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-external-api-0\" (UID: \"1ea8e822-c78e-4fc2-8afe-09c0ef609d47\") " pod="openstack/glance-default-external-api-0" Jan 27 19:01:22 crc kubenswrapper[4853]: I0127 19:01:22.012084 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1ea8e822-c78e-4fc2-8afe-09c0ef609d47-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"1ea8e822-c78e-4fc2-8afe-09c0ef609d47\") " pod="openstack/glance-default-external-api-0" Jan 27 19:01:22 crc kubenswrapper[4853]: I0127 19:01:22.012160 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1ea8e822-c78e-4fc2-8afe-09c0ef609d47-scripts\") pod \"glance-default-external-api-0\" (UID: \"1ea8e822-c78e-4fc2-8afe-09c0ef609d47\") " pod="openstack/glance-default-external-api-0" Jan 27 19:01:22 crc kubenswrapper[4853]: I0127 19:01:22.012498 4853 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-external-api-0\" (UID: \"1ea8e822-c78e-4fc2-8afe-09c0ef609d47\") device mount path \"/mnt/openstack/pv06\"" pod="openstack/glance-default-external-api-0" Jan 27 19:01:22 crc kubenswrapper[4853]: I0127 19:01:22.012809 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bwl5q\" (UniqueName: \"kubernetes.io/projected/1ea8e822-c78e-4fc2-8afe-09c0ef609d47-kube-api-access-bwl5q\") pod \"glance-default-external-api-0\" (UID: \"1ea8e822-c78e-4fc2-8afe-09c0ef609d47\") " pod="openstack/glance-default-external-api-0" Jan 27 19:01:22 crc kubenswrapper[4853]: I0127 19:01:22.013059 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1ea8e822-c78e-4fc2-8afe-09c0ef609d47-logs\") pod \"glance-default-external-api-0\" (UID: \"1ea8e822-c78e-4fc2-8afe-09c0ef609d47\") " pod="openstack/glance-default-external-api-0" Jan 27 19:01:22 crc kubenswrapper[4853]: I0127 19:01:22.013104 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/1ea8e822-c78e-4fc2-8afe-09c0ef609d47-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"1ea8e822-c78e-4fc2-8afe-09c0ef609d47\") " pod="openstack/glance-default-external-api-0" Jan 27 19:01:22 crc kubenswrapper[4853]: I0127 19:01:22.013198 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1ea8e822-c78e-4fc2-8afe-09c0ef609d47-config-data\") pod \"glance-default-external-api-0\" (UID: \"1ea8e822-c78e-4fc2-8afe-09c0ef609d47\") " pod="openstack/glance-default-external-api-0" Jan 27 
19:01:22 crc kubenswrapper[4853]: I0127 19:01:22.013490 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1ea8e822-c78e-4fc2-8afe-09c0ef609d47-logs\") pod \"glance-default-external-api-0\" (UID: \"1ea8e822-c78e-4fc2-8afe-09c0ef609d47\") " pod="openstack/glance-default-external-api-0" Jan 27 19:01:22 crc kubenswrapper[4853]: I0127 19:01:22.013767 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/1ea8e822-c78e-4fc2-8afe-09c0ef609d47-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"1ea8e822-c78e-4fc2-8afe-09c0ef609d47\") " pod="openstack/glance-default-external-api-0" Jan 27 19:01:22 crc kubenswrapper[4853]: I0127 19:01:22.016469 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1ea8e822-c78e-4fc2-8afe-09c0ef609d47-config-data\") pod \"glance-default-external-api-0\" (UID: \"1ea8e822-c78e-4fc2-8afe-09c0ef609d47\") " pod="openstack/glance-default-external-api-0" Jan 27 19:01:22 crc kubenswrapper[4853]: I0127 19:01:22.017908 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1ea8e822-c78e-4fc2-8afe-09c0ef609d47-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"1ea8e822-c78e-4fc2-8afe-09c0ef609d47\") " pod="openstack/glance-default-external-api-0" Jan 27 19:01:22 crc kubenswrapper[4853]: I0127 19:01:22.029541 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1ea8e822-c78e-4fc2-8afe-09c0ef609d47-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"1ea8e822-c78e-4fc2-8afe-09c0ef609d47\") " pod="openstack/glance-default-external-api-0" Jan 27 19:01:22 crc kubenswrapper[4853]: I0127 19:01:22.030192 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1ea8e822-c78e-4fc2-8afe-09c0ef609d47-scripts\") pod \"glance-default-external-api-0\" (UID: \"1ea8e822-c78e-4fc2-8afe-09c0ef609d47\") " pod="openstack/glance-default-external-api-0" Jan 27 19:01:22 crc kubenswrapper[4853]: I0127 19:01:22.033816 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bwl5q\" (UniqueName: \"kubernetes.io/projected/1ea8e822-c78e-4fc2-8afe-09c0ef609d47-kube-api-access-bwl5q\") pod \"glance-default-external-api-0\" (UID: \"1ea8e822-c78e-4fc2-8afe-09c0ef609d47\") " pod="openstack/glance-default-external-api-0" Jan 27 19:01:22 crc kubenswrapper[4853]: I0127 19:01:22.065503 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-external-api-0\" (UID: \"1ea8e822-c78e-4fc2-8afe-09c0ef609d47\") " pod="openstack/glance-default-external-api-0" Jan 27 19:01:22 crc kubenswrapper[4853]: I0127 19:01:22.134006 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1447491c-5e5b-412d-9cbd-b7bdc9a87797" path="/var/lib/kubelet/pods/1447491c-5e5b-412d-9cbd-b7bdc9a87797/volumes" Jan 27 19:01:22 crc kubenswrapper[4853]: I0127 19:01:22.139077 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="383a8cff-14ac-4c26-a428-302b30622b4b" path="/var/lib/kubelet/pods/383a8cff-14ac-4c26-a428-302b30622b4b/volumes" Jan 27 19:01:22 crc kubenswrapper[4853]: I0127 
19:01:22.139796 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 27 19:01:22 crc kubenswrapper[4853]: I0127 19:01:22.139951 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="394c98c3-7f2f-49d7-8f1a-c860eeaffb7e" path="/var/lib/kubelet/pods/394c98c3-7f2f-49d7-8f1a-c860eeaffb7e/volumes" Jan 27 19:01:22 crc kubenswrapper[4853]: I0127 19:01:22.143136 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a2562d28-8e26-44ed-84ec-92cd8b1fd1a4" path="/var/lib/kubelet/pods/a2562d28-8e26-44ed-84ec-92cd8b1fd1a4/volumes" Jan 27 19:01:22 crc kubenswrapper[4853]: I0127 19:01:22.287464 4853 generic.go:334] "Generic (PLEG): container finished" podID="b989c118-b790-4364-8452-a6f3e2fa75d5" containerID="7a9369de3a9d7cd8e7d1727e1e8b80da0db3ff0d5b8ee042a34f3d55e11cc52e" exitCode=0 Jan 27 19:01:22 crc kubenswrapper[4853]: I0127 19:01:22.287531 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-xgbwz" event={"ID":"b989c118-b790-4364-8452-a6f3e2fa75d5","Type":"ContainerDied","Data":"7a9369de3a9d7cd8e7d1727e1e8b80da0db3ff0d5b8ee042a34f3d55e11cc52e"} Jan 27 19:01:22 crc kubenswrapper[4853]: I0127 19:01:22.312363 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2f449f82-ea65-4bcf-9cac-ffc1da4e0d14","Type":"ContainerStarted","Data":"682a3fe48df6a2969218fb407d67951a466bfb71f7c7c6fc4c9292d08edf9e60"} Jan 27 19:01:22 crc kubenswrapper[4853]: I0127 19:01:22.335172 4853 generic.go:334] "Generic (PLEG): container finished" podID="6edf7e9f-ea9b-4044-8aaf-ad9de78dcab6" containerID="52317a64e68a16c8b047ca13cad075818d9744ad71bd88af6176cdc349b71664" exitCode=0 Jan 27 19:01:22 crc kubenswrapper[4853]: I0127 19:01:22.335240 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-27f9-account-create-update-wt8ts" event={"ID":"6edf7e9f-ea9b-4044-8aaf-ad9de78dcab6","Type":"ContainerDied","Data":"52317a64e68a16c8b047ca13cad075818d9744ad71bd88af6176cdc349b71664"} Jan 27 19:01:22 crc kubenswrapper[4853]: I0127 19:01:22.337661 4853 generic.go:334] "Generic (PLEG): container finished" podID="1169617c-cfd9-438b-ac93-a636384abe7c" containerID="859b82117b3e0c722a80c5823aaa9a5e09579b4f7d3db5fa9a9e3ef29e92e982" exitCode=0 Jan 27 19:01:22 crc kubenswrapper[4853]: I0127 19:01:22.337763 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-4157-account-create-update-nqmcn" event={"ID":"1169617c-cfd9-438b-ac93-a636384abe7c","Type":"ContainerDied","Data":"859b82117b3e0c722a80c5823aaa9a5e09579b4f7d3db5fa9a9e3ef29e92e982"} Jan 27 19:01:22 crc kubenswrapper[4853]: I0127 19:01:22.340189 4853 generic.go:334] "Generic (PLEG): container finished" podID="cbee157b-ef42-498f-97a0-e8159be13fef" containerID="9dc4ee7658a4827b9f14f6839c7045d8c86db3a483e3c4c1d0bc1896f940cf2c" exitCode=0 Jan 27 19:01:22 crc kubenswrapper[4853]: I0127 19:01:22.340237 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-b463-account-create-update-4drwb" event={"ID":"cbee157b-ef42-498f-97a0-e8159be13fef","Type":"ContainerDied","Data":"9dc4ee7658a4827b9f14f6839c7045d8c86db3a483e3c4c1d0bc1896f940cf2c"} Jan 27 19:01:22 crc kubenswrapper[4853]: I0127 19:01:22.347226 4853 generic.go:334] "Generic (PLEG): container finished" podID="15ceb016-348f-4b14-9f21-11d533ad51ee" containerID="781e6770921c04691dfc5dcee9c4ded83169edfe2b71a6aa6e7c0413eb09d83d" exitCode=0 Jan 27 19:01:22 crc kubenswrapper[4853]: I0127 
19:01:22.347318 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-h4s4v" event={"ID":"15ceb016-348f-4b14-9f21-11d533ad51ee","Type":"ContainerDied","Data":"781e6770921c04691dfc5dcee9c4ded83169edfe2b71a6aa6e7c0413eb09d83d"}
Jan 27 19:01:22 crc kubenswrapper[4853]: I0127 19:01:22.349584 4853 generic.go:334] "Generic (PLEG): container finished" podID="63cae008-ec5c-4e56-907b-84e3dfa274e2" containerID="a98bd289b32697a16dcaded12814feb0a0d8d55feea0b3374af72e06573f88d1" exitCode=0
Jan 27 19:01:22 crc kubenswrapper[4853]: I0127 19:01:22.349657 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-cj6jf" event={"ID":"63cae008-ec5c-4e56-907b-84e3dfa274e2","Type":"ContainerDied","Data":"a98bd289b32697a16dcaded12814feb0a0d8d55feea0b3374af72e06573f88d1"}
Jan 27 19:01:22 crc kubenswrapper[4853]: I0127 19:01:22.734609 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"]
Jan 27 19:01:22 crc kubenswrapper[4853]: I0127 19:01:22.871649 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"]
Jan 27 19:01:22 crc kubenswrapper[4853]: W0127 19:01:22.883636 4853 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1ea8e822_c78e_4fc2_8afe_09c0ef609d47.slice/crio-ded837518a4e1ffa893f9548aaa7a7a4378c3e1e24afdfcb34890c4aa439347d WatchSource:0}: Error finding container ded837518a4e1ffa893f9548aaa7a7a4378c3e1e24afdfcb34890c4aa439347d: Status 404 returned error can't find the container with id ded837518a4e1ffa893f9548aaa7a7a4378c3e1e24afdfcb34890c4aa439347d
Jan 27 19:01:23 crc kubenswrapper[4853]: I0127 19:01:23.362212 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2f449f82-ea65-4bcf-9cac-ffc1da4e0d14","Type":"ContainerStarted","Data":"468f23043a386aa9a010074825b1b59a2db93ac92cf402b27c32c05168dd1771"}
Jan 27 19:01:23 crc kubenswrapper[4853]: I0127 19:01:23.363922 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"40f9ab82-cf2e-4b60-bcfc-a41137752ef7","Type":"ContainerStarted","Data":"3d86ad01c20b0a7a64eed9a7c019513a74d6ae1fa547f87eb0b7988e99598b39"}
Jan 27 19:01:23 crc kubenswrapper[4853]: I0127 19:01:23.365108 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"1ea8e822-c78e-4fc2-8afe-09c0ef609d47","Type":"ContainerStarted","Data":"ded837518a4e1ffa893f9548aaa7a7a4378c3e1e24afdfcb34890c4aa439347d"}
Jan 27 19:01:23 crc kubenswrapper[4853]: I0127 19:01:23.862165 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-4157-account-create-update-nqmcn"
Jan 27 19:01:23 crc kubenswrapper[4853]: I0127 19:01:23.967293 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c5xkk\" (UniqueName: \"kubernetes.io/projected/1169617c-cfd9-438b-ac93-a636384abe7c-kube-api-access-c5xkk\") pod \"1169617c-cfd9-438b-ac93-a636384abe7c\" (UID: \"1169617c-cfd9-438b-ac93-a636384abe7c\") "
Jan 27 19:01:23 crc kubenswrapper[4853]: I0127 19:01:23.967485 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1169617c-cfd9-438b-ac93-a636384abe7c-operator-scripts\") pod \"1169617c-cfd9-438b-ac93-a636384abe7c\" (UID: \"1169617c-cfd9-438b-ac93-a636384abe7c\") "
Jan 27 19:01:23 crc kubenswrapper[4853]: I0127 19:01:23.969535 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1169617c-cfd9-438b-ac93-a636384abe7c-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "1169617c-cfd9-438b-ac93-a636384abe7c" (UID: "1169617c-cfd9-438b-ac93-a636384abe7c"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 19:01:23 crc kubenswrapper[4853]: I0127 19:01:23.976207 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1169617c-cfd9-438b-ac93-a636384abe7c-kube-api-access-c5xkk" (OuterVolumeSpecName: "kube-api-access-c5xkk") pod "1169617c-cfd9-438b-ac93-a636384abe7c" (UID: "1169617c-cfd9-438b-ac93-a636384abe7c"). InnerVolumeSpecName "kube-api-access-c5xkk". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 19:01:24 crc kubenswrapper[4853]: I0127 19:01:24.069834 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c5xkk\" (UniqueName: \"kubernetes.io/projected/1169617c-cfd9-438b-ac93-a636384abe7c-kube-api-access-c5xkk\") on node \"crc\" DevicePath \"\""
Jan 27 19:01:24 crc kubenswrapper[4853]: I0127 19:01:24.069863 4853 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1169617c-cfd9-438b-ac93-a636384abe7c-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 27 19:01:24 crc kubenswrapper[4853]: I0127 19:01:24.366640 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-b463-account-create-update-4drwb"
Jan 27 19:01:24 crc kubenswrapper[4853]: I0127 19:01:24.395258 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-27f9-account-create-update-wt8ts"
Jan 27 19:01:24 crc kubenswrapper[4853]: I0127 19:01:24.404220 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-xgbwz"
Jan 27 19:01:24 crc kubenswrapper[4853]: I0127 19:01:24.415733 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-27f9-account-create-update-wt8ts" event={"ID":"6edf7e9f-ea9b-4044-8aaf-ad9de78dcab6","Type":"ContainerDied","Data":"c043a0af451f860cf0ca77c0823672d681071eeb24750ce3c3f2567c6b848037"}
Jan 27 19:01:24 crc kubenswrapper[4853]: I0127 19:01:24.417993 4853 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c043a0af451f860cf0ca77c0823672d681071eeb24750ce3c3f2567c6b848037"
Jan 27 19:01:24 crc kubenswrapper[4853]: I0127 19:01:24.416644 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-27f9-account-create-update-wt8ts"
Jan 27 19:01:24 crc kubenswrapper[4853]: I0127 19:01:24.432574 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-4157-account-create-update-nqmcn" event={"ID":"1169617c-cfd9-438b-ac93-a636384abe7c","Type":"ContainerDied","Data":"7d2b1db2f6c1f405d2d8f68b709ea9e8b3ae57c1a0ba4957652e1338c13a7b33"}
Jan 27 19:01:24 crc kubenswrapper[4853]: I0127 19:01:24.432967 4853 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7d2b1db2f6c1f405d2d8f68b709ea9e8b3ae57c1a0ba4957652e1338c13a7b33"
Jan 27 19:01:24 crc kubenswrapper[4853]: I0127 19:01:24.433530 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-4157-account-create-update-nqmcn"
Jan 27 19:01:24 crc kubenswrapper[4853]: I0127 19:01:24.433797 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-h4s4v"
Jan 27 19:01:24 crc kubenswrapper[4853]: I0127 19:01:24.443659 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-b463-account-create-update-4drwb" event={"ID":"cbee157b-ef42-498f-97a0-e8159be13fef","Type":"ContainerDied","Data":"cf4dfe114dbded0ca80f209d96d20231b1b9b324e4b1d91a9517d919a0ce0a54"}
Jan 27 19:01:24 crc kubenswrapper[4853]: I0127 19:01:24.444060 4853 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cf4dfe114dbded0ca80f209d96d20231b1b9b324e4b1d91a9517d919a0ce0a54"
Jan 27 19:01:24 crc kubenswrapper[4853]: I0127 19:01:24.443872 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-b463-account-create-update-4drwb"
Jan 27 19:01:24 crc kubenswrapper[4853]: I0127 19:01:24.486007 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"1ea8e822-c78e-4fc2-8afe-09c0ef609d47","Type":"ContainerStarted","Data":"4686bd0e94d2d9b7ea84a1bfcb8f7a0bba3ad31e343e4933ace44238e5fb77ce"}
Jan 27 19:01:24 crc kubenswrapper[4853]: I0127 19:01:24.503340 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b989c118-b790-4364-8452-a6f3e2fa75d5-operator-scripts\") pod \"b989c118-b790-4364-8452-a6f3e2fa75d5\" (UID: \"b989c118-b790-4364-8452-a6f3e2fa75d5\") "
Jan 27 19:01:24 crc kubenswrapper[4853]: I0127 19:01:24.503378 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fvnh2\" (UniqueName: \"kubernetes.io/projected/6edf7e9f-ea9b-4044-8aaf-ad9de78dcab6-kube-api-access-fvnh2\") pod \"6edf7e9f-ea9b-4044-8aaf-ad9de78dcab6\" (UID: \"6edf7e9f-ea9b-4044-8aaf-ad9de78dcab6\") "
Jan 27 19:01:24 crc kubenswrapper[4853]: I0127 19:01:24.503445 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pgfwq\" (UniqueName: \"kubernetes.io/projected/b989c118-b790-4364-8452-a6f3e2fa75d5-kube-api-access-pgfwq\") pod \"b989c118-b790-4364-8452-a6f3e2fa75d5\" (UID: \"b989c118-b790-4364-8452-a6f3e2fa75d5\") "
Jan 27 19:01:24 crc kubenswrapper[4853]: I0127 19:01:24.503472 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6edf7e9f-ea9b-4044-8aaf-ad9de78dcab6-operator-scripts\") pod \"6edf7e9f-ea9b-4044-8aaf-ad9de78dcab6\" (UID: \"6edf7e9f-ea9b-4044-8aaf-ad9de78dcab6\") "
Jan 27 19:01:24 crc kubenswrapper[4853]: I0127 19:01:24.503551 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/15ceb016-348f-4b14-9f21-11d533ad51ee-operator-scripts\") pod \"15ceb016-348f-4b14-9f21-11d533ad51ee\" (UID: \"15ceb016-348f-4b14-9f21-11d533ad51ee\") "
Jan 27 19:01:24 crc kubenswrapper[4853]: I0127 19:01:24.503574 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vgp7j\" (UniqueName: \"kubernetes.io/projected/cbee157b-ef42-498f-97a0-e8159be13fef-kube-api-access-vgp7j\") pod \"cbee157b-ef42-498f-97a0-e8159be13fef\" (UID: \"cbee157b-ef42-498f-97a0-e8159be13fef\") "
Jan 27 19:01:24 crc kubenswrapper[4853]: I0127 19:01:24.503591 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cbee157b-ef42-498f-97a0-e8159be13fef-operator-scripts\") pod \"cbee157b-ef42-498f-97a0-e8159be13fef\" (UID: \"cbee157b-ef42-498f-97a0-e8159be13fef\") "
Jan 27 19:01:24 crc kubenswrapper[4853]: I0127 19:01:24.503642 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2xhhm\" (UniqueName: \"kubernetes.io/projected/15ceb016-348f-4b14-9f21-11d533ad51ee-kube-api-access-2xhhm\") pod \"15ceb016-348f-4b14-9f21-11d533ad51ee\" (UID: \"15ceb016-348f-4b14-9f21-11d533ad51ee\") "
Jan 27 19:01:24 crc kubenswrapper[4853]: I0127 19:01:24.505598 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-cj6jf"
Jan 27 19:01:24 crc kubenswrapper[4853]: I0127 19:01:24.505841 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6edf7e9f-ea9b-4044-8aaf-ad9de78dcab6-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "6edf7e9f-ea9b-4044-8aaf-ad9de78dcab6" (UID: "6edf7e9f-ea9b-4044-8aaf-ad9de78dcab6"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 19:01:24 crc kubenswrapper[4853]: I0127 19:01:24.506384 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b989c118-b790-4364-8452-a6f3e2fa75d5-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "b989c118-b790-4364-8452-a6f3e2fa75d5" (UID: "b989c118-b790-4364-8452-a6f3e2fa75d5"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 19:01:24 crc kubenswrapper[4853]: I0127 19:01:24.507213 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/15ceb016-348f-4b14-9f21-11d533ad51ee-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "15ceb016-348f-4b14-9f21-11d533ad51ee" (UID: "15ceb016-348f-4b14-9f21-11d533ad51ee"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 19:01:24 crc kubenswrapper[4853]: I0127 19:01:24.507555 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cbee157b-ef42-498f-97a0-e8159be13fef-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "cbee157b-ef42-498f-97a0-e8159be13fef" (UID: "cbee157b-ef42-498f-97a0-e8159be13fef"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 19:01:24 crc kubenswrapper[4853]: I0127 19:01:24.513572 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-h4s4v" event={"ID":"15ceb016-348f-4b14-9f21-11d533ad51ee","Type":"ContainerDied","Data":"5378edad81ae18293b0621cd11ec51eda8b4861b944efb49f22bd830979b47d1"}
Jan 27 19:01:24 crc kubenswrapper[4853]: I0127 19:01:24.513614 4853 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5378edad81ae18293b0621cd11ec51eda8b4861b944efb49f22bd830979b47d1"
Jan 27 19:01:24 crc kubenswrapper[4853]: I0127 19:01:24.513661 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-h4s4v"
Jan 27 19:01:24 crc kubenswrapper[4853]: I0127 19:01:24.525328 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/15ceb016-348f-4b14-9f21-11d533ad51ee-kube-api-access-2xhhm" (OuterVolumeSpecName: "kube-api-access-2xhhm") pod "15ceb016-348f-4b14-9f21-11d533ad51ee" (UID: "15ceb016-348f-4b14-9f21-11d533ad51ee"). InnerVolumeSpecName "kube-api-access-2xhhm". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 19:01:24 crc kubenswrapper[4853]: I0127 19:01:24.526766 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b989c118-b790-4364-8452-a6f3e2fa75d5-kube-api-access-pgfwq" (OuterVolumeSpecName: "kube-api-access-pgfwq") pod "b989c118-b790-4364-8452-a6f3e2fa75d5" (UID: "b989c118-b790-4364-8452-a6f3e2fa75d5"). InnerVolumeSpecName "kube-api-access-pgfwq". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 19:01:24 crc kubenswrapper[4853]: I0127 19:01:24.527720 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cbee157b-ef42-498f-97a0-e8159be13fef-kube-api-access-vgp7j" (OuterVolumeSpecName: "kube-api-access-vgp7j") pod "cbee157b-ef42-498f-97a0-e8159be13fef" (UID: "cbee157b-ef42-498f-97a0-e8159be13fef"). InnerVolumeSpecName "kube-api-access-vgp7j". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 19:01:24 crc kubenswrapper[4853]: I0127 19:01:24.530295 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6edf7e9f-ea9b-4044-8aaf-ad9de78dcab6-kube-api-access-fvnh2" (OuterVolumeSpecName: "kube-api-access-fvnh2") pod "6edf7e9f-ea9b-4044-8aaf-ad9de78dcab6" (UID: "6edf7e9f-ea9b-4044-8aaf-ad9de78dcab6"). InnerVolumeSpecName "kube-api-access-fvnh2". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 19:01:24 crc kubenswrapper[4853]: I0127 19:01:24.537294 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-cj6jf"
Jan 27 19:01:24 crc kubenswrapper[4853]: I0127 19:01:24.537442 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-cj6jf" event={"ID":"63cae008-ec5c-4e56-907b-84e3dfa274e2","Type":"ContainerDied","Data":"0ec0f8d073b3ebbf4edf914f9b12dacfd3ba4fac9b016a8bf9bef168cfa6072e"}
Jan 27 19:01:24 crc kubenswrapper[4853]: I0127 19:01:24.538344 4853 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0ec0f8d073b3ebbf4edf914f9b12dacfd3ba4fac9b016a8bf9bef168cfa6072e"
Jan 27 19:01:24 crc kubenswrapper[4853]: I0127 19:01:24.578850 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-xgbwz" event={"ID":"b989c118-b790-4364-8452-a6f3e2fa75d5","Type":"ContainerDied","Data":"a3e37dce53f981aea214e59ef1ae18bf6ce93bab1fcbaabbfa052c3a73d02b73"}
Jan 27 19:01:24 crc kubenswrapper[4853]: I0127 19:01:24.579041 4853 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a3e37dce53f981aea214e59ef1ae18bf6ce93bab1fcbaabbfa052c3a73d02b73"
Jan 27 19:01:24 crc kubenswrapper[4853]: I0127 19:01:24.579185 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-xgbwz"
Jan 27 19:01:24 crc kubenswrapper[4853]: I0127 19:01:24.585433 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2f449f82-ea65-4bcf-9cac-ffc1da4e0d14","Type":"ContainerStarted","Data":"bc9b11c3e66288595050ceed80a68ac935a2d366f6bcc681d1f66df405c5a958"}
Jan 27 19:01:24 crc kubenswrapper[4853]: I0127 19:01:24.605304 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htmq6\" (UniqueName: \"kubernetes.io/projected/63cae008-ec5c-4e56-907b-84e3dfa274e2-kube-api-access-htmq6\") pod \"63cae008-ec5c-4e56-907b-84e3dfa274e2\" (UID: \"63cae008-ec5c-4e56-907b-84e3dfa274e2\") "
Jan 27 19:01:24 crc kubenswrapper[4853]: I0127 19:01:24.605607 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/63cae008-ec5c-4e56-907b-84e3dfa274e2-operator-scripts\") pod \"63cae008-ec5c-4e56-907b-84e3dfa274e2\" (UID: \"63cae008-ec5c-4e56-907b-84e3dfa274e2\") "
Jan 27 19:01:24 crc kubenswrapper[4853]: I0127 19:01:24.607491 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pgfwq\" (UniqueName: \"kubernetes.io/projected/b989c118-b790-4364-8452-a6f3e2fa75d5-kube-api-access-pgfwq\") on node \"crc\" DevicePath \"\""
Jan 27 19:01:24 crc kubenswrapper[4853]: I0127 19:01:24.607640 4853 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6edf7e9f-ea9b-4044-8aaf-ad9de78dcab6-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 27 19:01:24 crc kubenswrapper[4853]: I0127 19:01:24.607729 4853 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/15ceb016-348f-4b14-9f21-11d533ad51ee-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 27 19:01:24 crc kubenswrapper[4853]: I0127 19:01:24.607855 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vgp7j\" (UniqueName: \"kubernetes.io/projected/cbee157b-ef42-498f-97a0-e8159be13fef-kube-api-access-vgp7j\") on node \"crc\" DevicePath \"\""
Jan 27 19:01:24 crc kubenswrapper[4853]: I0127 19:01:24.607934 4853 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cbee157b-ef42-498f-97a0-e8159be13fef-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 27 19:01:24 crc kubenswrapper[4853]: I0127 19:01:24.608009 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2xhhm\" (UniqueName: \"kubernetes.io/projected/15ceb016-348f-4b14-9f21-11d533ad51ee-kube-api-access-2xhhm\") on node \"crc\" DevicePath \"\""
Jan 27 19:01:24 crc kubenswrapper[4853]: I0127 19:01:24.608087 4853 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b989c118-b790-4364-8452-a6f3e2fa75d5-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 27 19:01:24 crc kubenswrapper[4853]: I0127 19:01:24.608192 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fvnh2\" (UniqueName: \"kubernetes.io/projected/6edf7e9f-ea9b-4044-8aaf-ad9de78dcab6-kube-api-access-fvnh2\") on node \"crc\" DevicePath \"\""
Jan 27 19:01:24 crc kubenswrapper[4853]: I0127 19:01:24.608691 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/63cae008-ec5c-4e56-907b-84e3dfa274e2-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "63cae008-ec5c-4e56-907b-84e3dfa274e2" (UID: "63cae008-ec5c-4e56-907b-84e3dfa274e2"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 19:01:24 crc kubenswrapper[4853]: I0127 19:01:24.614444 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/63cae008-ec5c-4e56-907b-84e3dfa274e2-kube-api-access-htmq6" (OuterVolumeSpecName: "kube-api-access-htmq6") pod "63cae008-ec5c-4e56-907b-84e3dfa274e2" (UID: "63cae008-ec5c-4e56-907b-84e3dfa274e2"). InnerVolumeSpecName "kube-api-access-htmq6". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 19:01:24 crc kubenswrapper[4853]: I0127 19:01:24.709918 4853 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/63cae008-ec5c-4e56-907b-84e3dfa274e2-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 27 19:01:24 crc kubenswrapper[4853]: I0127 19:01:24.709953 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htmq6\" (UniqueName: \"kubernetes.io/projected/63cae008-ec5c-4e56-907b-84e3dfa274e2-kube-api-access-htmq6\") on node \"crc\" DevicePath \"\""
Jan 27 19:01:25 crc kubenswrapper[4853]: I0127 19:01:25.601399 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2f449f82-ea65-4bcf-9cac-ffc1da4e0d14","Type":"ContainerStarted","Data":"2d9ae6f4f4536ee3f95ea368f0b44b2d4449ef680239223044b58090a3132afd"}
Jan 27 19:01:25 crc kubenswrapper[4853]: I0127 19:01:25.605112 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"40f9ab82-cf2e-4b60-bcfc-a41137752ef7","Type":"ContainerStarted","Data":"c5f76fe61f417861276b9921d70c048a9eae0d5926e8bad784ba8ff01afeadf2"}
Jan 27 19:01:25 crc kubenswrapper[4853]: I0127 19:01:25.605192 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"40f9ab82-cf2e-4b60-bcfc-a41137752ef7","Type":"ContainerStarted","Data":"87f508c6a39c93b67ae54f5f13a436b312741a8300403e9c8590aeb09a697748"}
Jan 27 19:01:25 crc kubenswrapper[4853]: I0127 19:01:25.610980 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"1ea8e822-c78e-4fc2-8afe-09c0ef609d47","Type":"ContainerStarted","Data":"41f00b179a7cbd1364af3735ca8db06d4f8abb0509a1c44d97428879037962aa"}
Jan 27 19:01:25 crc kubenswrapper[4853]: I0127 19:01:25.639899 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=4.639878578 podStartE2EDuration="4.639878578s" podCreationTimestamp="2026-01-27 19:01:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 19:01:25.63044982 +0000 UTC m=+1128.092992713" watchObservedRunningTime="2026-01-27 19:01:25.639878578 +0000 UTC m=+1128.102421461"
Jan 27 19:01:25 crc kubenswrapper[4853]: I0127 19:01:25.658928 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=4.658894518 podStartE2EDuration="4.658894518s" podCreationTimestamp="2026-01-27 19:01:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 19:01:25.652395083 +0000 UTC m=+1128.114937986" watchObservedRunningTime="2026-01-27 19:01:25.658894518 +0000 UTC m=+1128.121437401"
Jan 27 19:01:26 crc kubenswrapper[4853]: I0127 19:01:26.334827 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-c78c8d4f6-bchzm"
Jan 27 19:01:26 crc kubenswrapper[4853]: I0127 19:01:26.334907 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-c78c8d4f6-bchzm"
Jan 27 19:01:26 crc kubenswrapper[4853]: I0127 19:01:26.638084 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-69967664fb-pbqhr"
Jan 27 19:01:26 crc kubenswrapper[4853]: I0127 19:01:26.639501 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-69967664fb-pbqhr"
Jan 27 19:01:27 crc kubenswrapper[4853]: I0127 19:01:27.634917 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2f449f82-ea65-4bcf-9cac-ffc1da4e0d14","Type":"ContainerStarted","Data":"ac88410ee9bcef25958016230257314eaac3f2767b677be13b850c9e7e9199b9"}
Jan 27 19:01:27 crc kubenswrapper[4853]: I0127 19:01:27.634981 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="2f449f82-ea65-4bcf-9cac-ffc1da4e0d14" containerName="ceilometer-central-agent" containerID="cri-o://468f23043a386aa9a010074825b1b59a2db93ac92cf402b27c32c05168dd1771" gracePeriod=30
Jan 27 19:01:27 crc kubenswrapper[4853]: I0127 19:01:27.635110 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="2f449f82-ea65-4bcf-9cac-ffc1da4e0d14" containerName="proxy-httpd" containerID="cri-o://ac88410ee9bcef25958016230257314eaac3f2767b677be13b850c9e7e9199b9" gracePeriod=30
Jan 27 19:01:27 crc kubenswrapper[4853]: I0127 19:01:27.635186 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="2f449f82-ea65-4bcf-9cac-ffc1da4e0d14" containerName="sg-core" containerID="cri-o://2d9ae6f4f4536ee3f95ea368f0b44b2d4449ef680239223044b58090a3132afd" gracePeriod=30
Jan 27 19:01:27 crc kubenswrapper[4853]: I0127 19:01:27.635236 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="2f449f82-ea65-4bcf-9cac-ffc1da4e0d14" containerName="ceilometer-notification-agent" containerID="cri-o://bc9b11c3e66288595050ceed80a68ac935a2d366f6bcc681d1f66df405c5a958" gracePeriod=30
Jan 27 19:01:27 crc kubenswrapper[4853]: I0127 19:01:27.635443 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0"
Jan 27 19:01:28 crc kubenswrapper[4853]: I0127 19:01:28.398040 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 27 19:01:28 crc kubenswrapper[4853]: I0127 19:01:28.478865 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2f449f82-ea65-4bcf-9cac-ffc1da4e0d14-scripts\") pod \"2f449f82-ea65-4bcf-9cac-ffc1da4e0d14\" (UID: \"2f449f82-ea65-4bcf-9cac-ffc1da4e0d14\") "
Jan 27 19:01:28 crc kubenswrapper[4853]: I0127 19:01:28.479240 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2f449f82-ea65-4bcf-9cac-ffc1da4e0d14-log-httpd\") pod \"2f449f82-ea65-4bcf-9cac-ffc1da4e0d14\" (UID: \"2f449f82-ea65-4bcf-9cac-ffc1da4e0d14\") "
Jan 27 19:01:28 crc kubenswrapper[4853]: I0127 19:01:28.479430 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f449f82-ea65-4bcf-9cac-ffc1da4e0d14-combined-ca-bundle\") pod \"2f449f82-ea65-4bcf-9cac-ffc1da4e0d14\" (UID: \"2f449f82-ea65-4bcf-9cac-ffc1da4e0d14\") "
Jan 27 19:01:28 crc kubenswrapper[4853]: I0127 19:01:28.479631 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2f449f82-ea65-4bcf-9cac-ffc1da4e0d14-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "2f449f82-ea65-4bcf-9cac-ffc1da4e0d14" (UID: "2f449f82-ea65-4bcf-9cac-ffc1da4e0d14"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 27 19:01:28 crc kubenswrapper[4853]: I0127 19:01:28.479954 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mc8qd\" (UniqueName: \"kubernetes.io/projected/2f449f82-ea65-4bcf-9cac-ffc1da4e0d14-kube-api-access-mc8qd\") pod \"2f449f82-ea65-4bcf-9cac-ffc1da4e0d14\" (UID: \"2f449f82-ea65-4bcf-9cac-ffc1da4e0d14\") "
Jan 27 19:01:28 crc kubenswrapper[4853]: I0127 19:01:28.480518 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2f449f82-ea65-4bcf-9cac-ffc1da4e0d14-run-httpd\") pod \"2f449f82-ea65-4bcf-9cac-ffc1da4e0d14\" (UID: \"2f449f82-ea65-4bcf-9cac-ffc1da4e0d14\") "
Jan 27 19:01:28 crc kubenswrapper[4853]: I0127 19:01:28.480701 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2f449f82-ea65-4bcf-9cac-ffc1da4e0d14-config-data\") pod \"2f449f82-ea65-4bcf-9cac-ffc1da4e0d14\" (UID: \"2f449f82-ea65-4bcf-9cac-ffc1da4e0d14\") "
Jan 27 19:01:28 crc kubenswrapper[4853]: I0127 19:01:28.480949 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2f449f82-ea65-4bcf-9cac-ffc1da4e0d14-sg-core-conf-yaml\") pod \"2f449f82-ea65-4bcf-9cac-ffc1da4e0d14\" (UID: \"2f449f82-ea65-4bcf-9cac-ffc1da4e0d14\") "
Jan 27 19:01:28 crc kubenswrapper[4853]: I0127 19:01:28.481137 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2f449f82-ea65-4bcf-9cac-ffc1da4e0d14-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "2f449f82-ea65-4bcf-9cac-ffc1da4e0d14" (UID: "2f449f82-ea65-4bcf-9cac-ffc1da4e0d14"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 27 19:01:28 crc kubenswrapper[4853]: I0127 19:01:28.483336 4853 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2f449f82-ea65-4bcf-9cac-ffc1da4e0d14-log-httpd\") on node \"crc\" DevicePath \"\""
Jan 27 19:01:28 crc kubenswrapper[4853]: I0127 19:01:28.483507 4853 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2f449f82-ea65-4bcf-9cac-ffc1da4e0d14-run-httpd\") on node \"crc\" DevicePath \"\""
Jan 27 19:01:28 crc kubenswrapper[4853]: I0127 19:01:28.483858 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2f449f82-ea65-4bcf-9cac-ffc1da4e0d14-scripts" (OuterVolumeSpecName: "scripts") pod "2f449f82-ea65-4bcf-9cac-ffc1da4e0d14" (UID: "2f449f82-ea65-4bcf-9cac-ffc1da4e0d14"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 19:01:28 crc kubenswrapper[4853]: I0127 19:01:28.485254 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2f449f82-ea65-4bcf-9cac-ffc1da4e0d14-kube-api-access-mc8qd" (OuterVolumeSpecName: "kube-api-access-mc8qd") pod "2f449f82-ea65-4bcf-9cac-ffc1da4e0d14" (UID: "2f449f82-ea65-4bcf-9cac-ffc1da4e0d14"). InnerVolumeSpecName "kube-api-access-mc8qd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 19:01:28 crc kubenswrapper[4853]: I0127 19:01:28.514597 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2f449f82-ea65-4bcf-9cac-ffc1da4e0d14-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "2f449f82-ea65-4bcf-9cac-ffc1da4e0d14" (UID: "2f449f82-ea65-4bcf-9cac-ffc1da4e0d14"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 19:01:28 crc kubenswrapper[4853]: I0127 19:01:28.561523 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2f449f82-ea65-4bcf-9cac-ffc1da4e0d14-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2f449f82-ea65-4bcf-9cac-ffc1da4e0d14" (UID: "2f449f82-ea65-4bcf-9cac-ffc1da4e0d14"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 19:01:28 crc kubenswrapper[4853]: I0127 19:01:28.584980 4853 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2f449f82-ea65-4bcf-9cac-ffc1da4e0d14-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\""
Jan 27 19:01:28 crc kubenswrapper[4853]: I0127 19:01:28.585017 4853 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2f449f82-ea65-4bcf-9cac-ffc1da4e0d14-scripts\") on node \"crc\" DevicePath \"\""
Jan 27 19:01:28 crc kubenswrapper[4853]: I0127 19:01:28.585029 4853 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f449f82-ea65-4bcf-9cac-ffc1da4e0d14-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 27 19:01:28 crc kubenswrapper[4853]: I0127 19:01:28.585040 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mc8qd\" (UniqueName: \"kubernetes.io/projected/2f449f82-ea65-4bcf-9cac-ffc1da4e0d14-kube-api-access-mc8qd\") on node \"crc\" DevicePath \"\""
Jan 27 19:01:28 crc kubenswrapper[4853]: I0127 19:01:28.610693 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2f449f82-ea65-4bcf-9cac-ffc1da4e0d14-config-data" (OuterVolumeSpecName: "config-data") pod "2f449f82-ea65-4bcf-9cac-ffc1da4e0d14" (UID: "2f449f82-ea65-4bcf-9cac-ffc1da4e0d14"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 19:01:28 crc kubenswrapper[4853]: I0127 19:01:28.650695 4853 generic.go:334] "Generic (PLEG): container finished" podID="2f449f82-ea65-4bcf-9cac-ffc1da4e0d14" containerID="ac88410ee9bcef25958016230257314eaac3f2767b677be13b850c9e7e9199b9" exitCode=0
Jan 27 19:01:28 crc kubenswrapper[4853]: I0127 19:01:28.650735 4853 generic.go:334] "Generic (PLEG): container finished" podID="2f449f82-ea65-4bcf-9cac-ffc1da4e0d14" containerID="2d9ae6f4f4536ee3f95ea368f0b44b2d4449ef680239223044b58090a3132afd" exitCode=2
Jan 27 19:01:28 crc kubenswrapper[4853]: I0127 19:01:28.650767 4853 generic.go:334] "Generic (PLEG): container finished" podID="2f449f82-ea65-4bcf-9cac-ffc1da4e0d14" containerID="bc9b11c3e66288595050ceed80a68ac935a2d366f6bcc681d1f66df405c5a958" exitCode=0
Jan 27 19:01:28 crc kubenswrapper[4853]: I0127 19:01:28.650801 4853 generic.go:334] "Generic (PLEG): container finished" podID="2f449f82-ea65-4bcf-9cac-ffc1da4e0d14" containerID="468f23043a386aa9a010074825b1b59a2db93ac92cf402b27c32c05168dd1771" exitCode=0
Jan 27 19:01:28 crc kubenswrapper[4853]: I0127 19:01:28.650853 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2f449f82-ea65-4bcf-9cac-ffc1da4e0d14","Type":"ContainerDied","Data":"ac88410ee9bcef25958016230257314eaac3f2767b677be13b850c9e7e9199b9"}
Jan 27 19:01:28 crc kubenswrapper[4853]: I0127 19:01:28.650917 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2f449f82-ea65-4bcf-9cac-ffc1da4e0d14","Type":"ContainerDied","Data":"2d9ae6f4f4536ee3f95ea368f0b44b2d4449ef680239223044b58090a3132afd"}
Jan 27 19:01:28 crc kubenswrapper[4853]: I0127 19:01:28.650954 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2f449f82-ea65-4bcf-9cac-ffc1da4e0d14","Type":"ContainerDied","Data":"bc9b11c3e66288595050ceed80a68ac935a2d366f6bcc681d1f66df405c5a958"}
Jan 27 19:01:28 crc kubenswrapper[4853]: I0127 19:01:28.650970 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2f449f82-ea65-4bcf-9cac-ffc1da4e0d14","Type":"ContainerDied","Data":"468f23043a386aa9a010074825b1b59a2db93ac92cf402b27c32c05168dd1771"}
Jan 27 19:01:28 crc kubenswrapper[4853]: I0127 19:01:28.650981 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2f449f82-ea65-4bcf-9cac-ffc1da4e0d14","Type":"ContainerDied","Data":"682a3fe48df6a2969218fb407d67951a466bfb71f7c7c6fc4c9292d08edf9e60"}
Jan 27 19:01:28 crc kubenswrapper[4853]: I0127 19:01:28.651023 4853 scope.go:117] "RemoveContainer" containerID="ac88410ee9bcef25958016230257314eaac3f2767b677be13b850c9e7e9199b9"
Jan 27 19:01:28 crc kubenswrapper[4853]: I0127 19:01:28.651410 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 27 19:01:28 crc kubenswrapper[4853]: I0127 19:01:28.678729 4853 scope.go:117] "RemoveContainer" containerID="2d9ae6f4f4536ee3f95ea368f0b44b2d4449ef680239223044b58090a3132afd"
Jan 27 19:01:28 crc kubenswrapper[4853]: I0127 19:01:28.692469 4853 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2f449f82-ea65-4bcf-9cac-ffc1da4e0d14-config-data\") on node \"crc\" DevicePath \"\""
Jan 27 19:01:28 crc kubenswrapper[4853]: I0127 19:01:28.713409 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Jan 27 19:01:28 crc kubenswrapper[4853]: I0127 19:01:28.722030 4853 scope.go:117] "RemoveContainer" containerID="bc9b11c3e66288595050ceed80a68ac935a2d366f6bcc681d1f66df405c5a958"
Jan 27 19:01:28 crc kubenswrapper[4853]: I0127 19:01:28.732850 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"]
Jan 27 19:01:28 crc kubenswrapper[4853]: I0127 19:01:28.750890 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"]
Jan 27 19:01:28 crc kubenswrapper[4853]: E0127 19:01:28.751330 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="15ceb016-348f-4b14-9f21-11d533ad51ee" containerName="mariadb-database-create"
Jan 27 19:01:28 crc kubenswrapper[4853]: I0127 19:01:28.751350 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="15ceb016-348f-4b14-9f21-11d533ad51ee" containerName="mariadb-database-create"
Jan 27 19:01:28 crc kubenswrapper[4853]: E0127 19:01:28.751368 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2f449f82-ea65-4bcf-9cac-ffc1da4e0d14" containerName="ceilometer-notification-agent"
Jan 27 19:01:28 crc kubenswrapper[4853]: I0127 19:01:28.751374 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="2f449f82-ea65-4bcf-9cac-ffc1da4e0d14" containerName="ceilometer-notification-agent"
Jan 27 19:01:28 crc kubenswrapper[4853]: E0127 19:01:28.751385 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1169617c-cfd9-438b-ac93-a636384abe7c" containerName="mariadb-account-create-update"
Jan 27 19:01:28 crc kubenswrapper[4853]: I0127 19:01:28.751392 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="1169617c-cfd9-438b-ac93-a636384abe7c" containerName="mariadb-account-create-update"
Jan 27 19:01:28 crc kubenswrapper[4853]: E0127 19:01:28.751402 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2f449f82-ea65-4bcf-9cac-ffc1da4e0d14" containerName="sg-core"
Jan 27 19:01:28 crc kubenswrapper[4853]: I0127 19:01:28.751408 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="2f449f82-ea65-4bcf-9cac-ffc1da4e0d14" containerName="sg-core"
Jan 27 19:01:28 crc kubenswrapper[4853]: E0127 19:01:28.751419 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2f449f82-ea65-4bcf-9cac-ffc1da4e0d14" containerName="proxy-httpd"
Jan 27 19:01:28 crc kubenswrapper[4853]: I0127 19:01:28.751425 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="2f449f82-ea65-4bcf-9cac-ffc1da4e0d14" containerName="proxy-httpd"
Jan 27 19:01:28 crc kubenswrapper[4853]: E0127 19:01:28.751434 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2f449f82-ea65-4bcf-9cac-ffc1da4e0d14" containerName="ceilometer-central-agent"
Jan 27 19:01:28 crc kubenswrapper[4853]: I0127 19:01:28.751441 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="2f449f82-ea65-4bcf-9cac-ffc1da4e0d14" containerName="ceilometer-central-agent"
Jan 27 19:01:28 crc kubenswrapper[4853]: E0127 19:01:28.751458 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="63cae008-ec5c-4e56-907b-84e3dfa274e2" containerName="mariadb-database-create"
Jan 27 19:01:28 crc kubenswrapper[4853]: I0127 19:01:28.751464 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="63cae008-ec5c-4e56-907b-84e3dfa274e2" containerName="mariadb-database-create"
Jan 27 19:01:28 crc kubenswrapper[4853]: E0127 19:01:28.751474 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6edf7e9f-ea9b-4044-8aaf-ad9de78dcab6" containerName="mariadb-account-create-update"
Jan 27 19:01:28 crc kubenswrapper[4853]: I0127 19:01:28.751482 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="6edf7e9f-ea9b-4044-8aaf-ad9de78dcab6" containerName="mariadb-account-create-update"
Jan 27 19:01:28 crc kubenswrapper[4853]: E0127 19:01:28.751493 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b989c118-b790-4364-8452-a6f3e2fa75d5" containerName="mariadb-database-create"
Jan 27 19:01:28 crc kubenswrapper[4853]: I0127 19:01:28.751499 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="b989c118-b790-4364-8452-a6f3e2fa75d5" containerName="mariadb-database-create"
Jan 27 19:01:28 crc kubenswrapper[4853]: E0127 19:01:28.751511 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cbee157b-ef42-498f-97a0-e8159be13fef" containerName="mariadb-account-create-update"
Jan 27 19:01:28 crc kubenswrapper[4853]: I0127 19:01:28.751517 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="cbee157b-ef42-498f-97a0-e8159be13fef" containerName="mariadb-account-create-update"
Jan 27 19:01:28 crc kubenswrapper[4853]: I0127 19:01:28.751689 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="b989c118-b790-4364-8452-a6f3e2fa75d5" containerName="mariadb-database-create"
Jan 27 19:01:28 crc kubenswrapper[4853]: I0127 19:01:28.751702 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="15ceb016-348f-4b14-9f21-11d533ad51ee" containerName="mariadb-database-create"
Jan 27 19:01:28 crc kubenswrapper[4853]: I0127 19:01:28.751719 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="1169617c-cfd9-438b-ac93-a636384abe7c" containerName="mariadb-account-create-update"
Jan 27 19:01:28 crc kubenswrapper[4853]: I0127 19:01:28.751733 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="6edf7e9f-ea9b-4044-8aaf-ad9de78dcab6" containerName="mariadb-account-create-update"
Jan 27 19:01:28 crc kubenswrapper[4853]: I0127 19:01:28.751739 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="2f449f82-ea65-4bcf-9cac-ffc1da4e0d14" containerName="proxy-httpd"
Jan 27 19:01:28 crc kubenswrapper[4853]: I0127 19:01:28.751747 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="63cae008-ec5c-4e56-907b-84e3dfa274e2" containerName="mariadb-database-create"
Jan 27 19:01:28 crc kubenswrapper[4853]: I0127 19:01:28.751754 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="2f449f82-ea65-4bcf-9cac-ffc1da4e0d14" containerName="sg-core"
Jan 27 19:01:28 crc kubenswrapper[4853]: I0127 19:01:28.751765 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="2f449f82-ea65-4bcf-9cac-ffc1da4e0d14" containerName="ceilometer-central-agent"
Jan 27 19:01:28 crc kubenswrapper[4853]: I0127 19:01:28.751775 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="cbee157b-ef42-498f-97a0-e8159be13fef" containerName="mariadb-account-create-update"
Jan 27 19:01:28 crc kubenswrapper[4853]: I0127 19:01:28.751781 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="2f449f82-ea65-4bcf-9cac-ffc1da4e0d14" containerName="ceilometer-notification-agent"
Jan 27 19:01:28 crc kubenswrapper[4853]: I0127 19:01:28.753834 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 27 19:01:28 crc kubenswrapper[4853]: I0127 19:01:28.755066 4853 scope.go:117] "RemoveContainer" containerID="468f23043a386aa9a010074825b1b59a2db93ac92cf402b27c32c05168dd1771"
Jan 27 19:01:28 crc kubenswrapper[4853]: I0127 19:01:28.759043 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Jan 27 19:01:28 crc kubenswrapper[4853]: I0127 19:01:28.762449 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data"
Jan 27 19:01:28 crc kubenswrapper[4853]: I0127 19:01:28.762657 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts"
Jan 27 19:01:28 crc kubenswrapper[4853]: I0127 19:01:28.799339 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/31201c94-a603-4d86-a177-e7524cf37b05-run-httpd\") pod \"ceilometer-0\" (UID: \"31201c94-a603-4d86-a177-e7524cf37b05\") " pod="openstack/ceilometer-0"
Jan 27 19:01:28 crc kubenswrapper[4853]: I0127 19:01:28.799553 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/31201c94-a603-4d86-a177-e7524cf37b05-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"31201c94-a603-4d86-a177-e7524cf37b05\") " pod="openstack/ceilometer-0"
Jan 27 19:01:28 crc kubenswrapper[4853]: I0127 19:01:28.799712 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/31201c94-a603-4d86-a177-e7524cf37b05-config-data\") pod \"ceilometer-0\" (UID: \"31201c94-a603-4d86-a177-e7524cf37b05\") " pod="openstack/ceilometer-0"
Jan 27 19:01:28 crc kubenswrapper[4853]: I0127 19:01:28.800042 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/31201c94-a603-4d86-a177-e7524cf37b05-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"31201c94-a603-4d86-a177-e7524cf37b05\") " pod="openstack/ceilometer-0"
Jan 27 19:01:28 crc kubenswrapper[4853]: I0127 19:01:28.800250 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/31201c94-a603-4d86-a177-e7524cf37b05-log-httpd\") pod \"ceilometer-0\" (UID: \"31201c94-a603-4d86-a177-e7524cf37b05\") " pod="openstack/ceilometer-0"
volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/31201c94-a603-4d86-a177-e7524cf37b05-log-httpd\") pod \"ceilometer-0\" (UID: \"31201c94-a603-4d86-a177-e7524cf37b05\") " pod="openstack/ceilometer-0" Jan 27 19:01:28 crc kubenswrapper[4853]: I0127 19:01:28.800388 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/31201c94-a603-4d86-a177-e7524cf37b05-scripts\") pod \"ceilometer-0\" (UID: \"31201c94-a603-4d86-a177-e7524cf37b05\") " pod="openstack/ceilometer-0" Jan 27 19:01:28 crc kubenswrapper[4853]: I0127 19:01:28.800464 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6nx9w\" (UniqueName: \"kubernetes.io/projected/31201c94-a603-4d86-a177-e7524cf37b05-kube-api-access-6nx9w\") pod \"ceilometer-0\" (UID: \"31201c94-a603-4d86-a177-e7524cf37b05\") " pod="openstack/ceilometer-0" Jan 27 19:01:28 crc kubenswrapper[4853]: I0127 19:01:28.835283 4853 scope.go:117] "RemoveContainer" containerID="ac88410ee9bcef25958016230257314eaac3f2767b677be13b850c9e7e9199b9" Jan 27 19:01:28 crc kubenswrapper[4853]: E0127 19:01:28.839255 4853 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ac88410ee9bcef25958016230257314eaac3f2767b677be13b850c9e7e9199b9\": container with ID starting with ac88410ee9bcef25958016230257314eaac3f2767b677be13b850c9e7e9199b9 not found: ID does not exist" containerID="ac88410ee9bcef25958016230257314eaac3f2767b677be13b850c9e7e9199b9" Jan 27 19:01:28 crc kubenswrapper[4853]: I0127 19:01:28.839296 4853 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ac88410ee9bcef25958016230257314eaac3f2767b677be13b850c9e7e9199b9"} err="failed to get container status \"ac88410ee9bcef25958016230257314eaac3f2767b677be13b850c9e7e9199b9\": rpc error: code = NotFound desc = could not find container \"ac88410ee9bcef25958016230257314eaac3f2767b677be13b850c9e7e9199b9\": container with ID starting with ac88410ee9bcef25958016230257314eaac3f2767b677be13b850c9e7e9199b9 not found: ID does not exist" Jan 27 19:01:28 crc kubenswrapper[4853]: I0127 19:01:28.839322 4853 scope.go:117] "RemoveContainer" containerID="2d9ae6f4f4536ee3f95ea368f0b44b2d4449ef680239223044b58090a3132afd" Jan 27 19:01:28 crc kubenswrapper[4853]: E0127 19:01:28.850310 4853 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2d9ae6f4f4536ee3f95ea368f0b44b2d4449ef680239223044b58090a3132afd\": container with ID starting with 2d9ae6f4f4536ee3f95ea368f0b44b2d4449ef680239223044b58090a3132afd not found: ID does not exist" containerID="2d9ae6f4f4536ee3f95ea368f0b44b2d4449ef680239223044b58090a3132afd" Jan 27 19:01:28 crc kubenswrapper[4853]: I0127 19:01:28.850351 4853 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2d9ae6f4f4536ee3f95ea368f0b44b2d4449ef680239223044b58090a3132afd"} err="failed to get container status \"2d9ae6f4f4536ee3f95ea368f0b44b2d4449ef680239223044b58090a3132afd\": rpc error: code = NotFound desc = could not find container \"2d9ae6f4f4536ee3f95ea368f0b44b2d4449ef680239223044b58090a3132afd\": container with ID starting with 2d9ae6f4f4536ee3f95ea368f0b44b2d4449ef680239223044b58090a3132afd not found: ID does not exist" Jan 27 19:01:28 crc kubenswrapper[4853]: I0127 19:01:28.850383 4853 scope.go:117] "RemoveContainer" 
containerID="bc9b11c3e66288595050ceed80a68ac935a2d366f6bcc681d1f66df405c5a958" Jan 27 19:01:28 crc kubenswrapper[4853]: E0127 19:01:28.854262 4853 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bc9b11c3e66288595050ceed80a68ac935a2d366f6bcc681d1f66df405c5a958\": container with ID starting with bc9b11c3e66288595050ceed80a68ac935a2d366f6bcc681d1f66df405c5a958 not found: ID does not exist" containerID="bc9b11c3e66288595050ceed80a68ac935a2d366f6bcc681d1f66df405c5a958" Jan 27 19:01:28 crc kubenswrapper[4853]: I0127 19:01:28.854286 4853 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bc9b11c3e66288595050ceed80a68ac935a2d366f6bcc681d1f66df405c5a958"} err="failed to get container status \"bc9b11c3e66288595050ceed80a68ac935a2d366f6bcc681d1f66df405c5a958\": rpc error: code = NotFound desc = could not find container \"bc9b11c3e66288595050ceed80a68ac935a2d366f6bcc681d1f66df405c5a958\": container with ID starting with bc9b11c3e66288595050ceed80a68ac935a2d366f6bcc681d1f66df405c5a958 not found: ID does not exist" Jan 27 19:01:28 crc kubenswrapper[4853]: I0127 19:01:28.854299 4853 scope.go:117] "RemoveContainer" containerID="468f23043a386aa9a010074825b1b59a2db93ac92cf402b27c32c05168dd1771" Jan 27 19:01:28 crc kubenswrapper[4853]: E0127 19:01:28.854879 4853 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"468f23043a386aa9a010074825b1b59a2db93ac92cf402b27c32c05168dd1771\": container with ID starting with 468f23043a386aa9a010074825b1b59a2db93ac92cf402b27c32c05168dd1771 not found: ID does not exist" containerID="468f23043a386aa9a010074825b1b59a2db93ac92cf402b27c32c05168dd1771" Jan 27 19:01:28 crc kubenswrapper[4853]: I0127 19:01:28.854898 4853 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"468f23043a386aa9a010074825b1b59a2db93ac92cf402b27c32c05168dd1771"} err="failed to get container status \"468f23043a386aa9a010074825b1b59a2db93ac92cf402b27c32c05168dd1771\": rpc error: code = NotFound desc = could not find container \"468f23043a386aa9a010074825b1b59a2db93ac92cf402b27c32c05168dd1771\": container with ID starting with 468f23043a386aa9a010074825b1b59a2db93ac92cf402b27c32c05168dd1771 not found: ID does not exist" Jan 27 19:01:28 crc kubenswrapper[4853]: I0127 19:01:28.854919 4853 scope.go:117] "RemoveContainer" containerID="ac88410ee9bcef25958016230257314eaac3f2767b677be13b850c9e7e9199b9" Jan 27 19:01:28 crc kubenswrapper[4853]: I0127 19:01:28.855538 4853 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ac88410ee9bcef25958016230257314eaac3f2767b677be13b850c9e7e9199b9"} err="failed to get container status \"ac88410ee9bcef25958016230257314eaac3f2767b677be13b850c9e7e9199b9\": rpc error: code = NotFound desc = could not find container \"ac88410ee9bcef25958016230257314eaac3f2767b677be13b850c9e7e9199b9\": container with ID starting with ac88410ee9bcef25958016230257314eaac3f2767b677be13b850c9e7e9199b9 not found: ID does not exist" Jan 27 19:01:28 crc kubenswrapper[4853]: I0127 19:01:28.855594 4853 scope.go:117] "RemoveContainer" containerID="2d9ae6f4f4536ee3f95ea368f0b44b2d4449ef680239223044b58090a3132afd" Jan 27 19:01:28 crc kubenswrapper[4853]: I0127 19:01:28.856950 4853 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"2d9ae6f4f4536ee3f95ea368f0b44b2d4449ef680239223044b58090a3132afd"} err="failed to get container status \"2d9ae6f4f4536ee3f95ea368f0b44b2d4449ef680239223044b58090a3132afd\": rpc error: code = NotFound desc = could not find container \"2d9ae6f4f4536ee3f95ea368f0b44b2d4449ef680239223044b58090a3132afd\": container with ID starting with 2d9ae6f4f4536ee3f95ea368f0b44b2d4449ef680239223044b58090a3132afd not found: ID does not exist" Jan 27 19:01:28 crc kubenswrapper[4853]: I0127 19:01:28.856984 4853 scope.go:117] "RemoveContainer" containerID="bc9b11c3e66288595050ceed80a68ac935a2d366f6bcc681d1f66df405c5a958" Jan 27 19:01:28 crc kubenswrapper[4853]: I0127 19:01:28.857968 4853 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bc9b11c3e66288595050ceed80a68ac935a2d366f6bcc681d1f66df405c5a958"} err="failed to get container status \"bc9b11c3e66288595050ceed80a68ac935a2d366f6bcc681d1f66df405c5a958\": rpc error: code = NotFound desc = could not find container \"bc9b11c3e66288595050ceed80a68ac935a2d366f6bcc681d1f66df405c5a958\": container with ID starting with bc9b11c3e66288595050ceed80a68ac935a2d366f6bcc681d1f66df405c5a958 not found: ID does not exist" Jan 27 19:01:28 crc kubenswrapper[4853]: I0127 19:01:28.857993 4853 scope.go:117] "RemoveContainer" containerID="468f23043a386aa9a010074825b1b59a2db93ac92cf402b27c32c05168dd1771" Jan 27 19:01:28 crc kubenswrapper[4853]: I0127 19:01:28.858914 4853 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"468f23043a386aa9a010074825b1b59a2db93ac92cf402b27c32c05168dd1771"} err="failed to get container status \"468f23043a386aa9a010074825b1b59a2db93ac92cf402b27c32c05168dd1771\": rpc error: code = NotFound desc = could not find container \"468f23043a386aa9a010074825b1b59a2db93ac92cf402b27c32c05168dd1771\": container with ID starting with 468f23043a386aa9a010074825b1b59a2db93ac92cf402b27c32c05168dd1771 not found: ID does not exist" Jan 27 19:01:28 crc kubenswrapper[4853]: I0127 19:01:28.858945 4853 scope.go:117] "RemoveContainer" containerID="ac88410ee9bcef25958016230257314eaac3f2767b677be13b850c9e7e9199b9" Jan 27 19:01:28 crc kubenswrapper[4853]: I0127 19:01:28.859984 4853 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ac88410ee9bcef25958016230257314eaac3f2767b677be13b850c9e7e9199b9"} err="failed to get container status \"ac88410ee9bcef25958016230257314eaac3f2767b677be13b850c9e7e9199b9\": rpc error: code = NotFound desc = could not find container \"ac88410ee9bcef25958016230257314eaac3f2767b677be13b850c9e7e9199b9\": container with ID starting with ac88410ee9bcef25958016230257314eaac3f2767b677be13b850c9e7e9199b9 not found: ID does not exist" Jan 27 19:01:28 crc kubenswrapper[4853]: I0127 19:01:28.860005 4853 scope.go:117] "RemoveContainer" containerID="2d9ae6f4f4536ee3f95ea368f0b44b2d4449ef680239223044b58090a3132afd" Jan 27 19:01:28 crc kubenswrapper[4853]: I0127 19:01:28.860504 4853 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2d9ae6f4f4536ee3f95ea368f0b44b2d4449ef680239223044b58090a3132afd"} err="failed to get container status \"2d9ae6f4f4536ee3f95ea368f0b44b2d4449ef680239223044b58090a3132afd\": rpc error: code = NotFound desc = could not find container \"2d9ae6f4f4536ee3f95ea368f0b44b2d4449ef680239223044b58090a3132afd\": container with ID starting with 2d9ae6f4f4536ee3f95ea368f0b44b2d4449ef680239223044b58090a3132afd not found: ID does not exist" Jan 
27 19:01:28 crc kubenswrapper[4853]: I0127 19:01:28.860525 4853 scope.go:117] "RemoveContainer" containerID="bc9b11c3e66288595050ceed80a68ac935a2d366f6bcc681d1f66df405c5a958" Jan 27 19:01:28 crc kubenswrapper[4853]: I0127 19:01:28.861060 4853 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bc9b11c3e66288595050ceed80a68ac935a2d366f6bcc681d1f66df405c5a958"} err="failed to get container status \"bc9b11c3e66288595050ceed80a68ac935a2d366f6bcc681d1f66df405c5a958\": rpc error: code = NotFound desc = could not find container \"bc9b11c3e66288595050ceed80a68ac935a2d366f6bcc681d1f66df405c5a958\": container with ID starting with bc9b11c3e66288595050ceed80a68ac935a2d366f6bcc681d1f66df405c5a958 not found: ID does not exist" Jan 27 19:01:28 crc kubenswrapper[4853]: I0127 19:01:28.861092 4853 scope.go:117] "RemoveContainer" containerID="468f23043a386aa9a010074825b1b59a2db93ac92cf402b27c32c05168dd1771" Jan 27 19:01:28 crc kubenswrapper[4853]: I0127 19:01:28.861556 4853 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"468f23043a386aa9a010074825b1b59a2db93ac92cf402b27c32c05168dd1771"} err="failed to get container status \"468f23043a386aa9a010074825b1b59a2db93ac92cf402b27c32c05168dd1771\": rpc error: code = NotFound desc = could not find container \"468f23043a386aa9a010074825b1b59a2db93ac92cf402b27c32c05168dd1771\": container with ID starting with 468f23043a386aa9a010074825b1b59a2db93ac92cf402b27c32c05168dd1771 not found: ID does not exist" Jan 27 19:01:28 crc kubenswrapper[4853]: I0127 19:01:28.861585 4853 scope.go:117] "RemoveContainer" containerID="ac88410ee9bcef25958016230257314eaac3f2767b677be13b850c9e7e9199b9" Jan 27 19:01:28 crc kubenswrapper[4853]: I0127 19:01:28.862312 4853 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ac88410ee9bcef25958016230257314eaac3f2767b677be13b850c9e7e9199b9"} err="failed to get container status \"ac88410ee9bcef25958016230257314eaac3f2767b677be13b850c9e7e9199b9\": rpc error: code = NotFound desc = could not find container \"ac88410ee9bcef25958016230257314eaac3f2767b677be13b850c9e7e9199b9\": container with ID starting with ac88410ee9bcef25958016230257314eaac3f2767b677be13b850c9e7e9199b9 not found: ID does not exist" Jan 27 19:01:28 crc kubenswrapper[4853]: I0127 19:01:28.862344 4853 scope.go:117] "RemoveContainer" containerID="2d9ae6f4f4536ee3f95ea368f0b44b2d4449ef680239223044b58090a3132afd" Jan 27 19:01:28 crc kubenswrapper[4853]: I0127 19:01:28.869497 4853 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2d9ae6f4f4536ee3f95ea368f0b44b2d4449ef680239223044b58090a3132afd"} err="failed to get container status \"2d9ae6f4f4536ee3f95ea368f0b44b2d4449ef680239223044b58090a3132afd\": rpc error: code = NotFound desc = could not find container \"2d9ae6f4f4536ee3f95ea368f0b44b2d4449ef680239223044b58090a3132afd\": container with ID starting with 2d9ae6f4f4536ee3f95ea368f0b44b2d4449ef680239223044b58090a3132afd not found: ID does not exist" Jan 27 19:01:28 crc kubenswrapper[4853]: I0127 19:01:28.869544 4853 scope.go:117] "RemoveContainer" containerID="bc9b11c3e66288595050ceed80a68ac935a2d366f6bcc681d1f66df405c5a958" Jan 27 19:01:28 crc kubenswrapper[4853]: I0127 19:01:28.876139 4853 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bc9b11c3e66288595050ceed80a68ac935a2d366f6bcc681d1f66df405c5a958"} err="failed to get container status 
\"bc9b11c3e66288595050ceed80a68ac935a2d366f6bcc681d1f66df405c5a958\": rpc error: code = NotFound desc = could not find container \"bc9b11c3e66288595050ceed80a68ac935a2d366f6bcc681d1f66df405c5a958\": container with ID starting with bc9b11c3e66288595050ceed80a68ac935a2d366f6bcc681d1f66df405c5a958 not found: ID does not exist" Jan 27 19:01:28 crc kubenswrapper[4853]: I0127 19:01:28.876203 4853 scope.go:117] "RemoveContainer" containerID="468f23043a386aa9a010074825b1b59a2db93ac92cf402b27c32c05168dd1771" Jan 27 19:01:28 crc kubenswrapper[4853]: I0127 19:01:28.878942 4853 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"468f23043a386aa9a010074825b1b59a2db93ac92cf402b27c32c05168dd1771"} err="failed to get container status \"468f23043a386aa9a010074825b1b59a2db93ac92cf402b27c32c05168dd1771\": rpc error: code = NotFound desc = could not find container \"468f23043a386aa9a010074825b1b59a2db93ac92cf402b27c32c05168dd1771\": container with ID starting with 468f23043a386aa9a010074825b1b59a2db93ac92cf402b27c32c05168dd1771 not found: ID does not exist" Jan 27 19:01:28 crc kubenswrapper[4853]: I0127 19:01:28.902026 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/31201c94-a603-4d86-a177-e7524cf37b05-run-httpd\") pod \"ceilometer-0\" (UID: \"31201c94-a603-4d86-a177-e7524cf37b05\") " pod="openstack/ceilometer-0" Jan 27 19:01:28 crc kubenswrapper[4853]: I0127 19:01:28.902084 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/31201c94-a603-4d86-a177-e7524cf37b05-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"31201c94-a603-4d86-a177-e7524cf37b05\") " pod="openstack/ceilometer-0" Jan 27 19:01:28 crc kubenswrapper[4853]: I0127 19:01:28.902134 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/31201c94-a603-4d86-a177-e7524cf37b05-config-data\") pod \"ceilometer-0\" (UID: \"31201c94-a603-4d86-a177-e7524cf37b05\") " pod="openstack/ceilometer-0" Jan 27 19:01:28 crc kubenswrapper[4853]: I0127 19:01:28.902351 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/31201c94-a603-4d86-a177-e7524cf37b05-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"31201c94-a603-4d86-a177-e7524cf37b05\") " pod="openstack/ceilometer-0" Jan 27 19:01:28 crc kubenswrapper[4853]: I0127 19:01:28.902957 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/31201c94-a603-4d86-a177-e7524cf37b05-log-httpd\") pod \"ceilometer-0\" (UID: \"31201c94-a603-4d86-a177-e7524cf37b05\") " pod="openstack/ceilometer-0" Jan 27 19:01:28 crc kubenswrapper[4853]: I0127 19:01:28.903004 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/31201c94-a603-4d86-a177-e7524cf37b05-scripts\") pod \"ceilometer-0\" (UID: \"31201c94-a603-4d86-a177-e7524cf37b05\") " pod="openstack/ceilometer-0" Jan 27 19:01:28 crc kubenswrapper[4853]: I0127 19:01:28.903027 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6nx9w\" (UniqueName: \"kubernetes.io/projected/31201c94-a603-4d86-a177-e7524cf37b05-kube-api-access-6nx9w\") pod \"ceilometer-0\" (UID: \"31201c94-a603-4d86-a177-e7524cf37b05\") " 
pod="openstack/ceilometer-0" Jan 27 19:01:28 crc kubenswrapper[4853]: I0127 19:01:28.903611 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/31201c94-a603-4d86-a177-e7524cf37b05-log-httpd\") pod \"ceilometer-0\" (UID: \"31201c94-a603-4d86-a177-e7524cf37b05\") " pod="openstack/ceilometer-0" Jan 27 19:01:28 crc kubenswrapper[4853]: I0127 19:01:28.907220 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/31201c94-a603-4d86-a177-e7524cf37b05-run-httpd\") pod \"ceilometer-0\" (UID: \"31201c94-a603-4d86-a177-e7524cf37b05\") " pod="openstack/ceilometer-0" Jan 27 19:01:28 crc kubenswrapper[4853]: I0127 19:01:28.908084 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/31201c94-a603-4d86-a177-e7524cf37b05-config-data\") pod \"ceilometer-0\" (UID: \"31201c94-a603-4d86-a177-e7524cf37b05\") " pod="openstack/ceilometer-0" Jan 27 19:01:28 crc kubenswrapper[4853]: I0127 19:01:28.908465 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/31201c94-a603-4d86-a177-e7524cf37b05-scripts\") pod \"ceilometer-0\" (UID: \"31201c94-a603-4d86-a177-e7524cf37b05\") " pod="openstack/ceilometer-0" Jan 27 19:01:28 crc kubenswrapper[4853]: I0127 19:01:28.909042 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/31201c94-a603-4d86-a177-e7524cf37b05-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"31201c94-a603-4d86-a177-e7524cf37b05\") " pod="openstack/ceilometer-0" Jan 27 19:01:28 crc kubenswrapper[4853]: I0127 19:01:28.910211 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/31201c94-a603-4d86-a177-e7524cf37b05-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"31201c94-a603-4d86-a177-e7524cf37b05\") " pod="openstack/ceilometer-0" Jan 27 19:01:28 crc kubenswrapper[4853]: I0127 19:01:28.919311 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6nx9w\" (UniqueName: \"kubernetes.io/projected/31201c94-a603-4d86-a177-e7524cf37b05-kube-api-access-6nx9w\") pod \"ceilometer-0\" (UID: \"31201c94-a603-4d86-a177-e7524cf37b05\") " pod="openstack/ceilometer-0" Jan 27 19:01:29 crc kubenswrapper[4853]: I0127 19:01:29.123842 4853 util.go:30] "No sandbox for pod can be found. 
Jan 27 19:01:29 crc kubenswrapper[4853]: I0127 19:01:29.596936 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Jan 27 19:01:29 crc kubenswrapper[4853]: W0127 19:01:29.601264 4853 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod31201c94_a603_4d86_a177_e7524cf37b05.slice/crio-f3e27140c317a2fae03c05546d80b4a03d460edced2ebacb72ad081290715a74 WatchSource:0}: Error finding container f3e27140c317a2fae03c05546d80b4a03d460edced2ebacb72ad081290715a74: Status 404 returned error can't find the container with id f3e27140c317a2fae03c05546d80b4a03d460edced2ebacb72ad081290715a74
Jan 27 19:01:29 crc kubenswrapper[4853]: I0127 19:01:29.661170 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"31201c94-a603-4d86-a177-e7524cf37b05","Type":"ContainerStarted","Data":"f3e27140c317a2fae03c05546d80b4a03d460edced2ebacb72ad081290715a74"}
Jan 27 19:01:30 crc kubenswrapper[4853]: I0127 19:01:30.121919 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2f449f82-ea65-4bcf-9cac-ffc1da4e0d14" path="/var/lib/kubelet/pods/2f449f82-ea65-4bcf-9cac-ffc1da4e0d14/volumes"
Jan 27 19:01:30 crc kubenswrapper[4853]: I0127 19:01:30.325022 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-db-sync-qsnwz"]
Jan 27 19:01:30 crc kubenswrapper[4853]: I0127 19:01:30.327099 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-qsnwz"
Jan 27 19:01:30 crc kubenswrapper[4853]: I0127 19:01:30.328843 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-z5jqv"
Jan 27 19:01:30 crc kubenswrapper[4853]: I0127 19:01:30.329262 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-scripts"
Jan 27 19:01:30 crc kubenswrapper[4853]: I0127 19:01:30.329755 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data"
Jan 27 19:01:30 crc kubenswrapper[4853]: I0127 19:01:30.342693 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-qsnwz"]
Jan 27 19:01:30 crc kubenswrapper[4853]: I0127 19:01:30.436561 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xbffh\" (UniqueName: \"kubernetes.io/projected/5e6fa082-0473-46ce-815c-bee7d4d2903a-kube-api-access-xbffh\") pod \"nova-cell0-conductor-db-sync-qsnwz\" (UID: \"5e6fa082-0473-46ce-815c-bee7d4d2903a\") " pod="openstack/nova-cell0-conductor-db-sync-qsnwz"
Jan 27 19:01:30 crc kubenswrapper[4853]: I0127 19:01:30.436638 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5e6fa082-0473-46ce-815c-bee7d4d2903a-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-qsnwz\" (UID: \"5e6fa082-0473-46ce-815c-bee7d4d2903a\") " pod="openstack/nova-cell0-conductor-db-sync-qsnwz"
Jan 27 19:01:30 crc kubenswrapper[4853]: I0127 19:01:30.436770 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5e6fa082-0473-46ce-815c-bee7d4d2903a-config-data\") pod \"nova-cell0-conductor-db-sync-qsnwz\" (UID: \"5e6fa082-0473-46ce-815c-bee7d4d2903a\") " pod="openstack/nova-cell0-conductor-db-sync-qsnwz"
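The ContainerStarted entries above are emitted by the PLEG (pod lifecycle event generator), which periodically relists containers from the runtime and diffs the snapshot against the previous one. A rough sketch of that diffing, using plain maps instead of CRI types:

package main

import "fmt"

// relist diffs two runtime snapshots (container ID -> state) and emits
// PLEG-style lifecycle events, loosely mirroring the
// "SyncLoop (PLEG): event for pod" entries above.
func relist(prev, curr map[string]string) []string {
	var events []string
	for id, state := range curr {
		if prev[id] != state && state == "running" {
			events = append(events, "ContainerStarted "+id)
		}
	}
	for id, state := range prev {
		if state == "running" && curr[id] != "running" {
			events = append(events, "ContainerDied "+id)
		}
	}
	return events
}

func main() {
	prev := map[string]string{}
	curr := map[string]string{"f3e27140": "running"}
	for _, e := range relist(prev, curr) {
		fmt.Println(e)
	}
}

The 404 watch-event warning just before the first ContainerStarted is a harmless race of the same kind: cadvisor sees the new cgroup before the runtime can report the container.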
\"5e6fa082-0473-46ce-815c-bee7d4d2903a\") " pod="openstack/nova-cell0-conductor-db-sync-qsnwz" Jan 27 19:01:30 crc kubenswrapper[4853]: I0127 19:01:30.436809 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5e6fa082-0473-46ce-815c-bee7d4d2903a-scripts\") pod \"nova-cell0-conductor-db-sync-qsnwz\" (UID: \"5e6fa082-0473-46ce-815c-bee7d4d2903a\") " pod="openstack/nova-cell0-conductor-db-sync-qsnwz" Jan 27 19:01:30 crc kubenswrapper[4853]: I0127 19:01:30.538201 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5e6fa082-0473-46ce-815c-bee7d4d2903a-config-data\") pod \"nova-cell0-conductor-db-sync-qsnwz\" (UID: \"5e6fa082-0473-46ce-815c-bee7d4d2903a\") " pod="openstack/nova-cell0-conductor-db-sync-qsnwz" Jan 27 19:01:30 crc kubenswrapper[4853]: I0127 19:01:30.538518 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5e6fa082-0473-46ce-815c-bee7d4d2903a-scripts\") pod \"nova-cell0-conductor-db-sync-qsnwz\" (UID: \"5e6fa082-0473-46ce-815c-bee7d4d2903a\") " pod="openstack/nova-cell0-conductor-db-sync-qsnwz" Jan 27 19:01:30 crc kubenswrapper[4853]: I0127 19:01:30.538699 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xbffh\" (UniqueName: \"kubernetes.io/projected/5e6fa082-0473-46ce-815c-bee7d4d2903a-kube-api-access-xbffh\") pod \"nova-cell0-conductor-db-sync-qsnwz\" (UID: \"5e6fa082-0473-46ce-815c-bee7d4d2903a\") " pod="openstack/nova-cell0-conductor-db-sync-qsnwz" Jan 27 19:01:30 crc kubenswrapper[4853]: I0127 19:01:30.538737 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5e6fa082-0473-46ce-815c-bee7d4d2903a-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-qsnwz\" (UID: \"5e6fa082-0473-46ce-815c-bee7d4d2903a\") " pod="openstack/nova-cell0-conductor-db-sync-qsnwz" Jan 27 19:01:30 crc kubenswrapper[4853]: I0127 19:01:30.543711 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5e6fa082-0473-46ce-815c-bee7d4d2903a-scripts\") pod \"nova-cell0-conductor-db-sync-qsnwz\" (UID: \"5e6fa082-0473-46ce-815c-bee7d4d2903a\") " pod="openstack/nova-cell0-conductor-db-sync-qsnwz" Jan 27 19:01:30 crc kubenswrapper[4853]: I0127 19:01:30.544191 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5e6fa082-0473-46ce-815c-bee7d4d2903a-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-qsnwz\" (UID: \"5e6fa082-0473-46ce-815c-bee7d4d2903a\") " pod="openstack/nova-cell0-conductor-db-sync-qsnwz" Jan 27 19:01:30 crc kubenswrapper[4853]: I0127 19:01:30.561944 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xbffh\" (UniqueName: \"kubernetes.io/projected/5e6fa082-0473-46ce-815c-bee7d4d2903a-kube-api-access-xbffh\") pod \"nova-cell0-conductor-db-sync-qsnwz\" (UID: \"5e6fa082-0473-46ce-815c-bee7d4d2903a\") " pod="openstack/nova-cell0-conductor-db-sync-qsnwz" Jan 27 19:01:30 crc kubenswrapper[4853]: I0127 19:01:30.562103 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5e6fa082-0473-46ce-815c-bee7d4d2903a-config-data\") pod 
\"nova-cell0-conductor-db-sync-qsnwz\" (UID: \"5e6fa082-0473-46ce-815c-bee7d4d2903a\") " pod="openstack/nova-cell0-conductor-db-sync-qsnwz" Jan 27 19:01:30 crc kubenswrapper[4853]: I0127 19:01:30.672969 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"31201c94-a603-4d86-a177-e7524cf37b05","Type":"ContainerStarted","Data":"6eec76394bfb989df4c8189f006d17864cd67ec031066a5abe444da602275307"} Jan 27 19:01:30 crc kubenswrapper[4853]: I0127 19:01:30.677531 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-qsnwz" Jan 27 19:01:31 crc kubenswrapper[4853]: I0127 19:01:31.159250 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-qsnwz"] Jan 27 19:01:31 crc kubenswrapper[4853]: W0127 19:01:31.164172 4853 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5e6fa082_0473_46ce_815c_bee7d4d2903a.slice/crio-1448fd0086b2e3bfa9369e46b78635e5064a74d7dff95e99ccf16365dd8a76af WatchSource:0}: Error finding container 1448fd0086b2e3bfa9369e46b78635e5064a74d7dff95e99ccf16365dd8a76af: Status 404 returned error can't find the container with id 1448fd0086b2e3bfa9369e46b78635e5064a74d7dff95e99ccf16365dd8a76af Jan 27 19:01:31 crc kubenswrapper[4853]: I0127 19:01:31.693658 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-qsnwz" event={"ID":"5e6fa082-0473-46ce-815c-bee7d4d2903a","Type":"ContainerStarted","Data":"1448fd0086b2e3bfa9369e46b78635e5064a74d7dff95e99ccf16365dd8a76af"} Jan 27 19:01:31 crc kubenswrapper[4853]: I0127 19:01:31.697371 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"31201c94-a603-4d86-a177-e7524cf37b05","Type":"ContainerStarted","Data":"c74069e59fff740f5349f7ec24c16e36d43101aada542edd18a20cdc88700896"} Jan 27 19:01:31 crc kubenswrapper[4853]: I0127 19:01:31.999923 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 27 19:01:32 crc kubenswrapper[4853]: I0127 19:01:31.999995 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 27 19:01:32 crc kubenswrapper[4853]: I0127 19:01:32.041727 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 27 19:01:32 crc kubenswrapper[4853]: I0127 19:01:32.055331 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 27 19:01:32 crc kubenswrapper[4853]: I0127 19:01:32.141375 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 27 19:01:32 crc kubenswrapper[4853]: I0127 19:01:32.141704 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 27 19:01:32 crc kubenswrapper[4853]: I0127 19:01:32.182454 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 27 19:01:32 crc kubenswrapper[4853]: I0127 19:01:32.198654 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 27 19:01:32 crc kubenswrapper[4853]: I0127 19:01:32.710216 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/ceilometer-0" event={"ID":"31201c94-a603-4d86-a177-e7524cf37b05","Type":"ContainerStarted","Data":"a2bf3e497f768a387b37098bbc99b8a7c77426a3dc8b0d2541b8e10014a5481b"} Jan 27 19:01:32 crc kubenswrapper[4853]: I0127 19:01:32.710884 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 27 19:01:32 crc kubenswrapper[4853]: I0127 19:01:32.711260 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 27 19:01:32 crc kubenswrapper[4853]: I0127 19:01:32.711305 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 27 19:01:32 crc kubenswrapper[4853]: I0127 19:01:32.711317 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 27 19:01:33 crc kubenswrapper[4853]: I0127 19:01:33.734569 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"31201c94-a603-4d86-a177-e7524cf37b05","Type":"ContainerStarted","Data":"5f7cc35085a14f6d95813fe56d110166e1921afd59b8df5d90d9c14233af152c"} Jan 27 19:01:33 crc kubenswrapper[4853]: I0127 19:01:33.735440 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 27 19:01:33 crc kubenswrapper[4853]: I0127 19:01:33.780324 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.275766367 podStartE2EDuration="5.780303278s" podCreationTimestamp="2026-01-27 19:01:28 +0000 UTC" firstStartedPulling="2026-01-27 19:01:29.605019097 +0000 UTC m=+1132.067561980" lastFinishedPulling="2026-01-27 19:01:33.109556008 +0000 UTC m=+1135.572098891" observedRunningTime="2026-01-27 19:01:33.76843021 +0000 UTC m=+1136.230973093" watchObservedRunningTime="2026-01-27 19:01:33.780303278 +0000 UTC m=+1136.242846161" Jan 27 19:01:34 crc kubenswrapper[4853]: I0127 19:01:34.935088 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 27 19:01:34 crc kubenswrapper[4853]: I0127 19:01:34.936304 4853 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 27 19:01:35 crc kubenswrapper[4853]: I0127 19:01:35.088084 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 27 19:01:35 crc kubenswrapper[4853]: I0127 19:01:35.291492 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 27 19:01:35 crc kubenswrapper[4853]: I0127 19:01:35.291629 4853 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 27 19:01:35 crc kubenswrapper[4853]: I0127 19:01:35.541171 4853 patch_prober.go:28] interesting pod/machine-config-daemon-6gqj2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 19:01:35 crc kubenswrapper[4853]: I0127 19:01:35.541234 4853 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6gqj2" podUID="b8a89b1e-bef8-4cb7-930c-480d3125778c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 
Jan 27 19:01:36 crc kubenswrapper[4853]: I0127 19:01:36.338756 4853 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-c78c8d4f6-bchzm" podUID="28f114cd-daca-4c71-9ecd-64b8008ddbef" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.148:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.148:8443: connect: connection refused"
Jan 27 19:01:36 crc kubenswrapper[4853]: I0127 19:01:36.639326 4853 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-69967664fb-pbqhr" podUID="66d621f7-387b-470d-8e42-bebbfada3bbc" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.149:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.149:8443: connect: connection refused"
Jan 27 19:01:38 crc kubenswrapper[4853]: I0127 19:01:38.332724 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Jan 27 19:01:38 crc kubenswrapper[4853]: I0127 19:01:38.332989 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="31201c94-a603-4d86-a177-e7524cf37b05" containerName="ceilometer-central-agent" containerID="cri-o://6eec76394bfb989df4c8189f006d17864cd67ec031066a5abe444da602275307" gracePeriod=30
Jan 27 19:01:38 crc kubenswrapper[4853]: I0127 19:01:38.333111 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="31201c94-a603-4d86-a177-e7524cf37b05" containerName="proxy-httpd" containerID="cri-o://5f7cc35085a14f6d95813fe56d110166e1921afd59b8df5d90d9c14233af152c" gracePeriod=30
Jan 27 19:01:38 crc kubenswrapper[4853]: I0127 19:01:38.333335 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="31201c94-a603-4d86-a177-e7524cf37b05" containerName="sg-core" containerID="cri-o://a2bf3e497f768a387b37098bbc99b8a7c77426a3dc8b0d2541b8e10014a5481b" gracePeriod=30
Jan 27 19:01:38 crc kubenswrapper[4853]: I0127 19:01:38.333387 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="31201c94-a603-4d86-a177-e7524cf37b05" containerName="ceilometer-notification-agent" containerID="cri-o://c74069e59fff740f5349f7ec24c16e36d43101aada542edd18a20cdc88700896" gracePeriod=30
Jan 27 19:01:38 crc kubenswrapper[4853]: I0127 19:01:38.803766 4853 generic.go:334] "Generic (PLEG): container finished" podID="31201c94-a603-4d86-a177-e7524cf37b05" containerID="5f7cc35085a14f6d95813fe56d110166e1921afd59b8df5d90d9c14233af152c" exitCode=0
Jan 27 19:01:38 crc kubenswrapper[4853]: I0127 19:01:38.803797 4853 generic.go:334] "Generic (PLEG): container finished" podID="31201c94-a603-4d86-a177-e7524cf37b05" containerID="a2bf3e497f768a387b37098bbc99b8a7c77426a3dc8b0d2541b8e10014a5481b" exitCode=2
Jan 27 19:01:38 crc kubenswrapper[4853]: I0127 19:01:38.803819 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"31201c94-a603-4d86-a177-e7524cf37b05","Type":"ContainerDied","Data":"5f7cc35085a14f6d95813fe56d110166e1921afd59b8df5d90d9c14233af152c"}
Jan 27 19:01:38 crc kubenswrapper[4853]: I0127 19:01:38.803848 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"31201c94-a603-4d86-a177-e7524cf37b05","Type":"ContainerDied","Data":"a2bf3e497f768a387b37098bbc99b8a7c77426a3dc8b0d2541b8e10014a5481b"}
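The gracePeriod=30 on the kill entries above means each container gets a stop signal and up to 30 seconds to exit before it would be force-killed. A minimal sketch of that wait-with-deadline shape, using a channel to stand in for the runtime's exit notification (not kubelet code):

package main

import (
	"context"
	"fmt"
	"time"
)

// killContainer sends a stop request and waits up to gracePeriod for the
// container to exit before escalating, mirroring the
// "Killing container with a grace period" ... gracePeriod=30 entries above.
func killContainer(name string, exited <-chan int, gracePeriod time.Duration) {
	ctx, cancel := context.WithTimeout(context.Background(), gracePeriod)
	defer cancel()
	select {
	case code := <-exited:
		fmt.Printf("%s finished, exitCode=%d\n", name, code)
	case <-ctx.Done():
		fmt.Printf("%s did not stop within %s, escalating to SIGKILL\n", name, gracePeriod)
	}
}

func main() {
	exited := make(chan int, 1)
	exited <- 0 // the container honored SIGTERM promptly
	killContainer("proxy-httpd", exited, 30*time.Second)
}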
event={"ID":"31201c94-a603-4d86-a177-e7524cf37b05","Type":"ContainerDied","Data":"a2bf3e497f768a387b37098bbc99b8a7c77426a3dc8b0d2541b8e10014a5481b"} Jan 27 19:01:38 crc kubenswrapper[4853]: E0127 19:01:38.916864 4853 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod31201c94_a603_4d86_a177_e7524cf37b05.slice/crio-conmon-5f7cc35085a14f6d95813fe56d110166e1921afd59b8df5d90d9c14233af152c.scope\": RecentStats: unable to find data in memory cache]" Jan 27 19:01:39 crc kubenswrapper[4853]: I0127 19:01:39.820344 4853 generic.go:334] "Generic (PLEG): container finished" podID="31201c94-a603-4d86-a177-e7524cf37b05" containerID="c74069e59fff740f5349f7ec24c16e36d43101aada542edd18a20cdc88700896" exitCode=0 Jan 27 19:01:39 crc kubenswrapper[4853]: I0127 19:01:39.820534 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"31201c94-a603-4d86-a177-e7524cf37b05","Type":"ContainerDied","Data":"c74069e59fff740f5349f7ec24c16e36d43101aada542edd18a20cdc88700896"} Jan 27 19:01:42 crc kubenswrapper[4853]: I0127 19:01:42.637929 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 27 19:01:42 crc kubenswrapper[4853]: I0127 19:01:42.736953 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/31201c94-a603-4d86-a177-e7524cf37b05-run-httpd\") pod \"31201c94-a603-4d86-a177-e7524cf37b05\" (UID: \"31201c94-a603-4d86-a177-e7524cf37b05\") " Jan 27 19:01:42 crc kubenswrapper[4853]: I0127 19:01:42.737028 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6nx9w\" (UniqueName: \"kubernetes.io/projected/31201c94-a603-4d86-a177-e7524cf37b05-kube-api-access-6nx9w\") pod \"31201c94-a603-4d86-a177-e7524cf37b05\" (UID: \"31201c94-a603-4d86-a177-e7524cf37b05\") " Jan 27 19:01:42 crc kubenswrapper[4853]: I0127 19:01:42.737098 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/31201c94-a603-4d86-a177-e7524cf37b05-log-httpd\") pod \"31201c94-a603-4d86-a177-e7524cf37b05\" (UID: \"31201c94-a603-4d86-a177-e7524cf37b05\") " Jan 27 19:01:42 crc kubenswrapper[4853]: I0127 19:01:42.737255 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/31201c94-a603-4d86-a177-e7524cf37b05-sg-core-conf-yaml\") pod \"31201c94-a603-4d86-a177-e7524cf37b05\" (UID: \"31201c94-a603-4d86-a177-e7524cf37b05\") " Jan 27 19:01:42 crc kubenswrapper[4853]: I0127 19:01:42.737301 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/31201c94-a603-4d86-a177-e7524cf37b05-scripts\") pod \"31201c94-a603-4d86-a177-e7524cf37b05\" (UID: \"31201c94-a603-4d86-a177-e7524cf37b05\") " Jan 27 19:01:42 crc kubenswrapper[4853]: I0127 19:01:42.737353 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/31201c94-a603-4d86-a177-e7524cf37b05-combined-ca-bundle\") pod \"31201c94-a603-4d86-a177-e7524cf37b05\" (UID: \"31201c94-a603-4d86-a177-e7524cf37b05\") " Jan 27 19:01:42 crc kubenswrapper[4853]: I0127 19:01:42.737392 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/31201c94-a603-4d86-a177-e7524cf37b05-config-data\") pod \"31201c94-a603-4d86-a177-e7524cf37b05\" (UID: \"31201c94-a603-4d86-a177-e7524cf37b05\") " Jan 27 19:01:42 crc kubenswrapper[4853]: I0127 19:01:42.738082 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/31201c94-a603-4d86-a177-e7524cf37b05-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "31201c94-a603-4d86-a177-e7524cf37b05" (UID: "31201c94-a603-4d86-a177-e7524cf37b05"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 19:01:42 crc kubenswrapper[4853]: I0127 19:01:42.738716 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/31201c94-a603-4d86-a177-e7524cf37b05-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "31201c94-a603-4d86-a177-e7524cf37b05" (UID: "31201c94-a603-4d86-a177-e7524cf37b05"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 19:01:42 crc kubenswrapper[4853]: I0127 19:01:42.757312 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31201c94-a603-4d86-a177-e7524cf37b05-kube-api-access-6nx9w" (OuterVolumeSpecName: "kube-api-access-6nx9w") pod "31201c94-a603-4d86-a177-e7524cf37b05" (UID: "31201c94-a603-4d86-a177-e7524cf37b05"). InnerVolumeSpecName "kube-api-access-6nx9w". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 19:01:42 crc kubenswrapper[4853]: I0127 19:01:42.760278 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31201c94-a603-4d86-a177-e7524cf37b05-scripts" (OuterVolumeSpecName: "scripts") pod "31201c94-a603-4d86-a177-e7524cf37b05" (UID: "31201c94-a603-4d86-a177-e7524cf37b05"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 19:01:42 crc kubenswrapper[4853]: I0127 19:01:42.772102 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31201c94-a603-4d86-a177-e7524cf37b05-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "31201c94-a603-4d86-a177-e7524cf37b05" (UID: "31201c94-a603-4d86-a177-e7524cf37b05"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 19:01:42 crc kubenswrapper[4853]: I0127 19:01:42.839399 4853 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/31201c94-a603-4d86-a177-e7524cf37b05-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 27 19:01:42 crc kubenswrapper[4853]: I0127 19:01:42.839446 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6nx9w\" (UniqueName: \"kubernetes.io/projected/31201c94-a603-4d86-a177-e7524cf37b05-kube-api-access-6nx9w\") on node \"crc\" DevicePath \"\"" Jan 27 19:01:42 crc kubenswrapper[4853]: I0127 19:01:42.839458 4853 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/31201c94-a603-4d86-a177-e7524cf37b05-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 27 19:01:42 crc kubenswrapper[4853]: I0127 19:01:42.839469 4853 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/31201c94-a603-4d86-a177-e7524cf37b05-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 27 19:01:42 crc kubenswrapper[4853]: I0127 19:01:42.839481 4853 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/31201c94-a603-4d86-a177-e7524cf37b05-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 19:01:42 crc kubenswrapper[4853]: I0127 19:01:42.848795 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31201c94-a603-4d86-a177-e7524cf37b05-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "31201c94-a603-4d86-a177-e7524cf37b05" (UID: "31201c94-a603-4d86-a177-e7524cf37b05"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 19:01:42 crc kubenswrapper[4853]: I0127 19:01:42.859768 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-qsnwz" event={"ID":"5e6fa082-0473-46ce-815c-bee7d4d2903a","Type":"ContainerStarted","Data":"1d04d40a9e612f7b6bf82fe559b7182c47fa40be83ffd7fcf2a1b937dc51b61b"} Jan 27 19:01:42 crc kubenswrapper[4853]: I0127 19:01:42.870923 4853 generic.go:334] "Generic (PLEG): container finished" podID="31201c94-a603-4d86-a177-e7524cf37b05" containerID="6eec76394bfb989df4c8189f006d17864cd67ec031066a5abe444da602275307" exitCode=0 Jan 27 19:01:42 crc kubenswrapper[4853]: I0127 19:01:42.870970 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"31201c94-a603-4d86-a177-e7524cf37b05","Type":"ContainerDied","Data":"6eec76394bfb989df4c8189f006d17864cd67ec031066a5abe444da602275307"} Jan 27 19:01:42 crc kubenswrapper[4853]: I0127 19:01:42.871004 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"31201c94-a603-4d86-a177-e7524cf37b05","Type":"ContainerDied","Data":"f3e27140c317a2fae03c05546d80b4a03d460edced2ebacb72ad081290715a74"} Jan 27 19:01:42 crc kubenswrapper[4853]: I0127 19:01:42.871028 4853 scope.go:117] "RemoveContainer" containerID="5f7cc35085a14f6d95813fe56d110166e1921afd59b8df5d90d9c14233af152c" Jan 27 19:01:42 crc kubenswrapper[4853]: I0127 19:01:42.871192 4853 util.go:48] "No ready sandbox for pod can be found. 
Jan 27 19:01:42 crc kubenswrapper[4853]: I0127 19:01:42.875409 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31201c94-a603-4d86-a177-e7524cf37b05-config-data" (OuterVolumeSpecName: "config-data") pod "31201c94-a603-4d86-a177-e7524cf37b05" (UID: "31201c94-a603-4d86-a177-e7524cf37b05"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 19:01:42 crc kubenswrapper[4853]: I0127 19:01:42.887631 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-db-sync-qsnwz" podStartSLOduration=2.317532854 podStartE2EDuration="12.887612043s" podCreationTimestamp="2026-01-27 19:01:30 +0000 UTC" firstStartedPulling="2026-01-27 19:01:31.166715783 +0000 UTC m=+1133.629258666" lastFinishedPulling="2026-01-27 19:01:41.736794952 +0000 UTC m=+1144.199337855" observedRunningTime="2026-01-27 19:01:42.880432969 +0000 UTC m=+1145.342975852" watchObservedRunningTime="2026-01-27 19:01:42.887612043 +0000 UTC m=+1145.350154926"
Jan 27 19:01:42 crc kubenswrapper[4853]: I0127 19:01:42.897680 4853 scope.go:117] "RemoveContainer" containerID="a2bf3e497f768a387b37098bbc99b8a7c77426a3dc8b0d2541b8e10014a5481b"
Jan 27 19:01:42 crc kubenswrapper[4853]: I0127 19:01:42.917464 4853 scope.go:117] "RemoveContainer" containerID="c74069e59fff740f5349f7ec24c16e36d43101aada542edd18a20cdc88700896"
Jan 27 19:01:42 crc kubenswrapper[4853]: I0127 19:01:42.941508 4853 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/31201c94-a603-4d86-a177-e7524cf37b05-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 27 19:01:42 crc kubenswrapper[4853]: I0127 19:01:42.941545 4853 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/31201c94-a603-4d86-a177-e7524cf37b05-config-data\") on node \"crc\" DevicePath \"\""
Jan 27 19:01:42 crc kubenswrapper[4853]: I0127 19:01:42.944111 4853 scope.go:117] "RemoveContainer" containerID="6eec76394bfb989df4c8189f006d17864cd67ec031066a5abe444da602275307"
Jan 27 19:01:42 crc kubenswrapper[4853]: I0127 19:01:42.966317 4853 scope.go:117] "RemoveContainer" containerID="5f7cc35085a14f6d95813fe56d110166e1921afd59b8df5d90d9c14233af152c"
Jan 27 19:01:42 crc kubenswrapper[4853]: E0127 19:01:42.967070 4853 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5f7cc35085a14f6d95813fe56d110166e1921afd59b8df5d90d9c14233af152c\": container with ID starting with 5f7cc35085a14f6d95813fe56d110166e1921afd59b8df5d90d9c14233af152c not found: ID does not exist" containerID="5f7cc35085a14f6d95813fe56d110166e1921afd59b8df5d90d9c14233af152c"
Jan 27 19:01:42 crc kubenswrapper[4853]: I0127 19:01:42.967147 4853 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5f7cc35085a14f6d95813fe56d110166e1921afd59b8df5d90d9c14233af152c"} err="failed to get container status \"5f7cc35085a14f6d95813fe56d110166e1921afd59b8df5d90d9c14233af152c\": rpc error: code = NotFound desc = could not find container \"5f7cc35085a14f6d95813fe56d110166e1921afd59b8df5d90d9c14233af152c\": container with ID starting with 5f7cc35085a14f6d95813fe56d110166e1921afd59b8df5d90d9c14233af152c not found: ID does not exist"
Jan 27 19:01:42 crc kubenswrapper[4853]: I0127 19:01:42.967185 4853 scope.go:117] "RemoveContainer" containerID="a2bf3e497f768a387b37098bbc99b8a7c77426a3dc8b0d2541b8e10014a5481b"
containerID="a2bf3e497f768a387b37098bbc99b8a7c77426a3dc8b0d2541b8e10014a5481b" Jan 27 19:01:42 crc kubenswrapper[4853]: E0127 19:01:42.967631 4853 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a2bf3e497f768a387b37098bbc99b8a7c77426a3dc8b0d2541b8e10014a5481b\": container with ID starting with a2bf3e497f768a387b37098bbc99b8a7c77426a3dc8b0d2541b8e10014a5481b not found: ID does not exist" containerID="a2bf3e497f768a387b37098bbc99b8a7c77426a3dc8b0d2541b8e10014a5481b" Jan 27 19:01:42 crc kubenswrapper[4853]: I0127 19:01:42.967699 4853 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a2bf3e497f768a387b37098bbc99b8a7c77426a3dc8b0d2541b8e10014a5481b"} err="failed to get container status \"a2bf3e497f768a387b37098bbc99b8a7c77426a3dc8b0d2541b8e10014a5481b\": rpc error: code = NotFound desc = could not find container \"a2bf3e497f768a387b37098bbc99b8a7c77426a3dc8b0d2541b8e10014a5481b\": container with ID starting with a2bf3e497f768a387b37098bbc99b8a7c77426a3dc8b0d2541b8e10014a5481b not found: ID does not exist" Jan 27 19:01:42 crc kubenswrapper[4853]: I0127 19:01:42.967733 4853 scope.go:117] "RemoveContainer" containerID="c74069e59fff740f5349f7ec24c16e36d43101aada542edd18a20cdc88700896" Jan 27 19:01:42 crc kubenswrapper[4853]: E0127 19:01:42.968073 4853 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c74069e59fff740f5349f7ec24c16e36d43101aada542edd18a20cdc88700896\": container with ID starting with c74069e59fff740f5349f7ec24c16e36d43101aada542edd18a20cdc88700896 not found: ID does not exist" containerID="c74069e59fff740f5349f7ec24c16e36d43101aada542edd18a20cdc88700896" Jan 27 19:01:42 crc kubenswrapper[4853]: I0127 19:01:42.968103 4853 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c74069e59fff740f5349f7ec24c16e36d43101aada542edd18a20cdc88700896"} err="failed to get container status \"c74069e59fff740f5349f7ec24c16e36d43101aada542edd18a20cdc88700896\": rpc error: code = NotFound desc = could not find container \"c74069e59fff740f5349f7ec24c16e36d43101aada542edd18a20cdc88700896\": container with ID starting with c74069e59fff740f5349f7ec24c16e36d43101aada542edd18a20cdc88700896 not found: ID does not exist" Jan 27 19:01:42 crc kubenswrapper[4853]: I0127 19:01:42.968139 4853 scope.go:117] "RemoveContainer" containerID="6eec76394bfb989df4c8189f006d17864cd67ec031066a5abe444da602275307" Jan 27 19:01:42 crc kubenswrapper[4853]: E0127 19:01:42.970547 4853 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6eec76394bfb989df4c8189f006d17864cd67ec031066a5abe444da602275307\": container with ID starting with 6eec76394bfb989df4c8189f006d17864cd67ec031066a5abe444da602275307 not found: ID does not exist" containerID="6eec76394bfb989df4c8189f006d17864cd67ec031066a5abe444da602275307" Jan 27 19:01:42 crc kubenswrapper[4853]: I0127 19:01:42.970581 4853 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6eec76394bfb989df4c8189f006d17864cd67ec031066a5abe444da602275307"} err="failed to get container status \"6eec76394bfb989df4c8189f006d17864cd67ec031066a5abe444da602275307\": rpc error: code = NotFound desc = could not find container \"6eec76394bfb989df4c8189f006d17864cd67ec031066a5abe444da602275307\": container with ID starting with 
Jan 27 19:01:43 crc kubenswrapper[4853]: I0127 19:01:43.208453 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Jan 27 19:01:43 crc kubenswrapper[4853]: I0127 19:01:43.221475 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"]
Jan 27 19:01:43 crc kubenswrapper[4853]: I0127 19:01:43.244912 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"]
Jan 27 19:01:43 crc kubenswrapper[4853]: E0127 19:01:43.245489 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="31201c94-a603-4d86-a177-e7524cf37b05" containerName="proxy-httpd"
Jan 27 19:01:43 crc kubenswrapper[4853]: I0127 19:01:43.245510 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="31201c94-a603-4d86-a177-e7524cf37b05" containerName="proxy-httpd"
Jan 27 19:01:43 crc kubenswrapper[4853]: E0127 19:01:43.245542 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="31201c94-a603-4d86-a177-e7524cf37b05" containerName="ceilometer-central-agent"
Jan 27 19:01:43 crc kubenswrapper[4853]: I0127 19:01:43.245548 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="31201c94-a603-4d86-a177-e7524cf37b05" containerName="ceilometer-central-agent"
Jan 27 19:01:43 crc kubenswrapper[4853]: E0127 19:01:43.245562 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="31201c94-a603-4d86-a177-e7524cf37b05" containerName="ceilometer-notification-agent"
Jan 27 19:01:43 crc kubenswrapper[4853]: I0127 19:01:43.245570 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="31201c94-a603-4d86-a177-e7524cf37b05" containerName="ceilometer-notification-agent"
Jan 27 19:01:43 crc kubenswrapper[4853]: E0127 19:01:43.245585 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="31201c94-a603-4d86-a177-e7524cf37b05" containerName="sg-core"
Jan 27 19:01:43 crc kubenswrapper[4853]: I0127 19:01:43.245591 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="31201c94-a603-4d86-a177-e7524cf37b05" containerName="sg-core"
Jan 27 19:01:43 crc kubenswrapper[4853]: I0127 19:01:43.245769 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="31201c94-a603-4d86-a177-e7524cf37b05" containerName="ceilometer-notification-agent"
Jan 27 19:01:43 crc kubenswrapper[4853]: I0127 19:01:43.245791 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="31201c94-a603-4d86-a177-e7524cf37b05" containerName="sg-core"
Jan 27 19:01:43 crc kubenswrapper[4853]: I0127 19:01:43.245806 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="31201c94-a603-4d86-a177-e7524cf37b05" containerName="ceilometer-central-agent"
Jan 27 19:01:43 crc kubenswrapper[4853]: I0127 19:01:43.245815 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="31201c94-a603-4d86-a177-e7524cf37b05" containerName="proxy-httpd"
Jan 27 19:01:43 crc kubenswrapper[4853]: I0127 19:01:43.248061 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
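When ceilometer-0 is deleted and immediately re-created above, the CPU and memory managers first drop the assignments recorded for the old pod UID ("RemoveStaleState" / "Deleted CPUSet assignment") before admitting the replacement. A toy Go version of that cleanup pass, with hypothetical names (removeStaleState, assignments):

package main

import "fmt"

// removeStaleState drops resource-manager assignments left over from pod
// UIDs that are no longer active, as in the cpu_manager/memory_manager
// entries above.
func removeStaleState(assignments map[string][]string, activePods map[string]bool) {
	for podUID, containers := range assignments {
		if activePods[podUID] {
			continue
		}
		for _, c := range containers {
			fmt.Printf("RemoveStaleState: removing container %q (pod %s)\n", c, podUID)
		}
		delete(assignments, podUID) // "Deleted CPUSet assignment"
	}
}

func main() {
	assignments := map[string][]string{
		"31201c94": {"proxy-httpd", "sg-core"}, // old pod UID, now gone
	}
	removeStaleState(assignments, map[string]bool{"4e501ca5": true})
}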
Jan 27 19:01:43 crc kubenswrapper[4853]: I0127 19:01:43.251284 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts"
Jan 27 19:01:43 crc kubenswrapper[4853]: I0127 19:01:43.251672 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data"
Jan 27 19:01:43 crc kubenswrapper[4853]: I0127 19:01:43.259337 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Jan 27 19:01:43 crc kubenswrapper[4853]: I0127 19:01:43.348598 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4e501ca5-b7d8-455e-b978-f35db402ea8a-config-data\") pod \"ceilometer-0\" (UID: \"4e501ca5-b7d8-455e-b978-f35db402ea8a\") " pod="openstack/ceilometer-0"
Jan 27 19:01:43 crc kubenswrapper[4853]: I0127 19:01:43.348660 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4e501ca5-b7d8-455e-b978-f35db402ea8a-run-httpd\") pod \"ceilometer-0\" (UID: \"4e501ca5-b7d8-455e-b978-f35db402ea8a\") " pod="openstack/ceilometer-0"
Jan 27 19:01:43 crc kubenswrapper[4853]: I0127 19:01:43.348693 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4e501ca5-b7d8-455e-b978-f35db402ea8a-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"4e501ca5-b7d8-455e-b978-f35db402ea8a\") " pod="openstack/ceilometer-0"
Jan 27 19:01:43 crc kubenswrapper[4853]: I0127 19:01:43.348738 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4e501ca5-b7d8-455e-b978-f35db402ea8a-scripts\") pod \"ceilometer-0\" (UID: \"4e501ca5-b7d8-455e-b978-f35db402ea8a\") " pod="openstack/ceilometer-0"
Jan 27 19:01:43 crc kubenswrapper[4853]: I0127 19:01:43.348787 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ggzhq\" (UniqueName: \"kubernetes.io/projected/4e501ca5-b7d8-455e-b978-f35db402ea8a-kube-api-access-ggzhq\") pod \"ceilometer-0\" (UID: \"4e501ca5-b7d8-455e-b978-f35db402ea8a\") " pod="openstack/ceilometer-0"
Jan 27 19:01:43 crc kubenswrapper[4853]: I0127 19:01:43.348820 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4e501ca5-b7d8-455e-b978-f35db402ea8a-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"4e501ca5-b7d8-455e-b978-f35db402ea8a\") " pod="openstack/ceilometer-0"
Jan 27 19:01:43 crc kubenswrapper[4853]: I0127 19:01:43.348848 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4e501ca5-b7d8-455e-b978-f35db402ea8a-log-httpd\") pod \"ceilometer-0\" (UID: \"4e501ca5-b7d8-455e-b978-f35db402ea8a\") " pod="openstack/ceilometer-0"
Jan 27 19:01:43 crc kubenswrapper[4853]: I0127 19:01:43.451177 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4e501ca5-b7d8-455e-b978-f35db402ea8a-scripts\") pod \"ceilometer-0\" (UID: \"4e501ca5-b7d8-455e-b978-f35db402ea8a\") " pod="openstack/ceilometer-0"
Jan 27 19:01:43 crc kubenswrapper[4853]: I0127 19:01:43.451530 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ggzhq\" (UniqueName: \"kubernetes.io/projected/4e501ca5-b7d8-455e-b978-f35db402ea8a-kube-api-access-ggzhq\") pod \"ceilometer-0\" (UID: \"4e501ca5-b7d8-455e-b978-f35db402ea8a\") " pod="openstack/ceilometer-0"
Jan 27 19:01:43 crc kubenswrapper[4853]: I0127 19:01:43.451646 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4e501ca5-b7d8-455e-b978-f35db402ea8a-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"4e501ca5-b7d8-455e-b978-f35db402ea8a\") " pod="openstack/ceilometer-0"
Jan 27 19:01:43 crc kubenswrapper[4853]: I0127 19:01:43.451750 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4e501ca5-b7d8-455e-b978-f35db402ea8a-log-httpd\") pod \"ceilometer-0\" (UID: \"4e501ca5-b7d8-455e-b978-f35db402ea8a\") " pod="openstack/ceilometer-0"
Jan 27 19:01:43 crc kubenswrapper[4853]: I0127 19:01:43.451884 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4e501ca5-b7d8-455e-b978-f35db402ea8a-config-data\") pod \"ceilometer-0\" (UID: \"4e501ca5-b7d8-455e-b978-f35db402ea8a\") " pod="openstack/ceilometer-0"
Jan 27 19:01:43 crc kubenswrapper[4853]: I0127 19:01:43.452460 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4e501ca5-b7d8-455e-b978-f35db402ea8a-run-httpd\") pod \"ceilometer-0\" (UID: \"4e501ca5-b7d8-455e-b978-f35db402ea8a\") " pod="openstack/ceilometer-0"
Jan 27 19:01:43 crc kubenswrapper[4853]: I0127 19:01:43.452592 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4e501ca5-b7d8-455e-b978-f35db402ea8a-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"4e501ca5-b7d8-455e-b978-f35db402ea8a\") " pod="openstack/ceilometer-0"
Jan 27 19:01:43 crc kubenswrapper[4853]: I0127 19:01:43.452485 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4e501ca5-b7d8-455e-b978-f35db402ea8a-log-httpd\") pod \"ceilometer-0\" (UID: \"4e501ca5-b7d8-455e-b978-f35db402ea8a\") " pod="openstack/ceilometer-0"
Jan 27 19:01:43 crc kubenswrapper[4853]: I0127 19:01:43.452816 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4e501ca5-b7d8-455e-b978-f35db402ea8a-run-httpd\") pod \"ceilometer-0\" (UID: \"4e501ca5-b7d8-455e-b978-f35db402ea8a\") " pod="openstack/ceilometer-0"
Jan 27 19:01:43 crc kubenswrapper[4853]: I0127 19:01:43.455178 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4e501ca5-b7d8-455e-b978-f35db402ea8a-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"4e501ca5-b7d8-455e-b978-f35db402ea8a\") " pod="openstack/ceilometer-0"
Jan 27 19:01:43 crc kubenswrapper[4853]: I0127 19:01:43.456211 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4e501ca5-b7d8-455e-b978-f35db402ea8a-config-data\") pod \"ceilometer-0\" (UID: \"4e501ca5-b7d8-455e-b978-f35db402ea8a\") " pod="openstack/ceilometer-0"
Jan 27 19:01:43 crc kubenswrapper[4853]: I0127 19:01:43.457496 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4e501ca5-b7d8-455e-b978-f35db402ea8a-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"4e501ca5-b7d8-455e-b978-f35db402ea8a\") " pod="openstack/ceilometer-0"
Jan 27 19:01:43 crc kubenswrapper[4853]: I0127 19:01:43.458828 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4e501ca5-b7d8-455e-b978-f35db402ea8a-scripts\") pod \"ceilometer-0\" (UID: \"4e501ca5-b7d8-455e-b978-f35db402ea8a\") " pod="openstack/ceilometer-0"
Jan 27 19:01:43 crc kubenswrapper[4853]: I0127 19:01:43.473417 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ggzhq\" (UniqueName: \"kubernetes.io/projected/4e501ca5-b7d8-455e-b978-f35db402ea8a-kube-api-access-ggzhq\") pod \"ceilometer-0\" (UID: \"4e501ca5-b7d8-455e-b978-f35db402ea8a\") " pod="openstack/ceilometer-0"
Jan 27 19:01:43 crc kubenswrapper[4853]: I0127 19:01:43.571051 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 27 19:01:44 crc kubenswrapper[4853]: I0127 19:01:44.038058 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Jan 27 19:01:44 crc kubenswrapper[4853]: I0127 19:01:44.123465 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31201c94-a603-4d86-a177-e7524cf37b05" path="/var/lib/kubelet/pods/31201c94-a603-4d86-a177-e7524cf37b05/volumes"
Jan 27 19:01:44 crc kubenswrapper[4853]: I0127 19:01:44.893948 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4e501ca5-b7d8-455e-b978-f35db402ea8a","Type":"ContainerStarted","Data":"e3b993ffdec960d6aac100d101c6e0dba0cccd5dac5baa7913ff0fd4788f9d3e"}
Jan 27 19:01:44 crc kubenswrapper[4853]: I0127 19:01:44.894361 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4e501ca5-b7d8-455e-b978-f35db402ea8a","Type":"ContainerStarted","Data":"b6387e5c8d57c1fe0239e42357ac3d6e8747aa59c7ef418a86257c684c3eca6d"}
Jan 27 19:01:45 crc kubenswrapper[4853]: I0127 19:01:45.906211 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4e501ca5-b7d8-455e-b978-f35db402ea8a","Type":"ContainerStarted","Data":"5e51a8d0a1f1cdbcad11037e835b88b3b51cf51f4a6ce4d4f74b00f0658090c8"}
Jan 27 19:01:46 crc kubenswrapper[4853]: I0127 19:01:46.919702 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4e501ca5-b7d8-455e-b978-f35db402ea8a","Type":"ContainerStarted","Data":"53dd556c664145b76cf725c833e5d80f14a4f0bd8562f548cba41896f34cb72a"}
Jan 27 19:01:47 crc kubenswrapper[4853]: I0127 19:01:47.932472 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4e501ca5-b7d8-455e-b978-f35db402ea8a","Type":"ContainerStarted","Data":"dbc7ac35e32b93e23bd2defecc4073d9f54ccf130669b3a2cb9dd1f72cf7b1dc"}
Jan 27 19:01:47 crc kubenswrapper[4853]: I0127 19:01:47.934384 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0"
Jan 27 19:01:47 crc kubenswrapper[4853]: I0127 19:01:47.960318 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.700973139 podStartE2EDuration="4.960296894s" podCreationTimestamp="2026-01-27 19:01:43 +0000 UTC" firstStartedPulling="2026-01-27 19:01:44.044856126 +0000 UTC m=+1146.507399009" lastFinishedPulling="2026-01-27 19:01:47.304179891 +0000 UTC m=+1149.766722764" observedRunningTime="2026-01-27 19:01:47.956204477 +0000 UTC m=+1150.418747360" watchObservedRunningTime="2026-01-27 19:01:47.960296894 +0000 UTC m=+1150.422839777"
Jan 27 19:01:48 crc kubenswrapper[4853]: I0127 19:01:48.626834 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-c78c8d4f6-bchzm"
Jan 27 19:01:48 crc kubenswrapper[4853]: I0127 19:01:48.839273 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-69967664fb-pbqhr"
Jan 27 19:01:50 crc kubenswrapper[4853]: I0127 19:01:50.385514 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Jan 27 19:01:50 crc kubenswrapper[4853]: I0127 19:01:50.957301 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="4e501ca5-b7d8-455e-b978-f35db402ea8a" containerName="ceilometer-central-agent" containerID="cri-o://e3b993ffdec960d6aac100d101c6e0dba0cccd5dac5baa7913ff0fd4788f9d3e" gracePeriod=30
Jan 27 19:01:50 crc kubenswrapper[4853]: I0127 19:01:50.958106 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="4e501ca5-b7d8-455e-b978-f35db402ea8a" containerName="proxy-httpd" containerID="cri-o://dbc7ac35e32b93e23bd2defecc4073d9f54ccf130669b3a2cb9dd1f72cf7b1dc" gracePeriod=30
Jan 27 19:01:50 crc kubenswrapper[4853]: I0127 19:01:50.958177 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="4e501ca5-b7d8-455e-b978-f35db402ea8a" containerName="sg-core" containerID="cri-o://53dd556c664145b76cf725c833e5d80f14a4f0bd8562f548cba41896f34cb72a" gracePeriod=30
Jan 27 19:01:50 crc kubenswrapper[4853]: I0127 19:01:50.958213 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="4e501ca5-b7d8-455e-b978-f35db402ea8a" containerName="ceilometer-notification-agent" containerID="cri-o://5e51a8d0a1f1cdbcad11037e835b88b3b51cf51f4a6ce4d4f74b00f0658090c8" gracePeriod=30
Jan 27 19:01:51 crc kubenswrapper[4853]: I0127 19:01:51.314100 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/horizon-69967664fb-pbqhr"
Jan 27 19:01:51 crc kubenswrapper[4853]: I0127 19:01:51.373363 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/horizon-c78c8d4f6-bchzm"
Jan 27 19:01:51 crc kubenswrapper[4853]: I0127 19:01:51.456144 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-c78c8d4f6-bchzm"]
Jan 27 19:01:51 crc kubenswrapper[4853]: I0127 19:01:51.981092 4853 generic.go:334] "Generic (PLEG): container finished" podID="4e501ca5-b7d8-455e-b978-f35db402ea8a" containerID="dbc7ac35e32b93e23bd2defecc4073d9f54ccf130669b3a2cb9dd1f72cf7b1dc" exitCode=0
Jan 27 19:01:51 crc kubenswrapper[4853]: I0127 19:01:51.981158 4853 generic.go:334] "Generic (PLEG): container finished" podID="4e501ca5-b7d8-455e-b978-f35db402ea8a" containerID="53dd556c664145b76cf725c833e5d80f14a4f0bd8562f548cba41896f34cb72a" exitCode=2
Jan 27 19:01:51 crc kubenswrapper[4853]: I0127 19:01:51.981171 4853 generic.go:334] "Generic (PLEG): container finished" podID="4e501ca5-b7d8-455e-b978-f35db402ea8a" containerID="5e51a8d0a1f1cdbcad11037e835b88b3b51cf51f4a6ce4d4f74b00f0658090c8" exitCode=0
Jan 27 19:01:51 crc kubenswrapper[4853]: I0127 19:01:51.981387 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-c78c8d4f6-bchzm" podUID="28f114cd-daca-4c71-9ecd-64b8008ddbef" containerName="horizon-log" containerID="cri-o://bfed2f5cf0fb64f3a0f44767fd92967fbaeb07e2af02fbcb8d268a7117ca39e4" gracePeriod=30
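Note the exit codes in the "container finished" entries above: proxy-httpd and the notification agent exit 0 (a clean SIGTERM shutdown) while sg-core exits 2 (its own error status on termination). A tiny illustrative classifier, not kubelet logic:

package main

import "fmt"

// classify gives a rough reading of the exit codes seen above: 0 is a
// clean shutdown, 137 would be SIGKILL (128+9), anything else is the
// process's own error status (sg-core's exitCode=2 here).
func classify(code int) string {
	switch {
	case code == 0:
		return "clean shutdown"
	case code == 137:
		return "killed (SIGKILL)"
	default:
		return fmt.Sprintf("error exit (status %d)", code)
	}
}

func main() {
	for _, c := range []int{0, 2} {
		fmt.Println(c, "=>", classify(c))
	}
}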
pod="openstack/horizon-c78c8d4f6-bchzm" podUID="28f114cd-daca-4c71-9ecd-64b8008ddbef" containerName="horizon-log" containerID="cri-o://bfed2f5cf0fb64f3a0f44767fd92967fbaeb07e2af02fbcb8d268a7117ca39e4" gracePeriod=30 Jan 27 19:01:51 crc kubenswrapper[4853]: I0127 19:01:51.981469 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4e501ca5-b7d8-455e-b978-f35db402ea8a","Type":"ContainerDied","Data":"dbc7ac35e32b93e23bd2defecc4073d9f54ccf130669b3a2cb9dd1f72cf7b1dc"} Jan 27 19:01:51 crc kubenswrapper[4853]: I0127 19:01:51.981503 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4e501ca5-b7d8-455e-b978-f35db402ea8a","Type":"ContainerDied","Data":"53dd556c664145b76cf725c833e5d80f14a4f0bd8562f548cba41896f34cb72a"} Jan 27 19:01:51 crc kubenswrapper[4853]: I0127 19:01:51.981516 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4e501ca5-b7d8-455e-b978-f35db402ea8a","Type":"ContainerDied","Data":"5e51a8d0a1f1cdbcad11037e835b88b3b51cf51f4a6ce4d4f74b00f0658090c8"} Jan 27 19:01:51 crc kubenswrapper[4853]: I0127 19:01:51.981947 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-c78c8d4f6-bchzm" podUID="28f114cd-daca-4c71-9ecd-64b8008ddbef" containerName="horizon" containerID="cri-o://3f45ef7c9031b5bacc230d61643ec9c587047e9e52878a4b6f37d775f2bdb5a2" gracePeriod=30 Jan 27 19:01:53 crc kubenswrapper[4853]: I0127 19:01:53.983950 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 27 19:01:54 crc kubenswrapper[4853]: I0127 19:01:54.000292 4853 generic.go:334] "Generic (PLEG): container finished" podID="4e501ca5-b7d8-455e-b978-f35db402ea8a" containerID="e3b993ffdec960d6aac100d101c6e0dba0cccd5dac5baa7913ff0fd4788f9d3e" exitCode=0 Jan 27 19:01:54 crc kubenswrapper[4853]: I0127 19:01:54.000412 4853 util.go:48] "No ready sandbox for pod can be found. 
Jan 27 19:01:54 crc kubenswrapper[4853]: I0127 19:01:54.000851 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4e501ca5-b7d8-455e-b978-f35db402ea8a","Type":"ContainerDied","Data":"e3b993ffdec960d6aac100d101c6e0dba0cccd5dac5baa7913ff0fd4788f9d3e"}
Jan 27 19:01:54 crc kubenswrapper[4853]: I0127 19:01:54.000881 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4e501ca5-b7d8-455e-b978-f35db402ea8a","Type":"ContainerDied","Data":"b6387e5c8d57c1fe0239e42357ac3d6e8747aa59c7ef418a86257c684c3eca6d"}
Jan 27 19:01:54 crc kubenswrapper[4853]: I0127 19:01:54.000898 4853 scope.go:117] "RemoveContainer" containerID="dbc7ac35e32b93e23bd2defecc4073d9f54ccf130669b3a2cb9dd1f72cf7b1dc"
Jan 27 19:01:54 crc kubenswrapper[4853]: I0127 19:01:54.003527 4853 generic.go:334] "Generic (PLEG): container finished" podID="5e6fa082-0473-46ce-815c-bee7d4d2903a" containerID="1d04d40a9e612f7b6bf82fe559b7182c47fa40be83ffd7fcf2a1b937dc51b61b" exitCode=0
Jan 27 19:01:54 crc kubenswrapper[4853]: I0127 19:01:54.003613 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-qsnwz" event={"ID":"5e6fa082-0473-46ce-815c-bee7d4d2903a","Type":"ContainerDied","Data":"1d04d40a9e612f7b6bf82fe559b7182c47fa40be83ffd7fcf2a1b937dc51b61b"}
Jan 27 19:01:54 crc kubenswrapper[4853]: I0127 19:01:54.032342 4853 scope.go:117] "RemoveContainer" containerID="53dd556c664145b76cf725c833e5d80f14a4f0bd8562f548cba41896f34cb72a"
Jan 27 19:01:54 crc kubenswrapper[4853]: I0127 19:01:54.054665 4853 scope.go:117] "RemoveContainer" containerID="5e51a8d0a1f1cdbcad11037e835b88b3b51cf51f4a6ce4d4f74b00f0658090c8"
Jan 27 19:01:54 crc kubenswrapper[4853]: I0127 19:01:54.071556 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4e501ca5-b7d8-455e-b978-f35db402ea8a-run-httpd\") pod \"4e501ca5-b7d8-455e-b978-f35db402ea8a\" (UID: \"4e501ca5-b7d8-455e-b978-f35db402ea8a\") "
Jan 27 19:01:54 crc kubenswrapper[4853]: I0127 19:01:54.071621 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4e501ca5-b7d8-455e-b978-f35db402ea8a-combined-ca-bundle\") pod \"4e501ca5-b7d8-455e-b978-f35db402ea8a\" (UID: \"4e501ca5-b7d8-455e-b978-f35db402ea8a\") "
Jan 27 19:01:54 crc kubenswrapper[4853]: I0127 19:01:54.071693 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4e501ca5-b7d8-455e-b978-f35db402ea8a-sg-core-conf-yaml\") pod \"4e501ca5-b7d8-455e-b978-f35db402ea8a\" (UID: \"4e501ca5-b7d8-455e-b978-f35db402ea8a\") "
Jan 27 19:01:54 crc kubenswrapper[4853]: I0127 19:01:54.071865 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ggzhq\" (UniqueName: \"kubernetes.io/projected/4e501ca5-b7d8-455e-b978-f35db402ea8a-kube-api-access-ggzhq\") pod \"4e501ca5-b7d8-455e-b978-f35db402ea8a\" (UID: \"4e501ca5-b7d8-455e-b978-f35db402ea8a\") "
Jan 27 19:01:54 crc kubenswrapper[4853]: I0127 19:01:54.071894 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4e501ca5-b7d8-455e-b978-f35db402ea8a-log-httpd\") pod \"4e501ca5-b7d8-455e-b978-f35db402ea8a\" (UID: \"4e501ca5-b7d8-455e-b978-f35db402ea8a\") "
Jan 27 19:01:54 crc kubenswrapper[4853]: I0127 19:01:54.071918 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4e501ca5-b7d8-455e-b978-f35db402ea8a-config-data\") pod \"4e501ca5-b7d8-455e-b978-f35db402ea8a\" (UID: \"4e501ca5-b7d8-455e-b978-f35db402ea8a\") "
kubenswrapper[4853]: I0127 19:01:54.071918 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4e501ca5-b7d8-455e-b978-f35db402ea8a-config-data\") pod \"4e501ca5-b7d8-455e-b978-f35db402ea8a\" (UID: \"4e501ca5-b7d8-455e-b978-f35db402ea8a\") " Jan 27 19:01:54 crc kubenswrapper[4853]: I0127 19:01:54.072057 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4e501ca5-b7d8-455e-b978-f35db402ea8a-scripts\") pod \"4e501ca5-b7d8-455e-b978-f35db402ea8a\" (UID: \"4e501ca5-b7d8-455e-b978-f35db402ea8a\") " Jan 27 19:01:54 crc kubenswrapper[4853]: I0127 19:01:54.073040 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4e501ca5-b7d8-455e-b978-f35db402ea8a-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "4e501ca5-b7d8-455e-b978-f35db402ea8a" (UID: "4e501ca5-b7d8-455e-b978-f35db402ea8a"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 19:01:54 crc kubenswrapper[4853]: I0127 19:01:54.073200 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4e501ca5-b7d8-455e-b978-f35db402ea8a-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "4e501ca5-b7d8-455e-b978-f35db402ea8a" (UID: "4e501ca5-b7d8-455e-b978-f35db402ea8a"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 19:01:54 crc kubenswrapper[4853]: I0127 19:01:54.076618 4853 scope.go:117] "RemoveContainer" containerID="e3b993ffdec960d6aac100d101c6e0dba0cccd5dac5baa7913ff0fd4788f9d3e" Jan 27 19:01:54 crc kubenswrapper[4853]: I0127 19:01:54.078109 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4e501ca5-b7d8-455e-b978-f35db402ea8a-kube-api-access-ggzhq" (OuterVolumeSpecName: "kube-api-access-ggzhq") pod "4e501ca5-b7d8-455e-b978-f35db402ea8a" (UID: "4e501ca5-b7d8-455e-b978-f35db402ea8a"). InnerVolumeSpecName "kube-api-access-ggzhq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 19:01:54 crc kubenswrapper[4853]: I0127 19:01:54.091224 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4e501ca5-b7d8-455e-b978-f35db402ea8a-scripts" (OuterVolumeSpecName: "scripts") pod "4e501ca5-b7d8-455e-b978-f35db402ea8a" (UID: "4e501ca5-b7d8-455e-b978-f35db402ea8a"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 19:01:54 crc kubenswrapper[4853]: I0127 19:01:54.114751 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4e501ca5-b7d8-455e-b978-f35db402ea8a-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "4e501ca5-b7d8-455e-b978-f35db402ea8a" (UID: "4e501ca5-b7d8-455e-b978-f35db402ea8a"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 19:01:54 crc kubenswrapper[4853]: I0127 19:01:54.144576 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4e501ca5-b7d8-455e-b978-f35db402ea8a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4e501ca5-b7d8-455e-b978-f35db402ea8a" (UID: "4e501ca5-b7d8-455e-b978-f35db402ea8a"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 19:01:54 crc kubenswrapper[4853]: I0127 19:01:54.173489 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4e501ca5-b7d8-455e-b978-f35db402ea8a-config-data" (OuterVolumeSpecName: "config-data") pod "4e501ca5-b7d8-455e-b978-f35db402ea8a" (UID: "4e501ca5-b7d8-455e-b978-f35db402ea8a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 19:01:54 crc kubenswrapper[4853]: I0127 19:01:54.174143 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4e501ca5-b7d8-455e-b978-f35db402ea8a-config-data\") pod \"4e501ca5-b7d8-455e-b978-f35db402ea8a\" (UID: \"4e501ca5-b7d8-455e-b978-f35db402ea8a\") " Jan 27 19:01:54 crc kubenswrapper[4853]: W0127 19:01:54.174327 4853 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/4e501ca5-b7d8-455e-b978-f35db402ea8a/volumes/kubernetes.io~secret/config-data Jan 27 19:01:54 crc kubenswrapper[4853]: I0127 19:01:54.174364 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4e501ca5-b7d8-455e-b978-f35db402ea8a-config-data" (OuterVolumeSpecName: "config-data") pod "4e501ca5-b7d8-455e-b978-f35db402ea8a" (UID: "4e501ca5-b7d8-455e-b978-f35db402ea8a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 19:01:54 crc kubenswrapper[4853]: I0127 19:01:54.174928 4853 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4e501ca5-b7d8-455e-b978-f35db402ea8a-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 19:01:54 crc kubenswrapper[4853]: I0127 19:01:54.174948 4853 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4e501ca5-b7d8-455e-b978-f35db402ea8a-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 27 19:01:54 crc kubenswrapper[4853]: I0127 19:01:54.174957 4853 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4e501ca5-b7d8-455e-b978-f35db402ea8a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 19:01:54 crc kubenswrapper[4853]: I0127 19:01:54.174968 4853 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4e501ca5-b7d8-455e-b978-f35db402ea8a-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 27 19:01:54 crc kubenswrapper[4853]: I0127 19:01:54.174976 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ggzhq\" (UniqueName: \"kubernetes.io/projected/4e501ca5-b7d8-455e-b978-f35db402ea8a-kube-api-access-ggzhq\") on node \"crc\" DevicePath \"\"" Jan 27 19:01:54 crc kubenswrapper[4853]: I0127 19:01:54.174984 4853 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4e501ca5-b7d8-455e-b978-f35db402ea8a-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 27 19:01:54 crc kubenswrapper[4853]: I0127 19:01:54.174994 4853 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4e501ca5-b7d8-455e-b978-f35db402ea8a-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 19:01:54 crc kubenswrapper[4853]: I0127 19:01:54.193501 4853 scope.go:117] "RemoveContainer" containerID="dbc7ac35e32b93e23bd2defecc4073d9f54ccf130669b3a2cb9dd1f72cf7b1dc" Jan 27 19:01:54 crc 
kubenswrapper[4853]: E0127 19:01:54.194231 4853 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dbc7ac35e32b93e23bd2defecc4073d9f54ccf130669b3a2cb9dd1f72cf7b1dc\": container with ID starting with dbc7ac35e32b93e23bd2defecc4073d9f54ccf130669b3a2cb9dd1f72cf7b1dc not found: ID does not exist" containerID="dbc7ac35e32b93e23bd2defecc4073d9f54ccf130669b3a2cb9dd1f72cf7b1dc" Jan 27 19:01:54 crc kubenswrapper[4853]: I0127 19:01:54.194267 4853 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dbc7ac35e32b93e23bd2defecc4073d9f54ccf130669b3a2cb9dd1f72cf7b1dc"} err="failed to get container status \"dbc7ac35e32b93e23bd2defecc4073d9f54ccf130669b3a2cb9dd1f72cf7b1dc\": rpc error: code = NotFound desc = could not find container \"dbc7ac35e32b93e23bd2defecc4073d9f54ccf130669b3a2cb9dd1f72cf7b1dc\": container with ID starting with dbc7ac35e32b93e23bd2defecc4073d9f54ccf130669b3a2cb9dd1f72cf7b1dc not found: ID does not exist" Jan 27 19:01:54 crc kubenswrapper[4853]: I0127 19:01:54.194290 4853 scope.go:117] "RemoveContainer" containerID="53dd556c664145b76cf725c833e5d80f14a4f0bd8562f548cba41896f34cb72a" Jan 27 19:01:54 crc kubenswrapper[4853]: E0127 19:01:54.194619 4853 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"53dd556c664145b76cf725c833e5d80f14a4f0bd8562f548cba41896f34cb72a\": container with ID starting with 53dd556c664145b76cf725c833e5d80f14a4f0bd8562f548cba41896f34cb72a not found: ID does not exist" containerID="53dd556c664145b76cf725c833e5d80f14a4f0bd8562f548cba41896f34cb72a" Jan 27 19:01:54 crc kubenswrapper[4853]: I0127 19:01:54.194670 4853 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"53dd556c664145b76cf725c833e5d80f14a4f0bd8562f548cba41896f34cb72a"} err="failed to get container status \"53dd556c664145b76cf725c833e5d80f14a4f0bd8562f548cba41896f34cb72a\": rpc error: code = NotFound desc = could not find container \"53dd556c664145b76cf725c833e5d80f14a4f0bd8562f548cba41896f34cb72a\": container with ID starting with 53dd556c664145b76cf725c833e5d80f14a4f0bd8562f548cba41896f34cb72a not found: ID does not exist" Jan 27 19:01:54 crc kubenswrapper[4853]: I0127 19:01:54.194704 4853 scope.go:117] "RemoveContainer" containerID="5e51a8d0a1f1cdbcad11037e835b88b3b51cf51f4a6ce4d4f74b00f0658090c8" Jan 27 19:01:54 crc kubenswrapper[4853]: E0127 19:01:54.194989 4853 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5e51a8d0a1f1cdbcad11037e835b88b3b51cf51f4a6ce4d4f74b00f0658090c8\": container with ID starting with 5e51a8d0a1f1cdbcad11037e835b88b3b51cf51f4a6ce4d4f74b00f0658090c8 not found: ID does not exist" containerID="5e51a8d0a1f1cdbcad11037e835b88b3b51cf51f4a6ce4d4f74b00f0658090c8" Jan 27 19:01:54 crc kubenswrapper[4853]: I0127 19:01:54.195023 4853 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5e51a8d0a1f1cdbcad11037e835b88b3b51cf51f4a6ce4d4f74b00f0658090c8"} err="failed to get container status \"5e51a8d0a1f1cdbcad11037e835b88b3b51cf51f4a6ce4d4f74b00f0658090c8\": rpc error: code = NotFound desc = could not find container \"5e51a8d0a1f1cdbcad11037e835b88b3b51cf51f4a6ce4d4f74b00f0658090c8\": container with ID starting with 5e51a8d0a1f1cdbcad11037e835b88b3b51cf51f4a6ce4d4f74b00f0658090c8 not found: ID does not exist" Jan 27 19:01:54 crc kubenswrapper[4853]: 
I0127 19:01:54.195043 4853 scope.go:117] "RemoveContainer" containerID="e3b993ffdec960d6aac100d101c6e0dba0cccd5dac5baa7913ff0fd4788f9d3e" Jan 27 19:01:54 crc kubenswrapper[4853]: E0127 19:01:54.195510 4853 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e3b993ffdec960d6aac100d101c6e0dba0cccd5dac5baa7913ff0fd4788f9d3e\": container with ID starting with e3b993ffdec960d6aac100d101c6e0dba0cccd5dac5baa7913ff0fd4788f9d3e not found: ID does not exist" containerID="e3b993ffdec960d6aac100d101c6e0dba0cccd5dac5baa7913ff0fd4788f9d3e" Jan 27 19:01:54 crc kubenswrapper[4853]: I0127 19:01:54.195536 4853 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e3b993ffdec960d6aac100d101c6e0dba0cccd5dac5baa7913ff0fd4788f9d3e"} err="failed to get container status \"e3b993ffdec960d6aac100d101c6e0dba0cccd5dac5baa7913ff0fd4788f9d3e\": rpc error: code = NotFound desc = could not find container \"e3b993ffdec960d6aac100d101c6e0dba0cccd5dac5baa7913ff0fd4788f9d3e\": container with ID starting with e3b993ffdec960d6aac100d101c6e0dba0cccd5dac5baa7913ff0fd4788f9d3e not found: ID does not exist" Jan 27 19:01:54 crc kubenswrapper[4853]: I0127 19:01:54.333599 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 27 19:01:54 crc kubenswrapper[4853]: I0127 19:01:54.340879 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 27 19:01:54 crc kubenswrapper[4853]: I0127 19:01:54.357344 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 27 19:01:54 crc kubenswrapper[4853]: E0127 19:01:54.357710 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4e501ca5-b7d8-455e-b978-f35db402ea8a" containerName="sg-core" Jan 27 19:01:54 crc kubenswrapper[4853]: I0127 19:01:54.357726 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="4e501ca5-b7d8-455e-b978-f35db402ea8a" containerName="sg-core" Jan 27 19:01:54 crc kubenswrapper[4853]: E0127 19:01:54.357748 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4e501ca5-b7d8-455e-b978-f35db402ea8a" containerName="ceilometer-notification-agent" Jan 27 19:01:54 crc kubenswrapper[4853]: I0127 19:01:54.357754 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="4e501ca5-b7d8-455e-b978-f35db402ea8a" containerName="ceilometer-notification-agent" Jan 27 19:01:54 crc kubenswrapper[4853]: E0127 19:01:54.357770 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4e501ca5-b7d8-455e-b978-f35db402ea8a" containerName="proxy-httpd" Jan 27 19:01:54 crc kubenswrapper[4853]: I0127 19:01:54.357776 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="4e501ca5-b7d8-455e-b978-f35db402ea8a" containerName="proxy-httpd" Jan 27 19:01:54 crc kubenswrapper[4853]: E0127 19:01:54.357785 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4e501ca5-b7d8-455e-b978-f35db402ea8a" containerName="ceilometer-central-agent" Jan 27 19:01:54 crc kubenswrapper[4853]: I0127 19:01:54.357792 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="4e501ca5-b7d8-455e-b978-f35db402ea8a" containerName="ceilometer-central-agent" Jan 27 19:01:54 crc kubenswrapper[4853]: I0127 19:01:54.357963 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="4e501ca5-b7d8-455e-b978-f35db402ea8a" containerName="ceilometer-notification-agent" Jan 27 19:01:54 crc kubenswrapper[4853]: I0127 19:01:54.357981 4853 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="4e501ca5-b7d8-455e-b978-f35db402ea8a" containerName="proxy-httpd" Jan 27 19:01:54 crc kubenswrapper[4853]: I0127 19:01:54.357991 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="4e501ca5-b7d8-455e-b978-f35db402ea8a" containerName="sg-core" Jan 27 19:01:54 crc kubenswrapper[4853]: I0127 19:01:54.358003 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="4e501ca5-b7d8-455e-b978-f35db402ea8a" containerName="ceilometer-central-agent" Jan 27 19:01:54 crc kubenswrapper[4853]: I0127 19:01:54.359826 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 27 19:01:54 crc kubenswrapper[4853]: I0127 19:01:54.364992 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 27 19:01:54 crc kubenswrapper[4853]: I0127 19:01:54.365425 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 27 19:01:54 crc kubenswrapper[4853]: I0127 19:01:54.375807 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 27 19:01:54 crc kubenswrapper[4853]: I0127 19:01:54.480182 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/65b5c226-54b4-4d6c-a8fa-80cd157faf69-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"65b5c226-54b4-4d6c-a8fa-80cd157faf69\") " pod="openstack/ceilometer-0" Jan 27 19:01:54 crc kubenswrapper[4853]: I0127 19:01:54.480262 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/65b5c226-54b4-4d6c-a8fa-80cd157faf69-config-data\") pod \"ceilometer-0\" (UID: \"65b5c226-54b4-4d6c-a8fa-80cd157faf69\") " pod="openstack/ceilometer-0" Jan 27 19:01:54 crc kubenswrapper[4853]: I0127 19:01:54.480418 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/65b5c226-54b4-4d6c-a8fa-80cd157faf69-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"65b5c226-54b4-4d6c-a8fa-80cd157faf69\") " pod="openstack/ceilometer-0" Jan 27 19:01:54 crc kubenswrapper[4853]: I0127 19:01:54.480621 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/65b5c226-54b4-4d6c-a8fa-80cd157faf69-run-httpd\") pod \"ceilometer-0\" (UID: \"65b5c226-54b4-4d6c-a8fa-80cd157faf69\") " pod="openstack/ceilometer-0" Jan 27 19:01:54 crc kubenswrapper[4853]: I0127 19:01:54.480689 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/65b5c226-54b4-4d6c-a8fa-80cd157faf69-log-httpd\") pod \"ceilometer-0\" (UID: \"65b5c226-54b4-4d6c-a8fa-80cd157faf69\") " pod="openstack/ceilometer-0" Jan 27 19:01:54 crc kubenswrapper[4853]: I0127 19:01:54.480714 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cbgr8\" (UniqueName: \"kubernetes.io/projected/65b5c226-54b4-4d6c-a8fa-80cd157faf69-kube-api-access-cbgr8\") pod \"ceilometer-0\" (UID: \"65b5c226-54b4-4d6c-a8fa-80cd157faf69\") " pod="openstack/ceilometer-0" Jan 27 19:01:54 crc kubenswrapper[4853]: I0127 19:01:54.480872 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/65b5c226-54b4-4d6c-a8fa-80cd157faf69-scripts\") pod \"ceilometer-0\" (UID: \"65b5c226-54b4-4d6c-a8fa-80cd157faf69\") " pod="openstack/ceilometer-0" Jan 27 19:01:54 crc kubenswrapper[4853]: I0127 19:01:54.582782 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/65b5c226-54b4-4d6c-a8fa-80cd157faf69-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"65b5c226-54b4-4d6c-a8fa-80cd157faf69\") " pod="openstack/ceilometer-0" Jan 27 19:01:54 crc kubenswrapper[4853]: I0127 19:01:54.582845 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/65b5c226-54b4-4d6c-a8fa-80cd157faf69-config-data\") pod \"ceilometer-0\" (UID: \"65b5c226-54b4-4d6c-a8fa-80cd157faf69\") " pod="openstack/ceilometer-0" Jan 27 19:01:54 crc kubenswrapper[4853]: I0127 19:01:54.582889 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/65b5c226-54b4-4d6c-a8fa-80cd157faf69-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"65b5c226-54b4-4d6c-a8fa-80cd157faf69\") " pod="openstack/ceilometer-0" Jan 27 19:01:54 crc kubenswrapper[4853]: I0127 19:01:54.582932 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/65b5c226-54b4-4d6c-a8fa-80cd157faf69-run-httpd\") pod \"ceilometer-0\" (UID: \"65b5c226-54b4-4d6c-a8fa-80cd157faf69\") " pod="openstack/ceilometer-0" Jan 27 19:01:54 crc kubenswrapper[4853]: I0127 19:01:54.582955 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/65b5c226-54b4-4d6c-a8fa-80cd157faf69-log-httpd\") pod \"ceilometer-0\" (UID: \"65b5c226-54b4-4d6c-a8fa-80cd157faf69\") " pod="openstack/ceilometer-0" Jan 27 19:01:54 crc kubenswrapper[4853]: I0127 19:01:54.582969 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cbgr8\" (UniqueName: \"kubernetes.io/projected/65b5c226-54b4-4d6c-a8fa-80cd157faf69-kube-api-access-cbgr8\") pod \"ceilometer-0\" (UID: \"65b5c226-54b4-4d6c-a8fa-80cd157faf69\") " pod="openstack/ceilometer-0" Jan 27 19:01:54 crc kubenswrapper[4853]: I0127 19:01:54.583008 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/65b5c226-54b4-4d6c-a8fa-80cd157faf69-scripts\") pod \"ceilometer-0\" (UID: \"65b5c226-54b4-4d6c-a8fa-80cd157faf69\") " pod="openstack/ceilometer-0" Jan 27 19:01:54 crc kubenswrapper[4853]: I0127 19:01:54.584138 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/65b5c226-54b4-4d6c-a8fa-80cd157faf69-log-httpd\") pod \"ceilometer-0\" (UID: \"65b5c226-54b4-4d6c-a8fa-80cd157faf69\") " pod="openstack/ceilometer-0" Jan 27 19:01:54 crc kubenswrapper[4853]: I0127 19:01:54.584258 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/65b5c226-54b4-4d6c-a8fa-80cd157faf69-run-httpd\") pod \"ceilometer-0\" (UID: \"65b5c226-54b4-4d6c-a8fa-80cd157faf69\") " pod="openstack/ceilometer-0" Jan 27 19:01:54 crc kubenswrapper[4853]: I0127 19:01:54.587633 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/65b5c226-54b4-4d6c-a8fa-80cd157faf69-config-data\") pod \"ceilometer-0\" (UID: \"65b5c226-54b4-4d6c-a8fa-80cd157faf69\") " pod="openstack/ceilometer-0" Jan 27 19:01:54 crc kubenswrapper[4853]: I0127 19:01:54.588220 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/65b5c226-54b4-4d6c-a8fa-80cd157faf69-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"65b5c226-54b4-4d6c-a8fa-80cd157faf69\") " pod="openstack/ceilometer-0" Jan 27 19:01:54 crc kubenswrapper[4853]: I0127 19:01:54.588341 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/65b5c226-54b4-4d6c-a8fa-80cd157faf69-scripts\") pod \"ceilometer-0\" (UID: \"65b5c226-54b4-4d6c-a8fa-80cd157faf69\") " pod="openstack/ceilometer-0" Jan 27 19:01:54 crc kubenswrapper[4853]: I0127 19:01:54.588890 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/65b5c226-54b4-4d6c-a8fa-80cd157faf69-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"65b5c226-54b4-4d6c-a8fa-80cd157faf69\") " pod="openstack/ceilometer-0" Jan 27 19:01:54 crc kubenswrapper[4853]: I0127 19:01:54.603173 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cbgr8\" (UniqueName: \"kubernetes.io/projected/65b5c226-54b4-4d6c-a8fa-80cd157faf69-kube-api-access-cbgr8\") pod \"ceilometer-0\" (UID: \"65b5c226-54b4-4d6c-a8fa-80cd157faf69\") " pod="openstack/ceilometer-0" Jan 27 19:01:54 crc kubenswrapper[4853]: I0127 19:01:54.689677 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 27 19:01:55 crc kubenswrapper[4853]: I0127 19:01:55.140168 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 27 19:01:55 crc kubenswrapper[4853]: W0127 19:01:55.169376 4853 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod65b5c226_54b4_4d6c_a8fa_80cd157faf69.slice/crio-2dba3f88018de13c4a3cbafd4cd0fba5e8830745645feed8da98833ab6e892da WatchSource:0}: Error finding container 2dba3f88018de13c4a3cbafd4cd0fba5e8830745645feed8da98833ab6e892da: Status 404 returned error can't find the container with id 2dba3f88018de13c4a3cbafd4cd0fba5e8830745645feed8da98833ab6e892da Jan 27 19:01:55 crc kubenswrapper[4853]: I0127 19:01:55.272889 4853 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-qsnwz" Jan 27 19:01:55 crc kubenswrapper[4853]: I0127 19:01:55.400166 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5e6fa082-0473-46ce-815c-bee7d4d2903a-config-data\") pod \"5e6fa082-0473-46ce-815c-bee7d4d2903a\" (UID: \"5e6fa082-0473-46ce-815c-bee7d4d2903a\") " Jan 27 19:01:55 crc kubenswrapper[4853]: I0127 19:01:55.400627 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5e6fa082-0473-46ce-815c-bee7d4d2903a-scripts\") pod \"5e6fa082-0473-46ce-815c-bee7d4d2903a\" (UID: \"5e6fa082-0473-46ce-815c-bee7d4d2903a\") " Jan 27 19:01:55 crc kubenswrapper[4853]: I0127 19:01:55.400807 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5e6fa082-0473-46ce-815c-bee7d4d2903a-combined-ca-bundle\") pod \"5e6fa082-0473-46ce-815c-bee7d4d2903a\" (UID: \"5e6fa082-0473-46ce-815c-bee7d4d2903a\") " Jan 27 19:01:55 crc kubenswrapper[4853]: I0127 19:01:55.400928 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xbffh\" (UniqueName: \"kubernetes.io/projected/5e6fa082-0473-46ce-815c-bee7d4d2903a-kube-api-access-xbffh\") pod \"5e6fa082-0473-46ce-815c-bee7d4d2903a\" (UID: \"5e6fa082-0473-46ce-815c-bee7d4d2903a\") " Jan 27 19:01:55 crc kubenswrapper[4853]: I0127 19:01:55.405348 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5e6fa082-0473-46ce-815c-bee7d4d2903a-kube-api-access-xbffh" (OuterVolumeSpecName: "kube-api-access-xbffh") pod "5e6fa082-0473-46ce-815c-bee7d4d2903a" (UID: "5e6fa082-0473-46ce-815c-bee7d4d2903a"). InnerVolumeSpecName "kube-api-access-xbffh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 19:01:55 crc kubenswrapper[4853]: I0127 19:01:55.405734 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5e6fa082-0473-46ce-815c-bee7d4d2903a-scripts" (OuterVolumeSpecName: "scripts") pod "5e6fa082-0473-46ce-815c-bee7d4d2903a" (UID: "5e6fa082-0473-46ce-815c-bee7d4d2903a"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 19:01:55 crc kubenswrapper[4853]: I0127 19:01:55.425094 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5e6fa082-0473-46ce-815c-bee7d4d2903a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5e6fa082-0473-46ce-815c-bee7d4d2903a" (UID: "5e6fa082-0473-46ce-815c-bee7d4d2903a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 19:01:55 crc kubenswrapper[4853]: I0127 19:01:55.449280 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5e6fa082-0473-46ce-815c-bee7d4d2903a-config-data" (OuterVolumeSpecName: "config-data") pod "5e6fa082-0473-46ce-815c-bee7d4d2903a" (UID: "5e6fa082-0473-46ce-815c-bee7d4d2903a"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 19:01:55 crc kubenswrapper[4853]: I0127 19:01:55.503173 4853 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5e6fa082-0473-46ce-815c-bee7d4d2903a-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 19:01:55 crc kubenswrapper[4853]: I0127 19:01:55.503206 4853 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5e6fa082-0473-46ce-815c-bee7d4d2903a-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 19:01:55 crc kubenswrapper[4853]: I0127 19:01:55.503215 4853 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5e6fa082-0473-46ce-815c-bee7d4d2903a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 19:01:55 crc kubenswrapper[4853]: I0127 19:01:55.503226 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xbffh\" (UniqueName: \"kubernetes.io/projected/5e6fa082-0473-46ce-815c-bee7d4d2903a-kube-api-access-xbffh\") on node \"crc\" DevicePath \"\"" Jan 27 19:01:56 crc kubenswrapper[4853]: I0127 19:01:56.020527 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"65b5c226-54b4-4d6c-a8fa-80cd157faf69","Type":"ContainerStarted","Data":"2dba3f88018de13c4a3cbafd4cd0fba5e8830745645feed8da98833ab6e892da"} Jan 27 19:01:56 crc kubenswrapper[4853]: I0127 19:01:56.022506 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-qsnwz" event={"ID":"5e6fa082-0473-46ce-815c-bee7d4d2903a","Type":"ContainerDied","Data":"1448fd0086b2e3bfa9369e46b78635e5064a74d7dff95e99ccf16365dd8a76af"} Jan 27 19:01:56 crc kubenswrapper[4853]: I0127 19:01:56.022534 4853 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1448fd0086b2e3bfa9369e46b78635e5064a74d7dff95e99ccf16365dd8a76af" Jan 27 19:01:56 crc kubenswrapper[4853]: I0127 19:01:56.022594 4853 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-qsnwz" Jan 27 19:01:56 crc kubenswrapper[4853]: I0127 19:01:56.025514 4853 generic.go:334] "Generic (PLEG): container finished" podID="28f114cd-daca-4c71-9ecd-64b8008ddbef" containerID="3f45ef7c9031b5bacc230d61643ec9c587047e9e52878a4b6f37d775f2bdb5a2" exitCode=0 Jan 27 19:01:56 crc kubenswrapper[4853]: I0127 19:01:56.025567 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-c78c8d4f6-bchzm" event={"ID":"28f114cd-daca-4c71-9ecd-64b8008ddbef","Type":"ContainerDied","Data":"3f45ef7c9031b5bacc230d61643ec9c587047e9e52878a4b6f37d775f2bdb5a2"} Jan 27 19:01:56 crc kubenswrapper[4853]: I0127 19:01:56.025609 4853 scope.go:117] "RemoveContainer" containerID="5eb06c79644ed85292d51f529bc88f05f3d36c0c73a7d7ccd7b435ebbe58e251" Jan 27 19:01:56 crc kubenswrapper[4853]: I0127 19:01:56.129460 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4e501ca5-b7d8-455e-b978-f35db402ea8a" path="/var/lib/kubelet/pods/4e501ca5-b7d8-455e-b978-f35db402ea8a/volumes" Jan 27 19:01:56 crc kubenswrapper[4853]: I0127 19:01:56.134268 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 27 19:01:56 crc kubenswrapper[4853]: E0127 19:01:56.142350 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5e6fa082-0473-46ce-815c-bee7d4d2903a" containerName="nova-cell0-conductor-db-sync" Jan 27 19:01:56 crc kubenswrapper[4853]: I0127 19:01:56.142408 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="5e6fa082-0473-46ce-815c-bee7d4d2903a" containerName="nova-cell0-conductor-db-sync" Jan 27 19:01:56 crc kubenswrapper[4853]: I0127 19:01:56.143305 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="5e6fa082-0473-46ce-815c-bee7d4d2903a" containerName="nova-cell0-conductor-db-sync" Jan 27 19:01:56 crc kubenswrapper[4853]: I0127 19:01:56.144731 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Jan 27 19:01:56 crc kubenswrapper[4853]: I0127 19:01:56.156028 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-z5jqv" Jan 27 19:01:56 crc kubenswrapper[4853]: I0127 19:01:56.158175 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Jan 27 19:01:56 crc kubenswrapper[4853]: I0127 19:01:56.189660 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 27 19:01:56 crc kubenswrapper[4853]: I0127 19:01:56.214585 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d238c8e7-40ad-4834-8af2-0d942d49852a-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"d238c8e7-40ad-4834-8af2-0d942d49852a\") " pod="openstack/nova-cell0-conductor-0" Jan 27 19:01:56 crc kubenswrapper[4853]: I0127 19:01:56.214797 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rwnnn\" (UniqueName: \"kubernetes.io/projected/d238c8e7-40ad-4834-8af2-0d942d49852a-kube-api-access-rwnnn\") pod \"nova-cell0-conductor-0\" (UID: \"d238c8e7-40ad-4834-8af2-0d942d49852a\") " pod="openstack/nova-cell0-conductor-0" Jan 27 19:01:56 crc kubenswrapper[4853]: I0127 19:01:56.214825 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d238c8e7-40ad-4834-8af2-0d942d49852a-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"d238c8e7-40ad-4834-8af2-0d942d49852a\") " pod="openstack/nova-cell0-conductor-0" Jan 27 19:01:56 crc kubenswrapper[4853]: I0127 19:01:56.316589 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d238c8e7-40ad-4834-8af2-0d942d49852a-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"d238c8e7-40ad-4834-8af2-0d942d49852a\") " pod="openstack/nova-cell0-conductor-0" Jan 27 19:01:56 crc kubenswrapper[4853]: I0127 19:01:56.316728 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rwnnn\" (UniqueName: \"kubernetes.io/projected/d238c8e7-40ad-4834-8af2-0d942d49852a-kube-api-access-rwnnn\") pod \"nova-cell0-conductor-0\" (UID: \"d238c8e7-40ad-4834-8af2-0d942d49852a\") " pod="openstack/nova-cell0-conductor-0" Jan 27 19:01:56 crc kubenswrapper[4853]: I0127 19:01:56.316755 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d238c8e7-40ad-4834-8af2-0d942d49852a-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"d238c8e7-40ad-4834-8af2-0d942d49852a\") " pod="openstack/nova-cell0-conductor-0" Jan 27 19:01:56 crc kubenswrapper[4853]: I0127 19:01:56.323429 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d238c8e7-40ad-4834-8af2-0d942d49852a-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"d238c8e7-40ad-4834-8af2-0d942d49852a\") " pod="openstack/nova-cell0-conductor-0" Jan 27 19:01:56 crc kubenswrapper[4853]: I0127 19:01:56.323850 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d238c8e7-40ad-4834-8af2-0d942d49852a-config-data\") pod \"nova-cell0-conductor-0\" 
(UID: \"d238c8e7-40ad-4834-8af2-0d942d49852a\") " pod="openstack/nova-cell0-conductor-0" Jan 27 19:01:56 crc kubenswrapper[4853]: I0127 19:01:56.334971 4853 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-c78c8d4f6-bchzm" podUID="28f114cd-daca-4c71-9ecd-64b8008ddbef" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.148:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.148:8443: connect: connection refused" Jan 27 19:01:56 crc kubenswrapper[4853]: I0127 19:01:56.335277 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rwnnn\" (UniqueName: \"kubernetes.io/projected/d238c8e7-40ad-4834-8af2-0d942d49852a-kube-api-access-rwnnn\") pod \"nova-cell0-conductor-0\" (UID: \"d238c8e7-40ad-4834-8af2-0d942d49852a\") " pod="openstack/nova-cell0-conductor-0" Jan 27 19:01:56 crc kubenswrapper[4853]: I0127 19:01:56.470679 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Jan 27 19:01:56 crc kubenswrapper[4853]: I0127 19:01:56.939032 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 27 19:01:56 crc kubenswrapper[4853]: W0127 19:01:56.946524 4853 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd238c8e7_40ad_4834_8af2_0d942d49852a.slice/crio-7773c7d868a12d2c243b9ae7dcd05362295c47166620c794b8e00c283f18b6f7 WatchSource:0}: Error finding container 7773c7d868a12d2c243b9ae7dcd05362295c47166620c794b8e00c283f18b6f7: Status 404 returned error can't find the container with id 7773c7d868a12d2c243b9ae7dcd05362295c47166620c794b8e00c283f18b6f7 Jan 27 19:01:57 crc kubenswrapper[4853]: I0127 19:01:57.037690 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"d238c8e7-40ad-4834-8af2-0d942d49852a","Type":"ContainerStarted","Data":"7773c7d868a12d2c243b9ae7dcd05362295c47166620c794b8e00c283f18b6f7"} Jan 27 19:01:58 crc kubenswrapper[4853]: I0127 19:01:58.064098 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"d238c8e7-40ad-4834-8af2-0d942d49852a","Type":"ContainerStarted","Data":"0c51f66a817a93b5e6d37696a966a43d1a54cc7f6b5db6b9a11a1f750bdde3b9"} Jan 27 19:01:58 crc kubenswrapper[4853]: I0127 19:01:58.065026 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Jan 27 19:01:58 crc kubenswrapper[4853]: I0127 19:01:58.067937 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"65b5c226-54b4-4d6c-a8fa-80cd157faf69","Type":"ContainerStarted","Data":"3190f5c36cdfe9dd1569e22942c6238eee369c5f38fc7eef00747c380ae0ccc4"} Jan 27 19:01:58 crc kubenswrapper[4853]: I0127 19:01:58.067965 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"65b5c226-54b4-4d6c-a8fa-80cd157faf69","Type":"ContainerStarted","Data":"c919b11cc6629b38bf38a52fd0c91c5843bbf741b75ab13e9784c6e3ac444583"} Jan 27 19:01:58 crc kubenswrapper[4853]: I0127 19:01:58.081018 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=2.081002175 podStartE2EDuration="2.081002175s" podCreationTimestamp="2026-01-27 19:01:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-01-27 19:01:58.078276948 +0000 UTC m=+1160.540819831" watchObservedRunningTime="2026-01-27 19:01:58.081002175 +0000 UTC m=+1160.543545048" Jan 27 19:01:59 crc kubenswrapper[4853]: I0127 19:01:59.077735 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"65b5c226-54b4-4d6c-a8fa-80cd157faf69","Type":"ContainerStarted","Data":"b8e9c370edb9950021ccedd6bebe0338d3d3561c1d9285d140df4dffa0f84b10"} Jan 27 19:02:01 crc kubenswrapper[4853]: I0127 19:02:01.098706 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"65b5c226-54b4-4d6c-a8fa-80cd157faf69","Type":"ContainerStarted","Data":"194d31d04ddaa32ae0b25a8738de7776f0be0a85c7df225f305db8b62639b188"} Jan 27 19:02:01 crc kubenswrapper[4853]: I0127 19:02:01.099342 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 27 19:02:01 crc kubenswrapper[4853]: I0127 19:02:01.128500 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.231896673 podStartE2EDuration="7.12848076s" podCreationTimestamp="2026-01-27 19:01:54 +0000 UTC" firstStartedPulling="2026-01-27 19:01:55.185482708 +0000 UTC m=+1157.648025591" lastFinishedPulling="2026-01-27 19:02:00.082066795 +0000 UTC m=+1162.544609678" observedRunningTime="2026-01-27 19:02:01.127924624 +0000 UTC m=+1163.590467517" watchObservedRunningTime="2026-01-27 19:02:01.12848076 +0000 UTC m=+1163.591023643" Jan 27 19:02:05 crc kubenswrapper[4853]: I0127 19:02:05.541558 4853 patch_prober.go:28] interesting pod/machine-config-daemon-6gqj2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 19:02:05 crc kubenswrapper[4853]: I0127 19:02:05.542290 4853 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6gqj2" podUID="b8a89b1e-bef8-4cb7-930c-480d3125778c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 19:02:05 crc kubenswrapper[4853]: I0127 19:02:05.542369 4853 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-6gqj2" Jan 27 19:02:05 crc kubenswrapper[4853]: I0127 19:02:05.543266 4853 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"a0719d2d74e31dba5f0b13e64100839f15049069456c6041563b2a237f331790"} pod="openshift-machine-config-operator/machine-config-daemon-6gqj2" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 27 19:02:05 crc kubenswrapper[4853]: I0127 19:02:05.543340 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-6gqj2" podUID="b8a89b1e-bef8-4cb7-930c-480d3125778c" containerName="machine-config-daemon" containerID="cri-o://a0719d2d74e31dba5f0b13e64100839f15049069456c6041563b2a237f331790" gracePeriod=600 Jan 27 19:02:06 crc kubenswrapper[4853]: I0127 19:02:06.164479 4853 generic.go:334] "Generic (PLEG): container finished" podID="b8a89b1e-bef8-4cb7-930c-480d3125778c" 
containerID="a0719d2d74e31dba5f0b13e64100839f15049069456c6041563b2a237f331790" exitCode=0 Jan 27 19:02:06 crc kubenswrapper[4853]: I0127 19:02:06.164565 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6gqj2" event={"ID":"b8a89b1e-bef8-4cb7-930c-480d3125778c","Type":"ContainerDied","Data":"a0719d2d74e31dba5f0b13e64100839f15049069456c6041563b2a237f331790"} Jan 27 19:02:06 crc kubenswrapper[4853]: I0127 19:02:06.164837 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6gqj2" event={"ID":"b8a89b1e-bef8-4cb7-930c-480d3125778c","Type":"ContainerStarted","Data":"2ebd72a88c0d92677bf2c3606656647e62120a28bd35c3672caa1084df04a23b"} Jan 27 19:02:06 crc kubenswrapper[4853]: I0127 19:02:06.164861 4853 scope.go:117] "RemoveContainer" containerID="31e88473416602e1651b8a73df75f161960712c5955c442cb3ea237f2fe7ca04" Jan 27 19:02:06 crc kubenswrapper[4853]: I0127 19:02:06.335200 4853 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-c78c8d4f6-bchzm" podUID="28f114cd-daca-4c71-9ecd-64b8008ddbef" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.148:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.148:8443: connect: connection refused" Jan 27 19:02:06 crc kubenswrapper[4853]: I0127 19:02:06.518353 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0" Jan 27 19:02:07 crc kubenswrapper[4853]: I0127 19:02:07.046922 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-cell-mapping-x47j6"] Jan 27 19:02:07 crc kubenswrapper[4853]: I0127 19:02:07.048613 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-x47j6" Jan 27 19:02:07 crc kubenswrapper[4853]: I0127 19:02:07.051135 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-config-data" Jan 27 19:02:07 crc kubenswrapper[4853]: I0127 19:02:07.052655 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-scripts" Jan 27 19:02:07 crc kubenswrapper[4853]: I0127 19:02:07.068980 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-x47j6"] Jan 27 19:02:07 crc kubenswrapper[4853]: I0127 19:02:07.118603 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1d74daf6-98b2-437c-8415-3053a40cedef-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-x47j6\" (UID: \"1d74daf6-98b2-437c-8415-3053a40cedef\") " pod="openstack/nova-cell0-cell-mapping-x47j6" Jan 27 19:02:07 crc kubenswrapper[4853]: I0127 19:02:07.118927 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1d74daf6-98b2-437c-8415-3053a40cedef-config-data\") pod \"nova-cell0-cell-mapping-x47j6\" (UID: \"1d74daf6-98b2-437c-8415-3053a40cedef\") " pod="openstack/nova-cell0-cell-mapping-x47j6" Jan 27 19:02:07 crc kubenswrapper[4853]: I0127 19:02:07.119111 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1d74daf6-98b2-437c-8415-3053a40cedef-scripts\") pod \"nova-cell0-cell-mapping-x47j6\" (UID: \"1d74daf6-98b2-437c-8415-3053a40cedef\") " pod="openstack/nova-cell0-cell-mapping-x47j6" Jan 27 19:02:07 crc kubenswrapper[4853]: I0127 19:02:07.119348 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w4lrt\" (UniqueName: \"kubernetes.io/projected/1d74daf6-98b2-437c-8415-3053a40cedef-kube-api-access-w4lrt\") pod \"nova-cell0-cell-mapping-x47j6\" (UID: \"1d74daf6-98b2-437c-8415-3053a40cedef\") " pod="openstack/nova-cell0-cell-mapping-x47j6" Jan 27 19:02:07 crc kubenswrapper[4853]: I0127 19:02:07.193813 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 27 19:02:07 crc kubenswrapper[4853]: I0127 19:02:07.197025 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 27 19:02:07 crc kubenswrapper[4853]: I0127 19:02:07.202804 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 27 19:02:07 crc kubenswrapper[4853]: I0127 19:02:07.261844 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 27 19:02:07 crc kubenswrapper[4853]: I0127 19:02:07.263803 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4fc6adfd-212b-4248-8c09-c993acc3459c-config-data\") pod \"nova-api-0\" (UID: \"4fc6adfd-212b-4248-8c09-c993acc3459c\") " pod="openstack/nova-api-0" Jan 27 19:02:07 crc kubenswrapper[4853]: I0127 19:02:07.263856 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1d74daf6-98b2-437c-8415-3053a40cedef-config-data\") pod \"nova-cell0-cell-mapping-x47j6\" (UID: \"1d74daf6-98b2-437c-8415-3053a40cedef\") " pod="openstack/nova-cell0-cell-mapping-x47j6" Jan 27 19:02:07 crc kubenswrapper[4853]: I0127 19:02:07.263913 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1d74daf6-98b2-437c-8415-3053a40cedef-scripts\") pod \"nova-cell0-cell-mapping-x47j6\" (UID: \"1d74daf6-98b2-437c-8415-3053a40cedef\") " pod="openstack/nova-cell0-cell-mapping-x47j6" Jan 27 19:02:07 crc kubenswrapper[4853]: I0127 19:02:07.263940 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4fc6adfd-212b-4248-8c09-c993acc3459c-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"4fc6adfd-212b-4248-8c09-c993acc3459c\") " pod="openstack/nova-api-0" Jan 27 19:02:07 crc kubenswrapper[4853]: I0127 19:02:07.263981 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w4lrt\" (UniqueName: \"kubernetes.io/projected/1d74daf6-98b2-437c-8415-3053a40cedef-kube-api-access-w4lrt\") pod \"nova-cell0-cell-mapping-x47j6\" (UID: \"1d74daf6-98b2-437c-8415-3053a40cedef\") " pod="openstack/nova-cell0-cell-mapping-x47j6" Jan 27 19:02:07 crc kubenswrapper[4853]: I0127 19:02:07.264062 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nzx4z\" (UniqueName: \"kubernetes.io/projected/4fc6adfd-212b-4248-8c09-c993acc3459c-kube-api-access-nzx4z\") pod \"nova-api-0\" (UID: \"4fc6adfd-212b-4248-8c09-c993acc3459c\") " pod="openstack/nova-api-0" Jan 27 19:02:07 crc kubenswrapper[4853]: I0127 19:02:07.264106 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4fc6adfd-212b-4248-8c09-c993acc3459c-logs\") pod \"nova-api-0\" (UID: \"4fc6adfd-212b-4248-8c09-c993acc3459c\") " pod="openstack/nova-api-0" Jan 27 19:02:07 crc kubenswrapper[4853]: I0127 19:02:07.283109 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1d74daf6-98b2-437c-8415-3053a40cedef-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-x47j6\" (UID: \"1d74daf6-98b2-437c-8415-3053a40cedef\") " pod="openstack/nova-cell0-cell-mapping-x47j6" Jan 27 19:02:07 crc kubenswrapper[4853]: I0127 19:02:07.298439 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1d74daf6-98b2-437c-8415-3053a40cedef-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-x47j6\" (UID: \"1d74daf6-98b2-437c-8415-3053a40cedef\") " pod="openstack/nova-cell0-cell-mapping-x47j6" Jan 27 19:02:07 crc kubenswrapper[4853]: I0127 19:02:07.301612 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1d74daf6-98b2-437c-8415-3053a40cedef-config-data\") pod \"nova-cell0-cell-mapping-x47j6\" (UID: \"1d74daf6-98b2-437c-8415-3053a40cedef\") " pod="openstack/nova-cell0-cell-mapping-x47j6" Jan 27 19:02:07 crc kubenswrapper[4853]: I0127 19:02:07.325852 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1d74daf6-98b2-437c-8415-3053a40cedef-scripts\") pod \"nova-cell0-cell-mapping-x47j6\" (UID: \"1d74daf6-98b2-437c-8415-3053a40cedef\") " pod="openstack/nova-cell0-cell-mapping-x47j6" Jan 27 19:02:07 crc kubenswrapper[4853]: I0127 19:02:07.374897 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w4lrt\" (UniqueName: \"kubernetes.io/projected/1d74daf6-98b2-437c-8415-3053a40cedef-kube-api-access-w4lrt\") pod \"nova-cell0-cell-mapping-x47j6\" (UID: \"1d74daf6-98b2-437c-8415-3053a40cedef\") " pod="openstack/nova-cell0-cell-mapping-x47j6" Jan 27 19:02:07 crc kubenswrapper[4853]: I0127 19:02:07.386235 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Jan 27 19:02:07 crc kubenswrapper[4853]: I0127 19:02:07.387859 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 27 19:02:07 crc kubenswrapper[4853]: I0127 19:02:07.389237 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4fc6adfd-212b-4248-8c09-c993acc3459c-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"4fc6adfd-212b-4248-8c09-c993acc3459c\") " pod="openstack/nova-api-0" Jan 27 19:02:07 crc kubenswrapper[4853]: I0127 19:02:07.389330 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nzx4z\" (UniqueName: \"kubernetes.io/projected/4fc6adfd-212b-4248-8c09-c993acc3459c-kube-api-access-nzx4z\") pod \"nova-api-0\" (UID: \"4fc6adfd-212b-4248-8c09-c993acc3459c\") " pod="openstack/nova-api-0" Jan 27 19:02:07 crc kubenswrapper[4853]: I0127 19:02:07.389372 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4fc6adfd-212b-4248-8c09-c993acc3459c-logs\") pod \"nova-api-0\" (UID: \"4fc6adfd-212b-4248-8c09-c993acc3459c\") " pod="openstack/nova-api-0" Jan 27 19:02:07 crc kubenswrapper[4853]: I0127 19:02:07.389427 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4fc6adfd-212b-4248-8c09-c993acc3459c-config-data\") pod \"nova-api-0\" (UID: \"4fc6adfd-212b-4248-8c09-c993acc3459c\") " pod="openstack/nova-api-0" Jan 27 19:02:07 crc kubenswrapper[4853]: I0127 19:02:07.391796 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4fc6adfd-212b-4248-8c09-c993acc3459c-logs\") pod \"nova-api-0\" (UID: \"4fc6adfd-212b-4248-8c09-c993acc3459c\") " pod="openstack/nova-api-0" Jan 27 19:02:07 crc kubenswrapper[4853]: I0127 19:02:07.391920 4853 reflector.go:368] Caches populated for *v1.Secret 
from object-"openstack"/"nova-scheduler-config-data" Jan 27 19:02:07 crc kubenswrapper[4853]: I0127 19:02:07.401840 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4fc6adfd-212b-4248-8c09-c993acc3459c-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"4fc6adfd-212b-4248-8c09-c993acc3459c\") " pod="openstack/nova-api-0" Jan 27 19:02:07 crc kubenswrapper[4853]: I0127 19:02:07.421198 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4fc6adfd-212b-4248-8c09-c993acc3459c-config-data\") pod \"nova-api-0\" (UID: \"4fc6adfd-212b-4248-8c09-c993acc3459c\") " pod="openstack/nova-api-0" Jan 27 19:02:07 crc kubenswrapper[4853]: I0127 19:02:07.428144 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nzx4z\" (UniqueName: \"kubernetes.io/projected/4fc6adfd-212b-4248-8c09-c993acc3459c-kube-api-access-nzx4z\") pod \"nova-api-0\" (UID: \"4fc6adfd-212b-4248-8c09-c993acc3459c\") " pod="openstack/nova-api-0" Jan 27 19:02:07 crc kubenswrapper[4853]: I0127 19:02:07.446722 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 27 19:02:07 crc kubenswrapper[4853]: I0127 19:02:07.456677 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 27 19:02:07 crc kubenswrapper[4853]: I0127 19:02:07.459478 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 27 19:02:07 crc kubenswrapper[4853]: I0127 19:02:07.464379 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Jan 27 19:02:07 crc kubenswrapper[4853]: I0127 19:02:07.477104 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 27 19:02:07 crc kubenswrapper[4853]: I0127 19:02:07.497153 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 27 19:02:07 crc kubenswrapper[4853]: I0127 19:02:07.499288 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z9bpn\" (UniqueName: \"kubernetes.io/projected/290cb89a-f259-4224-9e20-ad509d9e8d27-kube-api-access-z9bpn\") pod \"nova-scheduler-0\" (UID: \"290cb89a-f259-4224-9e20-ad509d9e8d27\") " pod="openstack/nova-scheduler-0" Jan 27 19:02:07 crc kubenswrapper[4853]: I0127 19:02:07.499356 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/290cb89a-f259-4224-9e20-ad509d9e8d27-config-data\") pod \"nova-scheduler-0\" (UID: \"290cb89a-f259-4224-9e20-ad509d9e8d27\") " pod="openstack/nova-scheduler-0" Jan 27 19:02:07 crc kubenswrapper[4853]: I0127 19:02:07.499386 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/290cb89a-f259-4224-9e20-ad509d9e8d27-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"290cb89a-f259-4224-9e20-ad509d9e8d27\") " pod="openstack/nova-scheduler-0" Jan 27 19:02:07 crc kubenswrapper[4853]: I0127 19:02:07.587033 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 27 19:02:07 crc kubenswrapper[4853]: I0127 19:02:07.594245 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 27 19:02:07 crc kubenswrapper[4853]: I0127 19:02:07.607703 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9b2db297-94b3-4341-b706-b8f47a596ff9-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"9b2db297-94b3-4341-b706-b8f47a596ff9\") " pod="openstack/nova-cell1-novncproxy-0" Jan 27 19:02:07 crc kubenswrapper[4853]: I0127 19:02:07.607779 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/290cb89a-f259-4224-9e20-ad509d9e8d27-config-data\") pod \"nova-scheduler-0\" (UID: \"290cb89a-f259-4224-9e20-ad509d9e8d27\") " pod="openstack/nova-scheduler-0" Jan 27 19:02:07 crc kubenswrapper[4853]: I0127 19:02:07.607840 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/290cb89a-f259-4224-9e20-ad509d9e8d27-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"290cb89a-f259-4224-9e20-ad509d9e8d27\") " pod="openstack/nova-scheduler-0" Jan 27 19:02:07 crc kubenswrapper[4853]: I0127 19:02:07.607936 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wtc4n\" (UniqueName: \"kubernetes.io/projected/9b2db297-94b3-4341-b706-b8f47a596ff9-kube-api-access-wtc4n\") pod \"nova-cell1-novncproxy-0\" (UID: \"9b2db297-94b3-4341-b706-b8f47a596ff9\") " pod="openstack/nova-cell1-novncproxy-0" Jan 27 19:02:07 crc kubenswrapper[4853]: I0127 19:02:07.608093 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9b2db297-94b3-4341-b706-b8f47a596ff9-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"9b2db297-94b3-4341-b706-b8f47a596ff9\") " pod="openstack/nova-cell1-novncproxy-0" Jan 27 19:02:07 crc kubenswrapper[4853]: I0127 19:02:07.608177 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z9bpn\" (UniqueName: \"kubernetes.io/projected/290cb89a-f259-4224-9e20-ad509d9e8d27-kube-api-access-z9bpn\") pod \"nova-scheduler-0\" (UID: \"290cb89a-f259-4224-9e20-ad509d9e8d27\") " pod="openstack/nova-scheduler-0" Jan 27 19:02:07 crc kubenswrapper[4853]: I0127 19:02:07.608784 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 27 19:02:07 crc kubenswrapper[4853]: I0127 19:02:07.622321 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/290cb89a-f259-4224-9e20-ad509d9e8d27-config-data\") pod \"nova-scheduler-0\" (UID: \"290cb89a-f259-4224-9e20-ad509d9e8d27\") " pod="openstack/nova-scheduler-0" Jan 27 19:02:07 crc kubenswrapper[4853]: I0127 19:02:07.628604 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/290cb89a-f259-4224-9e20-ad509d9e8d27-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"290cb89a-f259-4224-9e20-ad509d9e8d27\") " pod="openstack/nova-scheduler-0" Jan 27 19:02:07 crc kubenswrapper[4853]: I0127 19:02:07.643870 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z9bpn\" (UniqueName: \"kubernetes.io/projected/290cb89a-f259-4224-9e20-ad509d9e8d27-kube-api-access-z9bpn\") pod \"nova-scheduler-0\" (UID: 
\"290cb89a-f259-4224-9e20-ad509d9e8d27\") " pod="openstack/nova-scheduler-0" Jan 27 19:02:07 crc kubenswrapper[4853]: I0127 19:02:07.644488 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 27 19:02:07 crc kubenswrapper[4853]: I0127 19:02:07.673416 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-x47j6" Jan 27 19:02:07 crc kubenswrapper[4853]: I0127 19:02:07.715512 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xhgpj\" (UniqueName: \"kubernetes.io/projected/20d4a916-994f-449f-9b81-82623f8f6583-kube-api-access-xhgpj\") pod \"nova-metadata-0\" (UID: \"20d4a916-994f-449f-9b81-82623f8f6583\") " pod="openstack/nova-metadata-0" Jan 27 19:02:07 crc kubenswrapper[4853]: I0127 19:02:07.715572 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wtc4n\" (UniqueName: \"kubernetes.io/projected/9b2db297-94b3-4341-b706-b8f47a596ff9-kube-api-access-wtc4n\") pod \"nova-cell1-novncproxy-0\" (UID: \"9b2db297-94b3-4341-b706-b8f47a596ff9\") " pod="openstack/nova-cell1-novncproxy-0" Jan 27 19:02:07 crc kubenswrapper[4853]: I0127 19:02:07.715670 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9b2db297-94b3-4341-b706-b8f47a596ff9-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"9b2db297-94b3-4341-b706-b8f47a596ff9\") " pod="openstack/nova-cell1-novncproxy-0" Jan 27 19:02:07 crc kubenswrapper[4853]: I0127 19:02:07.715714 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/20d4a916-994f-449f-9b81-82623f8f6583-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"20d4a916-994f-449f-9b81-82623f8f6583\") " pod="openstack/nova-metadata-0" Jan 27 19:02:07 crc kubenswrapper[4853]: I0127 19:02:07.715749 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/20d4a916-994f-449f-9b81-82623f8f6583-config-data\") pod \"nova-metadata-0\" (UID: \"20d4a916-994f-449f-9b81-82623f8f6583\") " pod="openstack/nova-metadata-0" Jan 27 19:02:07 crc kubenswrapper[4853]: I0127 19:02:07.715769 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9b2db297-94b3-4341-b706-b8f47a596ff9-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"9b2db297-94b3-4341-b706-b8f47a596ff9\") " pod="openstack/nova-cell1-novncproxy-0" Jan 27 19:02:07 crc kubenswrapper[4853]: I0127 19:02:07.715802 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/20d4a916-994f-449f-9b81-82623f8f6583-logs\") pod \"nova-metadata-0\" (UID: \"20d4a916-994f-449f-9b81-82623f8f6583\") " pod="openstack/nova-metadata-0" Jan 27 19:02:07 crc kubenswrapper[4853]: I0127 19:02:07.720776 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-757b4f8459-k7brt"] Jan 27 19:02:07 crc kubenswrapper[4853]: I0127 19:02:07.722831 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-757b4f8459-k7brt" Jan 27 19:02:07 crc kubenswrapper[4853]: I0127 19:02:07.732879 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9b2db297-94b3-4341-b706-b8f47a596ff9-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"9b2db297-94b3-4341-b706-b8f47a596ff9\") " pod="openstack/nova-cell1-novncproxy-0" Jan 27 19:02:07 crc kubenswrapper[4853]: I0127 19:02:07.736619 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wtc4n\" (UniqueName: \"kubernetes.io/projected/9b2db297-94b3-4341-b706-b8f47a596ff9-kube-api-access-wtc4n\") pod \"nova-cell1-novncproxy-0\" (UID: \"9b2db297-94b3-4341-b706-b8f47a596ff9\") " pod="openstack/nova-cell1-novncproxy-0" Jan 27 19:02:07 crc kubenswrapper[4853]: I0127 19:02:07.737975 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9b2db297-94b3-4341-b706-b8f47a596ff9-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"9b2db297-94b3-4341-b706-b8f47a596ff9\") " pod="openstack/nova-cell1-novncproxy-0" Jan 27 19:02:07 crc kubenswrapper[4853]: I0127 19:02:07.759528 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-757b4f8459-k7brt"] Jan 27 19:02:07 crc kubenswrapper[4853]: I0127 19:02:07.818306 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/20d4a916-994f-449f-9b81-82623f8f6583-config-data\") pod \"nova-metadata-0\" (UID: \"20d4a916-994f-449f-9b81-82623f8f6583\") " pod="openstack/nova-metadata-0" Jan 27 19:02:07 crc kubenswrapper[4853]: I0127 19:02:07.818378 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/38b68e30-68ad-4dce-befc-98fd9c6aa1b6-dns-swift-storage-0\") pod \"dnsmasq-dns-757b4f8459-k7brt\" (UID: \"38b68e30-68ad-4dce-befc-98fd9c6aa1b6\") " pod="openstack/dnsmasq-dns-757b4f8459-k7brt" Jan 27 19:02:07 crc kubenswrapper[4853]: I0127 19:02:07.818416 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/20d4a916-994f-449f-9b81-82623f8f6583-logs\") pod \"nova-metadata-0\" (UID: \"20d4a916-994f-449f-9b81-82623f8f6583\") " pod="openstack/nova-metadata-0" Jan 27 19:02:07 crc kubenswrapper[4853]: I0127 19:02:07.818448 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/38b68e30-68ad-4dce-befc-98fd9c6aa1b6-dns-svc\") pod \"dnsmasq-dns-757b4f8459-k7brt\" (UID: \"38b68e30-68ad-4dce-befc-98fd9c6aa1b6\") " pod="openstack/dnsmasq-dns-757b4f8459-k7brt" Jan 27 19:02:07 crc kubenswrapper[4853]: I0127 19:02:07.818476 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/38b68e30-68ad-4dce-befc-98fd9c6aa1b6-ovsdbserver-nb\") pod \"dnsmasq-dns-757b4f8459-k7brt\" (UID: \"38b68e30-68ad-4dce-befc-98fd9c6aa1b6\") " pod="openstack/dnsmasq-dns-757b4f8459-k7brt" Jan 27 19:02:07 crc kubenswrapper[4853]: I0127 19:02:07.818508 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xhgpj\" (UniqueName: \"kubernetes.io/projected/20d4a916-994f-449f-9b81-82623f8f6583-kube-api-access-xhgpj\") pod 
\"nova-metadata-0\" (UID: \"20d4a916-994f-449f-9b81-82623f8f6583\") " pod="openstack/nova-metadata-0" Jan 27 19:02:07 crc kubenswrapper[4853]: I0127 19:02:07.818542 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/38b68e30-68ad-4dce-befc-98fd9c6aa1b6-ovsdbserver-sb\") pod \"dnsmasq-dns-757b4f8459-k7brt\" (UID: \"38b68e30-68ad-4dce-befc-98fd9c6aa1b6\") " pod="openstack/dnsmasq-dns-757b4f8459-k7brt" Jan 27 19:02:07 crc kubenswrapper[4853]: I0127 19:02:07.818631 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dj8sk\" (UniqueName: \"kubernetes.io/projected/38b68e30-68ad-4dce-befc-98fd9c6aa1b6-kube-api-access-dj8sk\") pod \"dnsmasq-dns-757b4f8459-k7brt\" (UID: \"38b68e30-68ad-4dce-befc-98fd9c6aa1b6\") " pod="openstack/dnsmasq-dns-757b4f8459-k7brt" Jan 27 19:02:07 crc kubenswrapper[4853]: I0127 19:02:07.818669 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/20d4a916-994f-449f-9b81-82623f8f6583-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"20d4a916-994f-449f-9b81-82623f8f6583\") " pod="openstack/nova-metadata-0" Jan 27 19:02:07 crc kubenswrapper[4853]: I0127 19:02:07.818695 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/38b68e30-68ad-4dce-befc-98fd9c6aa1b6-config\") pod \"dnsmasq-dns-757b4f8459-k7brt\" (UID: \"38b68e30-68ad-4dce-befc-98fd9c6aa1b6\") " pod="openstack/dnsmasq-dns-757b4f8459-k7brt" Jan 27 19:02:07 crc kubenswrapper[4853]: I0127 19:02:07.821070 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/20d4a916-994f-449f-9b81-82623f8f6583-logs\") pod \"nova-metadata-0\" (UID: \"20d4a916-994f-449f-9b81-82623f8f6583\") " pod="openstack/nova-metadata-0" Jan 27 19:02:07 crc kubenswrapper[4853]: I0127 19:02:07.826044 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 27 19:02:07 crc kubenswrapper[4853]: I0127 19:02:07.832039 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/20d4a916-994f-449f-9b81-82623f8f6583-config-data\") pod \"nova-metadata-0\" (UID: \"20d4a916-994f-449f-9b81-82623f8f6583\") " pod="openstack/nova-metadata-0" Jan 27 19:02:07 crc kubenswrapper[4853]: I0127 19:02:07.839854 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 27 19:02:07 crc kubenswrapper[4853]: I0127 19:02:07.846788 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/20d4a916-994f-449f-9b81-82623f8f6583-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"20d4a916-994f-449f-9b81-82623f8f6583\") " pod="openstack/nova-metadata-0" Jan 27 19:02:07 crc kubenswrapper[4853]: I0127 19:02:07.847222 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xhgpj\" (UniqueName: \"kubernetes.io/projected/20d4a916-994f-449f-9b81-82623f8f6583-kube-api-access-xhgpj\") pod \"nova-metadata-0\" (UID: \"20d4a916-994f-449f-9b81-82623f8f6583\") " pod="openstack/nova-metadata-0" Jan 27 19:02:07 crc kubenswrapper[4853]: I0127 19:02:07.922471 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/38b68e30-68ad-4dce-befc-98fd9c6aa1b6-dns-svc\") pod \"dnsmasq-dns-757b4f8459-k7brt\" (UID: \"38b68e30-68ad-4dce-befc-98fd9c6aa1b6\") " pod="openstack/dnsmasq-dns-757b4f8459-k7brt" Jan 27 19:02:07 crc kubenswrapper[4853]: I0127 19:02:07.922527 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/38b68e30-68ad-4dce-befc-98fd9c6aa1b6-ovsdbserver-nb\") pod \"dnsmasq-dns-757b4f8459-k7brt\" (UID: \"38b68e30-68ad-4dce-befc-98fd9c6aa1b6\") " pod="openstack/dnsmasq-dns-757b4f8459-k7brt" Jan 27 19:02:07 crc kubenswrapper[4853]: I0127 19:02:07.922572 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/38b68e30-68ad-4dce-befc-98fd9c6aa1b6-ovsdbserver-sb\") pod \"dnsmasq-dns-757b4f8459-k7brt\" (UID: \"38b68e30-68ad-4dce-befc-98fd9c6aa1b6\") " pod="openstack/dnsmasq-dns-757b4f8459-k7brt" Jan 27 19:02:07 crc kubenswrapper[4853]: I0127 19:02:07.922672 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dj8sk\" (UniqueName: \"kubernetes.io/projected/38b68e30-68ad-4dce-befc-98fd9c6aa1b6-kube-api-access-dj8sk\") pod \"dnsmasq-dns-757b4f8459-k7brt\" (UID: \"38b68e30-68ad-4dce-befc-98fd9c6aa1b6\") " pod="openstack/dnsmasq-dns-757b4f8459-k7brt" Jan 27 19:02:07 crc kubenswrapper[4853]: I0127 19:02:07.922719 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/38b68e30-68ad-4dce-befc-98fd9c6aa1b6-config\") pod \"dnsmasq-dns-757b4f8459-k7brt\" (UID: \"38b68e30-68ad-4dce-befc-98fd9c6aa1b6\") " pod="openstack/dnsmasq-dns-757b4f8459-k7brt" Jan 27 19:02:07 crc kubenswrapper[4853]: I0127 19:02:07.922747 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/38b68e30-68ad-4dce-befc-98fd9c6aa1b6-dns-swift-storage-0\") pod \"dnsmasq-dns-757b4f8459-k7brt\" (UID: \"38b68e30-68ad-4dce-befc-98fd9c6aa1b6\") " pod="openstack/dnsmasq-dns-757b4f8459-k7brt" Jan 27 19:02:07 crc kubenswrapper[4853]: I0127 19:02:07.924718 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/38b68e30-68ad-4dce-befc-98fd9c6aa1b6-ovsdbserver-nb\") pod \"dnsmasq-dns-757b4f8459-k7brt\" (UID: \"38b68e30-68ad-4dce-befc-98fd9c6aa1b6\") " pod="openstack/dnsmasq-dns-757b4f8459-k7brt" Jan 27 19:02:07 crc kubenswrapper[4853]: I0127 
19:02:07.924934 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/38b68e30-68ad-4dce-befc-98fd9c6aa1b6-dns-svc\") pod \"dnsmasq-dns-757b4f8459-k7brt\" (UID: \"38b68e30-68ad-4dce-befc-98fd9c6aa1b6\") " pod="openstack/dnsmasq-dns-757b4f8459-k7brt" Jan 27 19:02:07 crc kubenswrapper[4853]: I0127 19:02:07.927192 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/38b68e30-68ad-4dce-befc-98fd9c6aa1b6-dns-swift-storage-0\") pod \"dnsmasq-dns-757b4f8459-k7brt\" (UID: \"38b68e30-68ad-4dce-befc-98fd9c6aa1b6\") " pod="openstack/dnsmasq-dns-757b4f8459-k7brt" Jan 27 19:02:07 crc kubenswrapper[4853]: I0127 19:02:07.927273 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/38b68e30-68ad-4dce-befc-98fd9c6aa1b6-config\") pod \"dnsmasq-dns-757b4f8459-k7brt\" (UID: \"38b68e30-68ad-4dce-befc-98fd9c6aa1b6\") " pod="openstack/dnsmasq-dns-757b4f8459-k7brt" Jan 27 19:02:07 crc kubenswrapper[4853]: I0127 19:02:07.927379 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/38b68e30-68ad-4dce-befc-98fd9c6aa1b6-ovsdbserver-sb\") pod \"dnsmasq-dns-757b4f8459-k7brt\" (UID: \"38b68e30-68ad-4dce-befc-98fd9c6aa1b6\") " pod="openstack/dnsmasq-dns-757b4f8459-k7brt" Jan 27 19:02:07 crc kubenswrapper[4853]: I0127 19:02:07.942776 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dj8sk\" (UniqueName: \"kubernetes.io/projected/38b68e30-68ad-4dce-befc-98fd9c6aa1b6-kube-api-access-dj8sk\") pod \"dnsmasq-dns-757b4f8459-k7brt\" (UID: \"38b68e30-68ad-4dce-befc-98fd9c6aa1b6\") " pod="openstack/dnsmasq-dns-757b4f8459-k7brt" Jan 27 19:02:07 crc kubenswrapper[4853]: I0127 19:02:07.963313 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 27 19:02:08 crc kubenswrapper[4853]: I0127 19:02:08.051868 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-757b4f8459-k7brt" Jan 27 19:02:08 crc kubenswrapper[4853]: I0127 19:02:08.275661 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 27 19:02:08 crc kubenswrapper[4853]: I0127 19:02:08.357076 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-x47j6"] Jan 27 19:02:08 crc kubenswrapper[4853]: I0127 19:02:08.581408 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-db-sync-8nwnw"] Jan 27 19:02:08 crc kubenswrapper[4853]: I0127 19:02:08.582864 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-8nwnw" Jan 27 19:02:08 crc kubenswrapper[4853]: I0127 19:02:08.589959 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-scripts" Jan 27 19:02:08 crc kubenswrapper[4853]: I0127 19:02:08.590365 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Jan 27 19:02:08 crc kubenswrapper[4853]: I0127 19:02:08.591709 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-8nwnw"] Jan 27 19:02:08 crc kubenswrapper[4853]: I0127 19:02:08.650390 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e89f7e94-9d6b-4279-b0b8-a91c47c904c8-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-8nwnw\" (UID: \"e89f7e94-9d6b-4279-b0b8-a91c47c904c8\") " pod="openstack/nova-cell1-conductor-db-sync-8nwnw" Jan 27 19:02:08 crc kubenswrapper[4853]: I0127 19:02:08.650467 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-frh4x\" (UniqueName: \"kubernetes.io/projected/e89f7e94-9d6b-4279-b0b8-a91c47c904c8-kube-api-access-frh4x\") pod \"nova-cell1-conductor-db-sync-8nwnw\" (UID: \"e89f7e94-9d6b-4279-b0b8-a91c47c904c8\") " pod="openstack/nova-cell1-conductor-db-sync-8nwnw" Jan 27 19:02:08 crc kubenswrapper[4853]: I0127 19:02:08.650522 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e89f7e94-9d6b-4279-b0b8-a91c47c904c8-config-data\") pod \"nova-cell1-conductor-db-sync-8nwnw\" (UID: \"e89f7e94-9d6b-4279-b0b8-a91c47c904c8\") " pod="openstack/nova-cell1-conductor-db-sync-8nwnw" Jan 27 19:02:08 crc kubenswrapper[4853]: I0127 19:02:08.650609 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e89f7e94-9d6b-4279-b0b8-a91c47c904c8-scripts\") pod \"nova-cell1-conductor-db-sync-8nwnw\" (UID: \"e89f7e94-9d6b-4279-b0b8-a91c47c904c8\") " pod="openstack/nova-cell1-conductor-db-sync-8nwnw" Jan 27 19:02:08 crc kubenswrapper[4853]: I0127 19:02:08.713427 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 27 19:02:08 crc kubenswrapper[4853]: I0127 19:02:08.721024 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 27 19:02:08 crc kubenswrapper[4853]: W0127 19:02:08.750213 4853 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9b2db297_94b3_4341_b706_b8f47a596ff9.slice/crio-41b3c19fd7904a783c4b2d983a812579849f71e9034d082809c6a7fc8bc3c620 WatchSource:0}: Error finding container 41b3c19fd7904a783c4b2d983a812579849f71e9034d082809c6a7fc8bc3c620: Status 404 returned error can't find the container with id 41b3c19fd7904a783c4b2d983a812579849f71e9034d082809c6a7fc8bc3c620 Jan 27 19:02:08 crc kubenswrapper[4853]: I0127 19:02:08.752206 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e89f7e94-9d6b-4279-b0b8-a91c47c904c8-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-8nwnw\" (UID: \"e89f7e94-9d6b-4279-b0b8-a91c47c904c8\") " pod="openstack/nova-cell1-conductor-db-sync-8nwnw" Jan 27 19:02:08 crc 
kubenswrapper[4853]: I0127 19:02:08.752283 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-frh4x\" (UniqueName: \"kubernetes.io/projected/e89f7e94-9d6b-4279-b0b8-a91c47c904c8-kube-api-access-frh4x\") pod \"nova-cell1-conductor-db-sync-8nwnw\" (UID: \"e89f7e94-9d6b-4279-b0b8-a91c47c904c8\") " pod="openstack/nova-cell1-conductor-db-sync-8nwnw" Jan 27 19:02:08 crc kubenswrapper[4853]: I0127 19:02:08.752334 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e89f7e94-9d6b-4279-b0b8-a91c47c904c8-config-data\") pod \"nova-cell1-conductor-db-sync-8nwnw\" (UID: \"e89f7e94-9d6b-4279-b0b8-a91c47c904c8\") " pod="openstack/nova-cell1-conductor-db-sync-8nwnw" Jan 27 19:02:08 crc kubenswrapper[4853]: I0127 19:02:08.752435 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e89f7e94-9d6b-4279-b0b8-a91c47c904c8-scripts\") pod \"nova-cell1-conductor-db-sync-8nwnw\" (UID: \"e89f7e94-9d6b-4279-b0b8-a91c47c904c8\") " pod="openstack/nova-cell1-conductor-db-sync-8nwnw" Jan 27 19:02:08 crc kubenswrapper[4853]: I0127 19:02:08.760675 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e89f7e94-9d6b-4279-b0b8-a91c47c904c8-scripts\") pod \"nova-cell1-conductor-db-sync-8nwnw\" (UID: \"e89f7e94-9d6b-4279-b0b8-a91c47c904c8\") " pod="openstack/nova-cell1-conductor-db-sync-8nwnw" Jan 27 19:02:08 crc kubenswrapper[4853]: I0127 19:02:08.761025 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 27 19:02:08 crc kubenswrapper[4853]: I0127 19:02:08.768979 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e89f7e94-9d6b-4279-b0b8-a91c47c904c8-config-data\") pod \"nova-cell1-conductor-db-sync-8nwnw\" (UID: \"e89f7e94-9d6b-4279-b0b8-a91c47c904c8\") " pod="openstack/nova-cell1-conductor-db-sync-8nwnw" Jan 27 19:02:08 crc kubenswrapper[4853]: I0127 19:02:08.770276 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e89f7e94-9d6b-4279-b0b8-a91c47c904c8-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-8nwnw\" (UID: \"e89f7e94-9d6b-4279-b0b8-a91c47c904c8\") " pod="openstack/nova-cell1-conductor-db-sync-8nwnw" Jan 27 19:02:08 crc kubenswrapper[4853]: I0127 19:02:08.781922 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-frh4x\" (UniqueName: \"kubernetes.io/projected/e89f7e94-9d6b-4279-b0b8-a91c47c904c8-kube-api-access-frh4x\") pod \"nova-cell1-conductor-db-sync-8nwnw\" (UID: \"e89f7e94-9d6b-4279-b0b8-a91c47c904c8\") " pod="openstack/nova-cell1-conductor-db-sync-8nwnw" Jan 27 19:02:08 crc kubenswrapper[4853]: W0127 19:02:08.790274 4853 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod20d4a916_994f_449f_9b81_82623f8f6583.slice/crio-5f9c6d4e244aa361496ade2762a10b0a7dbc5491f33bf2dc6244aab4b3690904 WatchSource:0}: Error finding container 5f9c6d4e244aa361496ade2762a10b0a7dbc5491f33bf2dc6244aab4b3690904: Status 404 returned error can't find the container with id 5f9c6d4e244aa361496ade2762a10b0a7dbc5491f33bf2dc6244aab4b3690904 Jan 27 19:02:08 crc kubenswrapper[4853]: I0127 19:02:08.932286 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-8nwnw" Jan 27 19:02:08 crc kubenswrapper[4853]: I0127 19:02:08.991344 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-757b4f8459-k7brt"] Jan 27 19:02:08 crc kubenswrapper[4853]: W0127 19:02:08.995080 4853 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod38b68e30_68ad_4dce_befc_98fd9c6aa1b6.slice/crio-d2426e36d0e5e26b94198c92468cec6b297a270a43ccc2672a071ef12ec37147 WatchSource:0}: Error finding container d2426e36d0e5e26b94198c92468cec6b297a270a43ccc2672a071ef12ec37147: Status 404 returned error can't find the container with id d2426e36d0e5e26b94198c92468cec6b297a270a43ccc2672a071ef12ec37147 Jan 27 19:02:09 crc kubenswrapper[4853]: I0127 19:02:09.237000 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"20d4a916-994f-449f-9b81-82623f8f6583","Type":"ContainerStarted","Data":"5f9c6d4e244aa361496ade2762a10b0a7dbc5491f33bf2dc6244aab4b3690904"} Jan 27 19:02:09 crc kubenswrapper[4853]: I0127 19:02:09.240405 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"290cb89a-f259-4224-9e20-ad509d9e8d27","Type":"ContainerStarted","Data":"2da868497d4aa13f64fe746859c5a014c6be7a1db21db247fce63a1e471dc868"} Jan 27 19:02:09 crc kubenswrapper[4853]: I0127 19:02:09.259564 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"9b2db297-94b3-4341-b706-b8f47a596ff9","Type":"ContainerStarted","Data":"41b3c19fd7904a783c4b2d983a812579849f71e9034d082809c6a7fc8bc3c620"} Jan 27 19:02:09 crc kubenswrapper[4853]: I0127 19:02:09.264078 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-757b4f8459-k7brt" event={"ID":"38b68e30-68ad-4dce-befc-98fd9c6aa1b6","Type":"ContainerStarted","Data":"957caeaedc3d6b2203814fd677fb215f6ac38008988d362ddff5e3bb167521f1"} Jan 27 19:02:09 crc kubenswrapper[4853]: I0127 19:02:09.264117 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-757b4f8459-k7brt" event={"ID":"38b68e30-68ad-4dce-befc-98fd9c6aa1b6","Type":"ContainerStarted","Data":"d2426e36d0e5e26b94198c92468cec6b297a270a43ccc2672a071ef12ec37147"} Jan 27 19:02:09 crc kubenswrapper[4853]: I0127 19:02:09.274576 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"4fc6adfd-212b-4248-8c09-c993acc3459c","Type":"ContainerStarted","Data":"24ff2a8cee9f412c42bd2622e80ad7439d07c4aa223f764d6403486c3ffb2a6d"} Jan 27 19:02:09 crc kubenswrapper[4853]: I0127 19:02:09.281560 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-x47j6" event={"ID":"1d74daf6-98b2-437c-8415-3053a40cedef","Type":"ContainerStarted","Data":"9188f3bd1cadb16ee740a9cb756693c3b2a35b30a5d57553b81c9033d291ad75"} Jan 27 19:02:09 crc kubenswrapper[4853]: I0127 19:02:09.281607 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-x47j6" event={"ID":"1d74daf6-98b2-437c-8415-3053a40cedef","Type":"ContainerStarted","Data":"1ce41a98b659fce691b0578288c1448d855131ce264bc3bab46beff2cdbfcba4"} Jan 27 19:02:09 crc kubenswrapper[4853]: I0127 19:02:09.301324 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-cell-mapping-x47j6" podStartSLOduration=2.30130109 podStartE2EDuration="2.30130109s" podCreationTimestamp="2026-01-27 19:02:07 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 19:02:09.299759735 +0000 UTC m=+1171.762302618" watchObservedRunningTime="2026-01-27 19:02:09.30130109 +0000 UTC m=+1171.763843973" Jan 27 19:02:09 crc kubenswrapper[4853]: I0127 19:02:09.388313 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-8nwnw"] Jan 27 19:02:09 crc kubenswrapper[4853]: W0127 19:02:09.398522 4853 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode89f7e94_9d6b_4279_b0b8_a91c47c904c8.slice/crio-02d8359431aced28e445f5a07c7755bd3aeea70e6bb8f014b0359163c6d93a7d WatchSource:0}: Error finding container 02d8359431aced28e445f5a07c7755bd3aeea70e6bb8f014b0359163c6d93a7d: Status 404 returned error can't find the container with id 02d8359431aced28e445f5a07c7755bd3aeea70e6bb8f014b0359163c6d93a7d Jan 27 19:02:10 crc kubenswrapper[4853]: I0127 19:02:10.293481 4853 generic.go:334] "Generic (PLEG): container finished" podID="38b68e30-68ad-4dce-befc-98fd9c6aa1b6" containerID="957caeaedc3d6b2203814fd677fb215f6ac38008988d362ddff5e3bb167521f1" exitCode=0 Jan 27 19:02:10 crc kubenswrapper[4853]: I0127 19:02:10.293542 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-757b4f8459-k7brt" event={"ID":"38b68e30-68ad-4dce-befc-98fd9c6aa1b6","Type":"ContainerDied","Data":"957caeaedc3d6b2203814fd677fb215f6ac38008988d362ddff5e3bb167521f1"} Jan 27 19:02:10 crc kubenswrapper[4853]: I0127 19:02:10.299160 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-8nwnw" event={"ID":"e89f7e94-9d6b-4279-b0b8-a91c47c904c8","Type":"ContainerStarted","Data":"e9081957ee7b8bfae736045631f2e30d73ff9f49beb4c6f1154fbcfcb2d3aaba"} Jan 27 19:02:10 crc kubenswrapper[4853]: I0127 19:02:10.299228 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-8nwnw" event={"ID":"e89f7e94-9d6b-4279-b0b8-a91c47c904c8","Type":"ContainerStarted","Data":"02d8359431aced28e445f5a07c7755bd3aeea70e6bb8f014b0359163c6d93a7d"} Jan 27 19:02:10 crc kubenswrapper[4853]: I0127 19:02:10.332641 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-db-sync-8nwnw" podStartSLOduration=2.332620421 podStartE2EDuration="2.332620421s" podCreationTimestamp="2026-01-27 19:02:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 19:02:10.327333778 +0000 UTC m=+1172.789876661" watchObservedRunningTime="2026-01-27 19:02:10.332620421 +0000 UTC m=+1172.795163384" Jan 27 19:02:10 crc kubenswrapper[4853]: I0127 19:02:10.904697 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 27 19:02:10 crc kubenswrapper[4853]: I0127 19:02:10.923576 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 27 19:02:12 crc kubenswrapper[4853]: I0127 19:02:12.337636 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"9b2db297-94b3-4341-b706-b8f47a596ff9","Type":"ContainerStarted","Data":"d55e691a9d4b50ee4d8ff377e74e9632490ff1000b821a8626b34afdfe60482c"} Jan 27 19:02:12 crc kubenswrapper[4853]: I0127 19:02:12.337888 4853 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openstack/nova-cell1-novncproxy-0" podUID="9b2db297-94b3-4341-b706-b8f47a596ff9" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://d55e691a9d4b50ee4d8ff377e74e9632490ff1000b821a8626b34afdfe60482c" gracePeriod=30 Jan 27 19:02:12 crc kubenswrapper[4853]: I0127 19:02:12.342066 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-757b4f8459-k7brt" event={"ID":"38b68e30-68ad-4dce-befc-98fd9c6aa1b6","Type":"ContainerStarted","Data":"8aa81261f8381ed5a1d97142922f605fa8049824ec834bf50f3bcafdc049f68f"} Jan 27 19:02:12 crc kubenswrapper[4853]: I0127 19:02:12.343062 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-757b4f8459-k7brt" Jan 27 19:02:12 crc kubenswrapper[4853]: I0127 19:02:12.350719 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"4fc6adfd-212b-4248-8c09-c993acc3459c","Type":"ContainerStarted","Data":"7cf078e7342393db69b724da09465331ca55638e0f2cfa15c30b98ec4bef0165"} Jan 27 19:02:12 crc kubenswrapper[4853]: I0127 19:02:12.350770 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"4fc6adfd-212b-4248-8c09-c993acc3459c","Type":"ContainerStarted","Data":"155d9b191a3514f326f7317afe0bdc650aad913b215a5e534f5887d1dc745b3d"} Jan 27 19:02:12 crc kubenswrapper[4853]: I0127 19:02:12.353518 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"20d4a916-994f-449f-9b81-82623f8f6583","Type":"ContainerStarted","Data":"a77ccf513dd637bfb7c3c36e340287ea7f8e838dd2b2024d32f67995668096c1"} Jan 27 19:02:12 crc kubenswrapper[4853]: I0127 19:02:12.359205 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.856115574 podStartE2EDuration="5.359189684s" podCreationTimestamp="2026-01-27 19:02:07 +0000 UTC" firstStartedPulling="2026-01-27 19:02:08.762941801 +0000 UTC m=+1171.225484674" lastFinishedPulling="2026-01-27 19:02:11.266015901 +0000 UTC m=+1173.728558784" observedRunningTime="2026-01-27 19:02:12.358959207 +0000 UTC m=+1174.821502100" watchObservedRunningTime="2026-01-27 19:02:12.359189684 +0000 UTC m=+1174.821732567" Jan 27 19:02:12 crc kubenswrapper[4853]: I0127 19:02:12.383662 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.413351479 podStartE2EDuration="5.383643468s" podCreationTimestamp="2026-01-27 19:02:07 +0000 UTC" firstStartedPulling="2026-01-27 19:02:08.295945518 +0000 UTC m=+1170.758488401" lastFinishedPulling="2026-01-27 19:02:11.266237507 +0000 UTC m=+1173.728780390" observedRunningTime="2026-01-27 19:02:12.377789779 +0000 UTC m=+1174.840332672" watchObservedRunningTime="2026-01-27 19:02:12.383643468 +0000 UTC m=+1174.846186351" Jan 27 19:02:12 crc kubenswrapper[4853]: I0127 19:02:12.401181 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-757b4f8459-k7brt" podStartSLOduration=5.401111941 podStartE2EDuration="5.401111941s" podCreationTimestamp="2026-01-27 19:02:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 19:02:12.396304603 +0000 UTC m=+1174.858847486" watchObservedRunningTime="2026-01-27 19:02:12.401111941 +0000 UTC m=+1174.863654824" Jan 27 19:02:12 crc kubenswrapper[4853]: I0127 19:02:12.841801 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="" pod="openstack/nova-cell1-novncproxy-0" Jan 27 19:02:13 crc kubenswrapper[4853]: I0127 19:02:13.368423 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"20d4a916-994f-449f-9b81-82623f8f6583","Type":"ContainerStarted","Data":"bafef6d2010f1f919126f871c542a6c7928e6743d4483f86105f85425f97f2b5"} Jan 27 19:02:13 crc kubenswrapper[4853]: I0127 19:02:13.368549 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="20d4a916-994f-449f-9b81-82623f8f6583" containerName="nova-metadata-log" containerID="cri-o://a77ccf513dd637bfb7c3c36e340287ea7f8e838dd2b2024d32f67995668096c1" gracePeriod=30 Jan 27 19:02:13 crc kubenswrapper[4853]: I0127 19:02:13.368630 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="20d4a916-994f-449f-9b81-82623f8f6583" containerName="nova-metadata-metadata" containerID="cri-o://bafef6d2010f1f919126f871c542a6c7928e6743d4483f86105f85425f97f2b5" gracePeriod=30 Jan 27 19:02:13 crc kubenswrapper[4853]: I0127 19:02:13.376453 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"290cb89a-f259-4224-9e20-ad509d9e8d27","Type":"ContainerStarted","Data":"9d9812aa37084d224881626bbbf3116cecd2aac4a7ef06044967b21194ffdb8a"} Jan 27 19:02:13 crc kubenswrapper[4853]: I0127 19:02:13.402896 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=3.892246133 podStartE2EDuration="6.402855469s" podCreationTimestamp="2026-01-27 19:02:07 +0000 UTC" firstStartedPulling="2026-01-27 19:02:08.792256856 +0000 UTC m=+1171.254799739" lastFinishedPulling="2026-01-27 19:02:11.302866192 +0000 UTC m=+1173.765409075" observedRunningTime="2026-01-27 19:02:13.398185555 +0000 UTC m=+1175.860728438" watchObservedRunningTime="2026-01-27 19:02:13.402855469 +0000 UTC m=+1175.865398352" Jan 27 19:02:13 crc kubenswrapper[4853]: I0127 19:02:13.424496 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=3.053322804 podStartE2EDuration="6.424473622s" podCreationTimestamp="2026-01-27 19:02:07 +0000 UTC" firstStartedPulling="2026-01-27 19:02:08.763490327 +0000 UTC m=+1171.226033210" lastFinishedPulling="2026-01-27 19:02:12.134641045 +0000 UTC m=+1174.597184028" observedRunningTime="2026-01-27 19:02:13.42094001 +0000 UTC m=+1175.883482903" watchObservedRunningTime="2026-01-27 19:02:13.424473622 +0000 UTC m=+1175.887016505" Jan 27 19:02:13 crc kubenswrapper[4853]: I0127 19:02:13.959184 4853 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 27 19:02:14 crc kubenswrapper[4853]: I0127 19:02:14.094708 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/20d4a916-994f-449f-9b81-82623f8f6583-logs\") pod \"20d4a916-994f-449f-9b81-82623f8f6583\" (UID: \"20d4a916-994f-449f-9b81-82623f8f6583\") " Jan 27 19:02:14 crc kubenswrapper[4853]: I0127 19:02:14.094949 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xhgpj\" (UniqueName: \"kubernetes.io/projected/20d4a916-994f-449f-9b81-82623f8f6583-kube-api-access-xhgpj\") pod \"20d4a916-994f-449f-9b81-82623f8f6583\" (UID: \"20d4a916-994f-449f-9b81-82623f8f6583\") " Jan 27 19:02:14 crc kubenswrapper[4853]: I0127 19:02:14.095037 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/20d4a916-994f-449f-9b81-82623f8f6583-combined-ca-bundle\") pod \"20d4a916-994f-449f-9b81-82623f8f6583\" (UID: \"20d4a916-994f-449f-9b81-82623f8f6583\") " Jan 27 19:02:14 crc kubenswrapper[4853]: I0127 19:02:14.095095 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/20d4a916-994f-449f-9b81-82623f8f6583-config-data\") pod \"20d4a916-994f-449f-9b81-82623f8f6583\" (UID: \"20d4a916-994f-449f-9b81-82623f8f6583\") " Jan 27 19:02:14 crc kubenswrapper[4853]: I0127 19:02:14.095188 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/20d4a916-994f-449f-9b81-82623f8f6583-logs" (OuterVolumeSpecName: "logs") pod "20d4a916-994f-449f-9b81-82623f8f6583" (UID: "20d4a916-994f-449f-9b81-82623f8f6583"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 19:02:14 crc kubenswrapper[4853]: I0127 19:02:14.095735 4853 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/20d4a916-994f-449f-9b81-82623f8f6583-logs\") on node \"crc\" DevicePath \"\"" Jan 27 19:02:14 crc kubenswrapper[4853]: I0127 19:02:14.102900 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20d4a916-994f-449f-9b81-82623f8f6583-kube-api-access-xhgpj" (OuterVolumeSpecName: "kube-api-access-xhgpj") pod "20d4a916-994f-449f-9b81-82623f8f6583" (UID: "20d4a916-994f-449f-9b81-82623f8f6583"). InnerVolumeSpecName "kube-api-access-xhgpj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 19:02:14 crc kubenswrapper[4853]: I0127 19:02:14.146354 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20d4a916-994f-449f-9b81-82623f8f6583-config-data" (OuterVolumeSpecName: "config-data") pod "20d4a916-994f-449f-9b81-82623f8f6583" (UID: "20d4a916-994f-449f-9b81-82623f8f6583"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 19:02:14 crc kubenswrapper[4853]: I0127 19:02:14.157442 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20d4a916-994f-449f-9b81-82623f8f6583-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "20d4a916-994f-449f-9b81-82623f8f6583" (UID: "20d4a916-994f-449f-9b81-82623f8f6583"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 19:02:14 crc kubenswrapper[4853]: I0127 19:02:14.198035 4853 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/20d4a916-994f-449f-9b81-82623f8f6583-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 19:02:14 crc kubenswrapper[4853]: I0127 19:02:14.198074 4853 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/20d4a916-994f-449f-9b81-82623f8f6583-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 19:02:14 crc kubenswrapper[4853]: I0127 19:02:14.198083 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xhgpj\" (UniqueName: \"kubernetes.io/projected/20d4a916-994f-449f-9b81-82623f8f6583-kube-api-access-xhgpj\") on node \"crc\" DevicePath \"\"" Jan 27 19:02:14 crc kubenswrapper[4853]: I0127 19:02:14.394179 4853 generic.go:334] "Generic (PLEG): container finished" podID="20d4a916-994f-449f-9b81-82623f8f6583" containerID="bafef6d2010f1f919126f871c542a6c7928e6743d4483f86105f85425f97f2b5" exitCode=0 Jan 27 19:02:14 crc kubenswrapper[4853]: I0127 19:02:14.396206 4853 generic.go:334] "Generic (PLEG): container finished" podID="20d4a916-994f-449f-9b81-82623f8f6583" containerID="a77ccf513dd637bfb7c3c36e340287ea7f8e838dd2b2024d32f67995668096c1" exitCode=143 Jan 27 19:02:14 crc kubenswrapper[4853]: I0127 19:02:14.396182 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"20d4a916-994f-449f-9b81-82623f8f6583","Type":"ContainerDied","Data":"bafef6d2010f1f919126f871c542a6c7928e6743d4483f86105f85425f97f2b5"} Jan 27 19:02:14 crc kubenswrapper[4853]: I0127 19:02:14.396467 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"20d4a916-994f-449f-9b81-82623f8f6583","Type":"ContainerDied","Data":"a77ccf513dd637bfb7c3c36e340287ea7f8e838dd2b2024d32f67995668096c1"} Jan 27 19:02:14 crc kubenswrapper[4853]: I0127 19:02:14.396599 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"20d4a916-994f-449f-9b81-82623f8f6583","Type":"ContainerDied","Data":"5f9c6d4e244aa361496ade2762a10b0a7dbc5491f33bf2dc6244aab4b3690904"} Jan 27 19:02:14 crc kubenswrapper[4853]: I0127 19:02:14.396644 4853 scope.go:117] "RemoveContainer" containerID="bafef6d2010f1f919126f871c542a6c7928e6743d4483f86105f85425f97f2b5" Jan 27 19:02:14 crc kubenswrapper[4853]: I0127 19:02:14.396161 4853 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 27 19:02:14 crc kubenswrapper[4853]: I0127 19:02:14.429461 4853 scope.go:117] "RemoveContainer" containerID="a77ccf513dd637bfb7c3c36e340287ea7f8e838dd2b2024d32f67995668096c1" Jan 27 19:02:14 crc kubenswrapper[4853]: I0127 19:02:14.442267 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 27 19:02:14 crc kubenswrapper[4853]: I0127 19:02:14.460765 4853 scope.go:117] "RemoveContainer" containerID="bafef6d2010f1f919126f871c542a6c7928e6743d4483f86105f85425f97f2b5" Jan 27 19:02:14 crc kubenswrapper[4853]: E0127 19:02:14.461270 4853 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bafef6d2010f1f919126f871c542a6c7928e6743d4483f86105f85425f97f2b5\": container with ID starting with bafef6d2010f1f919126f871c542a6c7928e6743d4483f86105f85425f97f2b5 not found: ID does not exist" containerID="bafef6d2010f1f919126f871c542a6c7928e6743d4483f86105f85425f97f2b5" Jan 27 19:02:14 crc kubenswrapper[4853]: I0127 19:02:14.461302 4853 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bafef6d2010f1f919126f871c542a6c7928e6743d4483f86105f85425f97f2b5"} err="failed to get container status \"bafef6d2010f1f919126f871c542a6c7928e6743d4483f86105f85425f97f2b5\": rpc error: code = NotFound desc = could not find container \"bafef6d2010f1f919126f871c542a6c7928e6743d4483f86105f85425f97f2b5\": container with ID starting with bafef6d2010f1f919126f871c542a6c7928e6743d4483f86105f85425f97f2b5 not found: ID does not exist" Jan 27 19:02:14 crc kubenswrapper[4853]: I0127 19:02:14.461325 4853 scope.go:117] "RemoveContainer" containerID="a77ccf513dd637bfb7c3c36e340287ea7f8e838dd2b2024d32f67995668096c1" Jan 27 19:02:14 crc kubenswrapper[4853]: E0127 19:02:14.466818 4853 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a77ccf513dd637bfb7c3c36e340287ea7f8e838dd2b2024d32f67995668096c1\": container with ID starting with a77ccf513dd637bfb7c3c36e340287ea7f8e838dd2b2024d32f67995668096c1 not found: ID does not exist" containerID="a77ccf513dd637bfb7c3c36e340287ea7f8e838dd2b2024d32f67995668096c1" Jan 27 19:02:14 crc kubenswrapper[4853]: I0127 19:02:14.466856 4853 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a77ccf513dd637bfb7c3c36e340287ea7f8e838dd2b2024d32f67995668096c1"} err="failed to get container status \"a77ccf513dd637bfb7c3c36e340287ea7f8e838dd2b2024d32f67995668096c1\": rpc error: code = NotFound desc = could not find container \"a77ccf513dd637bfb7c3c36e340287ea7f8e838dd2b2024d32f67995668096c1\": container with ID starting with a77ccf513dd637bfb7c3c36e340287ea7f8e838dd2b2024d32f67995668096c1 not found: ID does not exist" Jan 27 19:02:14 crc kubenswrapper[4853]: I0127 19:02:14.466879 4853 scope.go:117] "RemoveContainer" containerID="bafef6d2010f1f919126f871c542a6c7928e6743d4483f86105f85425f97f2b5" Jan 27 19:02:14 crc kubenswrapper[4853]: I0127 19:02:14.467934 4853 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bafef6d2010f1f919126f871c542a6c7928e6743d4483f86105f85425f97f2b5"} err="failed to get container status \"bafef6d2010f1f919126f871c542a6c7928e6743d4483f86105f85425f97f2b5\": rpc error: code = NotFound desc = could not find container \"bafef6d2010f1f919126f871c542a6c7928e6743d4483f86105f85425f97f2b5\": container with ID starting with 
bafef6d2010f1f919126f871c542a6c7928e6743d4483f86105f85425f97f2b5 not found: ID does not exist" Jan 27 19:02:14 crc kubenswrapper[4853]: I0127 19:02:14.467952 4853 scope.go:117] "RemoveContainer" containerID="a77ccf513dd637bfb7c3c36e340287ea7f8e838dd2b2024d32f67995668096c1" Jan 27 19:02:14 crc kubenswrapper[4853]: I0127 19:02:14.468322 4853 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a77ccf513dd637bfb7c3c36e340287ea7f8e838dd2b2024d32f67995668096c1"} err="failed to get container status \"a77ccf513dd637bfb7c3c36e340287ea7f8e838dd2b2024d32f67995668096c1\": rpc error: code = NotFound desc = could not find container \"a77ccf513dd637bfb7c3c36e340287ea7f8e838dd2b2024d32f67995668096c1\": container with ID starting with a77ccf513dd637bfb7c3c36e340287ea7f8e838dd2b2024d32f67995668096c1 not found: ID does not exist" Jan 27 19:02:14 crc kubenswrapper[4853]: I0127 19:02:14.470552 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Jan 27 19:02:14 crc kubenswrapper[4853]: I0127 19:02:14.491679 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 27 19:02:14 crc kubenswrapper[4853]: E0127 19:02:14.492259 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="20d4a916-994f-449f-9b81-82623f8f6583" containerName="nova-metadata-metadata" Jan 27 19:02:14 crc kubenswrapper[4853]: I0127 19:02:14.492280 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="20d4a916-994f-449f-9b81-82623f8f6583" containerName="nova-metadata-metadata" Jan 27 19:02:14 crc kubenswrapper[4853]: E0127 19:02:14.492310 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="20d4a916-994f-449f-9b81-82623f8f6583" containerName="nova-metadata-log" Jan 27 19:02:14 crc kubenswrapper[4853]: I0127 19:02:14.492316 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="20d4a916-994f-449f-9b81-82623f8f6583" containerName="nova-metadata-log" Jan 27 19:02:14 crc kubenswrapper[4853]: I0127 19:02:14.492492 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="20d4a916-994f-449f-9b81-82623f8f6583" containerName="nova-metadata-log" Jan 27 19:02:14 crc kubenswrapper[4853]: I0127 19:02:14.492509 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="20d4a916-994f-449f-9b81-82623f8f6583" containerName="nova-metadata-metadata" Jan 27 19:02:14 crc kubenswrapper[4853]: I0127 19:02:14.493634 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 27 19:02:14 crc kubenswrapper[4853]: I0127 19:02:14.496753 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Jan 27 19:02:14 crc kubenswrapper[4853]: I0127 19:02:14.497141 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 27 19:02:14 crc kubenswrapper[4853]: I0127 19:02:14.502556 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 27 19:02:14 crc kubenswrapper[4853]: I0127 19:02:14.604500 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/24db9578-040c-4720-830d-cfb5cda65976-logs\") pod \"nova-metadata-0\" (UID: \"24db9578-040c-4720-830d-cfb5cda65976\") " pod="openstack/nova-metadata-0" Jan 27 19:02:14 crc kubenswrapper[4853]: I0127 19:02:14.604888 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/24db9578-040c-4720-830d-cfb5cda65976-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"24db9578-040c-4720-830d-cfb5cda65976\") " pod="openstack/nova-metadata-0" Jan 27 19:02:14 crc kubenswrapper[4853]: I0127 19:02:14.605048 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/24db9578-040c-4720-830d-cfb5cda65976-config-data\") pod \"nova-metadata-0\" (UID: \"24db9578-040c-4720-830d-cfb5cda65976\") " pod="openstack/nova-metadata-0" Jan 27 19:02:14 crc kubenswrapper[4853]: I0127 19:02:14.605207 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fngq4\" (UniqueName: \"kubernetes.io/projected/24db9578-040c-4720-830d-cfb5cda65976-kube-api-access-fngq4\") pod \"nova-metadata-0\" (UID: \"24db9578-040c-4720-830d-cfb5cda65976\") " pod="openstack/nova-metadata-0" Jan 27 19:02:14 crc kubenswrapper[4853]: I0127 19:02:14.605325 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/24db9578-040c-4720-830d-cfb5cda65976-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"24db9578-040c-4720-830d-cfb5cda65976\") " pod="openstack/nova-metadata-0" Jan 27 19:02:14 crc kubenswrapper[4853]: I0127 19:02:14.707492 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fngq4\" (UniqueName: \"kubernetes.io/projected/24db9578-040c-4720-830d-cfb5cda65976-kube-api-access-fngq4\") pod \"nova-metadata-0\" (UID: \"24db9578-040c-4720-830d-cfb5cda65976\") " pod="openstack/nova-metadata-0" Jan 27 19:02:14 crc kubenswrapper[4853]: I0127 19:02:14.707622 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/24db9578-040c-4720-830d-cfb5cda65976-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"24db9578-040c-4720-830d-cfb5cda65976\") " pod="openstack/nova-metadata-0" Jan 27 19:02:14 crc kubenswrapper[4853]: I0127 19:02:14.707731 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/24db9578-040c-4720-830d-cfb5cda65976-logs\") pod \"nova-metadata-0\" (UID: \"24db9578-040c-4720-830d-cfb5cda65976\") " 
pod="openstack/nova-metadata-0" Jan 27 19:02:14 crc kubenswrapper[4853]: I0127 19:02:14.707854 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/24db9578-040c-4720-830d-cfb5cda65976-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"24db9578-040c-4720-830d-cfb5cda65976\") " pod="openstack/nova-metadata-0" Jan 27 19:02:14 crc kubenswrapper[4853]: I0127 19:02:14.707902 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/24db9578-040c-4720-830d-cfb5cda65976-config-data\") pod \"nova-metadata-0\" (UID: \"24db9578-040c-4720-830d-cfb5cda65976\") " pod="openstack/nova-metadata-0" Jan 27 19:02:14 crc kubenswrapper[4853]: I0127 19:02:14.708984 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/24db9578-040c-4720-830d-cfb5cda65976-logs\") pod \"nova-metadata-0\" (UID: \"24db9578-040c-4720-830d-cfb5cda65976\") " pod="openstack/nova-metadata-0" Jan 27 19:02:14 crc kubenswrapper[4853]: I0127 19:02:14.714648 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/24db9578-040c-4720-830d-cfb5cda65976-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"24db9578-040c-4720-830d-cfb5cda65976\") " pod="openstack/nova-metadata-0" Jan 27 19:02:14 crc kubenswrapper[4853]: I0127 19:02:14.715309 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/24db9578-040c-4720-830d-cfb5cda65976-config-data\") pod \"nova-metadata-0\" (UID: \"24db9578-040c-4720-830d-cfb5cda65976\") " pod="openstack/nova-metadata-0" Jan 27 19:02:14 crc kubenswrapper[4853]: I0127 19:02:14.715833 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/24db9578-040c-4720-830d-cfb5cda65976-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"24db9578-040c-4720-830d-cfb5cda65976\") " pod="openstack/nova-metadata-0" Jan 27 19:02:14 crc kubenswrapper[4853]: I0127 19:02:14.729749 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fngq4\" (UniqueName: \"kubernetes.io/projected/24db9578-040c-4720-830d-cfb5cda65976-kube-api-access-fngq4\") pod \"nova-metadata-0\" (UID: \"24db9578-040c-4720-830d-cfb5cda65976\") " pod="openstack/nova-metadata-0" Jan 27 19:02:14 crc kubenswrapper[4853]: I0127 19:02:14.849540 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 27 19:02:15 crc kubenswrapper[4853]: I0127 19:02:15.328501 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 27 19:02:15 crc kubenswrapper[4853]: W0127 19:02:15.329255 4853 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod24db9578_040c_4720_830d_cfb5cda65976.slice/crio-a6d6e5a4b357fae9891f6bd4d378862da882ab55835c83093e5613cb257c90f1 WatchSource:0}: Error finding container a6d6e5a4b357fae9891f6bd4d378862da882ab55835c83093e5613cb257c90f1: Status 404 returned error can't find the container with id a6d6e5a4b357fae9891f6bd4d378862da882ab55835c83093e5613cb257c90f1 Jan 27 19:02:15 crc kubenswrapper[4853]: I0127 19:02:15.410497 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"24db9578-040c-4720-830d-cfb5cda65976","Type":"ContainerStarted","Data":"a6d6e5a4b357fae9891f6bd4d378862da882ab55835c83093e5613cb257c90f1"} Jan 27 19:02:16 crc kubenswrapper[4853]: I0127 19:02:16.125179 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20d4a916-994f-449f-9b81-82623f8f6583" path="/var/lib/kubelet/pods/20d4a916-994f-449f-9b81-82623f8f6583/volumes" Jan 27 19:02:16 crc kubenswrapper[4853]: I0127 19:02:16.335262 4853 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-c78c8d4f6-bchzm" podUID="28f114cd-daca-4c71-9ecd-64b8008ddbef" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.148:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.148:8443: connect: connection refused" Jan 27 19:02:16 crc kubenswrapper[4853]: I0127 19:02:16.335396 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-c78c8d4f6-bchzm" Jan 27 19:02:16 crc kubenswrapper[4853]: I0127 19:02:16.428601 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"24db9578-040c-4720-830d-cfb5cda65976","Type":"ContainerStarted","Data":"213b83492cd0d3c6776d7bf89cdf62d2d287bfde2dd7d7b32dbd0c62188ebb28"} Jan 27 19:02:16 crc kubenswrapper[4853]: I0127 19:02:16.428677 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"24db9578-040c-4720-830d-cfb5cda65976","Type":"ContainerStarted","Data":"a2595bc2d30438827d6f9461d14d738a37336497cb571ae33d22405af450d9a6"} Jan 27 19:02:16 crc kubenswrapper[4853]: I0127 19:02:16.434139 4853 generic.go:334] "Generic (PLEG): container finished" podID="1d74daf6-98b2-437c-8415-3053a40cedef" containerID="9188f3bd1cadb16ee740a9cb756693c3b2a35b30a5d57553b81c9033d291ad75" exitCode=0 Jan 27 19:02:16 crc kubenswrapper[4853]: I0127 19:02:16.434227 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-x47j6" event={"ID":"1d74daf6-98b2-437c-8415-3053a40cedef","Type":"ContainerDied","Data":"9188f3bd1cadb16ee740a9cb756693c3b2a35b30a5d57553b81c9033d291ad75"} Jan 27 19:02:16 crc kubenswrapper[4853]: I0127 19:02:16.453561 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.453532974 podStartE2EDuration="2.453532974s" podCreationTimestamp="2026-01-27 19:02:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 19:02:16.447792609 +0000 UTC m=+1178.910335492" watchObservedRunningTime="2026-01-27 
19:02:16.453532974 +0000 UTC m=+1178.916075857" Jan 27 19:02:17 crc kubenswrapper[4853]: I0127 19:02:17.498847 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 27 19:02:17 crc kubenswrapper[4853]: I0127 19:02:17.499181 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 27 19:02:17 crc kubenswrapper[4853]: I0127 19:02:17.818720 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-x47j6" Jan 27 19:02:17 crc kubenswrapper[4853]: I0127 19:02:17.827150 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Jan 27 19:02:17 crc kubenswrapper[4853]: I0127 19:02:17.827189 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Jan 27 19:02:17 crc kubenswrapper[4853]: I0127 19:02:17.859010 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Jan 27 19:02:17 crc kubenswrapper[4853]: I0127 19:02:17.871281 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4lrt\" (UniqueName: \"kubernetes.io/projected/1d74daf6-98b2-437c-8415-3053a40cedef-kube-api-access-w4lrt\") pod \"1d74daf6-98b2-437c-8415-3053a40cedef\" (UID: \"1d74daf6-98b2-437c-8415-3053a40cedef\") " Jan 27 19:02:17 crc kubenswrapper[4853]: I0127 19:02:17.872518 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1d74daf6-98b2-437c-8415-3053a40cedef-config-data\") pod \"1d74daf6-98b2-437c-8415-3053a40cedef\" (UID: \"1d74daf6-98b2-437c-8415-3053a40cedef\") " Jan 27 19:02:17 crc kubenswrapper[4853]: I0127 19:02:17.872705 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1d74daf6-98b2-437c-8415-3053a40cedef-combined-ca-bundle\") pod \"1d74daf6-98b2-437c-8415-3053a40cedef\" (UID: \"1d74daf6-98b2-437c-8415-3053a40cedef\") " Jan 27 19:02:17 crc kubenswrapper[4853]: I0127 19:02:17.872884 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1d74daf6-98b2-437c-8415-3053a40cedef-scripts\") pod \"1d74daf6-98b2-437c-8415-3053a40cedef\" (UID: \"1d74daf6-98b2-437c-8415-3053a40cedef\") " Jan 27 19:02:17 crc kubenswrapper[4853]: I0127 19:02:17.878456 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d74daf6-98b2-437c-8415-3053a40cedef-kube-api-access-w4lrt" (OuterVolumeSpecName: "kube-api-access-w4lrt") pod "1d74daf6-98b2-437c-8415-3053a40cedef" (UID: "1d74daf6-98b2-437c-8415-3053a40cedef"). InnerVolumeSpecName "kube-api-access-w4lrt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 19:02:17 crc kubenswrapper[4853]: I0127 19:02:17.879444 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1d74daf6-98b2-437c-8415-3053a40cedef-scripts" (OuterVolumeSpecName: "scripts") pod "1d74daf6-98b2-437c-8415-3053a40cedef" (UID: "1d74daf6-98b2-437c-8415-3053a40cedef"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 19:02:17 crc kubenswrapper[4853]: I0127 19:02:17.903870 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1d74daf6-98b2-437c-8415-3053a40cedef-config-data" (OuterVolumeSpecName: "config-data") pod "1d74daf6-98b2-437c-8415-3053a40cedef" (UID: "1d74daf6-98b2-437c-8415-3053a40cedef"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 19:02:17 crc kubenswrapper[4853]: I0127 19:02:17.919363 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1d74daf6-98b2-437c-8415-3053a40cedef-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1d74daf6-98b2-437c-8415-3053a40cedef" (UID: "1d74daf6-98b2-437c-8415-3053a40cedef"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 19:02:17 crc kubenswrapper[4853]: I0127 19:02:17.976964 4853 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1d74daf6-98b2-437c-8415-3053a40cedef-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 19:02:17 crc kubenswrapper[4853]: I0127 19:02:17.977016 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4lrt\" (UniqueName: \"kubernetes.io/projected/1d74daf6-98b2-437c-8415-3053a40cedef-kube-api-access-w4lrt\") on node \"crc\" DevicePath \"\"" Jan 27 19:02:17 crc kubenswrapper[4853]: I0127 19:02:17.977034 4853 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1d74daf6-98b2-437c-8415-3053a40cedef-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 19:02:17 crc kubenswrapper[4853]: I0127 19:02:17.977049 4853 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1d74daf6-98b2-437c-8415-3053a40cedef-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 19:02:18 crc kubenswrapper[4853]: I0127 19:02:18.053292 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-757b4f8459-k7brt" Jan 27 19:02:18 crc kubenswrapper[4853]: I0127 19:02:18.160935 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-pmx5z"] Jan 27 19:02:18 crc kubenswrapper[4853]: I0127 19:02:18.161342 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5c9776ccc5-pmx5z" podUID="a521c7e1-86b3-4e0e-88f0-ac4ab1ce0b57" containerName="dnsmasq-dns" containerID="cri-o://08c4c8ae80259e015c6b73521eb30be7f6dd14be3ead040f4185ea34a3a0fb47" gracePeriod=10 Jan 27 19:02:18 crc kubenswrapper[4853]: I0127 19:02:18.453989 4853 generic.go:334] "Generic (PLEG): container finished" podID="a521c7e1-86b3-4e0e-88f0-ac4ab1ce0b57" containerID="08c4c8ae80259e015c6b73521eb30be7f6dd14be3ead040f4185ea34a3a0fb47" exitCode=0 Jan 27 19:02:18 crc kubenswrapper[4853]: I0127 19:02:18.454049 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-pmx5z" event={"ID":"a521c7e1-86b3-4e0e-88f0-ac4ab1ce0b57","Type":"ContainerDied","Data":"08c4c8ae80259e015c6b73521eb30be7f6dd14be3ead040f4185ea34a3a0fb47"} Jan 27 19:02:18 crc kubenswrapper[4853]: I0127 19:02:18.459989 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-x47j6" 
event={"ID":"1d74daf6-98b2-437c-8415-3053a40cedef","Type":"ContainerDied","Data":"1ce41a98b659fce691b0578288c1448d855131ce264bc3bab46beff2cdbfcba4"} Jan 27 19:02:18 crc kubenswrapper[4853]: I0127 19:02:18.460081 4853 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1ce41a98b659fce691b0578288c1448d855131ce264bc3bab46beff2cdbfcba4" Jan 27 19:02:18 crc kubenswrapper[4853]: I0127 19:02:18.460045 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-x47j6" Jan 27 19:02:18 crc kubenswrapper[4853]: I0127 19:02:18.491795 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Jan 27 19:02:18 crc kubenswrapper[4853]: I0127 19:02:18.594444 4853 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="4fc6adfd-212b-4248-8c09-c993acc3459c" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.191:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 19:02:18 crc kubenswrapper[4853]: I0127 19:02:18.594466 4853 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="4fc6adfd-212b-4248-8c09-c993acc3459c" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.191:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 19:02:18 crc kubenswrapper[4853]: I0127 19:02:18.603303 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c9776ccc5-pmx5z" Jan 27 19:02:18 crc kubenswrapper[4853]: I0127 19:02:18.664995 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 27 19:02:18 crc kubenswrapper[4853]: I0127 19:02:18.665527 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="4fc6adfd-212b-4248-8c09-c993acc3459c" containerName="nova-api-log" containerID="cri-o://155d9b191a3514f326f7317afe0bdc650aad913b215a5e534f5887d1dc745b3d" gracePeriod=30 Jan 27 19:02:18 crc kubenswrapper[4853]: I0127 19:02:18.666069 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="4fc6adfd-212b-4248-8c09-c993acc3459c" containerName="nova-api-api" containerID="cri-o://7cf078e7342393db69b724da09465331ca55638e0f2cfa15c30b98ec4bef0165" gracePeriod=30 Jan 27 19:02:18 crc kubenswrapper[4853]: I0127 19:02:18.693806 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xwwpt\" (UniqueName: \"kubernetes.io/projected/a521c7e1-86b3-4e0e-88f0-ac4ab1ce0b57-kube-api-access-xwwpt\") pod \"a521c7e1-86b3-4e0e-88f0-ac4ab1ce0b57\" (UID: \"a521c7e1-86b3-4e0e-88f0-ac4ab1ce0b57\") " Jan 27 19:02:18 crc kubenswrapper[4853]: I0127 19:02:18.693976 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a521c7e1-86b3-4e0e-88f0-ac4ab1ce0b57-config\") pod \"a521c7e1-86b3-4e0e-88f0-ac4ab1ce0b57\" (UID: \"a521c7e1-86b3-4e0e-88f0-ac4ab1ce0b57\") " Jan 27 19:02:18 crc kubenswrapper[4853]: I0127 19:02:18.694070 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a521c7e1-86b3-4e0e-88f0-ac4ab1ce0b57-dns-svc\") pod \"a521c7e1-86b3-4e0e-88f0-ac4ab1ce0b57\" (UID: \"a521c7e1-86b3-4e0e-88f0-ac4ab1ce0b57\") " Jan 27 19:02:18 crc 
kubenswrapper[4853]: I0127 19:02:18.694091 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a521c7e1-86b3-4e0e-88f0-ac4ab1ce0b57-dns-swift-storage-0\") pod \"a521c7e1-86b3-4e0e-88f0-ac4ab1ce0b57\" (UID: \"a521c7e1-86b3-4e0e-88f0-ac4ab1ce0b57\") " Jan 27 19:02:18 crc kubenswrapper[4853]: I0127 19:02:18.694231 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a521c7e1-86b3-4e0e-88f0-ac4ab1ce0b57-ovsdbserver-nb\") pod \"a521c7e1-86b3-4e0e-88f0-ac4ab1ce0b57\" (UID: \"a521c7e1-86b3-4e0e-88f0-ac4ab1ce0b57\") " Jan 27 19:02:18 crc kubenswrapper[4853]: I0127 19:02:18.694249 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a521c7e1-86b3-4e0e-88f0-ac4ab1ce0b57-ovsdbserver-sb\") pod \"a521c7e1-86b3-4e0e-88f0-ac4ab1ce0b57\" (UID: \"a521c7e1-86b3-4e0e-88f0-ac4ab1ce0b57\") " Jan 27 19:02:18 crc kubenswrapper[4853]: I0127 19:02:18.694737 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 27 19:02:18 crc kubenswrapper[4853]: I0127 19:02:18.695015 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="24db9578-040c-4720-830d-cfb5cda65976" containerName="nova-metadata-log" containerID="cri-o://a2595bc2d30438827d6f9461d14d738a37336497cb571ae33d22405af450d9a6" gracePeriod=30 Jan 27 19:02:18 crc kubenswrapper[4853]: I0127 19:02:18.695740 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="24db9578-040c-4720-830d-cfb5cda65976" containerName="nova-metadata-metadata" containerID="cri-o://213b83492cd0d3c6776d7bf89cdf62d2d287bfde2dd7d7b32dbd0c62188ebb28" gracePeriod=30 Jan 27 19:02:18 crc kubenswrapper[4853]: I0127 19:02:18.718562 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a521c7e1-86b3-4e0e-88f0-ac4ab1ce0b57-kube-api-access-xwwpt" (OuterVolumeSpecName: "kube-api-access-xwwpt") pod "a521c7e1-86b3-4e0e-88f0-ac4ab1ce0b57" (UID: "a521c7e1-86b3-4e0e-88f0-ac4ab1ce0b57"). InnerVolumeSpecName "kube-api-access-xwwpt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 19:02:18 crc kubenswrapper[4853]: I0127 19:02:18.773758 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a521c7e1-86b3-4e0e-88f0-ac4ab1ce0b57-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "a521c7e1-86b3-4e0e-88f0-ac4ab1ce0b57" (UID: "a521c7e1-86b3-4e0e-88f0-ac4ab1ce0b57"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 19:02:18 crc kubenswrapper[4853]: I0127 19:02:18.775339 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a521c7e1-86b3-4e0e-88f0-ac4ab1ce0b57-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "a521c7e1-86b3-4e0e-88f0-ac4ab1ce0b57" (UID: "a521c7e1-86b3-4e0e-88f0-ac4ab1ce0b57"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 19:02:18 crc kubenswrapper[4853]: I0127 19:02:18.789708 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a521c7e1-86b3-4e0e-88f0-ac4ab1ce0b57-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "a521c7e1-86b3-4e0e-88f0-ac4ab1ce0b57" (UID: "a521c7e1-86b3-4e0e-88f0-ac4ab1ce0b57"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 19:02:18 crc kubenswrapper[4853]: I0127 19:02:18.796717 4853 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a521c7e1-86b3-4e0e-88f0-ac4ab1ce0b57-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 27 19:02:18 crc kubenswrapper[4853]: I0127 19:02:18.796751 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xwwpt\" (UniqueName: \"kubernetes.io/projected/a521c7e1-86b3-4e0e-88f0-ac4ab1ce0b57-kube-api-access-xwwpt\") on node \"crc\" DevicePath \"\"" Jan 27 19:02:18 crc kubenswrapper[4853]: I0127 19:02:18.796788 4853 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a521c7e1-86b3-4e0e-88f0-ac4ab1ce0b57-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 27 19:02:18 crc kubenswrapper[4853]: I0127 19:02:18.796804 4853 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a521c7e1-86b3-4e0e-88f0-ac4ab1ce0b57-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 27 19:02:18 crc kubenswrapper[4853]: I0127 19:02:18.798362 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a521c7e1-86b3-4e0e-88f0-ac4ab1ce0b57-config" (OuterVolumeSpecName: "config") pod "a521c7e1-86b3-4e0e-88f0-ac4ab1ce0b57" (UID: "a521c7e1-86b3-4e0e-88f0-ac4ab1ce0b57"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 19:02:18 crc kubenswrapper[4853]: I0127 19:02:18.810855 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a521c7e1-86b3-4e0e-88f0-ac4ab1ce0b57-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "a521c7e1-86b3-4e0e-88f0-ac4ab1ce0b57" (UID: "a521c7e1-86b3-4e0e-88f0-ac4ab1ce0b57"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 19:02:18 crc kubenswrapper[4853]: I0127 19:02:18.906488 4853 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a521c7e1-86b3-4e0e-88f0-ac4ab1ce0b57-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 27 19:02:18 crc kubenswrapper[4853]: I0127 19:02:18.906528 4853 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a521c7e1-86b3-4e0e-88f0-ac4ab1ce0b57-config\") on node \"crc\" DevicePath \"\"" Jan 27 19:02:19 crc kubenswrapper[4853]: I0127 19:02:19.305567 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 27 19:02:19 crc kubenswrapper[4853]: I0127 19:02:19.376882 4853 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 27 19:02:19 crc kubenswrapper[4853]: I0127 19:02:19.449983 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/24db9578-040c-4720-830d-cfb5cda65976-config-data\") pod \"24db9578-040c-4720-830d-cfb5cda65976\" (UID: \"24db9578-040c-4720-830d-cfb5cda65976\") " Jan 27 19:02:19 crc kubenswrapper[4853]: I0127 19:02:19.450044 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/24db9578-040c-4720-830d-cfb5cda65976-logs\") pod \"24db9578-040c-4720-830d-cfb5cda65976\" (UID: \"24db9578-040c-4720-830d-cfb5cda65976\") " Jan 27 19:02:19 crc kubenswrapper[4853]: I0127 19:02:19.450252 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/24db9578-040c-4720-830d-cfb5cda65976-nova-metadata-tls-certs\") pod \"24db9578-040c-4720-830d-cfb5cda65976\" (UID: \"24db9578-040c-4720-830d-cfb5cda65976\") " Jan 27 19:02:19 crc kubenswrapper[4853]: I0127 19:02:19.450288 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fngq4\" (UniqueName: \"kubernetes.io/projected/24db9578-040c-4720-830d-cfb5cda65976-kube-api-access-fngq4\") pod \"24db9578-040c-4720-830d-cfb5cda65976\" (UID: \"24db9578-040c-4720-830d-cfb5cda65976\") " Jan 27 19:02:19 crc kubenswrapper[4853]: I0127 19:02:19.450447 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/24db9578-040c-4720-830d-cfb5cda65976-combined-ca-bundle\") pod \"24db9578-040c-4720-830d-cfb5cda65976\" (UID: \"24db9578-040c-4720-830d-cfb5cda65976\") " Jan 27 19:02:19 crc kubenswrapper[4853]: I0127 19:02:19.450492 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/24db9578-040c-4720-830d-cfb5cda65976-logs" (OuterVolumeSpecName: "logs") pod "24db9578-040c-4720-830d-cfb5cda65976" (UID: "24db9578-040c-4720-830d-cfb5cda65976"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 19:02:19 crc kubenswrapper[4853]: I0127 19:02:19.451020 4853 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/24db9578-040c-4720-830d-cfb5cda65976-logs\") on node \"crc\" DevicePath \"\"" Jan 27 19:02:19 crc kubenswrapper[4853]: I0127 19:02:19.461172 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/24db9578-040c-4720-830d-cfb5cda65976-kube-api-access-fngq4" (OuterVolumeSpecName: "kube-api-access-fngq4") pod "24db9578-040c-4720-830d-cfb5cda65976" (UID: "24db9578-040c-4720-830d-cfb5cda65976"). InnerVolumeSpecName "kube-api-access-fngq4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 19:02:19 crc kubenswrapper[4853]: I0127 19:02:19.479365 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/24db9578-040c-4720-830d-cfb5cda65976-config-data" (OuterVolumeSpecName: "config-data") pod "24db9578-040c-4720-830d-cfb5cda65976" (UID: "24db9578-040c-4720-830d-cfb5cda65976"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 19:02:19 crc kubenswrapper[4853]: I0127 19:02:19.495199 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-pmx5z" event={"ID":"a521c7e1-86b3-4e0e-88f0-ac4ab1ce0b57","Type":"ContainerDied","Data":"94123f69954076c48b8f9ee38a57a76be76ef68864cd4876747a58bba21ae374"} Jan 27 19:02:19 crc kubenswrapper[4853]: I0127 19:02:19.495304 4853 scope.go:117] "RemoveContainer" containerID="08c4c8ae80259e015c6b73521eb30be7f6dd14be3ead040f4185ea34a3a0fb47" Jan 27 19:02:19 crc kubenswrapper[4853]: I0127 19:02:19.495553 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c9776ccc5-pmx5z" Jan 27 19:02:19 crc kubenswrapper[4853]: I0127 19:02:19.507877 4853 generic.go:334] "Generic (PLEG): container finished" podID="24db9578-040c-4720-830d-cfb5cda65976" containerID="213b83492cd0d3c6776d7bf89cdf62d2d287bfde2dd7d7b32dbd0c62188ebb28" exitCode=0 Jan 27 19:02:19 crc kubenswrapper[4853]: I0127 19:02:19.507922 4853 generic.go:334] "Generic (PLEG): container finished" podID="24db9578-040c-4720-830d-cfb5cda65976" containerID="a2595bc2d30438827d6f9461d14d738a37336497cb571ae33d22405af450d9a6" exitCode=143 Jan 27 19:02:19 crc kubenswrapper[4853]: I0127 19:02:19.507979 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"24db9578-040c-4720-830d-cfb5cda65976","Type":"ContainerDied","Data":"213b83492cd0d3c6776d7bf89cdf62d2d287bfde2dd7d7b32dbd0c62188ebb28"} Jan 27 19:02:19 crc kubenswrapper[4853]: I0127 19:02:19.508017 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"24db9578-040c-4720-830d-cfb5cda65976","Type":"ContainerDied","Data":"a2595bc2d30438827d6f9461d14d738a37336497cb571ae33d22405af450d9a6"} Jan 27 19:02:19 crc kubenswrapper[4853]: I0127 19:02:19.508030 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"24db9578-040c-4720-830d-cfb5cda65976","Type":"ContainerDied","Data":"a6d6e5a4b357fae9891f6bd4d378862da882ab55835c83093e5613cb257c90f1"} Jan 27 19:02:19 crc kubenswrapper[4853]: I0127 19:02:19.508151 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 27 19:02:19 crc kubenswrapper[4853]: I0127 19:02:19.511334 4853 generic.go:334] "Generic (PLEG): container finished" podID="e89f7e94-9d6b-4279-b0b8-a91c47c904c8" containerID="e9081957ee7b8bfae736045631f2e30d73ff9f49beb4c6f1154fbcfcb2d3aaba" exitCode=0 Jan 27 19:02:19 crc kubenswrapper[4853]: I0127 19:02:19.511449 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-8nwnw" event={"ID":"e89f7e94-9d6b-4279-b0b8-a91c47c904c8","Type":"ContainerDied","Data":"e9081957ee7b8bfae736045631f2e30d73ff9f49beb4c6f1154fbcfcb2d3aaba"} Jan 27 19:02:19 crc kubenswrapper[4853]: I0127 19:02:19.515915 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/24db9578-040c-4720-830d-cfb5cda65976-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "24db9578-040c-4720-830d-cfb5cda65976" (UID: "24db9578-040c-4720-830d-cfb5cda65976"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 19:02:19 crc kubenswrapper[4853]: I0127 19:02:19.518650 4853 generic.go:334] "Generic (PLEG): container finished" podID="4fc6adfd-212b-4248-8c09-c993acc3459c" containerID="155d9b191a3514f326f7317afe0bdc650aad913b215a5e534f5887d1dc745b3d" exitCode=143 Jan 27 19:02:19 crc kubenswrapper[4853]: I0127 19:02:19.519225 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"4fc6adfd-212b-4248-8c09-c993acc3459c","Type":"ContainerDied","Data":"155d9b191a3514f326f7317afe0bdc650aad913b215a5e534f5887d1dc745b3d"} Jan 27 19:02:19 crc kubenswrapper[4853]: I0127 19:02:19.536394 4853 scope.go:117] "RemoveContainer" containerID="90536b57141c0d90e63ef3353094cda0744cdf31e36462da054fefa4255395f7" Jan 27 19:02:19 crc kubenswrapper[4853]: I0127 19:02:19.553305 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fngq4\" (UniqueName: \"kubernetes.io/projected/24db9578-040c-4720-830d-cfb5cda65976-kube-api-access-fngq4\") on node \"crc\" DevicePath \"\"" Jan 27 19:02:19 crc kubenswrapper[4853]: I0127 19:02:19.553348 4853 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/24db9578-040c-4720-830d-cfb5cda65976-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 19:02:19 crc kubenswrapper[4853]: I0127 19:02:19.553365 4853 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/24db9578-040c-4720-830d-cfb5cda65976-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 19:02:19 crc kubenswrapper[4853]: I0127 19:02:19.555830 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/24db9578-040c-4720-830d-cfb5cda65976-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "24db9578-040c-4720-830d-cfb5cda65976" (UID: "24db9578-040c-4720-830d-cfb5cda65976"). InnerVolumeSpecName "nova-metadata-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 19:02:19 crc kubenswrapper[4853]: I0127 19:02:19.561490 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-pmx5z"] Jan 27 19:02:19 crc kubenswrapper[4853]: I0127 19:02:19.565111 4853 scope.go:117] "RemoveContainer" containerID="213b83492cd0d3c6776d7bf89cdf62d2d287bfde2dd7d7b32dbd0c62188ebb28" Jan 27 19:02:19 crc kubenswrapper[4853]: I0127 19:02:19.574109 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-pmx5z"] Jan 27 19:02:19 crc kubenswrapper[4853]: I0127 19:02:19.594048 4853 scope.go:117] "RemoveContainer" containerID="a2595bc2d30438827d6f9461d14d738a37336497cb571ae33d22405af450d9a6" Jan 27 19:02:19 crc kubenswrapper[4853]: I0127 19:02:19.613612 4853 scope.go:117] "RemoveContainer" containerID="213b83492cd0d3c6776d7bf89cdf62d2d287bfde2dd7d7b32dbd0c62188ebb28" Jan 27 19:02:19 crc kubenswrapper[4853]: E0127 19:02:19.614489 4853 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"213b83492cd0d3c6776d7bf89cdf62d2d287bfde2dd7d7b32dbd0c62188ebb28\": container with ID starting with 213b83492cd0d3c6776d7bf89cdf62d2d287bfde2dd7d7b32dbd0c62188ebb28 not found: ID does not exist" containerID="213b83492cd0d3c6776d7bf89cdf62d2d287bfde2dd7d7b32dbd0c62188ebb28" Jan 27 19:02:19 crc kubenswrapper[4853]: I0127 19:02:19.614550 4853 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"213b83492cd0d3c6776d7bf89cdf62d2d287bfde2dd7d7b32dbd0c62188ebb28"} err="failed to get container status \"213b83492cd0d3c6776d7bf89cdf62d2d287bfde2dd7d7b32dbd0c62188ebb28\": rpc error: code = NotFound desc = could not find container \"213b83492cd0d3c6776d7bf89cdf62d2d287bfde2dd7d7b32dbd0c62188ebb28\": container with ID starting with 213b83492cd0d3c6776d7bf89cdf62d2d287bfde2dd7d7b32dbd0c62188ebb28 not found: ID does not exist" Jan 27 19:02:19 crc kubenswrapper[4853]: I0127 19:02:19.614590 4853 scope.go:117] "RemoveContainer" containerID="a2595bc2d30438827d6f9461d14d738a37336497cb571ae33d22405af450d9a6" Jan 27 19:02:19 crc kubenswrapper[4853]: E0127 19:02:19.615282 4853 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a2595bc2d30438827d6f9461d14d738a37336497cb571ae33d22405af450d9a6\": container with ID starting with a2595bc2d30438827d6f9461d14d738a37336497cb571ae33d22405af450d9a6 not found: ID does not exist" containerID="a2595bc2d30438827d6f9461d14d738a37336497cb571ae33d22405af450d9a6" Jan 27 19:02:19 crc kubenswrapper[4853]: I0127 19:02:19.615335 4853 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a2595bc2d30438827d6f9461d14d738a37336497cb571ae33d22405af450d9a6"} err="failed to get container status \"a2595bc2d30438827d6f9461d14d738a37336497cb571ae33d22405af450d9a6\": rpc error: code = NotFound desc = could not find container \"a2595bc2d30438827d6f9461d14d738a37336497cb571ae33d22405af450d9a6\": container with ID starting with a2595bc2d30438827d6f9461d14d738a37336497cb571ae33d22405af450d9a6 not found: ID does not exist" Jan 27 19:02:19 crc kubenswrapper[4853]: I0127 19:02:19.615367 4853 scope.go:117] "RemoveContainer" containerID="213b83492cd0d3c6776d7bf89cdf62d2d287bfde2dd7d7b32dbd0c62188ebb28" Jan 27 19:02:19 crc kubenswrapper[4853]: I0127 19:02:19.615930 4853 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"213b83492cd0d3c6776d7bf89cdf62d2d287bfde2dd7d7b32dbd0c62188ebb28"} err="failed to get container status \"213b83492cd0d3c6776d7bf89cdf62d2d287bfde2dd7d7b32dbd0c62188ebb28\": rpc error: code = NotFound desc = could not find container \"213b83492cd0d3c6776d7bf89cdf62d2d287bfde2dd7d7b32dbd0c62188ebb28\": container with ID starting with 213b83492cd0d3c6776d7bf89cdf62d2d287bfde2dd7d7b32dbd0c62188ebb28 not found: ID does not exist" Jan 27 19:02:19 crc kubenswrapper[4853]: I0127 19:02:19.615980 4853 scope.go:117] "RemoveContainer" containerID="a2595bc2d30438827d6f9461d14d738a37336497cb571ae33d22405af450d9a6" Jan 27 19:02:19 crc kubenswrapper[4853]: I0127 19:02:19.616606 4853 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a2595bc2d30438827d6f9461d14d738a37336497cb571ae33d22405af450d9a6"} err="failed to get container status \"a2595bc2d30438827d6f9461d14d738a37336497cb571ae33d22405af450d9a6\": rpc error: code = NotFound desc = could not find container \"a2595bc2d30438827d6f9461d14d738a37336497cb571ae33d22405af450d9a6\": container with ID starting with a2595bc2d30438827d6f9461d14d738a37336497cb571ae33d22405af450d9a6 not found: ID does not exist" Jan 27 19:02:19 crc kubenswrapper[4853]: I0127 19:02:19.654810 4853 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/24db9578-040c-4720-830d-cfb5cda65976-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 27 19:02:19 crc kubenswrapper[4853]: I0127 19:02:19.851268 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 27 19:02:19 crc kubenswrapper[4853]: I0127 19:02:19.867711 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Jan 27 19:02:19 crc kubenswrapper[4853]: I0127 19:02:19.876650 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 27 19:02:19 crc kubenswrapper[4853]: E0127 19:02:19.877073 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="24db9578-040c-4720-830d-cfb5cda65976" containerName="nova-metadata-metadata" Jan 27 19:02:19 crc kubenswrapper[4853]: I0127 19:02:19.877091 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="24db9578-040c-4720-830d-cfb5cda65976" containerName="nova-metadata-metadata" Jan 27 19:02:19 crc kubenswrapper[4853]: E0127 19:02:19.877114 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="24db9578-040c-4720-830d-cfb5cda65976" containerName="nova-metadata-log" Jan 27 19:02:19 crc kubenswrapper[4853]: I0127 19:02:19.877135 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="24db9578-040c-4720-830d-cfb5cda65976" containerName="nova-metadata-log" Jan 27 19:02:19 crc kubenswrapper[4853]: E0127 19:02:19.877147 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a521c7e1-86b3-4e0e-88f0-ac4ab1ce0b57" containerName="init" Jan 27 19:02:19 crc kubenswrapper[4853]: I0127 19:02:19.877154 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="a521c7e1-86b3-4e0e-88f0-ac4ab1ce0b57" containerName="init" Jan 27 19:02:19 crc kubenswrapper[4853]: E0127 19:02:19.877162 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a521c7e1-86b3-4e0e-88f0-ac4ab1ce0b57" containerName="dnsmasq-dns" Jan 27 19:02:19 crc kubenswrapper[4853]: I0127 19:02:19.877168 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="a521c7e1-86b3-4e0e-88f0-ac4ab1ce0b57" containerName="dnsmasq-dns" Jan 27 
19:02:19 crc kubenswrapper[4853]: E0127 19:02:19.877181 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1d74daf6-98b2-437c-8415-3053a40cedef" containerName="nova-manage" Jan 27 19:02:19 crc kubenswrapper[4853]: I0127 19:02:19.877186 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="1d74daf6-98b2-437c-8415-3053a40cedef" containerName="nova-manage" Jan 27 19:02:19 crc kubenswrapper[4853]: I0127 19:02:19.877363 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="1d74daf6-98b2-437c-8415-3053a40cedef" containerName="nova-manage" Jan 27 19:02:19 crc kubenswrapper[4853]: I0127 19:02:19.877385 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="24db9578-040c-4720-830d-cfb5cda65976" containerName="nova-metadata-log" Jan 27 19:02:19 crc kubenswrapper[4853]: I0127 19:02:19.877397 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="a521c7e1-86b3-4e0e-88f0-ac4ab1ce0b57" containerName="dnsmasq-dns" Jan 27 19:02:19 crc kubenswrapper[4853]: I0127 19:02:19.877407 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="24db9578-040c-4720-830d-cfb5cda65976" containerName="nova-metadata-metadata" Jan 27 19:02:19 crc kubenswrapper[4853]: I0127 19:02:19.878328 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 27 19:02:19 crc kubenswrapper[4853]: I0127 19:02:19.887450 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 27 19:02:19 crc kubenswrapper[4853]: I0127 19:02:19.889431 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Jan 27 19:02:19 crc kubenswrapper[4853]: I0127 19:02:19.893760 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 27 19:02:19 crc kubenswrapper[4853]: I0127 19:02:19.961710 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4c6df56d-109f-4ab4-bb18-35b70eb1beaf-config-data\") pod \"nova-metadata-0\" (UID: \"4c6df56d-109f-4ab4-bb18-35b70eb1beaf\") " pod="openstack/nova-metadata-0" Jan 27 19:02:19 crc kubenswrapper[4853]: I0127 19:02:19.961805 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4c6df56d-109f-4ab4-bb18-35b70eb1beaf-logs\") pod \"nova-metadata-0\" (UID: \"4c6df56d-109f-4ab4-bb18-35b70eb1beaf\") " pod="openstack/nova-metadata-0" Jan 27 19:02:19 crc kubenswrapper[4853]: I0127 19:02:19.961962 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/4c6df56d-109f-4ab4-bb18-35b70eb1beaf-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"4c6df56d-109f-4ab4-bb18-35b70eb1beaf\") " pod="openstack/nova-metadata-0" Jan 27 19:02:19 crc kubenswrapper[4853]: I0127 19:02:19.962208 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4c6df56d-109f-4ab4-bb18-35b70eb1beaf-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"4c6df56d-109f-4ab4-bb18-35b70eb1beaf\") " pod="openstack/nova-metadata-0" Jan 27 19:02:19 crc kubenswrapper[4853]: I0127 19:02:19.962418 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-9k25m\" (UniqueName: \"kubernetes.io/projected/4c6df56d-109f-4ab4-bb18-35b70eb1beaf-kube-api-access-9k25m\") pod \"nova-metadata-0\" (UID: \"4c6df56d-109f-4ab4-bb18-35b70eb1beaf\") " pod="openstack/nova-metadata-0" Jan 27 19:02:20 crc kubenswrapper[4853]: I0127 19:02:20.064526 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9k25m\" (UniqueName: \"kubernetes.io/projected/4c6df56d-109f-4ab4-bb18-35b70eb1beaf-kube-api-access-9k25m\") pod \"nova-metadata-0\" (UID: \"4c6df56d-109f-4ab4-bb18-35b70eb1beaf\") " pod="openstack/nova-metadata-0" Jan 27 19:02:20 crc kubenswrapper[4853]: I0127 19:02:20.064836 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4c6df56d-109f-4ab4-bb18-35b70eb1beaf-config-data\") pod \"nova-metadata-0\" (UID: \"4c6df56d-109f-4ab4-bb18-35b70eb1beaf\") " pod="openstack/nova-metadata-0" Jan 27 19:02:20 crc kubenswrapper[4853]: I0127 19:02:20.064866 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4c6df56d-109f-4ab4-bb18-35b70eb1beaf-logs\") pod \"nova-metadata-0\" (UID: \"4c6df56d-109f-4ab4-bb18-35b70eb1beaf\") " pod="openstack/nova-metadata-0" Jan 27 19:02:20 crc kubenswrapper[4853]: I0127 19:02:20.064939 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/4c6df56d-109f-4ab4-bb18-35b70eb1beaf-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"4c6df56d-109f-4ab4-bb18-35b70eb1beaf\") " pod="openstack/nova-metadata-0" Jan 27 19:02:20 crc kubenswrapper[4853]: I0127 19:02:20.065030 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4c6df56d-109f-4ab4-bb18-35b70eb1beaf-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"4c6df56d-109f-4ab4-bb18-35b70eb1beaf\") " pod="openstack/nova-metadata-0" Jan 27 19:02:20 crc kubenswrapper[4853]: I0127 19:02:20.066482 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4c6df56d-109f-4ab4-bb18-35b70eb1beaf-logs\") pod \"nova-metadata-0\" (UID: \"4c6df56d-109f-4ab4-bb18-35b70eb1beaf\") " pod="openstack/nova-metadata-0" Jan 27 19:02:20 crc kubenswrapper[4853]: I0127 19:02:20.071716 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4c6df56d-109f-4ab4-bb18-35b70eb1beaf-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"4c6df56d-109f-4ab4-bb18-35b70eb1beaf\") " pod="openstack/nova-metadata-0" Jan 27 19:02:20 crc kubenswrapper[4853]: I0127 19:02:20.073442 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4c6df56d-109f-4ab4-bb18-35b70eb1beaf-config-data\") pod \"nova-metadata-0\" (UID: \"4c6df56d-109f-4ab4-bb18-35b70eb1beaf\") " pod="openstack/nova-metadata-0" Jan 27 19:02:20 crc kubenswrapper[4853]: I0127 19:02:20.086777 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/4c6df56d-109f-4ab4-bb18-35b70eb1beaf-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"4c6df56d-109f-4ab4-bb18-35b70eb1beaf\") " pod="openstack/nova-metadata-0" Jan 27 19:02:20 crc kubenswrapper[4853]: I0127 19:02:20.090200 4853 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9k25m\" (UniqueName: \"kubernetes.io/projected/4c6df56d-109f-4ab4-bb18-35b70eb1beaf-kube-api-access-9k25m\") pod \"nova-metadata-0\" (UID: \"4c6df56d-109f-4ab4-bb18-35b70eb1beaf\") " pod="openstack/nova-metadata-0" Jan 27 19:02:20 crc kubenswrapper[4853]: I0127 19:02:20.138661 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="24db9578-040c-4720-830d-cfb5cda65976" path="/var/lib/kubelet/pods/24db9578-040c-4720-830d-cfb5cda65976/volumes" Jan 27 19:02:20 crc kubenswrapper[4853]: I0127 19:02:20.139457 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a521c7e1-86b3-4e0e-88f0-ac4ab1ce0b57" path="/var/lib/kubelet/pods/a521c7e1-86b3-4e0e-88f0-ac4ab1ce0b57/volumes" Jan 27 19:02:20 crc kubenswrapper[4853]: I0127 19:02:20.196202 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 27 19:02:20 crc kubenswrapper[4853]: I0127 19:02:20.526395 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="290cb89a-f259-4224-9e20-ad509d9e8d27" containerName="nova-scheduler-scheduler" containerID="cri-o://9d9812aa37084d224881626bbbf3116cecd2aac4a7ef06044967b21194ffdb8a" gracePeriod=30 Jan 27 19:02:20 crc kubenswrapper[4853]: I0127 19:02:20.678550 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 27 19:02:20 crc kubenswrapper[4853]: I0127 19:02:20.926069 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-8nwnw" Jan 27 19:02:21 crc kubenswrapper[4853]: I0127 19:02:21.082973 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e89f7e94-9d6b-4279-b0b8-a91c47c904c8-config-data\") pod \"e89f7e94-9d6b-4279-b0b8-a91c47c904c8\" (UID: \"e89f7e94-9d6b-4279-b0b8-a91c47c904c8\") " Jan 27 19:02:21 crc kubenswrapper[4853]: I0127 19:02:21.083230 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e89f7e94-9d6b-4279-b0b8-a91c47c904c8-scripts\") pod \"e89f7e94-9d6b-4279-b0b8-a91c47c904c8\" (UID: \"e89f7e94-9d6b-4279-b0b8-a91c47c904c8\") " Jan 27 19:02:21 crc kubenswrapper[4853]: I0127 19:02:21.083303 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-frh4x\" (UniqueName: \"kubernetes.io/projected/e89f7e94-9d6b-4279-b0b8-a91c47c904c8-kube-api-access-frh4x\") pod \"e89f7e94-9d6b-4279-b0b8-a91c47c904c8\" (UID: \"e89f7e94-9d6b-4279-b0b8-a91c47c904c8\") " Jan 27 19:02:21 crc kubenswrapper[4853]: I0127 19:02:21.083351 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e89f7e94-9d6b-4279-b0b8-a91c47c904c8-combined-ca-bundle\") pod \"e89f7e94-9d6b-4279-b0b8-a91c47c904c8\" (UID: \"e89f7e94-9d6b-4279-b0b8-a91c47c904c8\") " Jan 27 19:02:21 crc kubenswrapper[4853]: I0127 19:02:21.088941 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e89f7e94-9d6b-4279-b0b8-a91c47c904c8-scripts" (OuterVolumeSpecName: "scripts") pod "e89f7e94-9d6b-4279-b0b8-a91c47c904c8" (UID: "e89f7e94-9d6b-4279-b0b8-a91c47c904c8"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 19:02:21 crc kubenswrapper[4853]: I0127 19:02:21.089481 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e89f7e94-9d6b-4279-b0b8-a91c47c904c8-kube-api-access-frh4x" (OuterVolumeSpecName: "kube-api-access-frh4x") pod "e89f7e94-9d6b-4279-b0b8-a91c47c904c8" (UID: "e89f7e94-9d6b-4279-b0b8-a91c47c904c8"). InnerVolumeSpecName "kube-api-access-frh4x". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 19:02:21 crc kubenswrapper[4853]: I0127 19:02:21.110293 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e89f7e94-9d6b-4279-b0b8-a91c47c904c8-config-data" (OuterVolumeSpecName: "config-data") pod "e89f7e94-9d6b-4279-b0b8-a91c47c904c8" (UID: "e89f7e94-9d6b-4279-b0b8-a91c47c904c8"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 19:02:21 crc kubenswrapper[4853]: I0127 19:02:21.110614 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e89f7e94-9d6b-4279-b0b8-a91c47c904c8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e89f7e94-9d6b-4279-b0b8-a91c47c904c8" (UID: "e89f7e94-9d6b-4279-b0b8-a91c47c904c8"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 19:02:21 crc kubenswrapper[4853]: I0127 19:02:21.186524 4853 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e89f7e94-9d6b-4279-b0b8-a91c47c904c8-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 19:02:21 crc kubenswrapper[4853]: I0127 19:02:21.186577 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-frh4x\" (UniqueName: \"kubernetes.io/projected/e89f7e94-9d6b-4279-b0b8-a91c47c904c8-kube-api-access-frh4x\") on node \"crc\" DevicePath \"\"" Jan 27 19:02:21 crc kubenswrapper[4853]: I0127 19:02:21.186593 4853 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e89f7e94-9d6b-4279-b0b8-a91c47c904c8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 19:02:21 crc kubenswrapper[4853]: I0127 19:02:21.186603 4853 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e89f7e94-9d6b-4279-b0b8-a91c47c904c8-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 19:02:21 crc kubenswrapper[4853]: I0127 19:02:21.547959 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"4c6df56d-109f-4ab4-bb18-35b70eb1beaf","Type":"ContainerStarted","Data":"07b1c380d9184b8391fbe9bb04c29907aa3c6e2610c6671cecbb76b723875741"} Jan 27 19:02:21 crc kubenswrapper[4853]: I0127 19:02:21.548028 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"4c6df56d-109f-4ab4-bb18-35b70eb1beaf","Type":"ContainerStarted","Data":"d9fed5231318b4f36f209797f574f411cc1c0ef5b1c9b6a99fc1799e0e6a9320"} Jan 27 19:02:21 crc kubenswrapper[4853]: I0127 19:02:21.548041 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"4c6df56d-109f-4ab4-bb18-35b70eb1beaf","Type":"ContainerStarted","Data":"f95d7b7c09270bc7401455ed4ae2d9ebdbce97c43ce40475be911e0072a7d600"} Jan 27 19:02:21 crc kubenswrapper[4853]: I0127 19:02:21.553436 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-8nwnw" 
event={"ID":"e89f7e94-9d6b-4279-b0b8-a91c47c904c8","Type":"ContainerDied","Data":"02d8359431aced28e445f5a07c7755bd3aeea70e6bb8f014b0359163c6d93a7d"} Jan 27 19:02:21 crc kubenswrapper[4853]: I0127 19:02:21.553487 4853 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="02d8359431aced28e445f5a07c7755bd3aeea70e6bb8f014b0359163c6d93a7d" Jan 27 19:02:21 crc kubenswrapper[4853]: I0127 19:02:21.553548 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-8nwnw" Jan 27 19:02:21 crc kubenswrapper[4853]: I0127 19:02:21.580797 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.580767642 podStartE2EDuration="2.580767642s" podCreationTimestamp="2026-01-27 19:02:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 19:02:21.565792851 +0000 UTC m=+1184.028335744" watchObservedRunningTime="2026-01-27 19:02:21.580767642 +0000 UTC m=+1184.043310535" Jan 27 19:02:21 crc kubenswrapper[4853]: I0127 19:02:21.624197 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 27 19:02:21 crc kubenswrapper[4853]: E0127 19:02:21.624691 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e89f7e94-9d6b-4279-b0b8-a91c47c904c8" containerName="nova-cell1-conductor-db-sync" Jan 27 19:02:21 crc kubenswrapper[4853]: I0127 19:02:21.624714 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="e89f7e94-9d6b-4279-b0b8-a91c47c904c8" containerName="nova-cell1-conductor-db-sync" Jan 27 19:02:21 crc kubenswrapper[4853]: I0127 19:02:21.624975 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="e89f7e94-9d6b-4279-b0b8-a91c47c904c8" containerName="nova-cell1-conductor-db-sync" Jan 27 19:02:21 crc kubenswrapper[4853]: I0127 19:02:21.625793 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Jan 27 19:02:21 crc kubenswrapper[4853]: I0127 19:02:21.629083 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Jan 27 19:02:21 crc kubenswrapper[4853]: I0127 19:02:21.644158 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 27 19:02:21 crc kubenswrapper[4853]: I0127 19:02:21.796646 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0f9c3933-7f75-4c32-95e2-bac827abcb76-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"0f9c3933-7f75-4c32-95e2-bac827abcb76\") " pod="openstack/nova-cell1-conductor-0" Jan 27 19:02:21 crc kubenswrapper[4853]: I0127 19:02:21.797075 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0f9c3933-7f75-4c32-95e2-bac827abcb76-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"0f9c3933-7f75-4c32-95e2-bac827abcb76\") " pod="openstack/nova-cell1-conductor-0" Jan 27 19:02:21 crc kubenswrapper[4853]: I0127 19:02:21.797150 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wgwmw\" (UniqueName: \"kubernetes.io/projected/0f9c3933-7f75-4c32-95e2-bac827abcb76-kube-api-access-wgwmw\") pod \"nova-cell1-conductor-0\" (UID: \"0f9c3933-7f75-4c32-95e2-bac827abcb76\") " pod="openstack/nova-cell1-conductor-0" Jan 27 19:02:21 crc kubenswrapper[4853]: I0127 19:02:21.899375 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0f9c3933-7f75-4c32-95e2-bac827abcb76-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"0f9c3933-7f75-4c32-95e2-bac827abcb76\") " pod="openstack/nova-cell1-conductor-0" Jan 27 19:02:21 crc kubenswrapper[4853]: I0127 19:02:21.899567 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0f9c3933-7f75-4c32-95e2-bac827abcb76-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"0f9c3933-7f75-4c32-95e2-bac827abcb76\") " pod="openstack/nova-cell1-conductor-0" Jan 27 19:02:21 crc kubenswrapper[4853]: I0127 19:02:21.899630 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wgwmw\" (UniqueName: \"kubernetes.io/projected/0f9c3933-7f75-4c32-95e2-bac827abcb76-kube-api-access-wgwmw\") pod \"nova-cell1-conductor-0\" (UID: \"0f9c3933-7f75-4c32-95e2-bac827abcb76\") " pod="openstack/nova-cell1-conductor-0" Jan 27 19:02:21 crc kubenswrapper[4853]: I0127 19:02:21.904912 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0f9c3933-7f75-4c32-95e2-bac827abcb76-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"0f9c3933-7f75-4c32-95e2-bac827abcb76\") " pod="openstack/nova-cell1-conductor-0" Jan 27 19:02:21 crc kubenswrapper[4853]: I0127 19:02:21.904958 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0f9c3933-7f75-4c32-95e2-bac827abcb76-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"0f9c3933-7f75-4c32-95e2-bac827abcb76\") " pod="openstack/nova-cell1-conductor-0" Jan 27 19:02:21 crc kubenswrapper[4853]: I0127 19:02:21.916279 4853 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wgwmw\" (UniqueName: \"kubernetes.io/projected/0f9c3933-7f75-4c32-95e2-bac827abcb76-kube-api-access-wgwmw\") pod \"nova-cell1-conductor-0\" (UID: \"0f9c3933-7f75-4c32-95e2-bac827abcb76\") " pod="openstack/nova-cell1-conductor-0" Jan 27 19:02:21 crc kubenswrapper[4853]: I0127 19:02:21.958571 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Jan 27 19:02:22 crc kubenswrapper[4853]: I0127 19:02:22.377699 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-c78c8d4f6-bchzm" Jan 27 19:02:22 crc kubenswrapper[4853]: I0127 19:02:22.405050 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 27 19:02:22 crc kubenswrapper[4853]: I0127 19:02:22.510677 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/28f114cd-daca-4c71-9ecd-64b8008ddbef-horizon-tls-certs\") pod \"28f114cd-daca-4c71-9ecd-64b8008ddbef\" (UID: \"28f114cd-daca-4c71-9ecd-64b8008ddbef\") " Jan 27 19:02:22 crc kubenswrapper[4853]: I0127 19:02:22.510766 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/28f114cd-daca-4c71-9ecd-64b8008ddbef-horizon-secret-key\") pod \"28f114cd-daca-4c71-9ecd-64b8008ddbef\" (UID: \"28f114cd-daca-4c71-9ecd-64b8008ddbef\") " Jan 27 19:02:22 crc kubenswrapper[4853]: I0127 19:02:22.510845 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lfxrw\" (UniqueName: \"kubernetes.io/projected/28f114cd-daca-4c71-9ecd-64b8008ddbef-kube-api-access-lfxrw\") pod \"28f114cd-daca-4c71-9ecd-64b8008ddbef\" (UID: \"28f114cd-daca-4c71-9ecd-64b8008ddbef\") " Jan 27 19:02:22 crc kubenswrapper[4853]: I0127 19:02:22.510942 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/28f114cd-daca-4c71-9ecd-64b8008ddbef-scripts\") pod \"28f114cd-daca-4c71-9ecd-64b8008ddbef\" (UID: \"28f114cd-daca-4c71-9ecd-64b8008ddbef\") " Jan 27 19:02:22 crc kubenswrapper[4853]: I0127 19:02:22.510962 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/28f114cd-daca-4c71-9ecd-64b8008ddbef-combined-ca-bundle\") pod \"28f114cd-daca-4c71-9ecd-64b8008ddbef\" (UID: \"28f114cd-daca-4c71-9ecd-64b8008ddbef\") " Jan 27 19:02:22 crc kubenswrapper[4853]: I0127 19:02:22.511029 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/28f114cd-daca-4c71-9ecd-64b8008ddbef-config-data\") pod \"28f114cd-daca-4c71-9ecd-64b8008ddbef\" (UID: \"28f114cd-daca-4c71-9ecd-64b8008ddbef\") " Jan 27 19:02:22 crc kubenswrapper[4853]: I0127 19:02:22.511070 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/28f114cd-daca-4c71-9ecd-64b8008ddbef-logs\") pod \"28f114cd-daca-4c71-9ecd-64b8008ddbef\" (UID: \"28f114cd-daca-4c71-9ecd-64b8008ddbef\") " Jan 27 19:02:22 crc kubenswrapper[4853]: I0127 19:02:22.512851 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/28f114cd-daca-4c71-9ecd-64b8008ddbef-logs" (OuterVolumeSpecName: "logs") pod 
"28f114cd-daca-4c71-9ecd-64b8008ddbef" (UID: "28f114cd-daca-4c71-9ecd-64b8008ddbef"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 19:02:22 crc kubenswrapper[4853]: I0127 19:02:22.515335 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/28f114cd-daca-4c71-9ecd-64b8008ddbef-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "28f114cd-daca-4c71-9ecd-64b8008ddbef" (UID: "28f114cd-daca-4c71-9ecd-64b8008ddbef"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 19:02:22 crc kubenswrapper[4853]: I0127 19:02:22.515876 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/28f114cd-daca-4c71-9ecd-64b8008ddbef-kube-api-access-lfxrw" (OuterVolumeSpecName: "kube-api-access-lfxrw") pod "28f114cd-daca-4c71-9ecd-64b8008ddbef" (UID: "28f114cd-daca-4c71-9ecd-64b8008ddbef"). InnerVolumeSpecName "kube-api-access-lfxrw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 19:02:22 crc kubenswrapper[4853]: I0127 19:02:22.542107 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/28f114cd-daca-4c71-9ecd-64b8008ddbef-config-data" (OuterVolumeSpecName: "config-data") pod "28f114cd-daca-4c71-9ecd-64b8008ddbef" (UID: "28f114cd-daca-4c71-9ecd-64b8008ddbef"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 19:02:22 crc kubenswrapper[4853]: I0127 19:02:22.544109 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/28f114cd-daca-4c71-9ecd-64b8008ddbef-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "28f114cd-daca-4c71-9ecd-64b8008ddbef" (UID: "28f114cd-daca-4c71-9ecd-64b8008ddbef"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 19:02:22 crc kubenswrapper[4853]: I0127 19:02:22.547066 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/28f114cd-daca-4c71-9ecd-64b8008ddbef-scripts" (OuterVolumeSpecName: "scripts") pod "28f114cd-daca-4c71-9ecd-64b8008ddbef" (UID: "28f114cd-daca-4c71-9ecd-64b8008ddbef"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 19:02:22 crc kubenswrapper[4853]: I0127 19:02:22.561806 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"0f9c3933-7f75-4c32-95e2-bac827abcb76","Type":"ContainerStarted","Data":"43e68c1d3ec759bf8b75e46e07dddcb4e06c433aff32c3d3d0b5334fc941c632"} Jan 27 19:02:22 crc kubenswrapper[4853]: I0127 19:02:22.563777 4853 generic.go:334] "Generic (PLEG): container finished" podID="28f114cd-daca-4c71-9ecd-64b8008ddbef" containerID="bfed2f5cf0fb64f3a0f44767fd92967fbaeb07e2af02fbcb8d268a7117ca39e4" exitCode=137 Jan 27 19:02:22 crc kubenswrapper[4853]: I0127 19:02:22.563863 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-c78c8d4f6-bchzm" event={"ID":"28f114cd-daca-4c71-9ecd-64b8008ddbef","Type":"ContainerDied","Data":"bfed2f5cf0fb64f3a0f44767fd92967fbaeb07e2af02fbcb8d268a7117ca39e4"} Jan 27 19:02:22 crc kubenswrapper[4853]: I0127 19:02:22.563896 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-c78c8d4f6-bchzm" event={"ID":"28f114cd-daca-4c71-9ecd-64b8008ddbef","Type":"ContainerDied","Data":"3de4ac5d0457c3a7e26b839a8d6399efd45f6b550477e5205068db8eeadb4c33"} Jan 27 19:02:22 crc kubenswrapper[4853]: I0127 19:02:22.563919 4853 scope.go:117] "RemoveContainer" containerID="3f45ef7c9031b5bacc230d61643ec9c587047e9e52878a4b6f37d775f2bdb5a2" Jan 27 19:02:22 crc kubenswrapper[4853]: I0127 19:02:22.564281 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-c78c8d4f6-bchzm" Jan 27 19:02:22 crc kubenswrapper[4853]: I0127 19:02:22.576484 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/28f114cd-daca-4c71-9ecd-64b8008ddbef-horizon-tls-certs" (OuterVolumeSpecName: "horizon-tls-certs") pod "28f114cd-daca-4c71-9ecd-64b8008ddbef" (UID: "28f114cd-daca-4c71-9ecd-64b8008ddbef"). InnerVolumeSpecName "horizon-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 19:02:22 crc kubenswrapper[4853]: I0127 19:02:22.612922 4853 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/28f114cd-daca-4c71-9ecd-64b8008ddbef-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 19:02:22 crc kubenswrapper[4853]: I0127 19:02:22.612961 4853 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/28f114cd-daca-4c71-9ecd-64b8008ddbef-logs\") on node \"crc\" DevicePath \"\"" Jan 27 19:02:22 crc kubenswrapper[4853]: I0127 19:02:22.612971 4853 reconciler_common.go:293] "Volume detached for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/28f114cd-daca-4c71-9ecd-64b8008ddbef-horizon-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 27 19:02:22 crc kubenswrapper[4853]: I0127 19:02:22.612981 4853 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/28f114cd-daca-4c71-9ecd-64b8008ddbef-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Jan 27 19:02:22 crc kubenswrapper[4853]: I0127 19:02:22.612991 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lfxrw\" (UniqueName: \"kubernetes.io/projected/28f114cd-daca-4c71-9ecd-64b8008ddbef-kube-api-access-lfxrw\") on node \"crc\" DevicePath \"\"" Jan 27 19:02:22 crc kubenswrapper[4853]: I0127 19:02:22.613004 4853 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/28f114cd-daca-4c71-9ecd-64b8008ddbef-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 19:02:22 crc kubenswrapper[4853]: I0127 19:02:22.613012 4853 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/28f114cd-daca-4c71-9ecd-64b8008ddbef-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 19:02:22 crc kubenswrapper[4853]: I0127 19:02:22.750565 4853 scope.go:117] "RemoveContainer" containerID="bfed2f5cf0fb64f3a0f44767fd92967fbaeb07e2af02fbcb8d268a7117ca39e4" Jan 27 19:02:22 crc kubenswrapper[4853]: I0127 19:02:22.808632 4853 scope.go:117] "RemoveContainer" containerID="3f45ef7c9031b5bacc230d61643ec9c587047e9e52878a4b6f37d775f2bdb5a2" Jan 27 19:02:22 crc kubenswrapper[4853]: E0127 19:02:22.809106 4853 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3f45ef7c9031b5bacc230d61643ec9c587047e9e52878a4b6f37d775f2bdb5a2\": container with ID starting with 3f45ef7c9031b5bacc230d61643ec9c587047e9e52878a4b6f37d775f2bdb5a2 not found: ID does not exist" containerID="3f45ef7c9031b5bacc230d61643ec9c587047e9e52878a4b6f37d775f2bdb5a2" Jan 27 19:02:22 crc kubenswrapper[4853]: I0127 19:02:22.809157 4853 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3f45ef7c9031b5bacc230d61643ec9c587047e9e52878a4b6f37d775f2bdb5a2"} err="failed to get container status \"3f45ef7c9031b5bacc230d61643ec9c587047e9e52878a4b6f37d775f2bdb5a2\": rpc error: code = NotFound desc = could not find container \"3f45ef7c9031b5bacc230d61643ec9c587047e9e52878a4b6f37d775f2bdb5a2\": container with ID starting with 3f45ef7c9031b5bacc230d61643ec9c587047e9e52878a4b6f37d775f2bdb5a2 not found: ID does not exist" Jan 27 19:02:22 crc kubenswrapper[4853]: I0127 19:02:22.809187 4853 scope.go:117] "RemoveContainer" containerID="bfed2f5cf0fb64f3a0f44767fd92967fbaeb07e2af02fbcb8d268a7117ca39e4" Jan 27 19:02:22 crc kubenswrapper[4853]: 
E0127 19:02:22.809461 4853 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bfed2f5cf0fb64f3a0f44767fd92967fbaeb07e2af02fbcb8d268a7117ca39e4\": container with ID starting with bfed2f5cf0fb64f3a0f44767fd92967fbaeb07e2af02fbcb8d268a7117ca39e4 not found: ID does not exist" containerID="bfed2f5cf0fb64f3a0f44767fd92967fbaeb07e2af02fbcb8d268a7117ca39e4" Jan 27 19:02:22 crc kubenswrapper[4853]: I0127 19:02:22.809492 4853 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bfed2f5cf0fb64f3a0f44767fd92967fbaeb07e2af02fbcb8d268a7117ca39e4"} err="failed to get container status \"bfed2f5cf0fb64f3a0f44767fd92967fbaeb07e2af02fbcb8d268a7117ca39e4\": rpc error: code = NotFound desc = could not find container \"bfed2f5cf0fb64f3a0f44767fd92967fbaeb07e2af02fbcb8d268a7117ca39e4\": container with ID starting with bfed2f5cf0fb64f3a0f44767fd92967fbaeb07e2af02fbcb8d268a7117ca39e4 not found: ID does not exist" Jan 27 19:02:22 crc kubenswrapper[4853]: E0127 19:02:22.828756 4853 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="9d9812aa37084d224881626bbbf3116cecd2aac4a7ef06044967b21194ffdb8a" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 27 19:02:22 crc kubenswrapper[4853]: E0127 19:02:22.829940 4853 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="9d9812aa37084d224881626bbbf3116cecd2aac4a7ef06044967b21194ffdb8a" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 27 19:02:22 crc kubenswrapper[4853]: E0127 19:02:22.830984 4853 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="9d9812aa37084d224881626bbbf3116cecd2aac4a7ef06044967b21194ffdb8a" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 27 19:02:22 crc kubenswrapper[4853]: E0127 19:02:22.831017 4853 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="290cb89a-f259-4224-9e20-ad509d9e8d27" containerName="nova-scheduler-scheduler" Jan 27 19:02:22 crc kubenswrapper[4853]: I0127 19:02:22.905760 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-c78c8d4f6-bchzm"] Jan 27 19:02:22 crc kubenswrapper[4853]: I0127 19:02:22.916468 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-c78c8d4f6-bchzm"] Jan 27 19:02:23 crc kubenswrapper[4853]: I0127 19:02:23.590282 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"0f9c3933-7f75-4c32-95e2-bac827abcb76","Type":"ContainerStarted","Data":"87e3cb5d18d3e6236ff438b04cc527f83204444f936889ef9f37d4fe56f6b91c"} Jan 27 19:02:23 crc kubenswrapper[4853]: I0127 19:02:23.591743 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0" Jan 27 19:02:23 crc kubenswrapper[4853]: I0127 19:02:23.612564 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" 
podStartSLOduration=2.612541534 podStartE2EDuration="2.612541534s" podCreationTimestamp="2026-01-27 19:02:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 19:02:23.60373215 +0000 UTC m=+1186.066275033" watchObservedRunningTime="2026-01-27 19:02:23.612541534 +0000 UTC m=+1186.075084417" Jan 27 19:02:24 crc kubenswrapper[4853]: I0127 19:02:24.130465 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="28f114cd-daca-4c71-9ecd-64b8008ddbef" path="/var/lib/kubelet/pods/28f114cd-daca-4c71-9ecd-64b8008ddbef/volumes" Jan 27 19:02:24 crc kubenswrapper[4853]: I0127 19:02:24.457668 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 27 19:02:24 crc kubenswrapper[4853]: I0127 19:02:24.549054 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4fc6adfd-212b-4248-8c09-c993acc3459c-config-data\") pod \"4fc6adfd-212b-4248-8c09-c993acc3459c\" (UID: \"4fc6adfd-212b-4248-8c09-c993acc3459c\") " Jan 27 19:02:24 crc kubenswrapper[4853]: I0127 19:02:24.549238 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4fc6adfd-212b-4248-8c09-c993acc3459c-logs\") pod \"4fc6adfd-212b-4248-8c09-c993acc3459c\" (UID: \"4fc6adfd-212b-4248-8c09-c993acc3459c\") " Jan 27 19:02:24 crc kubenswrapper[4853]: I0127 19:02:24.549384 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4fc6adfd-212b-4248-8c09-c993acc3459c-combined-ca-bundle\") pod \"4fc6adfd-212b-4248-8c09-c993acc3459c\" (UID: \"4fc6adfd-212b-4248-8c09-c993acc3459c\") " Jan 27 19:02:24 crc kubenswrapper[4853]: I0127 19:02:24.549421 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzx4z\" (UniqueName: \"kubernetes.io/projected/4fc6adfd-212b-4248-8c09-c993acc3459c-kube-api-access-nzx4z\") pod \"4fc6adfd-212b-4248-8c09-c993acc3459c\" (UID: \"4fc6adfd-212b-4248-8c09-c993acc3459c\") " Jan 27 19:02:24 crc kubenswrapper[4853]: I0127 19:02:24.549852 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4fc6adfd-212b-4248-8c09-c993acc3459c-logs" (OuterVolumeSpecName: "logs") pod "4fc6adfd-212b-4248-8c09-c993acc3459c" (UID: "4fc6adfd-212b-4248-8c09-c993acc3459c"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 19:02:24 crc kubenswrapper[4853]: I0127 19:02:24.549984 4853 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4fc6adfd-212b-4248-8c09-c993acc3459c-logs\") on node \"crc\" DevicePath \"\"" Jan 27 19:02:24 crc kubenswrapper[4853]: I0127 19:02:24.555638 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4fc6adfd-212b-4248-8c09-c993acc3459c-kube-api-access-nzx4z" (OuterVolumeSpecName: "kube-api-access-nzx4z") pod "4fc6adfd-212b-4248-8c09-c993acc3459c" (UID: "4fc6adfd-212b-4248-8c09-c993acc3459c"). InnerVolumeSpecName "kube-api-access-nzx4z". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 19:02:24 crc kubenswrapper[4853]: I0127 19:02:24.591072 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4fc6adfd-212b-4248-8c09-c993acc3459c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4fc6adfd-212b-4248-8c09-c993acc3459c" (UID: "4fc6adfd-212b-4248-8c09-c993acc3459c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 19:02:24 crc kubenswrapper[4853]: I0127 19:02:24.595138 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4fc6adfd-212b-4248-8c09-c993acc3459c-config-data" (OuterVolumeSpecName: "config-data") pod "4fc6adfd-212b-4248-8c09-c993acc3459c" (UID: "4fc6adfd-212b-4248-8c09-c993acc3459c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 19:02:24 crc kubenswrapper[4853]: I0127 19:02:24.605429 4853 generic.go:334] "Generic (PLEG): container finished" podID="4fc6adfd-212b-4248-8c09-c993acc3459c" containerID="7cf078e7342393db69b724da09465331ca55638e0f2cfa15c30b98ec4bef0165" exitCode=0 Jan 27 19:02:24 crc kubenswrapper[4853]: I0127 19:02:24.605739 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 27 19:02:24 crc kubenswrapper[4853]: I0127 19:02:24.605633 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"4fc6adfd-212b-4248-8c09-c993acc3459c","Type":"ContainerDied","Data":"7cf078e7342393db69b724da09465331ca55638e0f2cfa15c30b98ec4bef0165"} Jan 27 19:02:24 crc kubenswrapper[4853]: I0127 19:02:24.606164 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"4fc6adfd-212b-4248-8c09-c993acc3459c","Type":"ContainerDied","Data":"24ff2a8cee9f412c42bd2622e80ad7439d07c4aa223f764d6403486c3ffb2a6d"} Jan 27 19:02:24 crc kubenswrapper[4853]: I0127 19:02:24.606724 4853 scope.go:117] "RemoveContainer" containerID="7cf078e7342393db69b724da09465331ca55638e0f2cfa15c30b98ec4bef0165" Jan 27 19:02:24 crc kubenswrapper[4853]: I0127 19:02:24.612989 4853 generic.go:334] "Generic (PLEG): container finished" podID="290cb89a-f259-4224-9e20-ad509d9e8d27" containerID="9d9812aa37084d224881626bbbf3116cecd2aac4a7ef06044967b21194ffdb8a" exitCode=0 Jan 27 19:02:24 crc kubenswrapper[4853]: I0127 19:02:24.613109 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"290cb89a-f259-4224-9e20-ad509d9e8d27","Type":"ContainerDied","Data":"9d9812aa37084d224881626bbbf3116cecd2aac4a7ef06044967b21194ffdb8a"} Jan 27 19:02:24 crc kubenswrapper[4853]: I0127 19:02:24.651416 4853 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4fc6adfd-212b-4248-8c09-c993acc3459c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 19:02:24 crc kubenswrapper[4853]: I0127 19:02:24.651575 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzx4z\" (UniqueName: \"kubernetes.io/projected/4fc6adfd-212b-4248-8c09-c993acc3459c-kube-api-access-nzx4z\") on node \"crc\" DevicePath \"\"" Jan 27 19:02:24 crc kubenswrapper[4853]: I0127 19:02:24.651638 4853 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4fc6adfd-212b-4248-8c09-c993acc3459c-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 19:02:24 crc kubenswrapper[4853]: I0127 
19:02:24.677418 4853 scope.go:117] "RemoveContainer" containerID="155d9b191a3514f326f7317afe0bdc650aad913b215a5e534f5887d1dc745b3d" Jan 27 19:02:24 crc kubenswrapper[4853]: I0127 19:02:24.681624 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 27 19:02:24 crc kubenswrapper[4853]: I0127 19:02:24.701786 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 27 19:02:24 crc kubenswrapper[4853]: I0127 19:02:24.702252 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Jan 27 19:02:24 crc kubenswrapper[4853]: I0127 19:02:24.715453 4853 scope.go:117] "RemoveContainer" containerID="7cf078e7342393db69b724da09465331ca55638e0f2cfa15c30b98ec4bef0165" Jan 27 19:02:24 crc kubenswrapper[4853]: E0127 19:02:24.717737 4853 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7cf078e7342393db69b724da09465331ca55638e0f2cfa15c30b98ec4bef0165\": container with ID starting with 7cf078e7342393db69b724da09465331ca55638e0f2cfa15c30b98ec4bef0165 not found: ID does not exist" containerID="7cf078e7342393db69b724da09465331ca55638e0f2cfa15c30b98ec4bef0165" Jan 27 19:02:24 crc kubenswrapper[4853]: I0127 19:02:24.717792 4853 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7cf078e7342393db69b724da09465331ca55638e0f2cfa15c30b98ec4bef0165"} err="failed to get container status \"7cf078e7342393db69b724da09465331ca55638e0f2cfa15c30b98ec4bef0165\": rpc error: code = NotFound desc = could not find container \"7cf078e7342393db69b724da09465331ca55638e0f2cfa15c30b98ec4bef0165\": container with ID starting with 7cf078e7342393db69b724da09465331ca55638e0f2cfa15c30b98ec4bef0165 not found: ID does not exist" Jan 27 19:02:24 crc kubenswrapper[4853]: I0127 19:02:24.717834 4853 scope.go:117] "RemoveContainer" containerID="155d9b191a3514f326f7317afe0bdc650aad913b215a5e534f5887d1dc745b3d" Jan 27 19:02:24 crc kubenswrapper[4853]: E0127 19:02:24.718970 4853 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"155d9b191a3514f326f7317afe0bdc650aad913b215a5e534f5887d1dc745b3d\": container with ID starting with 155d9b191a3514f326f7317afe0bdc650aad913b215a5e534f5887d1dc745b3d not found: ID does not exist" containerID="155d9b191a3514f326f7317afe0bdc650aad913b215a5e534f5887d1dc745b3d" Jan 27 19:02:24 crc kubenswrapper[4853]: I0127 19:02:24.719070 4853 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"155d9b191a3514f326f7317afe0bdc650aad913b215a5e534f5887d1dc745b3d"} err="failed to get container status \"155d9b191a3514f326f7317afe0bdc650aad913b215a5e534f5887d1dc745b3d\": rpc error: code = NotFound desc = could not find container \"155d9b191a3514f326f7317afe0bdc650aad913b215a5e534f5887d1dc745b3d\": container with ID starting with 155d9b191a3514f326f7317afe0bdc650aad913b215a5e534f5887d1dc745b3d not found: ID does not exist" Jan 27 19:02:24 crc kubenswrapper[4853]: I0127 19:02:24.721537 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Jan 27 19:02:24 crc kubenswrapper[4853]: I0127 19:02:24.743823 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 27 19:02:24 crc kubenswrapper[4853]: E0127 19:02:24.744296 4853 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="4fc6adfd-212b-4248-8c09-c993acc3459c" containerName="nova-api-api" Jan 27 19:02:24 crc kubenswrapper[4853]: I0127 19:02:24.744314 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="4fc6adfd-212b-4248-8c09-c993acc3459c" containerName="nova-api-api" Jan 27 19:02:24 crc kubenswrapper[4853]: E0127 19:02:24.744334 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="28f114cd-daca-4c71-9ecd-64b8008ddbef" containerName="horizon" Jan 27 19:02:24 crc kubenswrapper[4853]: I0127 19:02:24.744341 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="28f114cd-daca-4c71-9ecd-64b8008ddbef" containerName="horizon" Jan 27 19:02:24 crc kubenswrapper[4853]: E0127 19:02:24.744368 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4fc6adfd-212b-4248-8c09-c993acc3459c" containerName="nova-api-log" Jan 27 19:02:24 crc kubenswrapper[4853]: I0127 19:02:24.744374 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="4fc6adfd-212b-4248-8c09-c993acc3459c" containerName="nova-api-log" Jan 27 19:02:24 crc kubenswrapper[4853]: E0127 19:02:24.744407 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="290cb89a-f259-4224-9e20-ad509d9e8d27" containerName="nova-scheduler-scheduler" Jan 27 19:02:24 crc kubenswrapper[4853]: I0127 19:02:24.744416 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="290cb89a-f259-4224-9e20-ad509d9e8d27" containerName="nova-scheduler-scheduler" Jan 27 19:02:24 crc kubenswrapper[4853]: E0127 19:02:24.744424 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="28f114cd-daca-4c71-9ecd-64b8008ddbef" containerName="horizon" Jan 27 19:02:24 crc kubenswrapper[4853]: I0127 19:02:24.744429 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="28f114cd-daca-4c71-9ecd-64b8008ddbef" containerName="horizon" Jan 27 19:02:24 crc kubenswrapper[4853]: E0127 19:02:24.747485 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="28f114cd-daca-4c71-9ecd-64b8008ddbef" containerName="horizon-log" Jan 27 19:02:24 crc kubenswrapper[4853]: I0127 19:02:24.747553 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="28f114cd-daca-4c71-9ecd-64b8008ddbef" containerName="horizon-log" Jan 27 19:02:24 crc kubenswrapper[4853]: I0127 19:02:24.748048 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="28f114cd-daca-4c71-9ecd-64b8008ddbef" containerName="horizon" Jan 27 19:02:24 crc kubenswrapper[4853]: I0127 19:02:24.748075 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="28f114cd-daca-4c71-9ecd-64b8008ddbef" containerName="horizon-log" Jan 27 19:02:24 crc kubenswrapper[4853]: I0127 19:02:24.748092 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="4fc6adfd-212b-4248-8c09-c993acc3459c" containerName="nova-api-log" Jan 27 19:02:24 crc kubenswrapper[4853]: I0127 19:02:24.748104 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="290cb89a-f259-4224-9e20-ad509d9e8d27" containerName="nova-scheduler-scheduler" Jan 27 19:02:24 crc kubenswrapper[4853]: I0127 19:02:24.748115 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="4fc6adfd-212b-4248-8c09-c993acc3459c" containerName="nova-api-api" Jan 27 19:02:24 crc kubenswrapper[4853]: I0127 19:02:24.748487 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="28f114cd-daca-4c71-9ecd-64b8008ddbef" containerName="horizon" Jan 27 19:02:24 crc kubenswrapper[4853]: I0127 19:02:24.749410 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 27 19:02:24 crc kubenswrapper[4853]: I0127 19:02:24.752705 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 27 19:02:24 crc kubenswrapper[4853]: I0127 19:02:24.756521 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/290cb89a-f259-4224-9e20-ad509d9e8d27-config-data\") pod \"290cb89a-f259-4224-9e20-ad509d9e8d27\" (UID: \"290cb89a-f259-4224-9e20-ad509d9e8d27\") " Jan 27 19:02:24 crc kubenswrapper[4853]: I0127 19:02:24.756602 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z9bpn\" (UniqueName: \"kubernetes.io/projected/290cb89a-f259-4224-9e20-ad509d9e8d27-kube-api-access-z9bpn\") pod \"290cb89a-f259-4224-9e20-ad509d9e8d27\" (UID: \"290cb89a-f259-4224-9e20-ad509d9e8d27\") " Jan 27 19:02:24 crc kubenswrapper[4853]: I0127 19:02:24.756782 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/290cb89a-f259-4224-9e20-ad509d9e8d27-combined-ca-bundle\") pod \"290cb89a-f259-4224-9e20-ad509d9e8d27\" (UID: \"290cb89a-f259-4224-9e20-ad509d9e8d27\") " Jan 27 19:02:24 crc kubenswrapper[4853]: I0127 19:02:24.758421 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 27 19:02:24 crc kubenswrapper[4853]: I0127 19:02:24.778635 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/290cb89a-f259-4224-9e20-ad509d9e8d27-kube-api-access-z9bpn" (OuterVolumeSpecName: "kube-api-access-z9bpn") pod "290cb89a-f259-4224-9e20-ad509d9e8d27" (UID: "290cb89a-f259-4224-9e20-ad509d9e8d27"). InnerVolumeSpecName "kube-api-access-z9bpn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 19:02:24 crc kubenswrapper[4853]: I0127 19:02:24.793813 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/290cb89a-f259-4224-9e20-ad509d9e8d27-config-data" (OuterVolumeSpecName: "config-data") pod "290cb89a-f259-4224-9e20-ad509d9e8d27" (UID: "290cb89a-f259-4224-9e20-ad509d9e8d27"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 19:02:24 crc kubenswrapper[4853]: I0127 19:02:24.802937 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/290cb89a-f259-4224-9e20-ad509d9e8d27-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "290cb89a-f259-4224-9e20-ad509d9e8d27" (UID: "290cb89a-f259-4224-9e20-ad509d9e8d27"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 19:02:24 crc kubenswrapper[4853]: I0127 19:02:24.859442 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/065ad886-043b-4744-8d1f-ba3895202feb-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"065ad886-043b-4744-8d1f-ba3895202feb\") " pod="openstack/nova-api-0" Jan 27 19:02:24 crc kubenswrapper[4853]: I0127 19:02:24.859499 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/065ad886-043b-4744-8d1f-ba3895202feb-logs\") pod \"nova-api-0\" (UID: \"065ad886-043b-4744-8d1f-ba3895202feb\") " pod="openstack/nova-api-0" Jan 27 19:02:24 crc kubenswrapper[4853]: I0127 19:02:24.859671 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/065ad886-043b-4744-8d1f-ba3895202feb-config-data\") pod \"nova-api-0\" (UID: \"065ad886-043b-4744-8d1f-ba3895202feb\") " pod="openstack/nova-api-0" Jan 27 19:02:24 crc kubenswrapper[4853]: I0127 19:02:24.859914 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jhsv4\" (UniqueName: \"kubernetes.io/projected/065ad886-043b-4744-8d1f-ba3895202feb-kube-api-access-jhsv4\") pod \"nova-api-0\" (UID: \"065ad886-043b-4744-8d1f-ba3895202feb\") " pod="openstack/nova-api-0" Jan 27 19:02:24 crc kubenswrapper[4853]: I0127 19:02:24.860008 4853 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/290cb89a-f259-4224-9e20-ad509d9e8d27-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 19:02:24 crc kubenswrapper[4853]: I0127 19:02:24.860030 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z9bpn\" (UniqueName: \"kubernetes.io/projected/290cb89a-f259-4224-9e20-ad509d9e8d27-kube-api-access-z9bpn\") on node \"crc\" DevicePath \"\"" Jan 27 19:02:24 crc kubenswrapper[4853]: I0127 19:02:24.860044 4853 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/290cb89a-f259-4224-9e20-ad509d9e8d27-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 19:02:24 crc kubenswrapper[4853]: I0127 19:02:24.961922 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jhsv4\" (UniqueName: \"kubernetes.io/projected/065ad886-043b-4744-8d1f-ba3895202feb-kube-api-access-jhsv4\") pod \"nova-api-0\" (UID: \"065ad886-043b-4744-8d1f-ba3895202feb\") " pod="openstack/nova-api-0" Jan 27 19:02:24 crc kubenswrapper[4853]: I0127 19:02:24.962306 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/065ad886-043b-4744-8d1f-ba3895202feb-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"065ad886-043b-4744-8d1f-ba3895202feb\") " pod="openstack/nova-api-0" Jan 27 19:02:24 crc kubenswrapper[4853]: I0127 19:02:24.962390 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/065ad886-043b-4744-8d1f-ba3895202feb-logs\") pod \"nova-api-0\" (UID: \"065ad886-043b-4744-8d1f-ba3895202feb\") " pod="openstack/nova-api-0" Jan 27 19:02:24 crc kubenswrapper[4853]: I0127 19:02:24.962519 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/065ad886-043b-4744-8d1f-ba3895202feb-config-data\") pod \"nova-api-0\" (UID: \"065ad886-043b-4744-8d1f-ba3895202feb\") " pod="openstack/nova-api-0" Jan 27 19:02:24 crc kubenswrapper[4853]: I0127 19:02:24.962797 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/065ad886-043b-4744-8d1f-ba3895202feb-logs\") pod \"nova-api-0\" (UID: \"065ad886-043b-4744-8d1f-ba3895202feb\") " pod="openstack/nova-api-0" Jan 27 19:02:24 crc kubenswrapper[4853]: I0127 19:02:24.966040 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/065ad886-043b-4744-8d1f-ba3895202feb-config-data\") pod \"nova-api-0\" (UID: \"065ad886-043b-4744-8d1f-ba3895202feb\") " pod="openstack/nova-api-0" Jan 27 19:02:24 crc kubenswrapper[4853]: I0127 19:02:24.966478 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/065ad886-043b-4744-8d1f-ba3895202feb-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"065ad886-043b-4744-8d1f-ba3895202feb\") " pod="openstack/nova-api-0" Jan 27 19:02:24 crc kubenswrapper[4853]: I0127 19:02:24.981404 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jhsv4\" (UniqueName: \"kubernetes.io/projected/065ad886-043b-4744-8d1f-ba3895202feb-kube-api-access-jhsv4\") pod \"nova-api-0\" (UID: \"065ad886-043b-4744-8d1f-ba3895202feb\") " pod="openstack/nova-api-0" Jan 27 19:02:25 crc kubenswrapper[4853]: I0127 19:02:25.068258 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 27 19:02:25 crc kubenswrapper[4853]: I0127 19:02:25.196316 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 27 19:02:25 crc kubenswrapper[4853]: I0127 19:02:25.196625 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 27 19:02:25 crc kubenswrapper[4853]: I0127 19:02:25.519766 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 27 19:02:25 crc kubenswrapper[4853]: I0127 19:02:25.628216 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"290cb89a-f259-4224-9e20-ad509d9e8d27","Type":"ContainerDied","Data":"2da868497d4aa13f64fe746859c5a014c6be7a1db21db247fce63a1e471dc868"} Jan 27 19:02:25 crc kubenswrapper[4853]: I0127 19:02:25.628584 4853 scope.go:117] "RemoveContainer" containerID="9d9812aa37084d224881626bbbf3116cecd2aac4a7ef06044967b21194ffdb8a" Jan 27 19:02:25 crc kubenswrapper[4853]: I0127 19:02:25.628343 4853 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 27 19:02:25 crc kubenswrapper[4853]: I0127 19:02:25.634130 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"065ad886-043b-4744-8d1f-ba3895202feb","Type":"ContainerStarted","Data":"68bed728633080746d1ed974c315990ca8d81d452ee4e7f03af318e53a963450"} Jan 27 19:02:25 crc kubenswrapper[4853]: I0127 19:02:25.676994 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 27 19:02:25 crc kubenswrapper[4853]: I0127 19:02:25.694996 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Jan 27 19:02:25 crc kubenswrapper[4853]: I0127 19:02:25.705837 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Jan 27 19:02:25 crc kubenswrapper[4853]: I0127 19:02:25.707359 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 27 19:02:25 crc kubenswrapper[4853]: I0127 19:02:25.712584 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Jan 27 19:02:25 crc kubenswrapper[4853]: I0127 19:02:25.715480 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 27 19:02:25 crc kubenswrapper[4853]: I0127 19:02:25.779635 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f1a81537-2e83-406a-9194-7a6362a7e874-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"f1a81537-2e83-406a-9194-7a6362a7e874\") " pod="openstack/nova-scheduler-0" Jan 27 19:02:25 crc kubenswrapper[4853]: I0127 19:02:25.779732 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f1a81537-2e83-406a-9194-7a6362a7e874-config-data\") pod \"nova-scheduler-0\" (UID: \"f1a81537-2e83-406a-9194-7a6362a7e874\") " pod="openstack/nova-scheduler-0" Jan 27 19:02:25 crc kubenswrapper[4853]: I0127 19:02:25.779858 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xlxp5\" (UniqueName: \"kubernetes.io/projected/f1a81537-2e83-406a-9194-7a6362a7e874-kube-api-access-xlxp5\") pod \"nova-scheduler-0\" (UID: \"f1a81537-2e83-406a-9194-7a6362a7e874\") " pod="openstack/nova-scheduler-0" Jan 27 19:02:25 crc kubenswrapper[4853]: I0127 19:02:25.881717 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f1a81537-2e83-406a-9194-7a6362a7e874-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"f1a81537-2e83-406a-9194-7a6362a7e874\") " pod="openstack/nova-scheduler-0" Jan 27 19:02:25 crc kubenswrapper[4853]: I0127 19:02:25.881776 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f1a81537-2e83-406a-9194-7a6362a7e874-config-data\") pod \"nova-scheduler-0\" (UID: \"f1a81537-2e83-406a-9194-7a6362a7e874\") " pod="openstack/nova-scheduler-0" Jan 27 19:02:25 crc kubenswrapper[4853]: I0127 19:02:25.881836 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xlxp5\" (UniqueName: \"kubernetes.io/projected/f1a81537-2e83-406a-9194-7a6362a7e874-kube-api-access-xlxp5\") pod \"nova-scheduler-0\" (UID: \"f1a81537-2e83-406a-9194-7a6362a7e874\") " 
pod="openstack/nova-scheduler-0" Jan 27 19:02:25 crc kubenswrapper[4853]: I0127 19:02:25.886106 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f1a81537-2e83-406a-9194-7a6362a7e874-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"f1a81537-2e83-406a-9194-7a6362a7e874\") " pod="openstack/nova-scheduler-0" Jan 27 19:02:25 crc kubenswrapper[4853]: I0127 19:02:25.886220 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f1a81537-2e83-406a-9194-7a6362a7e874-config-data\") pod \"nova-scheduler-0\" (UID: \"f1a81537-2e83-406a-9194-7a6362a7e874\") " pod="openstack/nova-scheduler-0" Jan 27 19:02:25 crc kubenswrapper[4853]: I0127 19:02:25.897956 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xlxp5\" (UniqueName: \"kubernetes.io/projected/f1a81537-2e83-406a-9194-7a6362a7e874-kube-api-access-xlxp5\") pod \"nova-scheduler-0\" (UID: \"f1a81537-2e83-406a-9194-7a6362a7e874\") " pod="openstack/nova-scheduler-0" Jan 27 19:02:26 crc kubenswrapper[4853]: I0127 19:02:26.034609 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 27 19:02:26 crc kubenswrapper[4853]: I0127 19:02:26.142069 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="290cb89a-f259-4224-9e20-ad509d9e8d27" path="/var/lib/kubelet/pods/290cb89a-f259-4224-9e20-ad509d9e8d27/volumes" Jan 27 19:02:26 crc kubenswrapper[4853]: I0127 19:02:26.147448 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4fc6adfd-212b-4248-8c09-c993acc3459c" path="/var/lib/kubelet/pods/4fc6adfd-212b-4248-8c09-c993acc3459c/volumes" Jan 27 19:02:26 crc kubenswrapper[4853]: I0127 19:02:26.476604 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 27 19:02:26 crc kubenswrapper[4853]: I0127 19:02:26.653647 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"065ad886-043b-4744-8d1f-ba3895202feb","Type":"ContainerStarted","Data":"a0e18195f1d7041cdc173226832842fa144fb064474f6232a7170bb2e4c960a6"} Jan 27 19:02:26 crc kubenswrapper[4853]: I0127 19:02:26.653693 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"065ad886-043b-4744-8d1f-ba3895202feb","Type":"ContainerStarted","Data":"15b5e21e33a42a4412a438ef8584f067f5e593313cf91b4df0c3b48dd07261de"} Jan 27 19:02:26 crc kubenswrapper[4853]: I0127 19:02:26.655140 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"f1a81537-2e83-406a-9194-7a6362a7e874","Type":"ContainerStarted","Data":"3062efe491f175e5b1b2e0419d827e5066d96341200e1eaab57c98cad4fba26c"} Jan 27 19:02:26 crc kubenswrapper[4853]: I0127 19:02:26.681101 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.681085455 podStartE2EDuration="2.681085455s" podCreationTimestamp="2026-01-27 19:02:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 19:02:26.677252364 +0000 UTC m=+1189.139795257" watchObservedRunningTime="2026-01-27 19:02:26.681085455 +0000 UTC m=+1189.143628348" Jan 27 19:02:27 crc kubenswrapper[4853]: I0127 19:02:27.681445 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" 
event={"ID":"f1a81537-2e83-406a-9194-7a6362a7e874","Type":"ContainerStarted","Data":"b1d92aa89f2e6fa8aa143e761119c0f4688d2e4e53cd08fca0796483700f42dc"} Jan 27 19:02:27 crc kubenswrapper[4853]: I0127 19:02:27.738405 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.738373133 podStartE2EDuration="2.738373133s" podCreationTimestamp="2026-01-27 19:02:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 19:02:27.723424842 +0000 UTC m=+1190.185967735" watchObservedRunningTime="2026-01-27 19:02:27.738373133 +0000 UTC m=+1190.200916016" Jan 27 19:02:28 crc kubenswrapper[4853]: I0127 19:02:28.454201 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 27 19:02:28 crc kubenswrapper[4853]: I0127 19:02:28.454708 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="eff8efe8-39b3-4aa6-af17-f40690d3d639" containerName="kube-state-metrics" containerID="cri-o://e3254e87d7b580952e660ddcc82a0b8104ea01ae613b0f53522b2154e14c56c9" gracePeriod=30 Jan 27 19:02:28 crc kubenswrapper[4853]: I0127 19:02:28.718706 4853 generic.go:334] "Generic (PLEG): container finished" podID="eff8efe8-39b3-4aa6-af17-f40690d3d639" containerID="e3254e87d7b580952e660ddcc82a0b8104ea01ae613b0f53522b2154e14c56c9" exitCode=2 Jan 27 19:02:28 crc kubenswrapper[4853]: I0127 19:02:28.718920 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"eff8efe8-39b3-4aa6-af17-f40690d3d639","Type":"ContainerDied","Data":"e3254e87d7b580952e660ddcc82a0b8104ea01ae613b0f53522b2154e14c56c9"} Jan 27 19:02:29 crc kubenswrapper[4853]: I0127 19:02:29.027892 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 27 19:02:29 crc kubenswrapper[4853]: I0127 19:02:29.149763 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vfj6p\" (UniqueName: \"kubernetes.io/projected/eff8efe8-39b3-4aa6-af17-f40690d3d639-kube-api-access-vfj6p\") pod \"eff8efe8-39b3-4aa6-af17-f40690d3d639\" (UID: \"eff8efe8-39b3-4aa6-af17-f40690d3d639\") " Jan 27 19:02:29 crc kubenswrapper[4853]: I0127 19:02:29.156671 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eff8efe8-39b3-4aa6-af17-f40690d3d639-kube-api-access-vfj6p" (OuterVolumeSpecName: "kube-api-access-vfj6p") pod "eff8efe8-39b3-4aa6-af17-f40690d3d639" (UID: "eff8efe8-39b3-4aa6-af17-f40690d3d639"). InnerVolumeSpecName "kube-api-access-vfj6p". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 19:02:29 crc kubenswrapper[4853]: I0127 19:02:29.252009 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vfj6p\" (UniqueName: \"kubernetes.io/projected/eff8efe8-39b3-4aa6-af17-f40690d3d639-kube-api-access-vfj6p\") on node \"crc\" DevicePath \"\"" Jan 27 19:02:29 crc kubenswrapper[4853]: I0127 19:02:29.728276 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"eff8efe8-39b3-4aa6-af17-f40690d3d639","Type":"ContainerDied","Data":"019bc8b3535a1722c74c7318bdb972b30ddb50384af1e501a7efaee1bd94ed13"} Jan 27 19:02:29 crc kubenswrapper[4853]: I0127 19:02:29.728637 4853 scope.go:117] "RemoveContainer" containerID="e3254e87d7b580952e660ddcc82a0b8104ea01ae613b0f53522b2154e14c56c9" Jan 27 19:02:29 crc kubenswrapper[4853]: I0127 19:02:29.728324 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 27 19:02:29 crc kubenswrapper[4853]: I0127 19:02:29.761640 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 27 19:02:29 crc kubenswrapper[4853]: I0127 19:02:29.774523 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 27 19:02:29 crc kubenswrapper[4853]: I0127 19:02:29.785330 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Jan 27 19:02:29 crc kubenswrapper[4853]: E0127 19:02:29.785789 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eff8efe8-39b3-4aa6-af17-f40690d3d639" containerName="kube-state-metrics" Jan 27 19:02:29 crc kubenswrapper[4853]: I0127 19:02:29.785807 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="eff8efe8-39b3-4aa6-af17-f40690d3d639" containerName="kube-state-metrics" Jan 27 19:02:29 crc kubenswrapper[4853]: I0127 19:02:29.785984 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="eff8efe8-39b3-4aa6-af17-f40690d3d639" containerName="kube-state-metrics" Jan 27 19:02:29 crc kubenswrapper[4853]: I0127 19:02:29.787520 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 27 19:02:29 crc kubenswrapper[4853]: I0127 19:02:29.789435 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"kube-state-metrics-tls-config" Jan 27 19:02:29 crc kubenswrapper[4853]: I0127 19:02:29.789552 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-kube-state-metrics-svc" Jan 27 19:02:29 crc kubenswrapper[4853]: I0127 19:02:29.792076 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 27 19:02:29 crc kubenswrapper[4853]: I0127 19:02:29.861621 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c6327a68-b665-423b-85ed-3b1a4d3ffaa2-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"c6327a68-b665-423b-85ed-3b1a4d3ffaa2\") " pod="openstack/kube-state-metrics-0" Jan 27 19:02:29 crc kubenswrapper[4853]: I0127 19:02:29.861670 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vvmbc\" (UniqueName: \"kubernetes.io/projected/c6327a68-b665-423b-85ed-3b1a4d3ffaa2-kube-api-access-vvmbc\") pod \"kube-state-metrics-0\" (UID: \"c6327a68-b665-423b-85ed-3b1a4d3ffaa2\") " pod="openstack/kube-state-metrics-0" Jan 27 19:02:29 crc kubenswrapper[4853]: I0127 19:02:29.861693 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/c6327a68-b665-423b-85ed-3b1a4d3ffaa2-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"c6327a68-b665-423b-85ed-3b1a4d3ffaa2\") " pod="openstack/kube-state-metrics-0" Jan 27 19:02:29 crc kubenswrapper[4853]: I0127 19:02:29.861729 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/c6327a68-b665-423b-85ed-3b1a4d3ffaa2-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"c6327a68-b665-423b-85ed-3b1a4d3ffaa2\") " pod="openstack/kube-state-metrics-0" Jan 27 19:02:29 crc kubenswrapper[4853]: I0127 19:02:29.963150 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c6327a68-b665-423b-85ed-3b1a4d3ffaa2-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"c6327a68-b665-423b-85ed-3b1a4d3ffaa2\") " pod="openstack/kube-state-metrics-0" Jan 27 19:02:29 crc kubenswrapper[4853]: I0127 19:02:29.963202 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vvmbc\" (UniqueName: \"kubernetes.io/projected/c6327a68-b665-423b-85ed-3b1a4d3ffaa2-kube-api-access-vvmbc\") pod \"kube-state-metrics-0\" (UID: \"c6327a68-b665-423b-85ed-3b1a4d3ffaa2\") " pod="openstack/kube-state-metrics-0" Jan 27 19:02:29 crc kubenswrapper[4853]: I0127 19:02:29.963228 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/c6327a68-b665-423b-85ed-3b1a4d3ffaa2-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"c6327a68-b665-423b-85ed-3b1a4d3ffaa2\") " pod="openstack/kube-state-metrics-0" Jan 27 19:02:29 crc kubenswrapper[4853]: I0127 19:02:29.963273 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-config\" 
(UniqueName: \"kubernetes.io/secret/c6327a68-b665-423b-85ed-3b1a4d3ffaa2-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"c6327a68-b665-423b-85ed-3b1a4d3ffaa2\") " pod="openstack/kube-state-metrics-0" Jan 27 19:02:29 crc kubenswrapper[4853]: I0127 19:02:29.968056 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c6327a68-b665-423b-85ed-3b1a4d3ffaa2-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"c6327a68-b665-423b-85ed-3b1a4d3ffaa2\") " pod="openstack/kube-state-metrics-0" Jan 27 19:02:29 crc kubenswrapper[4853]: I0127 19:02:29.968351 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/c6327a68-b665-423b-85ed-3b1a4d3ffaa2-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"c6327a68-b665-423b-85ed-3b1a4d3ffaa2\") " pod="openstack/kube-state-metrics-0" Jan 27 19:02:29 crc kubenswrapper[4853]: I0127 19:02:29.980892 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/c6327a68-b665-423b-85ed-3b1a4d3ffaa2-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"c6327a68-b665-423b-85ed-3b1a4d3ffaa2\") " pod="openstack/kube-state-metrics-0" Jan 27 19:02:29 crc kubenswrapper[4853]: I0127 19:02:29.994934 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vvmbc\" (UniqueName: \"kubernetes.io/projected/c6327a68-b665-423b-85ed-3b1a4d3ffaa2-kube-api-access-vvmbc\") pod \"kube-state-metrics-0\" (UID: \"c6327a68-b665-423b-85ed-3b1a4d3ffaa2\") " pod="openstack/kube-state-metrics-0" Jan 27 19:02:30 crc kubenswrapper[4853]: I0127 19:02:30.103076 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 27 19:02:30 crc kubenswrapper[4853]: I0127 19:02:30.131280 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eff8efe8-39b3-4aa6-af17-f40690d3d639" path="/var/lib/kubelet/pods/eff8efe8-39b3-4aa6-af17-f40690d3d639/volumes" Jan 27 19:02:30 crc kubenswrapper[4853]: I0127 19:02:30.196790 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 27 19:02:30 crc kubenswrapper[4853]: I0127 19:02:30.196845 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 27 19:02:30 crc kubenswrapper[4853]: I0127 19:02:30.557056 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 27 19:02:30 crc kubenswrapper[4853]: I0127 19:02:30.557743 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="65b5c226-54b4-4d6c-a8fa-80cd157faf69" containerName="ceilometer-central-agent" containerID="cri-o://c919b11cc6629b38bf38a52fd0c91c5843bbf741b75ab13e9784c6e3ac444583" gracePeriod=30 Jan 27 19:02:30 crc kubenswrapper[4853]: I0127 19:02:30.557807 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="65b5c226-54b4-4d6c-a8fa-80cd157faf69" containerName="ceilometer-notification-agent" containerID="cri-o://3190f5c36cdfe9dd1569e22942c6238eee369c5f38fc7eef00747c380ae0ccc4" gracePeriod=30 Jan 27 19:02:30 crc kubenswrapper[4853]: I0127 19:02:30.557882 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="65b5c226-54b4-4d6c-a8fa-80cd157faf69" containerName="sg-core" containerID="cri-o://b8e9c370edb9950021ccedd6bebe0338d3d3561c1d9285d140df4dffa0f84b10" gracePeriod=30 Jan 27 19:02:30 crc kubenswrapper[4853]: I0127 19:02:30.557754 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="65b5c226-54b4-4d6c-a8fa-80cd157faf69" containerName="proxy-httpd" containerID="cri-o://194d31d04ddaa32ae0b25a8738de7776f0be0a85c7df225f305db8b62639b188" gracePeriod=30 Jan 27 19:02:30 crc kubenswrapper[4853]: I0127 19:02:30.601358 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 27 19:02:30 crc kubenswrapper[4853]: W0127 19:02:30.612331 4853 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc6327a68_b665_423b_85ed_3b1a4d3ffaa2.slice/crio-ab4a3d9ba779deeffc456881110934f1983ff455203039997913399e530acaf2 WatchSource:0}: Error finding container ab4a3d9ba779deeffc456881110934f1983ff455203039997913399e530acaf2: Status 404 returned error can't find the container with id ab4a3d9ba779deeffc456881110934f1983ff455203039997913399e530acaf2 Jan 27 19:02:30 crc kubenswrapper[4853]: I0127 19:02:30.737938 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"c6327a68-b665-423b-85ed-3b1a4d3ffaa2","Type":"ContainerStarted","Data":"ab4a3d9ba779deeffc456881110934f1983ff455203039997913399e530acaf2"} Jan 27 19:02:30 crc kubenswrapper[4853]: I0127 19:02:30.742552 4853 generic.go:334] "Generic (PLEG): container finished" podID="65b5c226-54b4-4d6c-a8fa-80cd157faf69" containerID="194d31d04ddaa32ae0b25a8738de7776f0be0a85c7df225f305db8b62639b188" exitCode=0 Jan 27 19:02:30 crc kubenswrapper[4853]: I0127 19:02:30.742590 4853 generic.go:334] "Generic 
(PLEG): container finished" podID="65b5c226-54b4-4d6c-a8fa-80cd157faf69" containerID="b8e9c370edb9950021ccedd6bebe0338d3d3561c1d9285d140df4dffa0f84b10" exitCode=2 Jan 27 19:02:30 crc kubenswrapper[4853]: I0127 19:02:30.742609 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"65b5c226-54b4-4d6c-a8fa-80cd157faf69","Type":"ContainerDied","Data":"194d31d04ddaa32ae0b25a8738de7776f0be0a85c7df225f305db8b62639b188"} Jan 27 19:02:30 crc kubenswrapper[4853]: I0127 19:02:30.742638 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"65b5c226-54b4-4d6c-a8fa-80cd157faf69","Type":"ContainerDied","Data":"b8e9c370edb9950021ccedd6bebe0338d3d3561c1d9285d140df4dffa0f84b10"} Jan 27 19:02:31 crc kubenswrapper[4853]: I0127 19:02:31.034805 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Jan 27 19:02:31 crc kubenswrapper[4853]: I0127 19:02:31.207392 4853 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="4c6df56d-109f-4ab4-bb18-35b70eb1beaf" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.198:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 27 19:02:31 crc kubenswrapper[4853]: I0127 19:02:31.207442 4853 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="4c6df56d-109f-4ab4-bb18-35b70eb1beaf" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.198:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 27 19:02:31 crc kubenswrapper[4853]: I0127 19:02:31.759081 4853 generic.go:334] "Generic (PLEG): container finished" podID="65b5c226-54b4-4d6c-a8fa-80cd157faf69" containerID="c919b11cc6629b38bf38a52fd0c91c5843bbf741b75ab13e9784c6e3ac444583" exitCode=0 Jan 27 19:02:31 crc kubenswrapper[4853]: I0127 19:02:31.759164 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"65b5c226-54b4-4d6c-a8fa-80cd157faf69","Type":"ContainerDied","Data":"c919b11cc6629b38bf38a52fd0c91c5843bbf741b75ab13e9784c6e3ac444583"} Jan 27 19:02:31 crc kubenswrapper[4853]: I0127 19:02:31.764007 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"c6327a68-b665-423b-85ed-3b1a4d3ffaa2","Type":"ContainerStarted","Data":"b831a149e6b4838f57fc8967b740243891bbb5c9e01511993d78d2020a131851"} Jan 27 19:02:31 crc kubenswrapper[4853]: I0127 19:02:31.764444 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Jan 27 19:02:31 crc kubenswrapper[4853]: I0127 19:02:31.993371 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0" Jan 27 19:02:32 crc kubenswrapper[4853]: I0127 19:02:32.022339 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=2.619612344 podStartE2EDuration="3.022108339s" podCreationTimestamp="2026-01-27 19:02:29 +0000 UTC" firstStartedPulling="2026-01-27 19:02:30.615038394 +0000 UTC m=+1193.077581267" lastFinishedPulling="2026-01-27 19:02:31.017534379 +0000 UTC m=+1193.480077262" observedRunningTime="2026-01-27 19:02:31.788623013 +0000 UTC m=+1194.251165896" watchObservedRunningTime="2026-01-27 19:02:32.022108339 +0000 UTC m=+1194.484651222" Jan 27 19:02:33 crc kubenswrapper[4853]: I0127 
19:02:33.788594 4853 generic.go:334] "Generic (PLEG): container finished" podID="65b5c226-54b4-4d6c-a8fa-80cd157faf69" containerID="3190f5c36cdfe9dd1569e22942c6238eee369c5f38fc7eef00747c380ae0ccc4" exitCode=0 Jan 27 19:02:33 crc kubenswrapper[4853]: I0127 19:02:33.788876 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"65b5c226-54b4-4d6c-a8fa-80cd157faf69","Type":"ContainerDied","Data":"3190f5c36cdfe9dd1569e22942c6238eee369c5f38fc7eef00747c380ae0ccc4"} Jan 27 19:02:34 crc kubenswrapper[4853]: I0127 19:02:34.008100 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 27 19:02:34 crc kubenswrapper[4853]: I0127 19:02:34.141555 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/65b5c226-54b4-4d6c-a8fa-80cd157faf69-config-data\") pod \"65b5c226-54b4-4d6c-a8fa-80cd157faf69\" (UID: \"65b5c226-54b4-4d6c-a8fa-80cd157faf69\") " Jan 27 19:02:34 crc kubenswrapper[4853]: I0127 19:02:34.141658 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/65b5c226-54b4-4d6c-a8fa-80cd157faf69-log-httpd\") pod \"65b5c226-54b4-4d6c-a8fa-80cd157faf69\" (UID: \"65b5c226-54b4-4d6c-a8fa-80cd157faf69\") " Jan 27 19:02:34 crc kubenswrapper[4853]: I0127 19:02:34.141703 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/65b5c226-54b4-4d6c-a8fa-80cd157faf69-scripts\") pod \"65b5c226-54b4-4d6c-a8fa-80cd157faf69\" (UID: \"65b5c226-54b4-4d6c-a8fa-80cd157faf69\") " Jan 27 19:02:34 crc kubenswrapper[4853]: I0127 19:02:34.141760 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/65b5c226-54b4-4d6c-a8fa-80cd157faf69-combined-ca-bundle\") pod \"65b5c226-54b4-4d6c-a8fa-80cd157faf69\" (UID: \"65b5c226-54b4-4d6c-a8fa-80cd157faf69\") " Jan 27 19:02:34 crc kubenswrapper[4853]: I0127 19:02:34.141834 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/65b5c226-54b4-4d6c-a8fa-80cd157faf69-run-httpd\") pod \"65b5c226-54b4-4d6c-a8fa-80cd157faf69\" (UID: \"65b5c226-54b4-4d6c-a8fa-80cd157faf69\") " Jan 27 19:02:34 crc kubenswrapper[4853]: I0127 19:02:34.141912 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cbgr8\" (UniqueName: \"kubernetes.io/projected/65b5c226-54b4-4d6c-a8fa-80cd157faf69-kube-api-access-cbgr8\") pod \"65b5c226-54b4-4d6c-a8fa-80cd157faf69\" (UID: \"65b5c226-54b4-4d6c-a8fa-80cd157faf69\") " Jan 27 19:02:34 crc kubenswrapper[4853]: I0127 19:02:34.142035 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/65b5c226-54b4-4d6c-a8fa-80cd157faf69-sg-core-conf-yaml\") pod \"65b5c226-54b4-4d6c-a8fa-80cd157faf69\" (UID: \"65b5c226-54b4-4d6c-a8fa-80cd157faf69\") " Jan 27 19:02:34 crc kubenswrapper[4853]: I0127 19:02:34.142460 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/65b5c226-54b4-4d6c-a8fa-80cd157faf69-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "65b5c226-54b4-4d6c-a8fa-80cd157faf69" (UID: "65b5c226-54b4-4d6c-a8fa-80cd157faf69"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 19:02:34 crc kubenswrapper[4853]: I0127 19:02:34.143015 4853 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/65b5c226-54b4-4d6c-a8fa-80cd157faf69-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 27 19:02:34 crc kubenswrapper[4853]: I0127 19:02:34.143542 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/65b5c226-54b4-4d6c-a8fa-80cd157faf69-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "65b5c226-54b4-4d6c-a8fa-80cd157faf69" (UID: "65b5c226-54b4-4d6c-a8fa-80cd157faf69"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 19:02:34 crc kubenswrapper[4853]: I0127 19:02:34.148869 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/65b5c226-54b4-4d6c-a8fa-80cd157faf69-kube-api-access-cbgr8" (OuterVolumeSpecName: "kube-api-access-cbgr8") pod "65b5c226-54b4-4d6c-a8fa-80cd157faf69" (UID: "65b5c226-54b4-4d6c-a8fa-80cd157faf69"). InnerVolumeSpecName "kube-api-access-cbgr8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 19:02:34 crc kubenswrapper[4853]: I0127 19:02:34.157498 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/65b5c226-54b4-4d6c-a8fa-80cd157faf69-scripts" (OuterVolumeSpecName: "scripts") pod "65b5c226-54b4-4d6c-a8fa-80cd157faf69" (UID: "65b5c226-54b4-4d6c-a8fa-80cd157faf69"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 19:02:34 crc kubenswrapper[4853]: I0127 19:02:34.180185 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/65b5c226-54b4-4d6c-a8fa-80cd157faf69-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "65b5c226-54b4-4d6c-a8fa-80cd157faf69" (UID: "65b5c226-54b4-4d6c-a8fa-80cd157faf69"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 19:02:34 crc kubenswrapper[4853]: I0127 19:02:34.231634 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/65b5c226-54b4-4d6c-a8fa-80cd157faf69-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "65b5c226-54b4-4d6c-a8fa-80cd157faf69" (UID: "65b5c226-54b4-4d6c-a8fa-80cd157faf69"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 19:02:34 crc kubenswrapper[4853]: I0127 19:02:34.246272 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cbgr8\" (UniqueName: \"kubernetes.io/projected/65b5c226-54b4-4d6c-a8fa-80cd157faf69-kube-api-access-cbgr8\") on node \"crc\" DevicePath \"\"" Jan 27 19:02:34 crc kubenswrapper[4853]: I0127 19:02:34.246323 4853 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/65b5c226-54b4-4d6c-a8fa-80cd157faf69-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 27 19:02:34 crc kubenswrapper[4853]: I0127 19:02:34.246339 4853 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/65b5c226-54b4-4d6c-a8fa-80cd157faf69-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 19:02:34 crc kubenswrapper[4853]: I0127 19:02:34.246354 4853 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/65b5c226-54b4-4d6c-a8fa-80cd157faf69-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 19:02:34 crc kubenswrapper[4853]: I0127 19:02:34.246367 4853 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/65b5c226-54b4-4d6c-a8fa-80cd157faf69-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 27 19:02:34 crc kubenswrapper[4853]: I0127 19:02:34.275032 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/65b5c226-54b4-4d6c-a8fa-80cd157faf69-config-data" (OuterVolumeSpecName: "config-data") pod "65b5c226-54b4-4d6c-a8fa-80cd157faf69" (UID: "65b5c226-54b4-4d6c-a8fa-80cd157faf69"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 19:02:34 crc kubenswrapper[4853]: I0127 19:02:34.348793 4853 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/65b5c226-54b4-4d6c-a8fa-80cd157faf69-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 19:02:34 crc kubenswrapper[4853]: I0127 19:02:34.805299 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"65b5c226-54b4-4d6c-a8fa-80cd157faf69","Type":"ContainerDied","Data":"2dba3f88018de13c4a3cbafd4cd0fba5e8830745645feed8da98833ab6e892da"} Jan 27 19:02:34 crc kubenswrapper[4853]: I0127 19:02:34.805370 4853 scope.go:117] "RemoveContainer" containerID="194d31d04ddaa32ae0b25a8738de7776f0be0a85c7df225f305db8b62639b188" Jan 27 19:02:34 crc kubenswrapper[4853]: I0127 19:02:34.805409 4853 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 27 19:02:34 crc kubenswrapper[4853]: I0127 19:02:34.855914 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 27 19:02:34 crc kubenswrapper[4853]: I0127 19:02:34.858514 4853 scope.go:117] "RemoveContainer" containerID="b8e9c370edb9950021ccedd6bebe0338d3d3561c1d9285d140df4dffa0f84b10" Jan 27 19:02:34 crc kubenswrapper[4853]: I0127 19:02:34.867509 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 27 19:02:34 crc kubenswrapper[4853]: I0127 19:02:34.893402 4853 scope.go:117] "RemoveContainer" containerID="3190f5c36cdfe9dd1569e22942c6238eee369c5f38fc7eef00747c380ae0ccc4" Jan 27 19:02:34 crc kubenswrapper[4853]: I0127 19:02:34.898171 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 27 19:02:34 crc kubenswrapper[4853]: E0127 19:02:34.898611 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="65b5c226-54b4-4d6c-a8fa-80cd157faf69" containerName="ceilometer-central-agent" Jan 27 19:02:34 crc kubenswrapper[4853]: I0127 19:02:34.898634 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="65b5c226-54b4-4d6c-a8fa-80cd157faf69" containerName="ceilometer-central-agent" Jan 27 19:02:34 crc kubenswrapper[4853]: E0127 19:02:34.898645 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="65b5c226-54b4-4d6c-a8fa-80cd157faf69" containerName="sg-core" Jan 27 19:02:34 crc kubenswrapper[4853]: I0127 19:02:34.898653 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="65b5c226-54b4-4d6c-a8fa-80cd157faf69" containerName="sg-core" Jan 27 19:02:34 crc kubenswrapper[4853]: E0127 19:02:34.898661 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="65b5c226-54b4-4d6c-a8fa-80cd157faf69" containerName="ceilometer-notification-agent" Jan 27 19:02:34 crc kubenswrapper[4853]: I0127 19:02:34.898667 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="65b5c226-54b4-4d6c-a8fa-80cd157faf69" containerName="ceilometer-notification-agent" Jan 27 19:02:34 crc kubenswrapper[4853]: E0127 19:02:34.898691 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="65b5c226-54b4-4d6c-a8fa-80cd157faf69" containerName="proxy-httpd" Jan 27 19:02:34 crc kubenswrapper[4853]: I0127 19:02:34.898698 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="65b5c226-54b4-4d6c-a8fa-80cd157faf69" containerName="proxy-httpd" Jan 27 19:02:34 crc kubenswrapper[4853]: I0127 19:02:34.898862 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="65b5c226-54b4-4d6c-a8fa-80cd157faf69" containerName="ceilometer-central-agent" Jan 27 19:02:34 crc kubenswrapper[4853]: I0127 19:02:34.898876 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="65b5c226-54b4-4d6c-a8fa-80cd157faf69" containerName="ceilometer-notification-agent" Jan 27 19:02:34 crc kubenswrapper[4853]: I0127 19:02:34.898891 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="65b5c226-54b4-4d6c-a8fa-80cd157faf69" containerName="sg-core" Jan 27 19:02:34 crc kubenswrapper[4853]: I0127 19:02:34.898900 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="65b5c226-54b4-4d6c-a8fa-80cd157faf69" containerName="proxy-httpd" Jan 27 19:02:34 crc kubenswrapper[4853]: I0127 19:02:34.900603 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 27 19:02:34 crc kubenswrapper[4853]: I0127 19:02:34.904504 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 27 19:02:34 crc kubenswrapper[4853]: I0127 19:02:34.904944 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Jan 27 19:02:34 crc kubenswrapper[4853]: I0127 19:02:34.905046 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 27 19:02:34 crc kubenswrapper[4853]: I0127 19:02:34.907909 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 27 19:02:34 crc kubenswrapper[4853]: I0127 19:02:34.929992 4853 scope.go:117] "RemoveContainer" containerID="c919b11cc6629b38bf38a52fd0c91c5843bbf741b75ab13e9784c6e3ac444583" Jan 27 19:02:34 crc kubenswrapper[4853]: I0127 19:02:34.960653 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/9827c9fd-b99a-4921-9680-9d34d2d1cd00-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"9827c9fd-b99a-4921-9680-9d34d2d1cd00\") " pod="openstack/ceilometer-0" Jan 27 19:02:34 crc kubenswrapper[4853]: I0127 19:02:34.961022 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pphn2\" (UniqueName: \"kubernetes.io/projected/9827c9fd-b99a-4921-9680-9d34d2d1cd00-kube-api-access-pphn2\") pod \"ceilometer-0\" (UID: \"9827c9fd-b99a-4921-9680-9d34d2d1cd00\") " pod="openstack/ceilometer-0" Jan 27 19:02:34 crc kubenswrapper[4853]: I0127 19:02:34.961196 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9827c9fd-b99a-4921-9680-9d34d2d1cd00-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"9827c9fd-b99a-4921-9680-9d34d2d1cd00\") " pod="openstack/ceilometer-0" Jan 27 19:02:34 crc kubenswrapper[4853]: I0127 19:02:34.961334 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9827c9fd-b99a-4921-9680-9d34d2d1cd00-config-data\") pod \"ceilometer-0\" (UID: \"9827c9fd-b99a-4921-9680-9d34d2d1cd00\") " pod="openstack/ceilometer-0" Jan 27 19:02:34 crc kubenswrapper[4853]: I0127 19:02:34.961493 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9827c9fd-b99a-4921-9680-9d34d2d1cd00-scripts\") pod \"ceilometer-0\" (UID: \"9827c9fd-b99a-4921-9680-9d34d2d1cd00\") " pod="openstack/ceilometer-0" Jan 27 19:02:34 crc kubenswrapper[4853]: I0127 19:02:34.961616 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9827c9fd-b99a-4921-9680-9d34d2d1cd00-log-httpd\") pod \"ceilometer-0\" (UID: \"9827c9fd-b99a-4921-9680-9d34d2d1cd00\") " pod="openstack/ceilometer-0" Jan 27 19:02:34 crc kubenswrapper[4853]: I0127 19:02:34.961724 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9827c9fd-b99a-4921-9680-9d34d2d1cd00-run-httpd\") pod \"ceilometer-0\" (UID: \"9827c9fd-b99a-4921-9680-9d34d2d1cd00\") " pod="openstack/ceilometer-0" Jan 27 19:02:34 crc kubenswrapper[4853]: I0127 
19:02:34.961823 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9827c9fd-b99a-4921-9680-9d34d2d1cd00-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"9827c9fd-b99a-4921-9680-9d34d2d1cd00\") " pod="openstack/ceilometer-0" Jan 27 19:02:35 crc kubenswrapper[4853]: I0127 19:02:35.064208 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9827c9fd-b99a-4921-9680-9d34d2d1cd00-config-data\") pod \"ceilometer-0\" (UID: \"9827c9fd-b99a-4921-9680-9d34d2d1cd00\") " pod="openstack/ceilometer-0" Jan 27 19:02:35 crc kubenswrapper[4853]: I0127 19:02:35.064303 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9827c9fd-b99a-4921-9680-9d34d2d1cd00-scripts\") pod \"ceilometer-0\" (UID: \"9827c9fd-b99a-4921-9680-9d34d2d1cd00\") " pod="openstack/ceilometer-0" Jan 27 19:02:35 crc kubenswrapper[4853]: I0127 19:02:35.064330 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9827c9fd-b99a-4921-9680-9d34d2d1cd00-log-httpd\") pod \"ceilometer-0\" (UID: \"9827c9fd-b99a-4921-9680-9d34d2d1cd00\") " pod="openstack/ceilometer-0" Jan 27 19:02:35 crc kubenswrapper[4853]: I0127 19:02:35.064365 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9827c9fd-b99a-4921-9680-9d34d2d1cd00-run-httpd\") pod \"ceilometer-0\" (UID: \"9827c9fd-b99a-4921-9680-9d34d2d1cd00\") " pod="openstack/ceilometer-0" Jan 27 19:02:35 crc kubenswrapper[4853]: I0127 19:02:35.064388 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9827c9fd-b99a-4921-9680-9d34d2d1cd00-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"9827c9fd-b99a-4921-9680-9d34d2d1cd00\") " pod="openstack/ceilometer-0" Jan 27 19:02:35 crc kubenswrapper[4853]: I0127 19:02:35.064435 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pphn2\" (UniqueName: \"kubernetes.io/projected/9827c9fd-b99a-4921-9680-9d34d2d1cd00-kube-api-access-pphn2\") pod \"ceilometer-0\" (UID: \"9827c9fd-b99a-4921-9680-9d34d2d1cd00\") " pod="openstack/ceilometer-0" Jan 27 19:02:35 crc kubenswrapper[4853]: I0127 19:02:35.064452 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/9827c9fd-b99a-4921-9680-9d34d2d1cd00-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"9827c9fd-b99a-4921-9680-9d34d2d1cd00\") " pod="openstack/ceilometer-0" Jan 27 19:02:35 crc kubenswrapper[4853]: I0127 19:02:35.064498 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9827c9fd-b99a-4921-9680-9d34d2d1cd00-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"9827c9fd-b99a-4921-9680-9d34d2d1cd00\") " pod="openstack/ceilometer-0" Jan 27 19:02:35 crc kubenswrapper[4853]: I0127 19:02:35.066036 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9827c9fd-b99a-4921-9680-9d34d2d1cd00-run-httpd\") pod \"ceilometer-0\" (UID: \"9827c9fd-b99a-4921-9680-9d34d2d1cd00\") " pod="openstack/ceilometer-0" Jan 27 19:02:35 crc kubenswrapper[4853]: 
I0127 19:02:35.066406 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9827c9fd-b99a-4921-9680-9d34d2d1cd00-log-httpd\") pod \"ceilometer-0\" (UID: \"9827c9fd-b99a-4921-9680-9d34d2d1cd00\") " pod="openstack/ceilometer-0" Jan 27 19:02:35 crc kubenswrapper[4853]: I0127 19:02:35.068729 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9827c9fd-b99a-4921-9680-9d34d2d1cd00-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"9827c9fd-b99a-4921-9680-9d34d2d1cd00\") " pod="openstack/ceilometer-0" Jan 27 19:02:35 crc kubenswrapper[4853]: I0127 19:02:35.069037 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/9827c9fd-b99a-4921-9680-9d34d2d1cd00-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"9827c9fd-b99a-4921-9680-9d34d2d1cd00\") " pod="openstack/ceilometer-0" Jan 27 19:02:35 crc kubenswrapper[4853]: I0127 19:02:35.069093 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9827c9fd-b99a-4921-9680-9d34d2d1cd00-scripts\") pod \"ceilometer-0\" (UID: \"9827c9fd-b99a-4921-9680-9d34d2d1cd00\") " pod="openstack/ceilometer-0" Jan 27 19:02:35 crc kubenswrapper[4853]: I0127 19:02:35.069256 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 27 19:02:35 crc kubenswrapper[4853]: I0127 19:02:35.069299 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 27 19:02:35 crc kubenswrapper[4853]: I0127 19:02:35.072776 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9827c9fd-b99a-4921-9680-9d34d2d1cd00-config-data\") pod \"ceilometer-0\" (UID: \"9827c9fd-b99a-4921-9680-9d34d2d1cd00\") " pod="openstack/ceilometer-0" Jan 27 19:02:35 crc kubenswrapper[4853]: I0127 19:02:35.082181 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9827c9fd-b99a-4921-9680-9d34d2d1cd00-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"9827c9fd-b99a-4921-9680-9d34d2d1cd00\") " pod="openstack/ceilometer-0" Jan 27 19:02:35 crc kubenswrapper[4853]: I0127 19:02:35.089944 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pphn2\" (UniqueName: \"kubernetes.io/projected/9827c9fd-b99a-4921-9680-9d34d2d1cd00-kube-api-access-pphn2\") pod \"ceilometer-0\" (UID: \"9827c9fd-b99a-4921-9680-9d34d2d1cd00\") " pod="openstack/ceilometer-0" Jan 27 19:02:35 crc kubenswrapper[4853]: I0127 19:02:35.236901 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 27 19:02:35 crc kubenswrapper[4853]: I0127 19:02:35.766049 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 27 19:02:35 crc kubenswrapper[4853]: I0127 19:02:35.819020 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9827c9fd-b99a-4921-9680-9d34d2d1cd00","Type":"ContainerStarted","Data":"8286cbe692dd93a1b0bf9ba4dc4af0ae3dfc08de58fdaa78b10bf987d9655bdc"} Jan 27 19:02:36 crc kubenswrapper[4853]: I0127 19:02:36.034936 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Jan 27 19:02:36 crc kubenswrapper[4853]: I0127 19:02:36.066748 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Jan 27 19:02:36 crc kubenswrapper[4853]: I0127 19:02:36.123517 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="65b5c226-54b4-4d6c-a8fa-80cd157faf69" path="/var/lib/kubelet/pods/65b5c226-54b4-4d6c-a8fa-80cd157faf69/volumes" Jan 27 19:02:36 crc kubenswrapper[4853]: I0127 19:02:36.151332 4853 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="065ad886-043b-4744-8d1f-ba3895202feb" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.200:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 19:02:36 crc kubenswrapper[4853]: I0127 19:02:36.151657 4853 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="065ad886-043b-4744-8d1f-ba3895202feb" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.200:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 19:02:36 crc kubenswrapper[4853]: I0127 19:02:36.831836 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9827c9fd-b99a-4921-9680-9d34d2d1cd00","Type":"ContainerStarted","Data":"e8a275a4ad0eb7c9886fae3010e96c0fab11926f8af3a276cc2e63014318e505"} Jan 27 19:02:36 crc kubenswrapper[4853]: I0127 19:02:36.880378 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Jan 27 19:02:37 crc kubenswrapper[4853]: I0127 19:02:37.842087 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9827c9fd-b99a-4921-9680-9d34d2d1cd00","Type":"ContainerStarted","Data":"1b3cbd52f811282d0fd7afa0af60ba1c4d5f9e925ae116fcd608add0467516bd"} Jan 27 19:02:38 crc kubenswrapper[4853]: I0127 19:02:38.853045 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9827c9fd-b99a-4921-9680-9d34d2d1cd00","Type":"ContainerStarted","Data":"31ee881b974de2fd68078c6099f6c7d7d308596e0ddb00c35ccb1b19afd2ddb5"} Jan 27 19:02:40 crc kubenswrapper[4853]: I0127 19:02:40.131646 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Jan 27 19:02:40 crc kubenswrapper[4853]: I0127 19:02:40.214959 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 27 19:02:40 crc kubenswrapper[4853]: I0127 19:02:40.234101 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 27 19:02:40 crc kubenswrapper[4853]: I0127 19:02:40.238509 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack/nova-metadata-0" Jan 27 19:02:40 crc kubenswrapper[4853]: I0127 19:02:40.873776 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9827c9fd-b99a-4921-9680-9d34d2d1cd00","Type":"ContainerStarted","Data":"16a01a5cde5230cf61baabf5562f2f8b761ca80f16ea6827bb8a6d9efc6f2997"} Jan 27 19:02:40 crc kubenswrapper[4853]: I0127 19:02:40.881678 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 27 19:02:40 crc kubenswrapper[4853]: I0127 19:02:40.910058 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=3.050459438 podStartE2EDuration="6.910029017s" podCreationTimestamp="2026-01-27 19:02:34 +0000 UTC" firstStartedPulling="2026-01-27 19:02:35.798195172 +0000 UTC m=+1198.260738055" lastFinishedPulling="2026-01-27 19:02:39.657764751 +0000 UTC m=+1202.120307634" observedRunningTime="2026-01-27 19:02:40.89245059 +0000 UTC m=+1203.354993513" watchObservedRunningTime="2026-01-27 19:02:40.910029017 +0000 UTC m=+1203.372571910" Jan 27 19:02:41 crc kubenswrapper[4853]: I0127 19:02:41.882689 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 27 19:02:42 crc kubenswrapper[4853]: E0127 19:02:42.596998 4853 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9b2db297_94b3_4341_b706_b8f47a596ff9.slice/crio-conmon-d55e691a9d4b50ee4d8ff377e74e9632490ff1000b821a8626b34afdfe60482c.scope\": RecentStats: unable to find data in memory cache]" Jan 27 19:02:42 crc kubenswrapper[4853]: I0127 19:02:42.892084 4853 generic.go:334] "Generic (PLEG): container finished" podID="9b2db297-94b3-4341-b706-b8f47a596ff9" containerID="d55e691a9d4b50ee4d8ff377e74e9632490ff1000b821a8626b34afdfe60482c" exitCode=137 Jan 27 19:02:42 crc kubenswrapper[4853]: I0127 19:02:42.892194 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"9b2db297-94b3-4341-b706-b8f47a596ff9","Type":"ContainerDied","Data":"d55e691a9d4b50ee4d8ff377e74e9632490ff1000b821a8626b34afdfe60482c"} Jan 27 19:02:43 crc kubenswrapper[4853]: I0127 19:02:43.245534 4853 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 27 19:02:43 crc kubenswrapper[4853]: I0127 19:02:43.366396 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wtc4n\" (UniqueName: \"kubernetes.io/projected/9b2db297-94b3-4341-b706-b8f47a596ff9-kube-api-access-wtc4n\") pod \"9b2db297-94b3-4341-b706-b8f47a596ff9\" (UID: \"9b2db297-94b3-4341-b706-b8f47a596ff9\") " Jan 27 19:02:43 crc kubenswrapper[4853]: I0127 19:02:43.366481 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9b2db297-94b3-4341-b706-b8f47a596ff9-combined-ca-bundle\") pod \"9b2db297-94b3-4341-b706-b8f47a596ff9\" (UID: \"9b2db297-94b3-4341-b706-b8f47a596ff9\") " Jan 27 19:02:43 crc kubenswrapper[4853]: I0127 19:02:43.366541 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9b2db297-94b3-4341-b706-b8f47a596ff9-config-data\") pod \"9b2db297-94b3-4341-b706-b8f47a596ff9\" (UID: \"9b2db297-94b3-4341-b706-b8f47a596ff9\") " Jan 27 19:02:43 crc kubenswrapper[4853]: I0127 19:02:43.388344 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9b2db297-94b3-4341-b706-b8f47a596ff9-kube-api-access-wtc4n" (OuterVolumeSpecName: "kube-api-access-wtc4n") pod "9b2db297-94b3-4341-b706-b8f47a596ff9" (UID: "9b2db297-94b3-4341-b706-b8f47a596ff9"). InnerVolumeSpecName "kube-api-access-wtc4n". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 19:02:43 crc kubenswrapper[4853]: I0127 19:02:43.400427 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9b2db297-94b3-4341-b706-b8f47a596ff9-config-data" (OuterVolumeSpecName: "config-data") pod "9b2db297-94b3-4341-b706-b8f47a596ff9" (UID: "9b2db297-94b3-4341-b706-b8f47a596ff9"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 19:02:43 crc kubenswrapper[4853]: I0127 19:02:43.405862 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9b2db297-94b3-4341-b706-b8f47a596ff9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9b2db297-94b3-4341-b706-b8f47a596ff9" (UID: "9b2db297-94b3-4341-b706-b8f47a596ff9"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 19:02:43 crc kubenswrapper[4853]: I0127 19:02:43.468450 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wtc4n\" (UniqueName: \"kubernetes.io/projected/9b2db297-94b3-4341-b706-b8f47a596ff9-kube-api-access-wtc4n\") on node \"crc\" DevicePath \"\"" Jan 27 19:02:43 crc kubenswrapper[4853]: I0127 19:02:43.468494 4853 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9b2db297-94b3-4341-b706-b8f47a596ff9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 19:02:43 crc kubenswrapper[4853]: I0127 19:02:43.468508 4853 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9b2db297-94b3-4341-b706-b8f47a596ff9-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 19:02:43 crc kubenswrapper[4853]: I0127 19:02:43.902477 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"9b2db297-94b3-4341-b706-b8f47a596ff9","Type":"ContainerDied","Data":"41b3c19fd7904a783c4b2d983a812579849f71e9034d082809c6a7fc8bc3c620"} Jan 27 19:02:43 crc kubenswrapper[4853]: I0127 19:02:43.902556 4853 scope.go:117] "RemoveContainer" containerID="d55e691a9d4b50ee4d8ff377e74e9632490ff1000b821a8626b34afdfe60482c" Jan 27 19:02:43 crc kubenswrapper[4853]: I0127 19:02:43.902631 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 27 19:02:43 crc kubenswrapper[4853]: I0127 19:02:43.965201 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 27 19:02:43 crc kubenswrapper[4853]: I0127 19:02:43.988023 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 27 19:02:44 crc kubenswrapper[4853]: I0127 19:02:44.006198 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 27 19:02:44 crc kubenswrapper[4853]: E0127 19:02:44.006721 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9b2db297-94b3-4341-b706-b8f47a596ff9" containerName="nova-cell1-novncproxy-novncproxy" Jan 27 19:02:44 crc kubenswrapper[4853]: I0127 19:02:44.006741 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="9b2db297-94b3-4341-b706-b8f47a596ff9" containerName="nova-cell1-novncproxy-novncproxy" Jan 27 19:02:44 crc kubenswrapper[4853]: I0127 19:02:44.006945 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="9b2db297-94b3-4341-b706-b8f47a596ff9" containerName="nova-cell1-novncproxy-novncproxy" Jan 27 19:02:44 crc kubenswrapper[4853]: I0127 19:02:44.007864 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 27 19:02:44 crc kubenswrapper[4853]: I0127 19:02:44.011191 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-vencrypt" Jan 27 19:02:44 crc kubenswrapper[4853]: I0127 19:02:44.011561 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Jan 27 19:02:44 crc kubenswrapper[4853]: I0127 19:02:44.012412 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-public-svc" Jan 27 19:02:44 crc kubenswrapper[4853]: I0127 19:02:44.019779 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 27 19:02:44 crc kubenswrapper[4853]: I0127 19:02:44.080772 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mjmsl\" (UniqueName: \"kubernetes.io/projected/eb83c723-2f1b-419a-bd58-51e56534cb23-kube-api-access-mjmsl\") pod \"nova-cell1-novncproxy-0\" (UID: \"eb83c723-2f1b-419a-bd58-51e56534cb23\") " pod="openstack/nova-cell1-novncproxy-0" Jan 27 19:02:44 crc kubenswrapper[4853]: I0127 19:02:44.080858 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eb83c723-2f1b-419a-bd58-51e56534cb23-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"eb83c723-2f1b-419a-bd58-51e56534cb23\") " pod="openstack/nova-cell1-novncproxy-0" Jan 27 19:02:44 crc kubenswrapper[4853]: I0127 19:02:44.080896 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb83c723-2f1b-419a-bd58-51e56534cb23-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"eb83c723-2f1b-419a-bd58-51e56534cb23\") " pod="openstack/nova-cell1-novncproxy-0" Jan 27 19:02:44 crc kubenswrapper[4853]: I0127 19:02:44.080949 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/eb83c723-2f1b-419a-bd58-51e56534cb23-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"eb83c723-2f1b-419a-bd58-51e56534cb23\") " pod="openstack/nova-cell1-novncproxy-0" Jan 27 19:02:44 crc kubenswrapper[4853]: I0127 19:02:44.080995 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/eb83c723-2f1b-419a-bd58-51e56534cb23-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"eb83c723-2f1b-419a-bd58-51e56534cb23\") " pod="openstack/nova-cell1-novncproxy-0" Jan 27 19:02:44 crc kubenswrapper[4853]: I0127 19:02:44.128845 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9b2db297-94b3-4341-b706-b8f47a596ff9" path="/var/lib/kubelet/pods/9b2db297-94b3-4341-b706-b8f47a596ff9/volumes" Jan 27 19:02:44 crc kubenswrapper[4853]: I0127 19:02:44.182665 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mjmsl\" (UniqueName: \"kubernetes.io/projected/eb83c723-2f1b-419a-bd58-51e56534cb23-kube-api-access-mjmsl\") pod \"nova-cell1-novncproxy-0\" (UID: \"eb83c723-2f1b-419a-bd58-51e56534cb23\") " pod="openstack/nova-cell1-novncproxy-0" Jan 27 19:02:44 crc kubenswrapper[4853]: I0127 19:02:44.182720 4853 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eb83c723-2f1b-419a-bd58-51e56534cb23-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"eb83c723-2f1b-419a-bd58-51e56534cb23\") " pod="openstack/nova-cell1-novncproxy-0" Jan 27 19:02:44 crc kubenswrapper[4853]: I0127 19:02:44.182747 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb83c723-2f1b-419a-bd58-51e56534cb23-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"eb83c723-2f1b-419a-bd58-51e56534cb23\") " pod="openstack/nova-cell1-novncproxy-0" Jan 27 19:02:44 crc kubenswrapper[4853]: I0127 19:02:44.182777 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/eb83c723-2f1b-419a-bd58-51e56534cb23-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"eb83c723-2f1b-419a-bd58-51e56534cb23\") " pod="openstack/nova-cell1-novncproxy-0" Jan 27 19:02:44 crc kubenswrapper[4853]: I0127 19:02:44.182846 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/eb83c723-2f1b-419a-bd58-51e56534cb23-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"eb83c723-2f1b-419a-bd58-51e56534cb23\") " pod="openstack/nova-cell1-novncproxy-0" Jan 27 19:02:44 crc kubenswrapper[4853]: I0127 19:02:44.187516 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/eb83c723-2f1b-419a-bd58-51e56534cb23-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"eb83c723-2f1b-419a-bd58-51e56534cb23\") " pod="openstack/nova-cell1-novncproxy-0" Jan 27 19:02:44 crc kubenswrapper[4853]: I0127 19:02:44.188448 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/eb83c723-2f1b-419a-bd58-51e56534cb23-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"eb83c723-2f1b-419a-bd58-51e56534cb23\") " pod="openstack/nova-cell1-novncproxy-0" Jan 27 19:02:44 crc kubenswrapper[4853]: I0127 19:02:44.196085 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eb83c723-2f1b-419a-bd58-51e56534cb23-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"eb83c723-2f1b-419a-bd58-51e56534cb23\") " pod="openstack/nova-cell1-novncproxy-0" Jan 27 19:02:44 crc kubenswrapper[4853]: I0127 19:02:44.196305 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb83c723-2f1b-419a-bd58-51e56534cb23-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"eb83c723-2f1b-419a-bd58-51e56534cb23\") " pod="openstack/nova-cell1-novncproxy-0" Jan 27 19:02:44 crc kubenswrapper[4853]: I0127 19:02:44.201566 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mjmsl\" (UniqueName: \"kubernetes.io/projected/eb83c723-2f1b-419a-bd58-51e56534cb23-kube-api-access-mjmsl\") pod \"nova-cell1-novncproxy-0\" (UID: \"eb83c723-2f1b-419a-bd58-51e56534cb23\") " pod="openstack/nova-cell1-novncproxy-0" Jan 27 19:02:44 crc kubenswrapper[4853]: I0127 19:02:44.332710 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 27 19:02:44 crc kubenswrapper[4853]: I0127 19:02:44.864380 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 27 19:02:44 crc kubenswrapper[4853]: I0127 19:02:44.913767 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"eb83c723-2f1b-419a-bd58-51e56534cb23","Type":"ContainerStarted","Data":"8fe2b0d319babf02ca2e876bd1fe349a801e0fa788364a73bb77ca46eb74a9c4"} Jan 27 19:02:45 crc kubenswrapper[4853]: I0127 19:02:45.073476 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 27 19:02:45 crc kubenswrapper[4853]: I0127 19:02:45.074181 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 27 19:02:45 crc kubenswrapper[4853]: I0127 19:02:45.078527 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 27 19:02:45 crc kubenswrapper[4853]: I0127 19:02:45.079177 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 27 19:02:45 crc kubenswrapper[4853]: I0127 19:02:45.926712 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"eb83c723-2f1b-419a-bd58-51e56534cb23","Type":"ContainerStarted","Data":"31a7f8b3af7369359113c9f081e8c058b64c16af01ab3aa8cd7f21de22c534e6"} Jan 27 19:02:45 crc kubenswrapper[4853]: I0127 19:02:45.927153 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 27 19:02:45 crc kubenswrapper[4853]: I0127 19:02:45.933473 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 27 19:02:45 crc kubenswrapper[4853]: I0127 19:02:45.942777 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.942759051 podStartE2EDuration="2.942759051s" podCreationTimestamp="2026-01-27 19:02:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 19:02:45.942156164 +0000 UTC m=+1208.404699067" watchObservedRunningTime="2026-01-27 19:02:45.942759051 +0000 UTC m=+1208.405301964" Jan 27 19:02:46 crc kubenswrapper[4853]: I0127 19:02:46.108718 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-89c5cd4d5-gndzd"] Jan 27 19:02:46 crc kubenswrapper[4853]: I0127 19:02:46.132417 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-89c5cd4d5-gndzd" Jan 27 19:02:46 crc kubenswrapper[4853]: I0127 19:02:46.184456 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-89c5cd4d5-gndzd"] Jan 27 19:02:46 crc kubenswrapper[4853]: I0127 19:02:46.239278 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a296c295-f710-476c-bca3-f75ef11ba83c-dns-svc\") pod \"dnsmasq-dns-89c5cd4d5-gndzd\" (UID: \"a296c295-f710-476c-bca3-f75ef11ba83c\") " pod="openstack/dnsmasq-dns-89c5cd4d5-gndzd" Jan 27 19:02:46 crc kubenswrapper[4853]: I0127 19:02:46.239399 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a296c295-f710-476c-bca3-f75ef11ba83c-ovsdbserver-sb\") pod \"dnsmasq-dns-89c5cd4d5-gndzd\" (UID: \"a296c295-f710-476c-bca3-f75ef11ba83c\") " pod="openstack/dnsmasq-dns-89c5cd4d5-gndzd" Jan 27 19:02:46 crc kubenswrapper[4853]: I0127 19:02:46.239451 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a296c295-f710-476c-bca3-f75ef11ba83c-ovsdbserver-nb\") pod \"dnsmasq-dns-89c5cd4d5-gndzd\" (UID: \"a296c295-f710-476c-bca3-f75ef11ba83c\") " pod="openstack/dnsmasq-dns-89c5cd4d5-gndzd" Jan 27 19:02:46 crc kubenswrapper[4853]: I0127 19:02:46.239474 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a296c295-f710-476c-bca3-f75ef11ba83c-dns-swift-storage-0\") pod \"dnsmasq-dns-89c5cd4d5-gndzd\" (UID: \"a296c295-f710-476c-bca3-f75ef11ba83c\") " pod="openstack/dnsmasq-dns-89c5cd4d5-gndzd" Jan 27 19:02:46 crc kubenswrapper[4853]: I0127 19:02:46.239504 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a296c295-f710-476c-bca3-f75ef11ba83c-config\") pod \"dnsmasq-dns-89c5cd4d5-gndzd\" (UID: \"a296c295-f710-476c-bca3-f75ef11ba83c\") " pod="openstack/dnsmasq-dns-89c5cd4d5-gndzd" Jan 27 19:02:46 crc kubenswrapper[4853]: I0127 19:02:46.239552 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mf8lp\" (UniqueName: \"kubernetes.io/projected/a296c295-f710-476c-bca3-f75ef11ba83c-kube-api-access-mf8lp\") pod \"dnsmasq-dns-89c5cd4d5-gndzd\" (UID: \"a296c295-f710-476c-bca3-f75ef11ba83c\") " pod="openstack/dnsmasq-dns-89c5cd4d5-gndzd" Jan 27 19:02:46 crc kubenswrapper[4853]: I0127 19:02:46.341327 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mf8lp\" (UniqueName: \"kubernetes.io/projected/a296c295-f710-476c-bca3-f75ef11ba83c-kube-api-access-mf8lp\") pod \"dnsmasq-dns-89c5cd4d5-gndzd\" (UID: \"a296c295-f710-476c-bca3-f75ef11ba83c\") " pod="openstack/dnsmasq-dns-89c5cd4d5-gndzd" Jan 27 19:02:46 crc kubenswrapper[4853]: I0127 19:02:46.341668 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a296c295-f710-476c-bca3-f75ef11ba83c-dns-svc\") pod \"dnsmasq-dns-89c5cd4d5-gndzd\" (UID: \"a296c295-f710-476c-bca3-f75ef11ba83c\") " pod="openstack/dnsmasq-dns-89c5cd4d5-gndzd" Jan 27 19:02:46 crc kubenswrapper[4853]: I0127 19:02:46.341841 4853 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a296c295-f710-476c-bca3-f75ef11ba83c-ovsdbserver-sb\") pod \"dnsmasq-dns-89c5cd4d5-gndzd\" (UID: \"a296c295-f710-476c-bca3-f75ef11ba83c\") " pod="openstack/dnsmasq-dns-89c5cd4d5-gndzd" Jan 27 19:02:46 crc kubenswrapper[4853]: I0127 19:02:46.341978 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a296c295-f710-476c-bca3-f75ef11ba83c-ovsdbserver-nb\") pod \"dnsmasq-dns-89c5cd4d5-gndzd\" (UID: \"a296c295-f710-476c-bca3-f75ef11ba83c\") " pod="openstack/dnsmasq-dns-89c5cd4d5-gndzd" Jan 27 19:02:46 crc kubenswrapper[4853]: I0127 19:02:46.342091 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a296c295-f710-476c-bca3-f75ef11ba83c-dns-swift-storage-0\") pod \"dnsmasq-dns-89c5cd4d5-gndzd\" (UID: \"a296c295-f710-476c-bca3-f75ef11ba83c\") " pod="openstack/dnsmasq-dns-89c5cd4d5-gndzd" Jan 27 19:02:46 crc kubenswrapper[4853]: I0127 19:02:46.342250 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a296c295-f710-476c-bca3-f75ef11ba83c-config\") pod \"dnsmasq-dns-89c5cd4d5-gndzd\" (UID: \"a296c295-f710-476c-bca3-f75ef11ba83c\") " pod="openstack/dnsmasq-dns-89c5cd4d5-gndzd" Jan 27 19:02:46 crc kubenswrapper[4853]: I0127 19:02:46.342924 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a296c295-f710-476c-bca3-f75ef11ba83c-dns-svc\") pod \"dnsmasq-dns-89c5cd4d5-gndzd\" (UID: \"a296c295-f710-476c-bca3-f75ef11ba83c\") " pod="openstack/dnsmasq-dns-89c5cd4d5-gndzd" Jan 27 19:02:46 crc kubenswrapper[4853]: I0127 19:02:46.343015 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a296c295-f710-476c-bca3-f75ef11ba83c-ovsdbserver-sb\") pod \"dnsmasq-dns-89c5cd4d5-gndzd\" (UID: \"a296c295-f710-476c-bca3-f75ef11ba83c\") " pod="openstack/dnsmasq-dns-89c5cd4d5-gndzd" Jan 27 19:02:46 crc kubenswrapper[4853]: I0127 19:02:46.343224 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a296c295-f710-476c-bca3-f75ef11ba83c-dns-swift-storage-0\") pod \"dnsmasq-dns-89c5cd4d5-gndzd\" (UID: \"a296c295-f710-476c-bca3-f75ef11ba83c\") " pod="openstack/dnsmasq-dns-89c5cd4d5-gndzd" Jan 27 19:02:46 crc kubenswrapper[4853]: I0127 19:02:46.343243 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a296c295-f710-476c-bca3-f75ef11ba83c-config\") pod \"dnsmasq-dns-89c5cd4d5-gndzd\" (UID: \"a296c295-f710-476c-bca3-f75ef11ba83c\") " pod="openstack/dnsmasq-dns-89c5cd4d5-gndzd" Jan 27 19:02:46 crc kubenswrapper[4853]: I0127 19:02:46.343247 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a296c295-f710-476c-bca3-f75ef11ba83c-ovsdbserver-nb\") pod \"dnsmasq-dns-89c5cd4d5-gndzd\" (UID: \"a296c295-f710-476c-bca3-f75ef11ba83c\") " pod="openstack/dnsmasq-dns-89c5cd4d5-gndzd" Jan 27 19:02:46 crc kubenswrapper[4853]: I0127 19:02:46.361918 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mf8lp\" (UniqueName: 
\"kubernetes.io/projected/a296c295-f710-476c-bca3-f75ef11ba83c-kube-api-access-mf8lp\") pod \"dnsmasq-dns-89c5cd4d5-gndzd\" (UID: \"a296c295-f710-476c-bca3-f75ef11ba83c\") " pod="openstack/dnsmasq-dns-89c5cd4d5-gndzd" Jan 27 19:02:46 crc kubenswrapper[4853]: I0127 19:02:46.473597 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-89c5cd4d5-gndzd" Jan 27 19:02:47 crc kubenswrapper[4853]: I0127 19:02:47.026659 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-89c5cd4d5-gndzd"] Jan 27 19:02:47 crc kubenswrapper[4853]: W0127 19:02:47.028603 4853 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda296c295_f710_476c_bca3_f75ef11ba83c.slice/crio-ec0ba1fef729e7be90f9d86b865983f5c0b2d17b8d4fe79fe36377152c97d84f WatchSource:0}: Error finding container ec0ba1fef729e7be90f9d86b865983f5c0b2d17b8d4fe79fe36377152c97d84f: Status 404 returned error can't find the container with id ec0ba1fef729e7be90f9d86b865983f5c0b2d17b8d4fe79fe36377152c97d84f Jan 27 19:02:47 crc kubenswrapper[4853]: I0127 19:02:47.953748 4853 generic.go:334] "Generic (PLEG): container finished" podID="a296c295-f710-476c-bca3-f75ef11ba83c" containerID="146b0afa7f42b784e0f674978f8a31a27992a23ddddf15cf01461e3c39e20e2f" exitCode=0 Jan 27 19:02:47 crc kubenswrapper[4853]: I0127 19:02:47.953844 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-89c5cd4d5-gndzd" event={"ID":"a296c295-f710-476c-bca3-f75ef11ba83c","Type":"ContainerDied","Data":"146b0afa7f42b784e0f674978f8a31a27992a23ddddf15cf01461e3c39e20e2f"} Jan 27 19:02:47 crc kubenswrapper[4853]: I0127 19:02:47.954306 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-89c5cd4d5-gndzd" event={"ID":"a296c295-f710-476c-bca3-f75ef11ba83c","Type":"ContainerStarted","Data":"ec0ba1fef729e7be90f9d86b865983f5c0b2d17b8d4fe79fe36377152c97d84f"} Jan 27 19:02:48 crc kubenswrapper[4853]: I0127 19:02:48.162475 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 27 19:02:48 crc kubenswrapper[4853]: I0127 19:02:48.162829 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="9827c9fd-b99a-4921-9680-9d34d2d1cd00" containerName="ceilometer-central-agent" containerID="cri-o://e8a275a4ad0eb7c9886fae3010e96c0fab11926f8af3a276cc2e63014318e505" gracePeriod=30 Jan 27 19:02:48 crc kubenswrapper[4853]: I0127 19:02:48.162854 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="9827c9fd-b99a-4921-9680-9d34d2d1cd00" containerName="ceilometer-notification-agent" containerID="cri-o://1b3cbd52f811282d0fd7afa0af60ba1c4d5f9e925ae116fcd608add0467516bd" gracePeriod=30 Jan 27 19:02:48 crc kubenswrapper[4853]: I0127 19:02:48.162870 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="9827c9fd-b99a-4921-9680-9d34d2d1cd00" containerName="sg-core" containerID="cri-o://31ee881b974de2fd68078c6099f6c7d7d308596e0ddb00c35ccb1b19afd2ddb5" gracePeriod=30 Jan 27 19:02:48 crc kubenswrapper[4853]: I0127 19:02:48.163002 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="9827c9fd-b99a-4921-9680-9d34d2d1cd00" containerName="proxy-httpd" containerID="cri-o://16a01a5cde5230cf61baabf5562f2f8b761ca80f16ea6827bb8a6d9efc6f2997" gracePeriod=30 Jan 27 19:02:48 crc 
kubenswrapper[4853]: I0127 19:02:48.553074 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 27 19:02:49 crc kubenswrapper[4853]: I0127 19:02:49.012748 4853 generic.go:334] "Generic (PLEG): container finished" podID="9827c9fd-b99a-4921-9680-9d34d2d1cd00" containerID="16a01a5cde5230cf61baabf5562f2f8b761ca80f16ea6827bb8a6d9efc6f2997" exitCode=0 Jan 27 19:02:49 crc kubenswrapper[4853]: I0127 19:02:49.012782 4853 generic.go:334] "Generic (PLEG): container finished" podID="9827c9fd-b99a-4921-9680-9d34d2d1cd00" containerID="31ee881b974de2fd68078c6099f6c7d7d308596e0ddb00c35ccb1b19afd2ddb5" exitCode=2 Jan 27 19:02:49 crc kubenswrapper[4853]: I0127 19:02:49.012791 4853 generic.go:334] "Generic (PLEG): container finished" podID="9827c9fd-b99a-4921-9680-9d34d2d1cd00" containerID="e8a275a4ad0eb7c9886fae3010e96c0fab11926f8af3a276cc2e63014318e505" exitCode=0 Jan 27 19:02:49 crc kubenswrapper[4853]: I0127 19:02:49.012779 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9827c9fd-b99a-4921-9680-9d34d2d1cd00","Type":"ContainerDied","Data":"16a01a5cde5230cf61baabf5562f2f8b761ca80f16ea6827bb8a6d9efc6f2997"} Jan 27 19:02:49 crc kubenswrapper[4853]: I0127 19:02:49.012842 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9827c9fd-b99a-4921-9680-9d34d2d1cd00","Type":"ContainerDied","Data":"31ee881b974de2fd68078c6099f6c7d7d308596e0ddb00c35ccb1b19afd2ddb5"} Jan 27 19:02:49 crc kubenswrapper[4853]: I0127 19:02:49.012867 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9827c9fd-b99a-4921-9680-9d34d2d1cd00","Type":"ContainerDied","Data":"e8a275a4ad0eb7c9886fae3010e96c0fab11926f8af3a276cc2e63014318e505"} Jan 27 19:02:49 crc kubenswrapper[4853]: I0127 19:02:49.015163 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-89c5cd4d5-gndzd" event={"ID":"a296c295-f710-476c-bca3-f75ef11ba83c","Type":"ContainerStarted","Data":"968f92507be4339009a031d484a1e5fa4513ccd2846c13e642268134e25fa315"} Jan 27 19:02:49 crc kubenswrapper[4853]: I0127 19:02:49.015236 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="065ad886-043b-4744-8d1f-ba3895202feb" containerName="nova-api-log" containerID="cri-o://15b5e21e33a42a4412a438ef8584f067f5e593313cf91b4df0c3b48dd07261de" gracePeriod=30 Jan 27 19:02:49 crc kubenswrapper[4853]: I0127 19:02:49.015780 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="065ad886-043b-4744-8d1f-ba3895202feb" containerName="nova-api-api" containerID="cri-o://a0e18195f1d7041cdc173226832842fa144fb064474f6232a7170bb2e4c960a6" gracePeriod=30 Jan 27 19:02:49 crc kubenswrapper[4853]: I0127 19:02:49.146291 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-89c5cd4d5-gndzd" podStartSLOduration=3.1462705890000002 podStartE2EDuration="3.146270589s" podCreationTimestamp="2026-01-27 19:02:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 19:02:49.141949615 +0000 UTC m=+1211.604492498" watchObservedRunningTime="2026-01-27 19:02:49.146270589 +0000 UTC m=+1211.608813472" Jan 27 19:02:49 crc kubenswrapper[4853]: I0127 19:02:49.332846 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Jan 27 19:02:50 crc 
kubenswrapper[4853]: I0127 19:02:50.047106 4853 generic.go:334] "Generic (PLEG): container finished" podID="9827c9fd-b99a-4921-9680-9d34d2d1cd00" containerID="1b3cbd52f811282d0fd7afa0af60ba1c4d5f9e925ae116fcd608add0467516bd" exitCode=0
Jan 27 19:02:50 crc kubenswrapper[4853]: I0127 19:02:50.047536 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9827c9fd-b99a-4921-9680-9d34d2d1cd00","Type":"ContainerDied","Data":"1b3cbd52f811282d0fd7afa0af60ba1c4d5f9e925ae116fcd608add0467516bd"}
Jan 27 19:02:50 crc kubenswrapper[4853]: I0127 19:02:50.051790 4853 generic.go:334] "Generic (PLEG): container finished" podID="065ad886-043b-4744-8d1f-ba3895202feb" containerID="15b5e21e33a42a4412a438ef8584f067f5e593313cf91b4df0c3b48dd07261de" exitCode=143
Jan 27 19:02:50 crc kubenswrapper[4853]: I0127 19:02:50.052187 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"065ad886-043b-4744-8d1f-ba3895202feb","Type":"ContainerDied","Data":"15b5e21e33a42a4412a438ef8584f067f5e593313cf91b4df0c3b48dd07261de"}
Jan 27 19:02:50 crc kubenswrapper[4853]: I0127 19:02:50.052435 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-89c5cd4d5-gndzd"
Jan 27 19:02:50 crc kubenswrapper[4853]: I0127 19:02:50.226957 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 27 19:02:50 crc kubenswrapper[4853]: I0127 19:02:50.338143 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9827c9fd-b99a-4921-9680-9d34d2d1cd00-run-httpd\") pod \"9827c9fd-b99a-4921-9680-9d34d2d1cd00\" (UID: \"9827c9fd-b99a-4921-9680-9d34d2d1cd00\") "
Jan 27 19:02:50 crc kubenswrapper[4853]: I0127 19:02:50.338217 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pphn2\" (UniqueName: \"kubernetes.io/projected/9827c9fd-b99a-4921-9680-9d34d2d1cd00-kube-api-access-pphn2\") pod \"9827c9fd-b99a-4921-9680-9d34d2d1cd00\" (UID: \"9827c9fd-b99a-4921-9680-9d34d2d1cd00\") "
Jan 27 19:02:50 crc kubenswrapper[4853]: I0127 19:02:50.338271 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9827c9fd-b99a-4921-9680-9d34d2d1cd00-combined-ca-bundle\") pod \"9827c9fd-b99a-4921-9680-9d34d2d1cd00\" (UID: \"9827c9fd-b99a-4921-9680-9d34d2d1cd00\") "
Jan 27 19:02:50 crc kubenswrapper[4853]: I0127 19:02:50.338304 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9827c9fd-b99a-4921-9680-9d34d2d1cd00-sg-core-conf-yaml\") pod \"9827c9fd-b99a-4921-9680-9d34d2d1cd00\" (UID: \"9827c9fd-b99a-4921-9680-9d34d2d1cd00\") "
Jan 27 19:02:50 crc kubenswrapper[4853]: I0127 19:02:50.338341 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9827c9fd-b99a-4921-9680-9d34d2d1cd00-log-httpd\") pod \"9827c9fd-b99a-4921-9680-9d34d2d1cd00\" (UID: \"9827c9fd-b99a-4921-9680-9d34d2d1cd00\") "
Jan 27 19:02:50 crc kubenswrapper[4853]: I0127 19:02:50.338369 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9827c9fd-b99a-4921-9680-9d34d2d1cd00-config-data\") pod \"9827c9fd-b99a-4921-9680-9d34d2d1cd00\" (UID: \"9827c9fd-b99a-4921-9680-9d34d2d1cd00\") "
Jan 27 19:02:50 crc kubenswrapper[4853]: I0127 19:02:50.338440 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9827c9fd-b99a-4921-9680-9d34d2d1cd00-scripts\") pod \"9827c9fd-b99a-4921-9680-9d34d2d1cd00\" (UID: \"9827c9fd-b99a-4921-9680-9d34d2d1cd00\") "
Jan 27 19:02:50 crc kubenswrapper[4853]: I0127 19:02:50.338559 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/9827c9fd-b99a-4921-9680-9d34d2d1cd00-ceilometer-tls-certs\") pod \"9827c9fd-b99a-4921-9680-9d34d2d1cd00\" (UID: \"9827c9fd-b99a-4921-9680-9d34d2d1cd00\") "
Jan 27 19:02:50 crc kubenswrapper[4853]: I0127 19:02:50.339818 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9827c9fd-b99a-4921-9680-9d34d2d1cd00-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "9827c9fd-b99a-4921-9680-9d34d2d1cd00" (UID: "9827c9fd-b99a-4921-9680-9d34d2d1cd00"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 27 19:02:50 crc kubenswrapper[4853]: I0127 19:02:50.340051 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9827c9fd-b99a-4921-9680-9d34d2d1cd00-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "9827c9fd-b99a-4921-9680-9d34d2d1cd00" (UID: "9827c9fd-b99a-4921-9680-9d34d2d1cd00"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 27 19:02:50 crc kubenswrapper[4853]: I0127 19:02:50.359308 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9827c9fd-b99a-4921-9680-9d34d2d1cd00-kube-api-access-pphn2" (OuterVolumeSpecName: "kube-api-access-pphn2") pod "9827c9fd-b99a-4921-9680-9d34d2d1cd00" (UID: "9827c9fd-b99a-4921-9680-9d34d2d1cd00"). InnerVolumeSpecName "kube-api-access-pphn2". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 19:02:50 crc kubenswrapper[4853]: I0127 19:02:50.359325 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9827c9fd-b99a-4921-9680-9d34d2d1cd00-scripts" (OuterVolumeSpecName: "scripts") pod "9827c9fd-b99a-4921-9680-9d34d2d1cd00" (UID: "9827c9fd-b99a-4921-9680-9d34d2d1cd00"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 19:02:50 crc kubenswrapper[4853]: I0127 19:02:50.371427 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9827c9fd-b99a-4921-9680-9d34d2d1cd00-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "9827c9fd-b99a-4921-9680-9d34d2d1cd00" (UID: "9827c9fd-b99a-4921-9680-9d34d2d1cd00"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 19:02:50 crc kubenswrapper[4853]: I0127 19:02:50.422554 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9827c9fd-b99a-4921-9680-9d34d2d1cd00-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "9827c9fd-b99a-4921-9680-9d34d2d1cd00" (UID: "9827c9fd-b99a-4921-9680-9d34d2d1cd00"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 19:02:50 crc kubenswrapper[4853]: I0127 19:02:50.435443 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9827c9fd-b99a-4921-9680-9d34d2d1cd00-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9827c9fd-b99a-4921-9680-9d34d2d1cd00" (UID: "9827c9fd-b99a-4921-9680-9d34d2d1cd00"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 19:02:50 crc kubenswrapper[4853]: I0127 19:02:50.440876 4853 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9827c9fd-b99a-4921-9680-9d34d2d1cd00-run-httpd\") on node \"crc\" DevicePath \"\""
Jan 27 19:02:50 crc kubenswrapper[4853]: I0127 19:02:50.440912 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pphn2\" (UniqueName: \"kubernetes.io/projected/9827c9fd-b99a-4921-9680-9d34d2d1cd00-kube-api-access-pphn2\") on node \"crc\" DevicePath \"\""
Jan 27 19:02:50 crc kubenswrapper[4853]: I0127 19:02:50.440928 4853 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9827c9fd-b99a-4921-9680-9d34d2d1cd00-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 27 19:02:50 crc kubenswrapper[4853]: I0127 19:02:50.440942 4853 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9827c9fd-b99a-4921-9680-9d34d2d1cd00-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\""
Jan 27 19:02:50 crc kubenswrapper[4853]: I0127 19:02:50.440953 4853 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9827c9fd-b99a-4921-9680-9d34d2d1cd00-log-httpd\") on node \"crc\" DevicePath \"\""
Jan 27 19:02:50 crc kubenswrapper[4853]: I0127 19:02:50.440964 4853 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9827c9fd-b99a-4921-9680-9d34d2d1cd00-scripts\") on node \"crc\" DevicePath \"\""
Jan 27 19:02:50 crc kubenswrapper[4853]: I0127 19:02:50.440974 4853 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/9827c9fd-b99a-4921-9680-9d34d2d1cd00-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\""
Jan 27 19:02:50 crc kubenswrapper[4853]: I0127 19:02:50.441687 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9827c9fd-b99a-4921-9680-9d34d2d1cd00-config-data" (OuterVolumeSpecName: "config-data") pod "9827c9fd-b99a-4921-9680-9d34d2d1cd00" (UID: "9827c9fd-b99a-4921-9680-9d34d2d1cd00"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 19:02:50 crc kubenswrapper[4853]: I0127 19:02:50.542944 4853 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9827c9fd-b99a-4921-9680-9d34d2d1cd00-config-data\") on node \"crc\" DevicePath \"\""
Jan 27 19:02:51 crc kubenswrapper[4853]: I0127 19:02:51.063999 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 27 19:02:51 crc kubenswrapper[4853]: I0127 19:02:51.064206 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9827c9fd-b99a-4921-9680-9d34d2d1cd00","Type":"ContainerDied","Data":"8286cbe692dd93a1b0bf9ba4dc4af0ae3dfc08de58fdaa78b10bf987d9655bdc"}
Jan 27 19:02:51 crc kubenswrapper[4853]: I0127 19:02:51.064441 4853 scope.go:117] "RemoveContainer" containerID="16a01a5cde5230cf61baabf5562f2f8b761ca80f16ea6827bb8a6d9efc6f2997"
Jan 27 19:02:51 crc kubenswrapper[4853]: I0127 19:02:51.086758 4853 scope.go:117] "RemoveContainer" containerID="31ee881b974de2fd68078c6099f6c7d7d308596e0ddb00c35ccb1b19afd2ddb5"
Jan 27 19:02:51 crc kubenswrapper[4853]: I0127 19:02:51.099586 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Jan 27 19:02:51 crc kubenswrapper[4853]: I0127 19:02:51.110215 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"]
Jan 27 19:02:51 crc kubenswrapper[4853]: I0127 19:02:51.123966 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"]
Jan 27 19:02:51 crc kubenswrapper[4853]: E0127 19:02:51.124424 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9827c9fd-b99a-4921-9680-9d34d2d1cd00" containerName="proxy-httpd"
Jan 27 19:02:51 crc kubenswrapper[4853]: I0127 19:02:51.124441 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="9827c9fd-b99a-4921-9680-9d34d2d1cd00" containerName="proxy-httpd"
Jan 27 19:02:51 crc kubenswrapper[4853]: E0127 19:02:51.124454 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9827c9fd-b99a-4921-9680-9d34d2d1cd00" containerName="ceilometer-central-agent"
Jan 27 19:02:51 crc kubenswrapper[4853]: I0127 19:02:51.124463 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="9827c9fd-b99a-4921-9680-9d34d2d1cd00" containerName="ceilometer-central-agent"
Jan 27 19:02:51 crc kubenswrapper[4853]: E0127 19:02:51.124477 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9827c9fd-b99a-4921-9680-9d34d2d1cd00" containerName="sg-core"
Jan 27 19:02:51 crc kubenswrapper[4853]: I0127 19:02:51.124483 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="9827c9fd-b99a-4921-9680-9d34d2d1cd00" containerName="sg-core"
Jan 27 19:02:51 crc kubenswrapper[4853]: E0127 19:02:51.124496 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9827c9fd-b99a-4921-9680-9d34d2d1cd00" containerName="ceilometer-notification-agent"
Jan 27 19:02:51 crc kubenswrapper[4853]: I0127 19:02:51.124503 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="9827c9fd-b99a-4921-9680-9d34d2d1cd00" containerName="ceilometer-notification-agent"
Jan 27 19:02:51 crc kubenswrapper[4853]: I0127 19:02:51.124692 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="9827c9fd-b99a-4921-9680-9d34d2d1cd00" containerName="ceilometer-central-agent"
Jan 27 19:02:51 crc kubenswrapper[4853]: I0127 19:02:51.124704 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="9827c9fd-b99a-4921-9680-9d34d2d1cd00" containerName="proxy-httpd"
Jan 27 19:02:51 crc kubenswrapper[4853]: I0127 19:02:51.124716 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="9827c9fd-b99a-4921-9680-9d34d2d1cd00" containerName="sg-core"
Jan 27 19:02:51 crc kubenswrapper[4853]: I0127 19:02:51.124734 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="9827c9fd-b99a-4921-9680-9d34d2d1cd00" containerName="ceilometer-notification-agent"
Jan 27 19:02:51 crc kubenswrapper[4853]: I0127 19:02:51.126850 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 27 19:02:51 crc kubenswrapper[4853]: I0127 19:02:51.130834 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data"
Jan 27 19:02:51 crc kubenswrapper[4853]: I0127 19:02:51.131188 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc"
Jan 27 19:02:51 crc kubenswrapper[4853]: I0127 19:02:51.131669 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts"
Jan 27 19:02:51 crc kubenswrapper[4853]: I0127 19:02:51.139215 4853 scope.go:117] "RemoveContainer" containerID="1b3cbd52f811282d0fd7afa0af60ba1c4d5f9e925ae116fcd608add0467516bd"
Jan 27 19:02:51 crc kubenswrapper[4853]: I0127 19:02:51.160838 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Jan 27 19:02:51 crc kubenswrapper[4853]: I0127 19:02:51.173426 4853 scope.go:117] "RemoveContainer" containerID="e8a275a4ad0eb7c9886fae3010e96c0fab11926f8af3a276cc2e63014318e505"
Jan 27 19:02:51 crc kubenswrapper[4853]: I0127 19:02:51.262473 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vttp9\" (UniqueName: \"kubernetes.io/projected/03c6fb37-6ad9-412a-b0fc-851c7b5e4a89-kube-api-access-vttp9\") pod \"ceilometer-0\" (UID: \"03c6fb37-6ad9-412a-b0fc-851c7b5e4a89\") " pod="openstack/ceilometer-0"
Jan 27 19:02:51 crc kubenswrapper[4853]: I0127 19:02:51.262525 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/03c6fb37-6ad9-412a-b0fc-851c7b5e4a89-scripts\") pod \"ceilometer-0\" (UID: \"03c6fb37-6ad9-412a-b0fc-851c7b5e4a89\") " pod="openstack/ceilometer-0"
Jan 27 19:02:51 crc kubenswrapper[4853]: I0127 19:02:51.262547 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/03c6fb37-6ad9-412a-b0fc-851c7b5e4a89-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"03c6fb37-6ad9-412a-b0fc-851c7b5e4a89\") " pod="openstack/ceilometer-0"
Jan 27 19:02:51 crc kubenswrapper[4853]: I0127 19:02:51.262869 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/03c6fb37-6ad9-412a-b0fc-851c7b5e4a89-run-httpd\") pod \"ceilometer-0\" (UID: \"03c6fb37-6ad9-412a-b0fc-851c7b5e4a89\") " pod="openstack/ceilometer-0"
Jan 27 19:02:51 crc kubenswrapper[4853]: I0127 19:02:51.262943 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/03c6fb37-6ad9-412a-b0fc-851c7b5e4a89-config-data\") pod \"ceilometer-0\" (UID: \"03c6fb37-6ad9-412a-b0fc-851c7b5e4a89\") " pod="openstack/ceilometer-0"
Jan 27 19:02:51 crc kubenswrapper[4853]: I0127 19:02:51.263020 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/03c6fb37-6ad9-412a-b0fc-851c7b5e4a89-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"03c6fb37-6ad9-412a-b0fc-851c7b5e4a89\") " pod="openstack/ceilometer-0"
Jan 27 19:02:51 crc kubenswrapper[4853]: I0127 19:02:51.263181 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/03c6fb37-6ad9-412a-b0fc-851c7b5e4a89-log-httpd\") pod \"ceilometer-0\" (UID: \"03c6fb37-6ad9-412a-b0fc-851c7b5e4a89\") " pod="openstack/ceilometer-0"
Jan 27 19:02:51 crc kubenswrapper[4853]: I0127 19:02:51.263222 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/03c6fb37-6ad9-412a-b0fc-851c7b5e4a89-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"03c6fb37-6ad9-412a-b0fc-851c7b5e4a89\") " pod="openstack/ceilometer-0"
Jan 27 19:02:51 crc kubenswrapper[4853]: I0127 19:02:51.365284 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vttp9\" (UniqueName: \"kubernetes.io/projected/03c6fb37-6ad9-412a-b0fc-851c7b5e4a89-kube-api-access-vttp9\") pod \"ceilometer-0\" (UID: \"03c6fb37-6ad9-412a-b0fc-851c7b5e4a89\") " pod="openstack/ceilometer-0"
Jan 27 19:02:51 crc kubenswrapper[4853]: I0127 19:02:51.365327 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/03c6fb37-6ad9-412a-b0fc-851c7b5e4a89-scripts\") pod \"ceilometer-0\" (UID: \"03c6fb37-6ad9-412a-b0fc-851c7b5e4a89\") " pod="openstack/ceilometer-0"
Jan 27 19:02:51 crc kubenswrapper[4853]: I0127 19:02:51.365348 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/03c6fb37-6ad9-412a-b0fc-851c7b5e4a89-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"03c6fb37-6ad9-412a-b0fc-851c7b5e4a89\") " pod="openstack/ceilometer-0"
Jan 27 19:02:51 crc kubenswrapper[4853]: I0127 19:02:51.365438 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/03c6fb37-6ad9-412a-b0fc-851c7b5e4a89-run-httpd\") pod \"ceilometer-0\" (UID: \"03c6fb37-6ad9-412a-b0fc-851c7b5e4a89\") " pod="openstack/ceilometer-0"
Jan 27 19:02:51 crc kubenswrapper[4853]: I0127 19:02:51.365464 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/03c6fb37-6ad9-412a-b0fc-851c7b5e4a89-config-data\") pod \"ceilometer-0\" (UID: \"03c6fb37-6ad9-412a-b0fc-851c7b5e4a89\") " pod="openstack/ceilometer-0"
Jan 27 19:02:51 crc kubenswrapper[4853]: I0127 19:02:51.365494 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/03c6fb37-6ad9-412a-b0fc-851c7b5e4a89-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"03c6fb37-6ad9-412a-b0fc-851c7b5e4a89\") " pod="openstack/ceilometer-0"
Jan 27 19:02:51 crc kubenswrapper[4853]: I0127 19:02:51.365530 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/03c6fb37-6ad9-412a-b0fc-851c7b5e4a89-log-httpd\") pod \"ceilometer-0\" (UID: \"03c6fb37-6ad9-412a-b0fc-851c7b5e4a89\") " pod="openstack/ceilometer-0"
Jan 27 19:02:51 crc kubenswrapper[4853]: I0127 19:02:51.365548 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/03c6fb37-6ad9-412a-b0fc-851c7b5e4a89-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"03c6fb37-6ad9-412a-b0fc-851c7b5e4a89\") " pod="openstack/ceilometer-0"
Jan 27 19:02:51 crc kubenswrapper[4853]: I0127 19:02:51.366198 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/03c6fb37-6ad9-412a-b0fc-851c7b5e4a89-run-httpd\") pod \"ceilometer-0\" (UID: \"03c6fb37-6ad9-412a-b0fc-851c7b5e4a89\") " pod="openstack/ceilometer-0"
Jan 27 19:02:51 crc kubenswrapper[4853]: I0127 19:02:51.366566 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/03c6fb37-6ad9-412a-b0fc-851c7b5e4a89-log-httpd\") pod \"ceilometer-0\" (UID: \"03c6fb37-6ad9-412a-b0fc-851c7b5e4a89\") " pod="openstack/ceilometer-0"
Jan 27 19:02:51 crc kubenswrapper[4853]: I0127 19:02:51.370846 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/03c6fb37-6ad9-412a-b0fc-851c7b5e4a89-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"03c6fb37-6ad9-412a-b0fc-851c7b5e4a89\") " pod="openstack/ceilometer-0"
Jan 27 19:02:51 crc kubenswrapper[4853]: I0127 19:02:51.370865 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/03c6fb37-6ad9-412a-b0fc-851c7b5e4a89-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"03c6fb37-6ad9-412a-b0fc-851c7b5e4a89\") " pod="openstack/ceilometer-0"
Jan 27 19:02:51 crc kubenswrapper[4853]: I0127 19:02:51.371338 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/03c6fb37-6ad9-412a-b0fc-851c7b5e4a89-config-data\") pod \"ceilometer-0\" (UID: \"03c6fb37-6ad9-412a-b0fc-851c7b5e4a89\") " pod="openstack/ceilometer-0"
Jan 27 19:02:51 crc kubenswrapper[4853]: I0127 19:02:51.371719 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/03c6fb37-6ad9-412a-b0fc-851c7b5e4a89-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"03c6fb37-6ad9-412a-b0fc-851c7b5e4a89\") " pod="openstack/ceilometer-0"
Jan 27 19:02:51 crc kubenswrapper[4853]: I0127 19:02:51.376923 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/03c6fb37-6ad9-412a-b0fc-851c7b5e4a89-scripts\") pod \"ceilometer-0\" (UID: \"03c6fb37-6ad9-412a-b0fc-851c7b5e4a89\") " pod="openstack/ceilometer-0"
Jan 27 19:02:51 crc kubenswrapper[4853]: I0127 19:02:51.381663 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vttp9\" (UniqueName: \"kubernetes.io/projected/03c6fb37-6ad9-412a-b0fc-851c7b5e4a89-kube-api-access-vttp9\") pod \"ceilometer-0\" (UID: \"03c6fb37-6ad9-412a-b0fc-851c7b5e4a89\") " pod="openstack/ceilometer-0"
Jan 27 19:02:51 crc kubenswrapper[4853]: I0127 19:02:51.461562 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 27 19:02:51 crc kubenswrapper[4853]: I0127 19:02:51.921998 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Jan 27 19:02:52 crc kubenswrapper[4853]: I0127 19:02:52.072209 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"03c6fb37-6ad9-412a-b0fc-851c7b5e4a89","Type":"ContainerStarted","Data":"5f92c2d6ccbfd632059fa772888f3964a5f4480f34be51706224de9153dfdcca"}
Jan 27 19:02:52 crc kubenswrapper[4853]: I0127 19:02:52.125913 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9827c9fd-b99a-4921-9680-9d34d2d1cd00" path="/var/lib/kubelet/pods/9827c9fd-b99a-4921-9680-9d34d2d1cd00/volumes"
Jan 27 19:02:52 crc kubenswrapper[4853]: I0127 19:02:52.773699 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Jan 27 19:02:52 crc kubenswrapper[4853]: I0127 19:02:52.896728 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/065ad886-043b-4744-8d1f-ba3895202feb-logs\") pod \"065ad886-043b-4744-8d1f-ba3895202feb\" (UID: \"065ad886-043b-4744-8d1f-ba3895202feb\") "
Jan 27 19:02:52 crc kubenswrapper[4853]: I0127 19:02:52.896795 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/065ad886-043b-4744-8d1f-ba3895202feb-combined-ca-bundle\") pod \"065ad886-043b-4744-8d1f-ba3895202feb\" (UID: \"065ad886-043b-4744-8d1f-ba3895202feb\") "
Jan 27 19:02:52 crc kubenswrapper[4853]: I0127 19:02:52.896878 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhsv4\" (UniqueName: \"kubernetes.io/projected/065ad886-043b-4744-8d1f-ba3895202feb-kube-api-access-jhsv4\") pod \"065ad886-043b-4744-8d1f-ba3895202feb\" (UID: \"065ad886-043b-4744-8d1f-ba3895202feb\") "
Jan 27 19:02:52 crc kubenswrapper[4853]: I0127 19:02:52.896943 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/065ad886-043b-4744-8d1f-ba3895202feb-config-data\") pod \"065ad886-043b-4744-8d1f-ba3895202feb\" (UID: \"065ad886-043b-4744-8d1f-ba3895202feb\") "
Jan 27 19:02:52 crc kubenswrapper[4853]: I0127 19:02:52.899137 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/065ad886-043b-4744-8d1f-ba3895202feb-logs" (OuterVolumeSpecName: "logs") pod "065ad886-043b-4744-8d1f-ba3895202feb" (UID: "065ad886-043b-4744-8d1f-ba3895202feb"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 27 19:02:52 crc kubenswrapper[4853]: I0127 19:02:52.917466 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/065ad886-043b-4744-8d1f-ba3895202feb-kube-api-access-jhsv4" (OuterVolumeSpecName: "kube-api-access-jhsv4") pod "065ad886-043b-4744-8d1f-ba3895202feb" (UID: "065ad886-043b-4744-8d1f-ba3895202feb"). InnerVolumeSpecName "kube-api-access-jhsv4". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 19:02:52 crc kubenswrapper[4853]: I0127 19:02:52.947913 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/065ad886-043b-4744-8d1f-ba3895202feb-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "065ad886-043b-4744-8d1f-ba3895202feb" (UID: "065ad886-043b-4744-8d1f-ba3895202feb"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 19:02:52 crc kubenswrapper[4853]: I0127 19:02:52.951193 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/065ad886-043b-4744-8d1f-ba3895202feb-config-data" (OuterVolumeSpecName: "config-data") pod "065ad886-043b-4744-8d1f-ba3895202feb" (UID: "065ad886-043b-4744-8d1f-ba3895202feb"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 19:02:53 crc kubenswrapper[4853]: I0127 19:02:52.999985 4853 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/065ad886-043b-4744-8d1f-ba3895202feb-logs\") on node \"crc\" DevicePath \"\""
Jan 27 19:02:53 crc kubenswrapper[4853]: I0127 19:02:53.000025 4853 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/065ad886-043b-4744-8d1f-ba3895202feb-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 27 19:02:53 crc kubenswrapper[4853]: I0127 19:02:53.000038 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhsv4\" (UniqueName: \"kubernetes.io/projected/065ad886-043b-4744-8d1f-ba3895202feb-kube-api-access-jhsv4\") on node \"crc\" DevicePath \"\""
Jan 27 19:02:53 crc kubenswrapper[4853]: I0127 19:02:53.000050 4853 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/065ad886-043b-4744-8d1f-ba3895202feb-config-data\") on node \"crc\" DevicePath \"\""
Jan 27 19:02:53 crc kubenswrapper[4853]: I0127 19:02:53.088360 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"03c6fb37-6ad9-412a-b0fc-851c7b5e4a89","Type":"ContainerStarted","Data":"1640d0725251e67b571d77cf34fe01c46cccd1bf66353a59b22f9b252d05598d"}
Jan 27 19:02:53 crc kubenswrapper[4853]: I0127 19:02:53.091615 4853 generic.go:334] "Generic (PLEG): container finished" podID="065ad886-043b-4744-8d1f-ba3895202feb" containerID="a0e18195f1d7041cdc173226832842fa144fb064474f6232a7170bb2e4c960a6" exitCode=0
Jan 27 19:02:53 crc kubenswrapper[4853]: I0127 19:02:53.091690 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"065ad886-043b-4744-8d1f-ba3895202feb","Type":"ContainerDied","Data":"a0e18195f1d7041cdc173226832842fa144fb064474f6232a7170bb2e4c960a6"}
Jan 27 19:02:53 crc kubenswrapper[4853]: I0127 19:02:53.091723 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"065ad886-043b-4744-8d1f-ba3895202feb","Type":"ContainerDied","Data":"68bed728633080746d1ed974c315990ca8d81d452ee4e7f03af318e53a963450"}
Jan 27 19:02:53 crc kubenswrapper[4853]: I0127 19:02:53.091760 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Jan 27 19:02:53 crc kubenswrapper[4853]: I0127 19:02:53.091775 4853 scope.go:117] "RemoveContainer" containerID="a0e18195f1d7041cdc173226832842fa144fb064474f6232a7170bb2e4c960a6"
Jan 27 19:02:53 crc kubenswrapper[4853]: I0127 19:02:53.126707 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"]
Jan 27 19:02:53 crc kubenswrapper[4853]: I0127 19:02:53.130458 4853 scope.go:117] "RemoveContainer" containerID="15b5e21e33a42a4412a438ef8584f067f5e593313cf91b4df0c3b48dd07261de"
Jan 27 19:02:53 crc kubenswrapper[4853]: I0127 19:02:53.139751 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"]
Jan 27 19:02:53 crc kubenswrapper[4853]: I0127 19:02:53.158636 4853 scope.go:117] "RemoveContainer" containerID="a0e18195f1d7041cdc173226832842fa144fb064474f6232a7170bb2e4c960a6"
Jan 27 19:02:53 crc kubenswrapper[4853]: E0127 19:02:53.159428 4853 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a0e18195f1d7041cdc173226832842fa144fb064474f6232a7170bb2e4c960a6\": container with ID starting with a0e18195f1d7041cdc173226832842fa144fb064474f6232a7170bb2e4c960a6 not found: ID does not exist" containerID="a0e18195f1d7041cdc173226832842fa144fb064474f6232a7170bb2e4c960a6"
Jan 27 19:02:53 crc kubenswrapper[4853]: I0127 19:02:53.159465 4853 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a0e18195f1d7041cdc173226832842fa144fb064474f6232a7170bb2e4c960a6"} err="failed to get container status \"a0e18195f1d7041cdc173226832842fa144fb064474f6232a7170bb2e4c960a6\": rpc error: code = NotFound desc = could not find container \"a0e18195f1d7041cdc173226832842fa144fb064474f6232a7170bb2e4c960a6\": container with ID starting with a0e18195f1d7041cdc173226832842fa144fb064474f6232a7170bb2e4c960a6 not found: ID does not exist"
Jan 27 19:02:53 crc kubenswrapper[4853]: I0127 19:02:53.159497 4853 scope.go:117] "RemoveContainer" containerID="15b5e21e33a42a4412a438ef8584f067f5e593313cf91b4df0c3b48dd07261de"
Jan 27 19:02:53 crc kubenswrapper[4853]: E0127 19:02:53.159987 4853 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"15b5e21e33a42a4412a438ef8584f067f5e593313cf91b4df0c3b48dd07261de\": container with ID starting with 15b5e21e33a42a4412a438ef8584f067f5e593313cf91b4df0c3b48dd07261de not found: ID does not exist" containerID="15b5e21e33a42a4412a438ef8584f067f5e593313cf91b4df0c3b48dd07261de"
Jan 27 19:02:53 crc kubenswrapper[4853]: I0127 19:02:53.160013 4853 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"15b5e21e33a42a4412a438ef8584f067f5e593313cf91b4df0c3b48dd07261de"} err="failed to get container status \"15b5e21e33a42a4412a438ef8584f067f5e593313cf91b4df0c3b48dd07261de\": rpc error: code = NotFound desc = could not find container \"15b5e21e33a42a4412a438ef8584f067f5e593313cf91b4df0c3b48dd07261de\": container with ID starting with 15b5e21e33a42a4412a438ef8584f067f5e593313cf91b4df0c3b48dd07261de not found: ID does not exist"
Jan 27 19:02:53 crc kubenswrapper[4853]: I0127 19:02:53.162369 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"]
Jan 27 19:02:53 crc kubenswrapper[4853]: E0127 19:02:53.163653 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="065ad886-043b-4744-8d1f-ba3895202feb" containerName="nova-api-log"
Jan 27 19:02:53 crc kubenswrapper[4853]: I0127 19:02:53.163679 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="065ad886-043b-4744-8d1f-ba3895202feb" containerName="nova-api-log"
Jan 27 19:02:53 crc kubenswrapper[4853]: E0127 19:02:53.163733 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="065ad886-043b-4744-8d1f-ba3895202feb" containerName="nova-api-api"
Jan 27 19:02:53 crc kubenswrapper[4853]: I0127 19:02:53.163743 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="065ad886-043b-4744-8d1f-ba3895202feb" containerName="nova-api-api"
Jan 27 19:02:53 crc kubenswrapper[4853]: I0127 19:02:53.164003 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="065ad886-043b-4744-8d1f-ba3895202feb" containerName="nova-api-log"
Jan 27 19:02:53 crc kubenswrapper[4853]: I0127 19:02:53.164042 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="065ad886-043b-4744-8d1f-ba3895202feb" containerName="nova-api-api"
Jan 27 19:02:53 crc kubenswrapper[4853]: I0127 19:02:53.165309 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Jan 27 19:02:53 crc kubenswrapper[4853]: I0127 19:02:53.173255 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Jan 27 19:02:53 crc kubenswrapper[4853]: I0127 19:02:53.173623 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data"
Jan 27 19:02:53 crc kubenswrapper[4853]: I0127 19:02:53.173682 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc"
Jan 27 19:02:53 crc kubenswrapper[4853]: I0127 19:02:53.173946 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc"
Jan 27 19:02:53 crc kubenswrapper[4853]: I0127 19:02:53.306709 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a2269207-0899-4310-8d69-bd1eed74fa7e-config-data\") pod \"nova-api-0\" (UID: \"a2269207-0899-4310-8d69-bd1eed74fa7e\") " pod="openstack/nova-api-0"
Jan 27 19:02:53 crc kubenswrapper[4853]: I0127 19:02:53.306841 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bmpwn\" (UniqueName: \"kubernetes.io/projected/a2269207-0899-4310-8d69-bd1eed74fa7e-kube-api-access-bmpwn\") pod \"nova-api-0\" (UID: \"a2269207-0899-4310-8d69-bd1eed74fa7e\") " pod="openstack/nova-api-0"
Jan 27 19:02:53 crc kubenswrapper[4853]: I0127 19:02:53.306896 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a2269207-0899-4310-8d69-bd1eed74fa7e-logs\") pod \"nova-api-0\" (UID: \"a2269207-0899-4310-8d69-bd1eed74fa7e\") " pod="openstack/nova-api-0"
Jan 27 19:02:53 crc kubenswrapper[4853]: I0127 19:02:53.306945 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a2269207-0899-4310-8d69-bd1eed74fa7e-public-tls-certs\") pod \"nova-api-0\" (UID: \"a2269207-0899-4310-8d69-bd1eed74fa7e\") " pod="openstack/nova-api-0"
Jan 27 19:02:53 crc kubenswrapper[4853]: I0127 19:02:53.306981 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a2269207-0899-4310-8d69-bd1eed74fa7e-internal-tls-certs\") pod \"nova-api-0\" (UID: \"a2269207-0899-4310-8d69-bd1eed74fa7e\") " pod="openstack/nova-api-0"
Jan 27 19:02:53 crc kubenswrapper[4853]: I0127 19:02:53.307017 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a2269207-0899-4310-8d69-bd1eed74fa7e-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"a2269207-0899-4310-8d69-bd1eed74fa7e\") " pod="openstack/nova-api-0"
Jan 27 19:02:53 crc kubenswrapper[4853]: I0127 19:02:53.409391 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a2269207-0899-4310-8d69-bd1eed74fa7e-logs\") pod \"nova-api-0\" (UID: \"a2269207-0899-4310-8d69-bd1eed74fa7e\") " pod="openstack/nova-api-0"
Jan 27 19:02:53 crc kubenswrapper[4853]: I0127 19:02:53.409495 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a2269207-0899-4310-8d69-bd1eed74fa7e-public-tls-certs\") pod \"nova-api-0\" (UID: \"a2269207-0899-4310-8d69-bd1eed74fa7e\") " pod="openstack/nova-api-0"
Jan 27 19:02:53 crc kubenswrapper[4853]: I0127 19:02:53.409537 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a2269207-0899-4310-8d69-bd1eed74fa7e-internal-tls-certs\") pod \"nova-api-0\" (UID: \"a2269207-0899-4310-8d69-bd1eed74fa7e\") " pod="openstack/nova-api-0"
Jan 27 19:02:53 crc kubenswrapper[4853]: I0127 19:02:53.409570 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a2269207-0899-4310-8d69-bd1eed74fa7e-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"a2269207-0899-4310-8d69-bd1eed74fa7e\") " pod="openstack/nova-api-0"
Jan 27 19:02:53 crc kubenswrapper[4853]: I0127 19:02:53.409626 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a2269207-0899-4310-8d69-bd1eed74fa7e-config-data\") pod \"nova-api-0\" (UID: \"a2269207-0899-4310-8d69-bd1eed74fa7e\") " pod="openstack/nova-api-0"
Jan 27 19:02:53 crc kubenswrapper[4853]: I0127 19:02:53.409701 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bmpwn\" (UniqueName: \"kubernetes.io/projected/a2269207-0899-4310-8d69-bd1eed74fa7e-kube-api-access-bmpwn\") pod \"nova-api-0\" (UID: \"a2269207-0899-4310-8d69-bd1eed74fa7e\") " pod="openstack/nova-api-0"
Jan 27 19:02:53 crc kubenswrapper[4853]: I0127 19:02:53.410487 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a2269207-0899-4310-8d69-bd1eed74fa7e-logs\") pod \"nova-api-0\" (UID: \"a2269207-0899-4310-8d69-bd1eed74fa7e\") " pod="openstack/nova-api-0"
Jan 27 19:02:53 crc kubenswrapper[4853]: I0127 19:02:53.414453 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a2269207-0899-4310-8d69-bd1eed74fa7e-internal-tls-certs\") pod \"nova-api-0\" (UID: \"a2269207-0899-4310-8d69-bd1eed74fa7e\") " pod="openstack/nova-api-0"
Jan 27 19:02:53 crc kubenswrapper[4853]: I0127 19:02:53.414899 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a2269207-0899-4310-8d69-bd1eed74fa7e-config-data\") pod \"nova-api-0\" (UID: \"a2269207-0899-4310-8d69-bd1eed74fa7e\") " pod="openstack/nova-api-0"
Jan 27 19:02:53 crc kubenswrapper[4853]: I0127 19:02:53.414984 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a2269207-0899-4310-8d69-bd1eed74fa7e-public-tls-certs\") pod \"nova-api-0\" (UID: \"a2269207-0899-4310-8d69-bd1eed74fa7e\") " pod="openstack/nova-api-0"
Jan 27 19:02:53 crc kubenswrapper[4853]: I0127 19:02:53.415078 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a2269207-0899-4310-8d69-bd1eed74fa7e-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"a2269207-0899-4310-8d69-bd1eed74fa7e\") " pod="openstack/nova-api-0"
Jan 27 19:02:53 crc kubenswrapper[4853]: I0127 19:02:53.429435 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bmpwn\" (UniqueName: \"kubernetes.io/projected/a2269207-0899-4310-8d69-bd1eed74fa7e-kube-api-access-bmpwn\") pod \"nova-api-0\" (UID: \"a2269207-0899-4310-8d69-bd1eed74fa7e\") " pod="openstack/nova-api-0"
Jan 27 19:02:53 crc kubenswrapper[4853]: I0127 19:02:53.500224 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Jan 27 19:02:54 crc kubenswrapper[4853]: I0127 19:02:54.010550 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Jan 27 19:02:54 crc kubenswrapper[4853]: I0127 19:02:54.104045 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"03c6fb37-6ad9-412a-b0fc-851c7b5e4a89","Type":"ContainerStarted","Data":"9540add2a001052c434c5f42020c21d99d3b1eaa288f74c2cf58e3ed49712b09"}
Jan 27 19:02:54 crc kubenswrapper[4853]: I0127 19:02:54.105520 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"a2269207-0899-4310-8d69-bd1eed74fa7e","Type":"ContainerStarted","Data":"4df23d89e1a70755bc803118ea3219fe802ecd883ab4378add735b69446be44f"}
Jan 27 19:02:54 crc kubenswrapper[4853]: I0127 19:02:54.128600 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="065ad886-043b-4744-8d1f-ba3895202feb" path="/var/lib/kubelet/pods/065ad886-043b-4744-8d1f-ba3895202feb/volumes"
Jan 27 19:02:54 crc kubenswrapper[4853]: I0127 19:02:54.333716 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0"
Jan 27 19:02:54 crc kubenswrapper[4853]: I0127 19:02:54.351961 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0"
Jan 27 19:02:55 crc kubenswrapper[4853]: I0127 19:02:55.119139 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"a2269207-0899-4310-8d69-bd1eed74fa7e","Type":"ContainerStarted","Data":"7d33bcefbe65df1271aff22e6a9a215128f2d1327424bcb3913d3ebc4e0221bf"}
Jan 27 19:02:55 crc kubenswrapper[4853]: I0127 19:02:55.119467 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"a2269207-0899-4310-8d69-bd1eed74fa7e","Type":"ContainerStarted","Data":"831c863d27eb49606be07c7dc9e38b06033ed38f6a353d492fcb868381614685"}
Jan 27 19:02:55 crc kubenswrapper[4853]: I0127 19:02:55.120814 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"03c6fb37-6ad9-412a-b0fc-851c7b5e4a89","Type":"ContainerStarted","Data":"c84ba9eafe24c320adabc1aea261ef72293f14fe86407542f1c35d61eb15779f"}
Jan 27 19:02:55 crc kubenswrapper[4853]: I0127 19:02:55.143356 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.143340816 podStartE2EDuration="2.143340816s" podCreationTimestamp="2026-01-27 19:02:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 19:02:55.138368212 +0000 UTC m=+1217.600911105" watchObservedRunningTime="2026-01-27 19:02:55.143340816 +0000 UTC m=+1217.605883699"
Jan 27 19:02:55 crc kubenswrapper[4853]: I0127 19:02:55.146663 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0"
Jan 27 19:02:55 crc kubenswrapper[4853]: I0127 19:02:55.334941 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-cell-mapping-8l4gk"]
Jan 27 19:02:55 crc kubenswrapper[4853]: I0127 19:02:55.337218 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-8l4gk"
Jan 27 19:02:55 crc kubenswrapper[4853]: I0127 19:02:55.340260 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-config-data"
Jan 27 19:02:55 crc kubenswrapper[4853]: I0127 19:02:55.340527 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-scripts"
Jan 27 19:02:55 crc kubenswrapper[4853]: I0127 19:02:55.348753 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-8l4gk"]
Jan 27 19:02:55 crc kubenswrapper[4853]: I0127 19:02:55.448377 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8dc1a2d0-3e4a-4d4e-988e-96762f754b6a-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-8l4gk\" (UID: \"8dc1a2d0-3e4a-4d4e-988e-96762f754b6a\") " pod="openstack/nova-cell1-cell-mapping-8l4gk"
Jan 27 19:02:55 crc kubenswrapper[4853]: I0127 19:02:55.448439 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8dc1a2d0-3e4a-4d4e-988e-96762f754b6a-config-data\") pod \"nova-cell1-cell-mapping-8l4gk\" (UID: \"8dc1a2d0-3e4a-4d4e-988e-96762f754b6a\") " pod="openstack/nova-cell1-cell-mapping-8l4gk"
Jan 27 19:02:55 crc kubenswrapper[4853]: I0127 19:02:55.448675 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8dc1a2d0-3e4a-4d4e-988e-96762f754b6a-scripts\") pod \"nova-cell1-cell-mapping-8l4gk\" (UID: \"8dc1a2d0-3e4a-4d4e-988e-96762f754b6a\") " pod="openstack/nova-cell1-cell-mapping-8l4gk"
Jan 27 19:02:55 crc kubenswrapper[4853]: I0127 19:02:55.448886 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m7bx6\" (UniqueName: \"kubernetes.io/projected/8dc1a2d0-3e4a-4d4e-988e-96762f754b6a-kube-api-access-m7bx6\") pod \"nova-cell1-cell-mapping-8l4gk\" (UID: \"8dc1a2d0-3e4a-4d4e-988e-96762f754b6a\") " pod="openstack/nova-cell1-cell-mapping-8l4gk"
Jan 27 19:02:55 crc kubenswrapper[4853]: I0127 19:02:55.551349 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m7bx6\" (UniqueName: \"kubernetes.io/projected/8dc1a2d0-3e4a-4d4e-988e-96762f754b6a-kube-api-access-m7bx6\") pod \"nova-cell1-cell-mapping-8l4gk\" (UID: \"8dc1a2d0-3e4a-4d4e-988e-96762f754b6a\") " pod="openstack/nova-cell1-cell-mapping-8l4gk"
Jan 27 19:02:55 crc kubenswrapper[4853]: I0127 19:02:55.551536 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8dc1a2d0-3e4a-4d4e-988e-96762f754b6a-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-8l4gk\" (UID: \"8dc1a2d0-3e4a-4d4e-988e-96762f754b6a\") " pod="openstack/nova-cell1-cell-mapping-8l4gk"
Jan 27 19:02:55 crc kubenswrapper[4853]: I0127 19:02:55.551595 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8dc1a2d0-3e4a-4d4e-988e-96762f754b6a-config-data\") pod \"nova-cell1-cell-mapping-8l4gk\" (UID: \"8dc1a2d0-3e4a-4d4e-988e-96762f754b6a\") " pod="openstack/nova-cell1-cell-mapping-8l4gk"
Jan 27 19:02:55 crc kubenswrapper[4853]: I0127 19:02:55.551672 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8dc1a2d0-3e4a-4d4e-988e-96762f754b6a-scripts\") pod \"nova-cell1-cell-mapping-8l4gk\" (UID: \"8dc1a2d0-3e4a-4d4e-988e-96762f754b6a\") " pod="openstack/nova-cell1-cell-mapping-8l4gk"
Jan 27 19:02:55 crc kubenswrapper[4853]: I0127 19:02:55.558086 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8dc1a2d0-3e4a-4d4e-988e-96762f754b6a-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-8l4gk\" (UID: \"8dc1a2d0-3e4a-4d4e-988e-96762f754b6a\") " pod="openstack/nova-cell1-cell-mapping-8l4gk"
Jan 27 19:02:55 crc kubenswrapper[4853]: I0127 19:02:55.563068 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8dc1a2d0-3e4a-4d4e-988e-96762f754b6a-scripts\") pod \"nova-cell1-cell-mapping-8l4gk\" (UID: \"8dc1a2d0-3e4a-4d4e-988e-96762f754b6a\") " pod="openstack/nova-cell1-cell-mapping-8l4gk"
Jan 27 19:02:55 crc kubenswrapper[4853]: I0127 19:02:55.563772 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8dc1a2d0-3e4a-4d4e-988e-96762f754b6a-config-data\") pod \"nova-cell1-cell-mapping-8l4gk\" (UID: \"8dc1a2d0-3e4a-4d4e-988e-96762f754b6a\") " pod="openstack/nova-cell1-cell-mapping-8l4gk"
Jan 27 19:02:55 crc kubenswrapper[4853]: I0127 19:02:55.568924 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m7bx6\" (UniqueName: \"kubernetes.io/projected/8dc1a2d0-3e4a-4d4e-988e-96762f754b6a-kube-api-access-m7bx6\") pod \"nova-cell1-cell-mapping-8l4gk\" (UID: \"8dc1a2d0-3e4a-4d4e-988e-96762f754b6a\") " pod="openstack/nova-cell1-cell-mapping-8l4gk"
Jan 27 19:02:55 crc kubenswrapper[4853]: I0127 19:02:55.664571 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-8l4gk"
Jan 27 19:02:56 crc kubenswrapper[4853]: I0127 19:02:56.133641 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"03c6fb37-6ad9-412a-b0fc-851c7b5e4a89","Type":"ContainerStarted","Data":"749f7f25d2d26c4796d03c5bea6a4d58c46f948827fc504eee9b136893dbaca4"}
Jan 27 19:02:56 crc kubenswrapper[4853]: I0127 19:02:56.164733 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.211681149 podStartE2EDuration="5.164711289s" podCreationTimestamp="2026-01-27 19:02:51 +0000 UTC" firstStartedPulling="2026-01-27 19:02:51.927879633 +0000 UTC m=+1214.390422516" lastFinishedPulling="2026-01-27 19:02:55.880909773 +0000 UTC m=+1218.343452656" observedRunningTime="2026-01-27 19:02:56.154772653 +0000 UTC m=+1218.617315536" watchObservedRunningTime="2026-01-27 19:02:56.164711289 +0000 UTC m=+1218.627254172"
Jan 27 19:02:56 crc kubenswrapper[4853]: I0127 19:02:56.194280 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-8l4gk"]
Jan 27 19:02:56 crc kubenswrapper[4853]: W0127 19:02:56.194802 4853 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8dc1a2d0_3e4a_4d4e_988e_96762f754b6a.slice/crio-62dfd1e02004fe631a8b1da683d51de58bb13c4de205aab89c5045fe7ed3a32d WatchSource:0}: Error finding container 62dfd1e02004fe631a8b1da683d51de58bb13c4de205aab89c5045fe7ed3a32d: Status 404 returned error can't find the container with id 62dfd1e02004fe631a8b1da683d51de58bb13c4de205aab89c5045fe7ed3a32d
Jan 27 19:02:56 crc kubenswrapper[4853]: I0127 19:02:56.476225 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-89c5cd4d5-gndzd"
Jan 27 19:02:56 crc kubenswrapper[4853]: I0127 19:02:56.538158 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-757b4f8459-k7brt"]
Jan 27 19:02:56 crc kubenswrapper[4853]: I0127 19:02:56.538458 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-757b4f8459-k7brt" podUID="38b68e30-68ad-4dce-befc-98fd9c6aa1b6" containerName="dnsmasq-dns" containerID="cri-o://8aa81261f8381ed5a1d97142922f605fa8049824ec834bf50f3bcafdc049f68f" gracePeriod=10
Jan 27 19:02:57 crc kubenswrapper[4853]: I0127 19:02:57.022161 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-757b4f8459-k7brt"
Need to start a new one" pod="openstack/dnsmasq-dns-757b4f8459-k7brt" Jan 27 19:02:57 crc kubenswrapper[4853]: I0127 19:02:57.086564 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dj8sk\" (UniqueName: \"kubernetes.io/projected/38b68e30-68ad-4dce-befc-98fd9c6aa1b6-kube-api-access-dj8sk\") pod \"38b68e30-68ad-4dce-befc-98fd9c6aa1b6\" (UID: \"38b68e30-68ad-4dce-befc-98fd9c6aa1b6\") " Jan 27 19:02:57 crc kubenswrapper[4853]: I0127 19:02:57.086662 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/38b68e30-68ad-4dce-befc-98fd9c6aa1b6-ovsdbserver-nb\") pod \"38b68e30-68ad-4dce-befc-98fd9c6aa1b6\" (UID: \"38b68e30-68ad-4dce-befc-98fd9c6aa1b6\") " Jan 27 19:02:57 crc kubenswrapper[4853]: I0127 19:02:57.086704 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/38b68e30-68ad-4dce-befc-98fd9c6aa1b6-dns-swift-storage-0\") pod \"38b68e30-68ad-4dce-befc-98fd9c6aa1b6\" (UID: \"38b68e30-68ad-4dce-befc-98fd9c6aa1b6\") " Jan 27 19:02:57 crc kubenswrapper[4853]: I0127 19:02:57.086746 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/38b68e30-68ad-4dce-befc-98fd9c6aa1b6-ovsdbserver-sb\") pod \"38b68e30-68ad-4dce-befc-98fd9c6aa1b6\" (UID: \"38b68e30-68ad-4dce-befc-98fd9c6aa1b6\") " Jan 27 19:02:57 crc kubenswrapper[4853]: I0127 19:02:57.086815 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/38b68e30-68ad-4dce-befc-98fd9c6aa1b6-dns-svc\") pod \"38b68e30-68ad-4dce-befc-98fd9c6aa1b6\" (UID: \"38b68e30-68ad-4dce-befc-98fd9c6aa1b6\") " Jan 27 19:02:57 crc kubenswrapper[4853]: I0127 19:02:57.087610 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/38b68e30-68ad-4dce-befc-98fd9c6aa1b6-config\") pod \"38b68e30-68ad-4dce-befc-98fd9c6aa1b6\" (UID: \"38b68e30-68ad-4dce-befc-98fd9c6aa1b6\") " Jan 27 19:02:57 crc kubenswrapper[4853]: I0127 19:02:57.107391 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/38b68e30-68ad-4dce-befc-98fd9c6aa1b6-kube-api-access-dj8sk" (OuterVolumeSpecName: "kube-api-access-dj8sk") pod "38b68e30-68ad-4dce-befc-98fd9c6aa1b6" (UID: "38b68e30-68ad-4dce-befc-98fd9c6aa1b6"). InnerVolumeSpecName "kube-api-access-dj8sk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 19:02:57 crc kubenswrapper[4853]: I0127 19:02:57.144827 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/38b68e30-68ad-4dce-befc-98fd9c6aa1b6-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "38b68e30-68ad-4dce-befc-98fd9c6aa1b6" (UID: "38b68e30-68ad-4dce-befc-98fd9c6aa1b6"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 19:02:57 crc kubenswrapper[4853]: I0127 19:02:57.158577 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/38b68e30-68ad-4dce-befc-98fd9c6aa1b6-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "38b68e30-68ad-4dce-befc-98fd9c6aa1b6" (UID: "38b68e30-68ad-4dce-befc-98fd9c6aa1b6"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 19:02:57 crc kubenswrapper[4853]: I0127 19:02:57.159018 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/38b68e30-68ad-4dce-befc-98fd9c6aa1b6-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "38b68e30-68ad-4dce-befc-98fd9c6aa1b6" (UID: "38b68e30-68ad-4dce-befc-98fd9c6aa1b6"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 19:02:57 crc kubenswrapper[4853]: I0127 19:02:57.160322 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/38b68e30-68ad-4dce-befc-98fd9c6aa1b6-config" (OuterVolumeSpecName: "config") pod "38b68e30-68ad-4dce-befc-98fd9c6aa1b6" (UID: "38b68e30-68ad-4dce-befc-98fd9c6aa1b6"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 19:02:57 crc kubenswrapper[4853]: I0127 19:02:57.163670 4853 generic.go:334] "Generic (PLEG): container finished" podID="38b68e30-68ad-4dce-befc-98fd9c6aa1b6" containerID="8aa81261f8381ed5a1d97142922f605fa8049824ec834bf50f3bcafdc049f68f" exitCode=0 Jan 27 19:02:57 crc kubenswrapper[4853]: I0127 19:02:57.163937 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-757b4f8459-k7brt" Jan 27 19:02:57 crc kubenswrapper[4853]: I0127 19:02:57.164728 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-757b4f8459-k7brt" event={"ID":"38b68e30-68ad-4dce-befc-98fd9c6aa1b6","Type":"ContainerDied","Data":"8aa81261f8381ed5a1d97142922f605fa8049824ec834bf50f3bcafdc049f68f"} Jan 27 19:02:57 crc kubenswrapper[4853]: I0127 19:02:57.164812 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-757b4f8459-k7brt" event={"ID":"38b68e30-68ad-4dce-befc-98fd9c6aa1b6","Type":"ContainerDied","Data":"d2426e36d0e5e26b94198c92468cec6b297a270a43ccc2672a071ef12ec37147"} Jan 27 19:02:57 crc kubenswrapper[4853]: I0127 19:02:57.164848 4853 scope.go:117] "RemoveContainer" containerID="8aa81261f8381ed5a1d97142922f605fa8049824ec834bf50f3bcafdc049f68f" Jan 27 19:02:57 crc kubenswrapper[4853]: I0127 19:02:57.169482 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/38b68e30-68ad-4dce-befc-98fd9c6aa1b6-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "38b68e30-68ad-4dce-befc-98fd9c6aa1b6" (UID: "38b68e30-68ad-4dce-befc-98fd9c6aa1b6"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 19:02:57 crc kubenswrapper[4853]: I0127 19:02:57.169965 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-8l4gk" event={"ID":"8dc1a2d0-3e4a-4d4e-988e-96762f754b6a","Type":"ContainerStarted","Data":"bacbe340baa0e6dab1d9ce942e2fa624357095e9948c34ca5b78385fe14abebf"} Jan 27 19:02:57 crc kubenswrapper[4853]: I0127 19:02:57.169991 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-8l4gk" event={"ID":"8dc1a2d0-3e4a-4d4e-988e-96762f754b6a","Type":"ContainerStarted","Data":"62dfd1e02004fe631a8b1da683d51de58bb13c4de205aab89c5045fe7ed3a32d"} Jan 27 19:02:57 crc kubenswrapper[4853]: I0127 19:02:57.171073 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 27 19:02:57 crc kubenswrapper[4853]: I0127 19:02:57.192011 4853 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/38b68e30-68ad-4dce-befc-98fd9c6aa1b6-config\") on node \"crc\" DevicePath \"\"" Jan 27 19:02:57 crc kubenswrapper[4853]: I0127 19:02:57.192054 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dj8sk\" (UniqueName: \"kubernetes.io/projected/38b68e30-68ad-4dce-befc-98fd9c6aa1b6-kube-api-access-dj8sk\") on node \"crc\" DevicePath \"\"" Jan 27 19:02:57 crc kubenswrapper[4853]: I0127 19:02:57.192073 4853 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/38b68e30-68ad-4dce-befc-98fd9c6aa1b6-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 27 19:02:57 crc kubenswrapper[4853]: I0127 19:02:57.192087 4853 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/38b68e30-68ad-4dce-befc-98fd9c6aa1b6-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 27 19:02:57 crc kubenswrapper[4853]: I0127 19:02:57.192098 4853 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/38b68e30-68ad-4dce-befc-98fd9c6aa1b6-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 27 19:02:57 crc kubenswrapper[4853]: I0127 19:02:57.192108 4853 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/38b68e30-68ad-4dce-befc-98fd9c6aa1b6-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 27 19:02:57 crc kubenswrapper[4853]: I0127 19:02:57.194508 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-cell-mapping-8l4gk" podStartSLOduration=2.194481185 podStartE2EDuration="2.194481185s" podCreationTimestamp="2026-01-27 19:02:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 19:02:57.189506752 +0000 UTC m=+1219.652049635" watchObservedRunningTime="2026-01-27 19:02:57.194481185 +0000 UTC m=+1219.657024068" Jan 27 19:02:57 crc kubenswrapper[4853]: I0127 19:02:57.263014 4853 scope.go:117] "RemoveContainer" containerID="957caeaedc3d6b2203814fd677fb215f6ac38008988d362ddff5e3bb167521f1" Jan 27 19:02:57 crc kubenswrapper[4853]: I0127 19:02:57.283986 4853 scope.go:117] "RemoveContainer" containerID="8aa81261f8381ed5a1d97142922f605fa8049824ec834bf50f3bcafdc049f68f" Jan 27 19:02:57 crc kubenswrapper[4853]: E0127 19:02:57.285045 4853 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not 
find container \"8aa81261f8381ed5a1d97142922f605fa8049824ec834bf50f3bcafdc049f68f\": container with ID starting with 8aa81261f8381ed5a1d97142922f605fa8049824ec834bf50f3bcafdc049f68f not found: ID does not exist" containerID="8aa81261f8381ed5a1d97142922f605fa8049824ec834bf50f3bcafdc049f68f" Jan 27 19:02:57 crc kubenswrapper[4853]: I0127 19:02:57.285146 4853 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8aa81261f8381ed5a1d97142922f605fa8049824ec834bf50f3bcafdc049f68f"} err="failed to get container status \"8aa81261f8381ed5a1d97142922f605fa8049824ec834bf50f3bcafdc049f68f\": rpc error: code = NotFound desc = could not find container \"8aa81261f8381ed5a1d97142922f605fa8049824ec834bf50f3bcafdc049f68f\": container with ID starting with 8aa81261f8381ed5a1d97142922f605fa8049824ec834bf50f3bcafdc049f68f not found: ID does not exist" Jan 27 19:02:57 crc kubenswrapper[4853]: I0127 19:02:57.285222 4853 scope.go:117] "RemoveContainer" containerID="957caeaedc3d6b2203814fd677fb215f6ac38008988d362ddff5e3bb167521f1" Jan 27 19:02:57 crc kubenswrapper[4853]: E0127 19:02:57.285658 4853 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"957caeaedc3d6b2203814fd677fb215f6ac38008988d362ddff5e3bb167521f1\": container with ID starting with 957caeaedc3d6b2203814fd677fb215f6ac38008988d362ddff5e3bb167521f1 not found: ID does not exist" containerID="957caeaedc3d6b2203814fd677fb215f6ac38008988d362ddff5e3bb167521f1" Jan 27 19:02:57 crc kubenswrapper[4853]: I0127 19:02:57.285734 4853 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"957caeaedc3d6b2203814fd677fb215f6ac38008988d362ddff5e3bb167521f1"} err="failed to get container status \"957caeaedc3d6b2203814fd677fb215f6ac38008988d362ddff5e3bb167521f1\": rpc error: code = NotFound desc = could not find container \"957caeaedc3d6b2203814fd677fb215f6ac38008988d362ddff5e3bb167521f1\": container with ID starting with 957caeaedc3d6b2203814fd677fb215f6ac38008988d362ddff5e3bb167521f1 not found: ID does not exist" Jan 27 19:02:57 crc kubenswrapper[4853]: I0127 19:02:57.494507 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-757b4f8459-k7brt"] Jan 27 19:02:57 crc kubenswrapper[4853]: I0127 19:02:57.508647 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-757b4f8459-k7brt"] Jan 27 19:02:58 crc kubenswrapper[4853]: I0127 19:02:58.124513 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="38b68e30-68ad-4dce-befc-98fd9c6aa1b6" path="/var/lib/kubelet/pods/38b68e30-68ad-4dce-befc-98fd9c6aa1b6/volumes" Jan 27 19:03:01 crc kubenswrapper[4853]: I0127 19:03:01.205664 4853 generic.go:334] "Generic (PLEG): container finished" podID="8dc1a2d0-3e4a-4d4e-988e-96762f754b6a" containerID="bacbe340baa0e6dab1d9ce942e2fa624357095e9948c34ca5b78385fe14abebf" exitCode=0 Jan 27 19:03:01 crc kubenswrapper[4853]: I0127 19:03:01.205759 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-8l4gk" event={"ID":"8dc1a2d0-3e4a-4d4e-988e-96762f754b6a","Type":"ContainerDied","Data":"bacbe340baa0e6dab1d9ce942e2fa624357095e9948c34ca5b78385fe14abebf"} Jan 27 19:03:02 crc kubenswrapper[4853]: I0127 19:03:02.658883 4853 util.go:48] "No ready sandbox for pod can be found. 
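The kubelet lines above follow the klog header format: a severity letter (I/W/E), MMDD, a wall-clock time with microseconds, the emitting PID, and file:line, then a structured message. As a minimal sketch (the regex and variable names below are illustrative, not part of the log), the podStartSLOduration=2.194481185 figure can be reproduced from the podCreationTimestamp and observedRunningTime fields printed in the same record:

```python
import re
from datetime import datetime, timezone

# Illustrative parser for the klog records embedded in this journal.
KLOG = re.compile(
    r'(?P<sev>[IWE])(?P<mmdd>\d{4}) (?P<time>\d{2}:\d{2}:\d{2}\.\d{6})\s+\d+\s+'
    r'(?P<src>[\w._]+:\d+)\]\s+(?P<msg>.*)'
)

line = ('I0127 19:02:57.194508 4853 pod_startup_latency_tracker.go:104] '
        '"Observed pod startup duration" pod="openstack/nova-cell1-cell-mapping-8l4gk" '
        'podStartSLOduration=2.194481185')
m = KLOG.match(line)
assert m and m['sev'] == 'I' and m['src'] == 'pod_startup_latency_tracker.go:104'

# The duration is observedRunningTime - podCreationTimestamp, both printed in the record:
created  = datetime(2026, 1, 27, 19, 2, 55, tzinfo=timezone.utc)          # podCreationTimestamp
observed = datetime(2026, 1, 27, 19, 2, 57, 194481, tzinfo=timezone.utc)  # observedRunningTime, truncated to µs
print((observed - created).total_seconds())  # ~2.194481 s, matching podStartSLOduration
```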
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-8l4gk" Jan 27 19:03:02 crc kubenswrapper[4853]: I0127 19:03:02.712513 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8dc1a2d0-3e4a-4d4e-988e-96762f754b6a-config-data\") pod \"8dc1a2d0-3e4a-4d4e-988e-96762f754b6a\" (UID: \"8dc1a2d0-3e4a-4d4e-988e-96762f754b6a\") " Jan 27 19:03:02 crc kubenswrapper[4853]: I0127 19:03:02.712587 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8dc1a2d0-3e4a-4d4e-988e-96762f754b6a-combined-ca-bundle\") pod \"8dc1a2d0-3e4a-4d4e-988e-96762f754b6a\" (UID: \"8dc1a2d0-3e4a-4d4e-988e-96762f754b6a\") " Jan 27 19:03:02 crc kubenswrapper[4853]: I0127 19:03:02.712706 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m7bx6\" (UniqueName: \"kubernetes.io/projected/8dc1a2d0-3e4a-4d4e-988e-96762f754b6a-kube-api-access-m7bx6\") pod \"8dc1a2d0-3e4a-4d4e-988e-96762f754b6a\" (UID: \"8dc1a2d0-3e4a-4d4e-988e-96762f754b6a\") " Jan 27 19:03:02 crc kubenswrapper[4853]: I0127 19:03:02.712901 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8dc1a2d0-3e4a-4d4e-988e-96762f754b6a-scripts\") pod \"8dc1a2d0-3e4a-4d4e-988e-96762f754b6a\" (UID: \"8dc1a2d0-3e4a-4d4e-988e-96762f754b6a\") " Jan 27 19:03:02 crc kubenswrapper[4853]: I0127 19:03:02.719252 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8dc1a2d0-3e4a-4d4e-988e-96762f754b6a-scripts" (OuterVolumeSpecName: "scripts") pod "8dc1a2d0-3e4a-4d4e-988e-96762f754b6a" (UID: "8dc1a2d0-3e4a-4d4e-988e-96762f754b6a"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 19:03:02 crc kubenswrapper[4853]: I0127 19:03:02.719321 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8dc1a2d0-3e4a-4d4e-988e-96762f754b6a-kube-api-access-m7bx6" (OuterVolumeSpecName: "kube-api-access-m7bx6") pod "8dc1a2d0-3e4a-4d4e-988e-96762f754b6a" (UID: "8dc1a2d0-3e4a-4d4e-988e-96762f754b6a"). InnerVolumeSpecName "kube-api-access-m7bx6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 19:03:02 crc kubenswrapper[4853]: I0127 19:03:02.742378 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8dc1a2d0-3e4a-4d4e-988e-96762f754b6a-config-data" (OuterVolumeSpecName: "config-data") pod "8dc1a2d0-3e4a-4d4e-988e-96762f754b6a" (UID: "8dc1a2d0-3e4a-4d4e-988e-96762f754b6a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 19:03:02 crc kubenswrapper[4853]: I0127 19:03:02.744950 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8dc1a2d0-3e4a-4d4e-988e-96762f754b6a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8dc1a2d0-3e4a-4d4e-988e-96762f754b6a" (UID: "8dc1a2d0-3e4a-4d4e-988e-96762f754b6a"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 19:03:02 crc kubenswrapper[4853]: I0127 19:03:02.814921 4853 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8dc1a2d0-3e4a-4d4e-988e-96762f754b6a-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 19:03:02 crc kubenswrapper[4853]: I0127 19:03:02.815170 4853 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8dc1a2d0-3e4a-4d4e-988e-96762f754b6a-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 19:03:02 crc kubenswrapper[4853]: I0127 19:03:02.815242 4853 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8dc1a2d0-3e4a-4d4e-988e-96762f754b6a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 19:03:02 crc kubenswrapper[4853]: I0127 19:03:02.815314 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m7bx6\" (UniqueName: \"kubernetes.io/projected/8dc1a2d0-3e4a-4d4e-988e-96762f754b6a-kube-api-access-m7bx6\") on node \"crc\" DevicePath \"\"" Jan 27 19:03:03 crc kubenswrapper[4853]: I0127 19:03:03.229575 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-8l4gk" event={"ID":"8dc1a2d0-3e4a-4d4e-988e-96762f754b6a","Type":"ContainerDied","Data":"62dfd1e02004fe631a8b1da683d51de58bb13c4de205aab89c5045fe7ed3a32d"} Jan 27 19:03:03 crc kubenswrapper[4853]: I0127 19:03:03.230075 4853 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="62dfd1e02004fe631a8b1da683d51de58bb13c4de205aab89c5045fe7ed3a32d" Jan 27 19:03:03 crc kubenswrapper[4853]: I0127 19:03:03.229639 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-8l4gk" Jan 27 19:03:03 crc kubenswrapper[4853]: I0127 19:03:03.421312 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 27 19:03:03 crc kubenswrapper[4853]: I0127 19:03:03.422000 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="f1a81537-2e83-406a-9194-7a6362a7e874" containerName="nova-scheduler-scheduler" containerID="cri-o://b1d92aa89f2e6fa8aa143e761119c0f4688d2e4e53cd08fca0796483700f42dc" gracePeriod=30 Jan 27 19:03:03 crc kubenswrapper[4853]: I0127 19:03:03.436170 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 27 19:03:03 crc kubenswrapper[4853]: I0127 19:03:03.436548 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="a2269207-0899-4310-8d69-bd1eed74fa7e" containerName="nova-api-log" containerID="cri-o://831c863d27eb49606be07c7dc9e38b06033ed38f6a353d492fcb868381614685" gracePeriod=30 Jan 27 19:03:03 crc kubenswrapper[4853]: I0127 19:03:03.436657 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="a2269207-0899-4310-8d69-bd1eed74fa7e" containerName="nova-api-api" containerID="cri-o://7d33bcefbe65df1271aff22e6a9a215128f2d1327424bcb3913d3ebc4e0221bf" gracePeriod=30 Jan 27 19:03:03 crc kubenswrapper[4853]: I0127 19:03:03.519712 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 27 19:03:03 crc kubenswrapper[4853]: I0127 19:03:03.519991 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="4c6df56d-109f-4ab4-bb18-35b70eb1beaf" 
containerName="nova-metadata-log" containerID="cri-o://d9fed5231318b4f36f209797f574f411cc1c0ef5b1c9b6a99fc1799e0e6a9320" gracePeriod=30 Jan 27 19:03:03 crc kubenswrapper[4853]: I0127 19:03:03.520186 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="4c6df56d-109f-4ab4-bb18-35b70eb1beaf" containerName="nova-metadata-metadata" containerID="cri-o://07b1c380d9184b8391fbe9bb04c29907aa3c6e2610c6671cecbb76b723875741" gracePeriod=30 Jan 27 19:03:04 crc kubenswrapper[4853]: I0127 19:03:04.026500 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 27 19:03:04 crc kubenswrapper[4853]: I0127 19:03:04.138202 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a2269207-0899-4310-8d69-bd1eed74fa7e-internal-tls-certs\") pod \"a2269207-0899-4310-8d69-bd1eed74fa7e\" (UID: \"a2269207-0899-4310-8d69-bd1eed74fa7e\") " Jan 27 19:03:04 crc kubenswrapper[4853]: I0127 19:03:04.138380 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a2269207-0899-4310-8d69-bd1eed74fa7e-public-tls-certs\") pod \"a2269207-0899-4310-8d69-bd1eed74fa7e\" (UID: \"a2269207-0899-4310-8d69-bd1eed74fa7e\") " Jan 27 19:03:04 crc kubenswrapper[4853]: I0127 19:03:04.138511 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bmpwn\" (UniqueName: \"kubernetes.io/projected/a2269207-0899-4310-8d69-bd1eed74fa7e-kube-api-access-bmpwn\") pod \"a2269207-0899-4310-8d69-bd1eed74fa7e\" (UID: \"a2269207-0899-4310-8d69-bd1eed74fa7e\") " Jan 27 19:03:04 crc kubenswrapper[4853]: I0127 19:03:04.138631 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a2269207-0899-4310-8d69-bd1eed74fa7e-logs\") pod \"a2269207-0899-4310-8d69-bd1eed74fa7e\" (UID: \"a2269207-0899-4310-8d69-bd1eed74fa7e\") " Jan 27 19:03:04 crc kubenswrapper[4853]: I0127 19:03:04.138689 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a2269207-0899-4310-8d69-bd1eed74fa7e-combined-ca-bundle\") pod \"a2269207-0899-4310-8d69-bd1eed74fa7e\" (UID: \"a2269207-0899-4310-8d69-bd1eed74fa7e\") " Jan 27 19:03:04 crc kubenswrapper[4853]: I0127 19:03:04.139042 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a2269207-0899-4310-8d69-bd1eed74fa7e-config-data\") pod \"a2269207-0899-4310-8d69-bd1eed74fa7e\" (UID: \"a2269207-0899-4310-8d69-bd1eed74fa7e\") " Jan 27 19:03:04 crc kubenswrapper[4853]: I0127 19:03:04.139102 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a2269207-0899-4310-8d69-bd1eed74fa7e-logs" (OuterVolumeSpecName: "logs") pod "a2269207-0899-4310-8d69-bd1eed74fa7e" (UID: "a2269207-0899-4310-8d69-bd1eed74fa7e"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 19:03:04 crc kubenswrapper[4853]: I0127 19:03:04.139566 4853 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a2269207-0899-4310-8d69-bd1eed74fa7e-logs\") on node \"crc\" DevicePath \"\"" Jan 27 19:03:04 crc kubenswrapper[4853]: I0127 19:03:04.145585 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a2269207-0899-4310-8d69-bd1eed74fa7e-kube-api-access-bmpwn" (OuterVolumeSpecName: "kube-api-access-bmpwn") pod "a2269207-0899-4310-8d69-bd1eed74fa7e" (UID: "a2269207-0899-4310-8d69-bd1eed74fa7e"). InnerVolumeSpecName "kube-api-access-bmpwn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 19:03:04 crc kubenswrapper[4853]: I0127 19:03:04.172669 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a2269207-0899-4310-8d69-bd1eed74fa7e-config-data" (OuterVolumeSpecName: "config-data") pod "a2269207-0899-4310-8d69-bd1eed74fa7e" (UID: "a2269207-0899-4310-8d69-bd1eed74fa7e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 19:03:04 crc kubenswrapper[4853]: I0127 19:03:04.182752 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a2269207-0899-4310-8d69-bd1eed74fa7e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a2269207-0899-4310-8d69-bd1eed74fa7e" (UID: "a2269207-0899-4310-8d69-bd1eed74fa7e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 19:03:04 crc kubenswrapper[4853]: I0127 19:03:04.196842 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a2269207-0899-4310-8d69-bd1eed74fa7e-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "a2269207-0899-4310-8d69-bd1eed74fa7e" (UID: "a2269207-0899-4310-8d69-bd1eed74fa7e"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 19:03:04 crc kubenswrapper[4853]: I0127 19:03:04.197650 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a2269207-0899-4310-8d69-bd1eed74fa7e-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "a2269207-0899-4310-8d69-bd1eed74fa7e" (UID: "a2269207-0899-4310-8d69-bd1eed74fa7e"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 19:03:04 crc kubenswrapper[4853]: I0127 19:03:04.240475 4853 generic.go:334] "Generic (PLEG): container finished" podID="4c6df56d-109f-4ab4-bb18-35b70eb1beaf" containerID="d9fed5231318b4f36f209797f574f411cc1c0ef5b1c9b6a99fc1799e0e6a9320" exitCode=143 Jan 27 19:03:04 crc kubenswrapper[4853]: I0127 19:03:04.240588 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"4c6df56d-109f-4ab4-bb18-35b70eb1beaf","Type":"ContainerDied","Data":"d9fed5231318b4f36f209797f574f411cc1c0ef5b1c9b6a99fc1799e0e6a9320"} Jan 27 19:03:04 crc kubenswrapper[4853]: I0127 19:03:04.241934 4853 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a2269207-0899-4310-8d69-bd1eed74fa7e-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 27 19:03:04 crc kubenswrapper[4853]: I0127 19:03:04.241967 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bmpwn\" (UniqueName: \"kubernetes.io/projected/a2269207-0899-4310-8d69-bd1eed74fa7e-kube-api-access-bmpwn\") on node \"crc\" DevicePath \"\"" Jan 27 19:03:04 crc kubenswrapper[4853]: I0127 19:03:04.241983 4853 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a2269207-0899-4310-8d69-bd1eed74fa7e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 19:03:04 crc kubenswrapper[4853]: I0127 19:03:04.241998 4853 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a2269207-0899-4310-8d69-bd1eed74fa7e-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 19:03:04 crc kubenswrapper[4853]: I0127 19:03:04.242009 4853 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a2269207-0899-4310-8d69-bd1eed74fa7e-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 27 19:03:04 crc kubenswrapper[4853]: I0127 19:03:04.244568 4853 generic.go:334] "Generic (PLEG): container finished" podID="a2269207-0899-4310-8d69-bd1eed74fa7e" containerID="7d33bcefbe65df1271aff22e6a9a215128f2d1327424bcb3913d3ebc4e0221bf" exitCode=0 Jan 27 19:03:04 crc kubenswrapper[4853]: I0127 19:03:04.244610 4853 generic.go:334] "Generic (PLEG): container finished" podID="a2269207-0899-4310-8d69-bd1eed74fa7e" containerID="831c863d27eb49606be07c7dc9e38b06033ed38f6a353d492fcb868381614685" exitCode=143 Jan 27 19:03:04 crc kubenswrapper[4853]: I0127 19:03:04.244632 4853 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 27 19:03:04 crc kubenswrapper[4853]: I0127 19:03:04.244634 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"a2269207-0899-4310-8d69-bd1eed74fa7e","Type":"ContainerDied","Data":"7d33bcefbe65df1271aff22e6a9a215128f2d1327424bcb3913d3ebc4e0221bf"} Jan 27 19:03:04 crc kubenswrapper[4853]: I0127 19:03:04.244779 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"a2269207-0899-4310-8d69-bd1eed74fa7e","Type":"ContainerDied","Data":"831c863d27eb49606be07c7dc9e38b06033ed38f6a353d492fcb868381614685"} Jan 27 19:03:04 crc kubenswrapper[4853]: I0127 19:03:04.244797 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"a2269207-0899-4310-8d69-bd1eed74fa7e","Type":"ContainerDied","Data":"4df23d89e1a70755bc803118ea3219fe802ecd883ab4378add735b69446be44f"} Jan 27 19:03:04 crc kubenswrapper[4853]: I0127 19:03:04.244815 4853 scope.go:117] "RemoveContainer" containerID="7d33bcefbe65df1271aff22e6a9a215128f2d1327424bcb3913d3ebc4e0221bf" Jan 27 19:03:04 crc kubenswrapper[4853]: I0127 19:03:04.272296 4853 scope.go:117] "RemoveContainer" containerID="831c863d27eb49606be07c7dc9e38b06033ed38f6a353d492fcb868381614685" Jan 27 19:03:04 crc kubenswrapper[4853]: I0127 19:03:04.282498 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 27 19:03:04 crc kubenswrapper[4853]: I0127 19:03:04.308452 4853 scope.go:117] "RemoveContainer" containerID="7d33bcefbe65df1271aff22e6a9a215128f2d1327424bcb3913d3ebc4e0221bf" Jan 27 19:03:04 crc kubenswrapper[4853]: E0127 19:03:04.308911 4853 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7d33bcefbe65df1271aff22e6a9a215128f2d1327424bcb3913d3ebc4e0221bf\": container with ID starting with 7d33bcefbe65df1271aff22e6a9a215128f2d1327424bcb3913d3ebc4e0221bf not found: ID does not exist" containerID="7d33bcefbe65df1271aff22e6a9a215128f2d1327424bcb3913d3ebc4e0221bf" Jan 27 19:03:04 crc kubenswrapper[4853]: I0127 19:03:04.308942 4853 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7d33bcefbe65df1271aff22e6a9a215128f2d1327424bcb3913d3ebc4e0221bf"} err="failed to get container status \"7d33bcefbe65df1271aff22e6a9a215128f2d1327424bcb3913d3ebc4e0221bf\": rpc error: code = NotFound desc = could not find container \"7d33bcefbe65df1271aff22e6a9a215128f2d1327424bcb3913d3ebc4e0221bf\": container with ID starting with 7d33bcefbe65df1271aff22e6a9a215128f2d1327424bcb3913d3ebc4e0221bf not found: ID does not exist" Jan 27 19:03:04 crc kubenswrapper[4853]: I0127 19:03:04.308964 4853 scope.go:117] "RemoveContainer" containerID="831c863d27eb49606be07c7dc9e38b06033ed38f6a353d492fcb868381614685" Jan 27 19:03:04 crc kubenswrapper[4853]: E0127 19:03:04.309500 4853 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"831c863d27eb49606be07c7dc9e38b06033ed38f6a353d492fcb868381614685\": container with ID starting with 831c863d27eb49606be07c7dc9e38b06033ed38f6a353d492fcb868381614685 not found: ID does not exist" containerID="831c863d27eb49606be07c7dc9e38b06033ed38f6a353d492fcb868381614685" Jan 27 19:03:04 crc kubenswrapper[4853]: I0127 19:03:04.309535 4853 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"831c863d27eb49606be07c7dc9e38b06033ed38f6a353d492fcb868381614685"} 
err="failed to get container status \"831c863d27eb49606be07c7dc9e38b06033ed38f6a353d492fcb868381614685\": rpc error: code = NotFound desc = could not find container \"831c863d27eb49606be07c7dc9e38b06033ed38f6a353d492fcb868381614685\": container with ID starting with 831c863d27eb49606be07c7dc9e38b06033ed38f6a353d492fcb868381614685 not found: ID does not exist" Jan 27 19:03:04 crc kubenswrapper[4853]: I0127 19:03:04.309554 4853 scope.go:117] "RemoveContainer" containerID="7d33bcefbe65df1271aff22e6a9a215128f2d1327424bcb3913d3ebc4e0221bf" Jan 27 19:03:04 crc kubenswrapper[4853]: I0127 19:03:04.309850 4853 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7d33bcefbe65df1271aff22e6a9a215128f2d1327424bcb3913d3ebc4e0221bf"} err="failed to get container status \"7d33bcefbe65df1271aff22e6a9a215128f2d1327424bcb3913d3ebc4e0221bf\": rpc error: code = NotFound desc = could not find container \"7d33bcefbe65df1271aff22e6a9a215128f2d1327424bcb3913d3ebc4e0221bf\": container with ID starting with 7d33bcefbe65df1271aff22e6a9a215128f2d1327424bcb3913d3ebc4e0221bf not found: ID does not exist" Jan 27 19:03:04 crc kubenswrapper[4853]: I0127 19:03:04.309883 4853 scope.go:117] "RemoveContainer" containerID="831c863d27eb49606be07c7dc9e38b06033ed38f6a353d492fcb868381614685" Jan 27 19:03:04 crc kubenswrapper[4853]: I0127 19:03:04.309953 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Jan 27 19:03:04 crc kubenswrapper[4853]: I0127 19:03:04.310138 4853 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"831c863d27eb49606be07c7dc9e38b06033ed38f6a353d492fcb868381614685"} err="failed to get container status \"831c863d27eb49606be07c7dc9e38b06033ed38f6a353d492fcb868381614685\": rpc error: code = NotFound desc = could not find container \"831c863d27eb49606be07c7dc9e38b06033ed38f6a353d492fcb868381614685\": container with ID starting with 831c863d27eb49606be07c7dc9e38b06033ed38f6a353d492fcb868381614685 not found: ID does not exist" Jan 27 19:03:04 crc kubenswrapper[4853]: I0127 19:03:04.321092 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 27 19:03:04 crc kubenswrapper[4853]: E0127 19:03:04.321537 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8dc1a2d0-3e4a-4d4e-988e-96762f754b6a" containerName="nova-manage" Jan 27 19:03:04 crc kubenswrapper[4853]: I0127 19:03:04.321559 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="8dc1a2d0-3e4a-4d4e-988e-96762f754b6a" containerName="nova-manage" Jan 27 19:03:04 crc kubenswrapper[4853]: E0127 19:03:04.321572 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="38b68e30-68ad-4dce-befc-98fd9c6aa1b6" containerName="init" Jan 27 19:03:04 crc kubenswrapper[4853]: I0127 19:03:04.321578 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="38b68e30-68ad-4dce-befc-98fd9c6aa1b6" containerName="init" Jan 27 19:03:04 crc kubenswrapper[4853]: E0127 19:03:04.321596 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="38b68e30-68ad-4dce-befc-98fd9c6aa1b6" containerName="dnsmasq-dns" Jan 27 19:03:04 crc kubenswrapper[4853]: I0127 19:03:04.321602 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="38b68e30-68ad-4dce-befc-98fd9c6aa1b6" containerName="dnsmasq-dns" Jan 27 19:03:04 crc kubenswrapper[4853]: E0127 19:03:04.321622 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a2269207-0899-4310-8d69-bd1eed74fa7e" containerName="nova-api-api" Jan 
Jan 27 19:03:04 crc kubenswrapper[4853]: I0127 19:03:04.321628 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="a2269207-0899-4310-8d69-bd1eed74fa7e" containerName="nova-api-api"
Jan 27 19:03:04 crc kubenswrapper[4853]: E0127 19:03:04.321644 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a2269207-0899-4310-8d69-bd1eed74fa7e" containerName="nova-api-log"
Jan 27 19:03:04 crc kubenswrapper[4853]: I0127 19:03:04.321652 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="a2269207-0899-4310-8d69-bd1eed74fa7e" containerName="nova-api-log"
Jan 27 19:03:04 crc kubenswrapper[4853]: I0127 19:03:04.321847 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="a2269207-0899-4310-8d69-bd1eed74fa7e" containerName="nova-api-log"
Jan 27 19:03:04 crc kubenswrapper[4853]: I0127 19:03:04.321870 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="38b68e30-68ad-4dce-befc-98fd9c6aa1b6" containerName="dnsmasq-dns"
Jan 27 19:03:04 crc kubenswrapper[4853]: I0127 19:03:04.321883 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="a2269207-0899-4310-8d69-bd1eed74fa7e" containerName="nova-api-api"
Jan 27 19:03:04 crc kubenswrapper[4853]: I0127 19:03:04.321891 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="8dc1a2d0-3e4a-4d4e-988e-96762f754b6a" containerName="nova-manage"
Jan 27 19:03:04 crc kubenswrapper[4853]: I0127 19:03:04.323135 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Jan 27 19:03:04 crc kubenswrapper[4853]: I0127 19:03:04.326846 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc"
Jan 27 19:03:04 crc kubenswrapper[4853]: I0127 19:03:04.326920 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data"
Jan 27 19:03:04 crc kubenswrapper[4853]: I0127 19:03:04.327057 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc"
Jan 27 19:03:04 crc kubenswrapper[4853]: I0127 19:03:04.346593 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Jan 27 19:03:04 crc kubenswrapper[4853]: I0127 19:03:04.444816 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4q8s7\" (UniqueName: \"kubernetes.io/projected/a649598f-69be-4de2-9a79-b5581f1fc8f9-kube-api-access-4q8s7\") pod \"nova-api-0\" (UID: \"a649598f-69be-4de2-9a79-b5581f1fc8f9\") " pod="openstack/nova-api-0"
Jan 27 19:03:04 crc kubenswrapper[4853]: I0127 19:03:04.444887 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a649598f-69be-4de2-9a79-b5581f1fc8f9-internal-tls-certs\") pod \"nova-api-0\" (UID: \"a649598f-69be-4de2-9a79-b5581f1fc8f9\") " pod="openstack/nova-api-0"
Jan 27 19:03:04 crc kubenswrapper[4853]: I0127 19:03:04.445150 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a649598f-69be-4de2-9a79-b5581f1fc8f9-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"a649598f-69be-4de2-9a79-b5581f1fc8f9\") " pod="openstack/nova-api-0"
Jan 27 19:03:04 crc kubenswrapper[4853]: I0127 19:03:04.445324 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a649598f-69be-4de2-9a79-b5581f1fc8f9-logs\") pod \"nova-api-0\" (UID: \"a649598f-69be-4de2-9a79-b5581f1fc8f9\") " pod="openstack/nova-api-0"
Jan 27 19:03:04 crc kubenswrapper[4853]: I0127 19:03:04.445403 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a649598f-69be-4de2-9a79-b5581f1fc8f9-config-data\") pod \"nova-api-0\" (UID: \"a649598f-69be-4de2-9a79-b5581f1fc8f9\") " pod="openstack/nova-api-0"
Jan 27 19:03:04 crc kubenswrapper[4853]: I0127 19:03:04.445524 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a649598f-69be-4de2-9a79-b5581f1fc8f9-public-tls-certs\") pod \"nova-api-0\" (UID: \"a649598f-69be-4de2-9a79-b5581f1fc8f9\") " pod="openstack/nova-api-0"
Jan 27 19:03:04 crc kubenswrapper[4853]: I0127 19:03:04.547409 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a649598f-69be-4de2-9a79-b5581f1fc8f9-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"a649598f-69be-4de2-9a79-b5581f1fc8f9\") " pod="openstack/nova-api-0"
Jan 27 19:03:04 crc kubenswrapper[4853]: I0127 19:03:04.547501 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a649598f-69be-4de2-9a79-b5581f1fc8f9-logs\") pod \"nova-api-0\" (UID: \"a649598f-69be-4de2-9a79-b5581f1fc8f9\") " pod="openstack/nova-api-0"
Jan 27 19:03:04 crc kubenswrapper[4853]: I0127 19:03:04.547544 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a649598f-69be-4de2-9a79-b5581f1fc8f9-config-data\") pod \"nova-api-0\" (UID: \"a649598f-69be-4de2-9a79-b5581f1fc8f9\") " pod="openstack/nova-api-0"
Jan 27 19:03:04 crc kubenswrapper[4853]: I0127 19:03:04.547586 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a649598f-69be-4de2-9a79-b5581f1fc8f9-public-tls-certs\") pod \"nova-api-0\" (UID: \"a649598f-69be-4de2-9a79-b5581f1fc8f9\") " pod="openstack/nova-api-0"
Jan 27 19:03:04 crc kubenswrapper[4853]: I0127 19:03:04.547666 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4q8s7\" (UniqueName: \"kubernetes.io/projected/a649598f-69be-4de2-9a79-b5581f1fc8f9-kube-api-access-4q8s7\") pod \"nova-api-0\" (UID: \"a649598f-69be-4de2-9a79-b5581f1fc8f9\") " pod="openstack/nova-api-0"
Jan 27 19:03:04 crc kubenswrapper[4853]: I0127 19:03:04.547858 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a649598f-69be-4de2-9a79-b5581f1fc8f9-internal-tls-certs\") pod \"nova-api-0\" (UID: \"a649598f-69be-4de2-9a79-b5581f1fc8f9\") " pod="openstack/nova-api-0"
Jan 27 19:03:04 crc kubenswrapper[4853]: I0127 19:03:04.548199 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a649598f-69be-4de2-9a79-b5581f1fc8f9-logs\") pod \"nova-api-0\" (UID: \"a649598f-69be-4de2-9a79-b5581f1fc8f9\") " pod="openstack/nova-api-0"
Jan 27 19:03:04 crc kubenswrapper[4853]: I0127 19:03:04.551094 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a649598f-69be-4de2-9a79-b5581f1fc8f9-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"a649598f-69be-4de2-9a79-b5581f1fc8f9\") " pod="openstack/nova-api-0"
Jan 27 19:03:04 crc kubenswrapper[4853]: I0127 19:03:04.551441 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a649598f-69be-4de2-9a79-b5581f1fc8f9-internal-tls-certs\") pod \"nova-api-0\" (UID: \"a649598f-69be-4de2-9a79-b5581f1fc8f9\") " pod="openstack/nova-api-0"
Jan 27 19:03:04 crc kubenswrapper[4853]: I0127 19:03:04.551571 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a649598f-69be-4de2-9a79-b5581f1fc8f9-public-tls-certs\") pod \"nova-api-0\" (UID: \"a649598f-69be-4de2-9a79-b5581f1fc8f9\") " pod="openstack/nova-api-0"
Jan 27 19:03:04 crc kubenswrapper[4853]: I0127 19:03:04.552796 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a649598f-69be-4de2-9a79-b5581f1fc8f9-config-data\") pod \"nova-api-0\" (UID: \"a649598f-69be-4de2-9a79-b5581f1fc8f9\") " pod="openstack/nova-api-0"
Jan 27 19:03:04 crc kubenswrapper[4853]: I0127 19:03:04.565133 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4q8s7\" (UniqueName: \"kubernetes.io/projected/a649598f-69be-4de2-9a79-b5581f1fc8f9-kube-api-access-4q8s7\") pod \"nova-api-0\" (UID: \"a649598f-69be-4de2-9a79-b5581f1fc8f9\") " pod="openstack/nova-api-0"
Jan 27 19:03:04 crc kubenswrapper[4853]: I0127 19:03:04.652361 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Jan 27 19:03:05 crc kubenswrapper[4853]: I0127 19:03:05.105503 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Jan 27 19:03:05 crc kubenswrapper[4853]: W0127 19:03:05.121310 4853 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda649598f_69be_4de2_9a79_b5581f1fc8f9.slice/crio-47830da3fa21d83476df458f4e69266fb30e67817878297cde391c72a548f2fd WatchSource:0}: Error finding container 47830da3fa21d83476df458f4e69266fb30e67817878297cde391c72a548f2fd: Status 404 returned error can't find the container with id 47830da3fa21d83476df458f4e69266fb30e67817878297cde391c72a548f2fd
Jan 27 19:03:05 crc kubenswrapper[4853]: I0127 19:03:05.263775 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"a649598f-69be-4de2-9a79-b5581f1fc8f9","Type":"ContainerStarted","Data":"47830da3fa21d83476df458f4e69266fb30e67817878297cde391c72a548f2fd"}
Jan 27 19:03:05 crc kubenswrapper[4853]: I0127 19:03:05.277544 4853 generic.go:334] "Generic (PLEG): container finished" podID="f1a81537-2e83-406a-9194-7a6362a7e874" containerID="b1d92aa89f2e6fa8aa143e761119c0f4688d2e4e53cd08fca0796483700f42dc" exitCode=0
Jan 27 19:03:05 crc kubenswrapper[4853]: I0127 19:03:05.277598 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"f1a81537-2e83-406a-9194-7a6362a7e874","Type":"ContainerDied","Data":"b1d92aa89f2e6fa8aa143e761119c0f4688d2e4e53cd08fca0796483700f42dc"}
Jan 27 19:03:05 crc kubenswrapper[4853]: I0127 19:03:05.496890 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 27 19:03:05 crc kubenswrapper[4853]: I0127 19:03:05.572008 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f1a81537-2e83-406a-9194-7a6362a7e874-combined-ca-bundle\") pod \"f1a81537-2e83-406a-9194-7a6362a7e874\" (UID: \"f1a81537-2e83-406a-9194-7a6362a7e874\") " Jan 27 19:03:05 crc kubenswrapper[4853]: I0127 19:03:05.572059 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f1a81537-2e83-406a-9194-7a6362a7e874-config-data\") pod \"f1a81537-2e83-406a-9194-7a6362a7e874\" (UID: \"f1a81537-2e83-406a-9194-7a6362a7e874\") " Jan 27 19:03:05 crc kubenswrapper[4853]: I0127 19:03:05.572210 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xlxp5\" (UniqueName: \"kubernetes.io/projected/f1a81537-2e83-406a-9194-7a6362a7e874-kube-api-access-xlxp5\") pod \"f1a81537-2e83-406a-9194-7a6362a7e874\" (UID: \"f1a81537-2e83-406a-9194-7a6362a7e874\") " Jan 27 19:03:05 crc kubenswrapper[4853]: I0127 19:03:05.576094 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f1a81537-2e83-406a-9194-7a6362a7e874-kube-api-access-xlxp5" (OuterVolumeSpecName: "kube-api-access-xlxp5") pod "f1a81537-2e83-406a-9194-7a6362a7e874" (UID: "f1a81537-2e83-406a-9194-7a6362a7e874"). InnerVolumeSpecName "kube-api-access-xlxp5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 19:03:05 crc kubenswrapper[4853]: I0127 19:03:05.603993 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f1a81537-2e83-406a-9194-7a6362a7e874-config-data" (OuterVolumeSpecName: "config-data") pod "f1a81537-2e83-406a-9194-7a6362a7e874" (UID: "f1a81537-2e83-406a-9194-7a6362a7e874"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 19:03:05 crc kubenswrapper[4853]: I0127 19:03:05.605039 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f1a81537-2e83-406a-9194-7a6362a7e874-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f1a81537-2e83-406a-9194-7a6362a7e874" (UID: "f1a81537-2e83-406a-9194-7a6362a7e874"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 19:03:05 crc kubenswrapper[4853]: I0127 19:03:05.674136 4853 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f1a81537-2e83-406a-9194-7a6362a7e874-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 19:03:05 crc kubenswrapper[4853]: I0127 19:03:05.674166 4853 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f1a81537-2e83-406a-9194-7a6362a7e874-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 19:03:05 crc kubenswrapper[4853]: I0127 19:03:05.674177 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xlxp5\" (UniqueName: \"kubernetes.io/projected/f1a81537-2e83-406a-9194-7a6362a7e874-kube-api-access-xlxp5\") on node \"crc\" DevicePath \"\"" Jan 27 19:03:06 crc kubenswrapper[4853]: I0127 19:03:06.124598 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a2269207-0899-4310-8d69-bd1eed74fa7e" path="/var/lib/kubelet/pods/a2269207-0899-4310-8d69-bd1eed74fa7e/volumes" Jan 27 19:03:06 crc kubenswrapper[4853]: I0127 19:03:06.287706 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 27 19:03:06 crc kubenswrapper[4853]: I0127 19:03:06.287698 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"f1a81537-2e83-406a-9194-7a6362a7e874","Type":"ContainerDied","Data":"3062efe491f175e5b1b2e0419d827e5066d96341200e1eaab57c98cad4fba26c"} Jan 27 19:03:06 crc kubenswrapper[4853]: I0127 19:03:06.287870 4853 scope.go:117] "RemoveContainer" containerID="b1d92aa89f2e6fa8aa143e761119c0f4688d2e4e53cd08fca0796483700f42dc" Jan 27 19:03:06 crc kubenswrapper[4853]: I0127 19:03:06.290467 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"a649598f-69be-4de2-9a79-b5581f1fc8f9","Type":"ContainerStarted","Data":"972da4d8b9c2370ec9e58a258ce65dd4cae3b5a1f19a598fe4bc6ed50edfd5ee"} Jan 27 19:03:06 crc kubenswrapper[4853]: I0127 19:03:06.290517 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"a649598f-69be-4de2-9a79-b5581f1fc8f9","Type":"ContainerStarted","Data":"f497b46116476c53ad13f8a66310e05af08b69f213ae4d16e7bd9a6a69b4e22b"} Jan 27 19:03:06 crc kubenswrapper[4853]: I0127 19:03:06.315681 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.315662412 podStartE2EDuration="2.315662412s" podCreationTimestamp="2026-01-27 19:03:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 19:03:06.312017247 +0000 UTC m=+1228.774560130" watchObservedRunningTime="2026-01-27 19:03:06.315662412 +0000 UTC m=+1228.778205295" Jan 27 19:03:06 crc kubenswrapper[4853]: I0127 19:03:06.339613 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 27 19:03:06 crc kubenswrapper[4853]: I0127 19:03:06.356757 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Jan 27 19:03:06 crc kubenswrapper[4853]: I0127 19:03:06.377367 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Jan 27 19:03:06 crc kubenswrapper[4853]: E0127 19:03:06.377837 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f1a81537-2e83-406a-9194-7a6362a7e874" 
containerName="nova-scheduler-scheduler" Jan 27 19:03:06 crc kubenswrapper[4853]: I0127 19:03:06.377861 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="f1a81537-2e83-406a-9194-7a6362a7e874" containerName="nova-scheduler-scheduler" Jan 27 19:03:06 crc kubenswrapper[4853]: I0127 19:03:06.378088 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="f1a81537-2e83-406a-9194-7a6362a7e874" containerName="nova-scheduler-scheduler" Jan 27 19:03:06 crc kubenswrapper[4853]: I0127 19:03:06.378844 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 27 19:03:06 crc kubenswrapper[4853]: I0127 19:03:06.380714 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Jan 27 19:03:06 crc kubenswrapper[4853]: I0127 19:03:06.393433 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 27 19:03:06 crc kubenswrapper[4853]: I0127 19:03:06.489949 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bpv8z\" (UniqueName: \"kubernetes.io/projected/28787444-e1bd-43c7-a22c-f3ce3678986d-kube-api-access-bpv8z\") pod \"nova-scheduler-0\" (UID: \"28787444-e1bd-43c7-a22c-f3ce3678986d\") " pod="openstack/nova-scheduler-0" Jan 27 19:03:06 crc kubenswrapper[4853]: I0127 19:03:06.490322 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/28787444-e1bd-43c7-a22c-f3ce3678986d-config-data\") pod \"nova-scheduler-0\" (UID: \"28787444-e1bd-43c7-a22c-f3ce3678986d\") " pod="openstack/nova-scheduler-0" Jan 27 19:03:06 crc kubenswrapper[4853]: I0127 19:03:06.490402 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/28787444-e1bd-43c7-a22c-f3ce3678986d-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"28787444-e1bd-43c7-a22c-f3ce3678986d\") " pod="openstack/nova-scheduler-0" Jan 27 19:03:06 crc kubenswrapper[4853]: I0127 19:03:06.591556 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bpv8z\" (UniqueName: \"kubernetes.io/projected/28787444-e1bd-43c7-a22c-f3ce3678986d-kube-api-access-bpv8z\") pod \"nova-scheduler-0\" (UID: \"28787444-e1bd-43c7-a22c-f3ce3678986d\") " pod="openstack/nova-scheduler-0" Jan 27 19:03:06 crc kubenswrapper[4853]: I0127 19:03:06.591648 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/28787444-e1bd-43c7-a22c-f3ce3678986d-config-data\") pod \"nova-scheduler-0\" (UID: \"28787444-e1bd-43c7-a22c-f3ce3678986d\") " pod="openstack/nova-scheduler-0" Jan 27 19:03:06 crc kubenswrapper[4853]: I0127 19:03:06.591685 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/28787444-e1bd-43c7-a22c-f3ce3678986d-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"28787444-e1bd-43c7-a22c-f3ce3678986d\") " pod="openstack/nova-scheduler-0" Jan 27 19:03:06 crc kubenswrapper[4853]: I0127 19:03:06.597735 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/28787444-e1bd-43c7-a22c-f3ce3678986d-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"28787444-e1bd-43c7-a22c-f3ce3678986d\") " 
pod="openstack/nova-scheduler-0" Jan 27 19:03:06 crc kubenswrapper[4853]: I0127 19:03:06.598364 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/28787444-e1bd-43c7-a22c-f3ce3678986d-config-data\") pod \"nova-scheduler-0\" (UID: \"28787444-e1bd-43c7-a22c-f3ce3678986d\") " pod="openstack/nova-scheduler-0" Jan 27 19:03:06 crc kubenswrapper[4853]: I0127 19:03:06.618991 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bpv8z\" (UniqueName: \"kubernetes.io/projected/28787444-e1bd-43c7-a22c-f3ce3678986d-kube-api-access-bpv8z\") pod \"nova-scheduler-0\" (UID: \"28787444-e1bd-43c7-a22c-f3ce3678986d\") " pod="openstack/nova-scheduler-0" Jan 27 19:03:06 crc kubenswrapper[4853]: I0127 19:03:06.663223 4853 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="4c6df56d-109f-4ab4-bb18-35b70eb1beaf" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.198:8775/\": read tcp 10.217.0.2:48374->10.217.0.198:8775: read: connection reset by peer" Jan 27 19:03:06 crc kubenswrapper[4853]: I0127 19:03:06.663256 4853 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="4c6df56d-109f-4ab4-bb18-35b70eb1beaf" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.198:8775/\": read tcp 10.217.0.2:48386->10.217.0.198:8775: read: connection reset by peer" Jan 27 19:03:06 crc kubenswrapper[4853]: I0127 19:03:06.697541 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 27 19:03:07 crc kubenswrapper[4853]: I0127 19:03:07.135757 4853 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 27 19:03:07 crc kubenswrapper[4853]: I0127 19:03:07.204546 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/4c6df56d-109f-4ab4-bb18-35b70eb1beaf-nova-metadata-tls-certs\") pod \"4c6df56d-109f-4ab4-bb18-35b70eb1beaf\" (UID: \"4c6df56d-109f-4ab4-bb18-35b70eb1beaf\") " Jan 27 19:03:07 crc kubenswrapper[4853]: I0127 19:03:07.204959 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4c6df56d-109f-4ab4-bb18-35b70eb1beaf-combined-ca-bundle\") pod \"4c6df56d-109f-4ab4-bb18-35b70eb1beaf\" (UID: \"4c6df56d-109f-4ab4-bb18-35b70eb1beaf\") " Jan 27 19:03:07 crc kubenswrapper[4853]: I0127 19:03:07.205330 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4c6df56d-109f-4ab4-bb18-35b70eb1beaf-logs\") pod \"4c6df56d-109f-4ab4-bb18-35b70eb1beaf\" (UID: \"4c6df56d-109f-4ab4-bb18-35b70eb1beaf\") " Jan 27 19:03:07 crc kubenswrapper[4853]: I0127 19:03:07.205394 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4c6df56d-109f-4ab4-bb18-35b70eb1beaf-config-data\") pod \"4c6df56d-109f-4ab4-bb18-35b70eb1beaf\" (UID: \"4c6df56d-109f-4ab4-bb18-35b70eb1beaf\") " Jan 27 19:03:07 crc kubenswrapper[4853]: I0127 19:03:07.205418 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9k25m\" (UniqueName: \"kubernetes.io/projected/4c6df56d-109f-4ab4-bb18-35b70eb1beaf-kube-api-access-9k25m\") pod \"4c6df56d-109f-4ab4-bb18-35b70eb1beaf\" (UID: \"4c6df56d-109f-4ab4-bb18-35b70eb1beaf\") " Jan 27 19:03:07 crc kubenswrapper[4853]: I0127 19:03:07.206361 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4c6df56d-109f-4ab4-bb18-35b70eb1beaf-logs" (OuterVolumeSpecName: "logs") pod "4c6df56d-109f-4ab4-bb18-35b70eb1beaf" (UID: "4c6df56d-109f-4ab4-bb18-35b70eb1beaf"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 19:03:07 crc kubenswrapper[4853]: I0127 19:03:07.212194 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4c6df56d-109f-4ab4-bb18-35b70eb1beaf-kube-api-access-9k25m" (OuterVolumeSpecName: "kube-api-access-9k25m") pod "4c6df56d-109f-4ab4-bb18-35b70eb1beaf" (UID: "4c6df56d-109f-4ab4-bb18-35b70eb1beaf"). InnerVolumeSpecName "kube-api-access-9k25m". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 19:03:07 crc kubenswrapper[4853]: I0127 19:03:07.243106 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4c6df56d-109f-4ab4-bb18-35b70eb1beaf-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4c6df56d-109f-4ab4-bb18-35b70eb1beaf" (UID: "4c6df56d-109f-4ab4-bb18-35b70eb1beaf"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 19:03:07 crc kubenswrapper[4853]: I0127 19:03:07.246648 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 27 19:03:07 crc kubenswrapper[4853]: I0127 19:03:07.249613 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4c6df56d-109f-4ab4-bb18-35b70eb1beaf-config-data" (OuterVolumeSpecName: "config-data") pod "4c6df56d-109f-4ab4-bb18-35b70eb1beaf" (UID: "4c6df56d-109f-4ab4-bb18-35b70eb1beaf"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 19:03:07 crc kubenswrapper[4853]: I0127 19:03:07.283988 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4c6df56d-109f-4ab4-bb18-35b70eb1beaf-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "4c6df56d-109f-4ab4-bb18-35b70eb1beaf" (UID: "4c6df56d-109f-4ab4-bb18-35b70eb1beaf"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 19:03:07 crc kubenswrapper[4853]: I0127 19:03:07.302195 4853 generic.go:334] "Generic (PLEG): container finished" podID="4c6df56d-109f-4ab4-bb18-35b70eb1beaf" containerID="07b1c380d9184b8391fbe9bb04c29907aa3c6e2610c6671cecbb76b723875741" exitCode=0 Jan 27 19:03:07 crc kubenswrapper[4853]: I0127 19:03:07.302259 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"4c6df56d-109f-4ab4-bb18-35b70eb1beaf","Type":"ContainerDied","Data":"07b1c380d9184b8391fbe9bb04c29907aa3c6e2610c6671cecbb76b723875741"} Jan 27 19:03:07 crc kubenswrapper[4853]: I0127 19:03:07.302301 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 27 19:03:07 crc kubenswrapper[4853]: I0127 19:03:07.302323 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"4c6df56d-109f-4ab4-bb18-35b70eb1beaf","Type":"ContainerDied","Data":"f95d7b7c09270bc7401455ed4ae2d9ebdbce97c43ce40475be911e0072a7d600"} Jan 27 19:03:07 crc kubenswrapper[4853]: I0127 19:03:07.302344 4853 scope.go:117] "RemoveContainer" containerID="07b1c380d9184b8391fbe9bb04c29907aa3c6e2610c6671cecbb76b723875741" Jan 27 19:03:07 crc kubenswrapper[4853]: I0127 19:03:07.303470 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"28787444-e1bd-43c7-a22c-f3ce3678986d","Type":"ContainerStarted","Data":"409db5094bbd71449254c3e739d22e4981dd7a08def097f13610c514441250c0"} Jan 27 19:03:07 crc kubenswrapper[4853]: I0127 19:03:07.307279 4853 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4c6df56d-109f-4ab4-bb18-35b70eb1beaf-logs\") on node \"crc\" DevicePath \"\"" Jan 27 19:03:07 crc kubenswrapper[4853]: I0127 19:03:07.307305 4853 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4c6df56d-109f-4ab4-bb18-35b70eb1beaf-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 19:03:07 crc kubenswrapper[4853]: I0127 19:03:07.307316 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9k25m\" (UniqueName: \"kubernetes.io/projected/4c6df56d-109f-4ab4-bb18-35b70eb1beaf-kube-api-access-9k25m\") on node \"crc\" DevicePath \"\"" Jan 27 19:03:07 crc kubenswrapper[4853]: I0127 19:03:07.307325 4853 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" 
(UniqueName: \"kubernetes.io/secret/4c6df56d-109f-4ab4-bb18-35b70eb1beaf-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 27 19:03:07 crc kubenswrapper[4853]: I0127 19:03:07.307334 4853 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4c6df56d-109f-4ab4-bb18-35b70eb1beaf-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 19:03:07 crc kubenswrapper[4853]: I0127 19:03:07.350108 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 27 19:03:07 crc kubenswrapper[4853]: I0127 19:03:07.355090 4853 scope.go:117] "RemoveContainer" containerID="d9fed5231318b4f36f209797f574f411cc1c0ef5b1c9b6a99fc1799e0e6a9320" Jan 27 19:03:07 crc kubenswrapper[4853]: I0127 19:03:07.361542 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Jan 27 19:03:07 crc kubenswrapper[4853]: I0127 19:03:07.372783 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 27 19:03:07 crc kubenswrapper[4853]: E0127 19:03:07.373420 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4c6df56d-109f-4ab4-bb18-35b70eb1beaf" containerName="nova-metadata-metadata" Jan 27 19:03:07 crc kubenswrapper[4853]: I0127 19:03:07.373444 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="4c6df56d-109f-4ab4-bb18-35b70eb1beaf" containerName="nova-metadata-metadata" Jan 27 19:03:07 crc kubenswrapper[4853]: E0127 19:03:07.373488 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4c6df56d-109f-4ab4-bb18-35b70eb1beaf" containerName="nova-metadata-log" Jan 27 19:03:07 crc kubenswrapper[4853]: I0127 19:03:07.373496 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="4c6df56d-109f-4ab4-bb18-35b70eb1beaf" containerName="nova-metadata-log" Jan 27 19:03:07 crc kubenswrapper[4853]: I0127 19:03:07.373936 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="4c6df56d-109f-4ab4-bb18-35b70eb1beaf" containerName="nova-metadata-log" Jan 27 19:03:07 crc kubenswrapper[4853]: I0127 19:03:07.373952 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="4c6df56d-109f-4ab4-bb18-35b70eb1beaf" containerName="nova-metadata-metadata" Jan 27 19:03:07 crc kubenswrapper[4853]: I0127 19:03:07.375150 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 27 19:03:07 crc kubenswrapper[4853]: I0127 19:03:07.378745 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 27 19:03:07 crc kubenswrapper[4853]: I0127 19:03:07.378797 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Jan 27 19:03:07 crc kubenswrapper[4853]: I0127 19:03:07.400717 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 27 19:03:07 crc kubenswrapper[4853]: I0127 19:03:07.408726 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-thlt2\" (UniqueName: \"kubernetes.io/projected/1fb249c2-c72b-4f50-bee6-8d461fc5b613-kube-api-access-thlt2\") pod \"nova-metadata-0\" (UID: \"1fb249c2-c72b-4f50-bee6-8d461fc5b613\") " pod="openstack/nova-metadata-0" Jan 27 19:03:07 crc kubenswrapper[4853]: I0127 19:03:07.408874 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/1fb249c2-c72b-4f50-bee6-8d461fc5b613-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"1fb249c2-c72b-4f50-bee6-8d461fc5b613\") " pod="openstack/nova-metadata-0" Jan 27 19:03:07 crc kubenswrapper[4853]: I0127 19:03:07.408925 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1fb249c2-c72b-4f50-bee6-8d461fc5b613-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"1fb249c2-c72b-4f50-bee6-8d461fc5b613\") " pod="openstack/nova-metadata-0" Jan 27 19:03:07 crc kubenswrapper[4853]: I0127 19:03:07.408962 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1fb249c2-c72b-4f50-bee6-8d461fc5b613-config-data\") pod \"nova-metadata-0\" (UID: \"1fb249c2-c72b-4f50-bee6-8d461fc5b613\") " pod="openstack/nova-metadata-0" Jan 27 19:03:07 crc kubenswrapper[4853]: I0127 19:03:07.409003 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1fb249c2-c72b-4f50-bee6-8d461fc5b613-logs\") pod \"nova-metadata-0\" (UID: \"1fb249c2-c72b-4f50-bee6-8d461fc5b613\") " pod="openstack/nova-metadata-0" Jan 27 19:03:07 crc kubenswrapper[4853]: I0127 19:03:07.413245 4853 scope.go:117] "RemoveContainer" containerID="07b1c380d9184b8391fbe9bb04c29907aa3c6e2610c6671cecbb76b723875741" Jan 27 19:03:07 crc kubenswrapper[4853]: E0127 19:03:07.413757 4853 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"07b1c380d9184b8391fbe9bb04c29907aa3c6e2610c6671cecbb76b723875741\": container with ID starting with 07b1c380d9184b8391fbe9bb04c29907aa3c6e2610c6671cecbb76b723875741 not found: ID does not exist" containerID="07b1c380d9184b8391fbe9bb04c29907aa3c6e2610c6671cecbb76b723875741" Jan 27 19:03:07 crc kubenswrapper[4853]: I0127 19:03:07.413800 4853 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"07b1c380d9184b8391fbe9bb04c29907aa3c6e2610c6671cecbb76b723875741"} err="failed to get container status \"07b1c380d9184b8391fbe9bb04c29907aa3c6e2610c6671cecbb76b723875741\": rpc error: code = NotFound desc = could not find container 
\"07b1c380d9184b8391fbe9bb04c29907aa3c6e2610c6671cecbb76b723875741\": container with ID starting with 07b1c380d9184b8391fbe9bb04c29907aa3c6e2610c6671cecbb76b723875741 not found: ID does not exist" Jan 27 19:03:07 crc kubenswrapper[4853]: I0127 19:03:07.413832 4853 scope.go:117] "RemoveContainer" containerID="d9fed5231318b4f36f209797f574f411cc1c0ef5b1c9b6a99fc1799e0e6a9320" Jan 27 19:03:07 crc kubenswrapper[4853]: E0127 19:03:07.414281 4853 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d9fed5231318b4f36f209797f574f411cc1c0ef5b1c9b6a99fc1799e0e6a9320\": container with ID starting with d9fed5231318b4f36f209797f574f411cc1c0ef5b1c9b6a99fc1799e0e6a9320 not found: ID does not exist" containerID="d9fed5231318b4f36f209797f574f411cc1c0ef5b1c9b6a99fc1799e0e6a9320" Jan 27 19:03:07 crc kubenswrapper[4853]: I0127 19:03:07.414309 4853 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d9fed5231318b4f36f209797f574f411cc1c0ef5b1c9b6a99fc1799e0e6a9320"} err="failed to get container status \"d9fed5231318b4f36f209797f574f411cc1c0ef5b1c9b6a99fc1799e0e6a9320\": rpc error: code = NotFound desc = could not find container \"d9fed5231318b4f36f209797f574f411cc1c0ef5b1c9b6a99fc1799e0e6a9320\": container with ID starting with d9fed5231318b4f36f209797f574f411cc1c0ef5b1c9b6a99fc1799e0e6a9320 not found: ID does not exist" Jan 27 19:03:07 crc kubenswrapper[4853]: I0127 19:03:07.510499 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1fb249c2-c72b-4f50-bee6-8d461fc5b613-config-data\") pod \"nova-metadata-0\" (UID: \"1fb249c2-c72b-4f50-bee6-8d461fc5b613\") " pod="openstack/nova-metadata-0" Jan 27 19:03:07 crc kubenswrapper[4853]: I0127 19:03:07.510565 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1fb249c2-c72b-4f50-bee6-8d461fc5b613-logs\") pod \"nova-metadata-0\" (UID: \"1fb249c2-c72b-4f50-bee6-8d461fc5b613\") " pod="openstack/nova-metadata-0" Jan 27 19:03:07 crc kubenswrapper[4853]: I0127 19:03:07.510624 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-thlt2\" (UniqueName: \"kubernetes.io/projected/1fb249c2-c72b-4f50-bee6-8d461fc5b613-kube-api-access-thlt2\") pod \"nova-metadata-0\" (UID: \"1fb249c2-c72b-4f50-bee6-8d461fc5b613\") " pod="openstack/nova-metadata-0" Jan 27 19:03:07 crc kubenswrapper[4853]: I0127 19:03:07.510712 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/1fb249c2-c72b-4f50-bee6-8d461fc5b613-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"1fb249c2-c72b-4f50-bee6-8d461fc5b613\") " pod="openstack/nova-metadata-0" Jan 27 19:03:07 crc kubenswrapper[4853]: I0127 19:03:07.510754 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1fb249c2-c72b-4f50-bee6-8d461fc5b613-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"1fb249c2-c72b-4f50-bee6-8d461fc5b613\") " pod="openstack/nova-metadata-0" Jan 27 19:03:07 crc kubenswrapper[4853]: I0127 19:03:07.511157 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1fb249c2-c72b-4f50-bee6-8d461fc5b613-logs\") pod \"nova-metadata-0\" (UID: \"1fb249c2-c72b-4f50-bee6-8d461fc5b613\") 
" pod="openstack/nova-metadata-0" Jan 27 19:03:07 crc kubenswrapper[4853]: I0127 19:03:07.513550 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/1fb249c2-c72b-4f50-bee6-8d461fc5b613-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"1fb249c2-c72b-4f50-bee6-8d461fc5b613\") " pod="openstack/nova-metadata-0" Jan 27 19:03:07 crc kubenswrapper[4853]: I0127 19:03:07.514337 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1fb249c2-c72b-4f50-bee6-8d461fc5b613-config-data\") pod \"nova-metadata-0\" (UID: \"1fb249c2-c72b-4f50-bee6-8d461fc5b613\") " pod="openstack/nova-metadata-0" Jan 27 19:03:07 crc kubenswrapper[4853]: I0127 19:03:07.515851 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1fb249c2-c72b-4f50-bee6-8d461fc5b613-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"1fb249c2-c72b-4f50-bee6-8d461fc5b613\") " pod="openstack/nova-metadata-0" Jan 27 19:03:07 crc kubenswrapper[4853]: I0127 19:03:07.527205 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-thlt2\" (UniqueName: \"kubernetes.io/projected/1fb249c2-c72b-4f50-bee6-8d461fc5b613-kube-api-access-thlt2\") pod \"nova-metadata-0\" (UID: \"1fb249c2-c72b-4f50-bee6-8d461fc5b613\") " pod="openstack/nova-metadata-0" Jan 27 19:03:07 crc kubenswrapper[4853]: I0127 19:03:07.701643 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 27 19:03:08 crc kubenswrapper[4853]: I0127 19:03:08.124033 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4c6df56d-109f-4ab4-bb18-35b70eb1beaf" path="/var/lib/kubelet/pods/4c6df56d-109f-4ab4-bb18-35b70eb1beaf/volumes" Jan 27 19:03:08 crc kubenswrapper[4853]: I0127 19:03:08.124693 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f1a81537-2e83-406a-9194-7a6362a7e874" path="/var/lib/kubelet/pods/f1a81537-2e83-406a-9194-7a6362a7e874/volumes" Jan 27 19:03:08 crc kubenswrapper[4853]: I0127 19:03:08.168612 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 27 19:03:08 crc kubenswrapper[4853]: I0127 19:03:08.315261 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"28787444-e1bd-43c7-a22c-f3ce3678986d","Type":"ContainerStarted","Data":"de431b89bbd0fd51ed4bd34345ecaf59922f2cd481b7ca9090c881aa63ef4229"} Jan 27 19:03:08 crc kubenswrapper[4853]: I0127 19:03:08.317389 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"1fb249c2-c72b-4f50-bee6-8d461fc5b613","Type":"ContainerStarted","Data":"22b38595a2218572306765aa61024b43d914032abd4340899938689b5d9dab3a"} Jan 27 19:03:08 crc kubenswrapper[4853]: I0127 19:03:08.337205 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.337183639 podStartE2EDuration="2.337183639s" podCreationTimestamp="2026-01-27 19:03:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 19:03:08.328508129 +0000 UTC m=+1230.791051022" watchObservedRunningTime="2026-01-27 19:03:08.337183639 +0000 UTC m=+1230.799726522" Jan 27 19:03:09 crc kubenswrapper[4853]: I0127 19:03:09.336005 4853 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"1fb249c2-c72b-4f50-bee6-8d461fc5b613","Type":"ContainerStarted","Data":"e05fe02a13641d1c430c4396478e64be558a182320990002d01440501a876a8d"} Jan 27 19:03:09 crc kubenswrapper[4853]: I0127 19:03:09.336548 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"1fb249c2-c72b-4f50-bee6-8d461fc5b613","Type":"ContainerStarted","Data":"7f69e023d4f4f4185c5a85130941437f46454c9bce89417dd6cc5f22d1148982"} Jan 27 19:03:09 crc kubenswrapper[4853]: I0127 19:03:09.371753 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.371717943 podStartE2EDuration="2.371717943s" podCreationTimestamp="2026-01-27 19:03:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 19:03:09.352635473 +0000 UTC m=+1231.815178366" watchObservedRunningTime="2026-01-27 19:03:09.371717943 +0000 UTC m=+1231.834260866" Jan 27 19:03:11 crc kubenswrapper[4853]: I0127 19:03:11.698607 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Jan 27 19:03:12 crc kubenswrapper[4853]: I0127 19:03:12.702734 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 27 19:03:12 crc kubenswrapper[4853]: I0127 19:03:12.703098 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 27 19:03:14 crc kubenswrapper[4853]: I0127 19:03:14.652524 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 27 19:03:14 crc kubenswrapper[4853]: I0127 19:03:14.652908 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 27 19:03:15 crc kubenswrapper[4853]: I0127 19:03:15.667281 4853 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="a649598f-69be-4de2-9a79-b5581f1fc8f9" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.209:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 27 19:03:15 crc kubenswrapper[4853]: I0127 19:03:15.667364 4853 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="a649598f-69be-4de2-9a79-b5581f1fc8f9" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.209:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 27 19:03:16 crc kubenswrapper[4853]: I0127 19:03:16.698199 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Jan 27 19:03:16 crc kubenswrapper[4853]: I0127 19:03:16.723599 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Jan 27 19:03:17 crc kubenswrapper[4853]: I0127 19:03:17.446724 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Jan 27 19:03:17 crc kubenswrapper[4853]: I0127 19:03:17.702095 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 27 19:03:17 crc kubenswrapper[4853]: I0127 19:03:17.704706 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 27 19:03:18 crc kubenswrapper[4853]: I0127 19:03:18.713287 4853 
prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="1fb249c2-c72b-4f50-bee6-8d461fc5b613" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.211:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 27 19:03:18 crc kubenswrapper[4853]: I0127 19:03:18.713335 4853 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="1fb249c2-c72b-4f50-bee6-8d461fc5b613" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.211:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 27 19:03:21 crc kubenswrapper[4853]: I0127 19:03:21.476158 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Jan 27 19:03:24 crc kubenswrapper[4853]: I0127 19:03:24.658581 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 27 19:03:24 crc kubenswrapper[4853]: I0127 19:03:24.659171 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 27 19:03:24 crc kubenswrapper[4853]: I0127 19:03:24.659475 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 27 19:03:24 crc kubenswrapper[4853]: I0127 19:03:24.659559 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 27 19:03:24 crc kubenswrapper[4853]: I0127 19:03:24.664795 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 27 19:03:24 crc kubenswrapper[4853]: I0127 19:03:24.665885 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 27 19:03:27 crc kubenswrapper[4853]: I0127 19:03:27.707567 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 27 19:03:27 crc kubenswrapper[4853]: I0127 19:03:27.707964 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 27 19:03:27 crc kubenswrapper[4853]: I0127 19:03:27.714809 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 27 19:03:27 crc kubenswrapper[4853]: I0127 19:03:27.718482 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 27 19:03:35 crc kubenswrapper[4853]: I0127 19:03:35.947194 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 27 19:03:36 crc kubenswrapper[4853]: I0127 19:03:36.920702 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 27 19:03:40 crc kubenswrapper[4853]: I0127 19:03:40.296649 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-0" podUID="2f56570a-76ed-4182-b147-6288fa56d729" containerName="rabbitmq" containerID="cri-o://2be1acb70ae90dbe469bc2eed2b8a6a575e3dc6e9f3900754b00c5cf61322054" gracePeriod=604796 Jan 27 19:03:41 crc kubenswrapper[4853]: I0127 19:03:41.106140 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-cell1-server-0" podUID="525d82bf-e147-429f-8915-365aa48be00b" containerName="rabbitmq" containerID="cri-o://16308c1cf27c46d6fb370c69f46eeb14efd66c0cccd787f67627c36b9cabf42f" gracePeriod=604796 Jan 27 19:03:43 crc 
kubenswrapper[4853]: I0127 19:03:43.995373 4853 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="2f56570a-76ed-4182-b147-6288fa56d729" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.95:5671: connect: connection refused" Jan 27 19:03:44 crc kubenswrapper[4853]: I0127 19:03:44.341420 4853 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="525d82bf-e147-429f-8915-365aa48be00b" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.96:5671: connect: connection refused" Jan 27 19:03:46 crc kubenswrapper[4853]: I0127 19:03:46.713429 4853 generic.go:334] "Generic (PLEG): container finished" podID="2f56570a-76ed-4182-b147-6288fa56d729" containerID="2be1acb70ae90dbe469bc2eed2b8a6a575e3dc6e9f3900754b00c5cf61322054" exitCode=0 Jan 27 19:03:46 crc kubenswrapper[4853]: I0127 19:03:46.713726 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"2f56570a-76ed-4182-b147-6288fa56d729","Type":"ContainerDied","Data":"2be1acb70ae90dbe469bc2eed2b8a6a575e3dc6e9f3900754b00c5cf61322054"} Jan 27 19:03:46 crc kubenswrapper[4853]: I0127 19:03:46.871052 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 27 19:03:47 crc kubenswrapper[4853]: I0127 19:03:47.067710 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2f56570a-76ed-4182-b147-6288fa56d729-config-data\") pod \"2f56570a-76ed-4182-b147-6288fa56d729\" (UID: \"2f56570a-76ed-4182-b147-6288fa56d729\") " Jan 27 19:03:47 crc kubenswrapper[4853]: I0127 19:03:47.067817 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/2f56570a-76ed-4182-b147-6288fa56d729-rabbitmq-plugins\") pod \"2f56570a-76ed-4182-b147-6288fa56d729\" (UID: \"2f56570a-76ed-4182-b147-6288fa56d729\") " Jan 27 19:03:47 crc kubenswrapper[4853]: I0127 19:03:47.067843 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/2f56570a-76ed-4182-b147-6288fa56d729-server-conf\") pod \"2f56570a-76ed-4182-b147-6288fa56d729\" (UID: \"2f56570a-76ed-4182-b147-6288fa56d729\") " Jan 27 19:03:47 crc kubenswrapper[4853]: I0127 19:03:47.067950 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/2f56570a-76ed-4182-b147-6288fa56d729-rabbitmq-tls\") pod \"2f56570a-76ed-4182-b147-6288fa56d729\" (UID: \"2f56570a-76ed-4182-b147-6288fa56d729\") " Jan 27 19:03:47 crc kubenswrapper[4853]: I0127 19:03:47.067974 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pvz2l\" (UniqueName: \"kubernetes.io/projected/2f56570a-76ed-4182-b147-6288fa56d729-kube-api-access-pvz2l\") pod \"2f56570a-76ed-4182-b147-6288fa56d729\" (UID: \"2f56570a-76ed-4182-b147-6288fa56d729\") " Jan 27 19:03:47 crc kubenswrapper[4853]: I0127 19:03:47.068046 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/2f56570a-76ed-4182-b147-6288fa56d729-plugins-conf\") pod \"2f56570a-76ed-4182-b147-6288fa56d729\" (UID: \"2f56570a-76ed-4182-b147-6288fa56d729\") " Jan 27 19:03:47 crc kubenswrapper[4853]: I0127 19:03:47.068310 4853 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"2f56570a-76ed-4182-b147-6288fa56d729\" (UID: \"2f56570a-76ed-4182-b147-6288fa56d729\") " Jan 27 19:03:47 crc kubenswrapper[4853]: I0127 19:03:47.068334 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/2f56570a-76ed-4182-b147-6288fa56d729-rabbitmq-erlang-cookie\") pod \"2f56570a-76ed-4182-b147-6288fa56d729\" (UID: \"2f56570a-76ed-4182-b147-6288fa56d729\") " Jan 27 19:03:47 crc kubenswrapper[4853]: I0127 19:03:47.068364 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/2f56570a-76ed-4182-b147-6288fa56d729-rabbitmq-confd\") pod \"2f56570a-76ed-4182-b147-6288fa56d729\" (UID: \"2f56570a-76ed-4182-b147-6288fa56d729\") " Jan 27 19:03:47 crc kubenswrapper[4853]: I0127 19:03:47.068394 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/2f56570a-76ed-4182-b147-6288fa56d729-erlang-cookie-secret\") pod \"2f56570a-76ed-4182-b147-6288fa56d729\" (UID: \"2f56570a-76ed-4182-b147-6288fa56d729\") " Jan 27 19:03:47 crc kubenswrapper[4853]: I0127 19:03:47.068422 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/2f56570a-76ed-4182-b147-6288fa56d729-pod-info\") pod \"2f56570a-76ed-4182-b147-6288fa56d729\" (UID: \"2f56570a-76ed-4182-b147-6288fa56d729\") " Jan 27 19:03:47 crc kubenswrapper[4853]: I0127 19:03:47.069053 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2f56570a-76ed-4182-b147-6288fa56d729-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "2f56570a-76ed-4182-b147-6288fa56d729" (UID: "2f56570a-76ed-4182-b147-6288fa56d729"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 19:03:47 crc kubenswrapper[4853]: I0127 19:03:47.069503 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2f56570a-76ed-4182-b147-6288fa56d729-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "2f56570a-76ed-4182-b147-6288fa56d729" (UID: "2f56570a-76ed-4182-b147-6288fa56d729"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 19:03:47 crc kubenswrapper[4853]: I0127 19:03:47.070328 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2f56570a-76ed-4182-b147-6288fa56d729-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "2f56570a-76ed-4182-b147-6288fa56d729" (UID: "2f56570a-76ed-4182-b147-6288fa56d729"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 19:03:47 crc kubenswrapper[4853]: I0127 19:03:47.075851 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/2f56570a-76ed-4182-b147-6288fa56d729-pod-info" (OuterVolumeSpecName: "pod-info") pod "2f56570a-76ed-4182-b147-6288fa56d729" (UID: "2f56570a-76ed-4182-b147-6288fa56d729"). InnerVolumeSpecName "pod-info". 
PluginName "kubernetes.io/downward-api", VolumeGidValue "" Jan 27 19:03:47 crc kubenswrapper[4853]: I0127 19:03:47.075854 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2f56570a-76ed-4182-b147-6288fa56d729-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "2f56570a-76ed-4182-b147-6288fa56d729" (UID: "2f56570a-76ed-4182-b147-6288fa56d729"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 19:03:47 crc kubenswrapper[4853]: I0127 19:03:47.075882 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2f56570a-76ed-4182-b147-6288fa56d729-kube-api-access-pvz2l" (OuterVolumeSpecName: "kube-api-access-pvz2l") pod "2f56570a-76ed-4182-b147-6288fa56d729" (UID: "2f56570a-76ed-4182-b147-6288fa56d729"). InnerVolumeSpecName "kube-api-access-pvz2l". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 19:03:47 crc kubenswrapper[4853]: I0127 19:03:47.081373 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2f56570a-76ed-4182-b147-6288fa56d729-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "2f56570a-76ed-4182-b147-6288fa56d729" (UID: "2f56570a-76ed-4182-b147-6288fa56d729"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 19:03:47 crc kubenswrapper[4853]: I0127 19:03:47.083911 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage03-crc" (OuterVolumeSpecName: "persistence") pod "2f56570a-76ed-4182-b147-6288fa56d729" (UID: "2f56570a-76ed-4182-b147-6288fa56d729"). InnerVolumeSpecName "local-storage03-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 27 19:03:47 crc kubenswrapper[4853]: I0127 19:03:47.102235 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2f56570a-76ed-4182-b147-6288fa56d729-config-data" (OuterVolumeSpecName: "config-data") pod "2f56570a-76ed-4182-b147-6288fa56d729" (UID: "2f56570a-76ed-4182-b147-6288fa56d729"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 19:03:47 crc kubenswrapper[4853]: I0127 19:03:47.119062 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2f56570a-76ed-4182-b147-6288fa56d729-server-conf" (OuterVolumeSpecName: "server-conf") pod "2f56570a-76ed-4182-b147-6288fa56d729" (UID: "2f56570a-76ed-4182-b147-6288fa56d729"). InnerVolumeSpecName "server-conf". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 19:03:47 crc kubenswrapper[4853]: I0127 19:03:47.170727 4853 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/2f56570a-76ed-4182-b147-6288fa56d729-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Jan 27 19:03:47 crc kubenswrapper[4853]: I0127 19:03:47.170762 4853 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/2f56570a-76ed-4182-b147-6288fa56d729-pod-info\") on node \"crc\" DevicePath \"\"" Jan 27 19:03:47 crc kubenswrapper[4853]: I0127 19:03:47.170772 4853 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2f56570a-76ed-4182-b147-6288fa56d729-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 19:03:47 crc kubenswrapper[4853]: I0127 19:03:47.170780 4853 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/2f56570a-76ed-4182-b147-6288fa56d729-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Jan 27 19:03:47 crc kubenswrapper[4853]: I0127 19:03:47.170787 4853 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/2f56570a-76ed-4182-b147-6288fa56d729-server-conf\") on node \"crc\" DevicePath \"\"" Jan 27 19:03:47 crc kubenswrapper[4853]: I0127 19:03:47.170795 4853 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/2f56570a-76ed-4182-b147-6288fa56d729-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Jan 27 19:03:47 crc kubenswrapper[4853]: I0127 19:03:47.170804 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pvz2l\" (UniqueName: \"kubernetes.io/projected/2f56570a-76ed-4182-b147-6288fa56d729-kube-api-access-pvz2l\") on node \"crc\" DevicePath \"\"" Jan 27 19:03:47 crc kubenswrapper[4853]: I0127 19:03:47.170814 4853 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/2f56570a-76ed-4182-b147-6288fa56d729-plugins-conf\") on node \"crc\" DevicePath \"\"" Jan 27 19:03:47 crc kubenswrapper[4853]: I0127 19:03:47.170844 4853 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" " Jan 27 19:03:47 crc kubenswrapper[4853]: I0127 19:03:47.170853 4853 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/2f56570a-76ed-4182-b147-6288fa56d729-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Jan 27 19:03:47 crc kubenswrapper[4853]: I0127 19:03:47.195700 4853 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage03-crc" (UniqueName: "kubernetes.io/local-volume/local-storage03-crc") on node "crc" Jan 27 19:03:47 crc kubenswrapper[4853]: I0127 19:03:47.199624 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2f56570a-76ed-4182-b147-6288fa56d729-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "2f56570a-76ed-4182-b147-6288fa56d729" (UID: "2f56570a-76ed-4182-b147-6288fa56d729"). InnerVolumeSpecName "rabbitmq-confd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 19:03:47 crc kubenswrapper[4853]: I0127 19:03:47.273029 4853 reconciler_common.go:293] "Volume detached for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" DevicePath \"\"" Jan 27 19:03:47 crc kubenswrapper[4853]: I0127 19:03:47.273064 4853 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/2f56570a-76ed-4182-b147-6288fa56d729-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Jan 27 19:03:47 crc kubenswrapper[4853]: I0127 19:03:47.701572 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 27 19:03:47 crc kubenswrapper[4853]: I0127 19:03:47.726080 4853 generic.go:334] "Generic (PLEG): container finished" podID="525d82bf-e147-429f-8915-365aa48be00b" containerID="16308c1cf27c46d6fb370c69f46eeb14efd66c0cccd787f67627c36b9cabf42f" exitCode=0 Jan 27 19:03:47 crc kubenswrapper[4853]: I0127 19:03:47.726162 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"525d82bf-e147-429f-8915-365aa48be00b","Type":"ContainerDied","Data":"16308c1cf27c46d6fb370c69f46eeb14efd66c0cccd787f67627c36b9cabf42f"} Jan 27 19:03:47 crc kubenswrapper[4853]: I0127 19:03:47.726193 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"525d82bf-e147-429f-8915-365aa48be00b","Type":"ContainerDied","Data":"d4d4530f5706893cf0d82d13aedeb6830f182eb813ed14c637fa8560ea1ec824"} Jan 27 19:03:47 crc kubenswrapper[4853]: I0127 19:03:47.726210 4853 scope.go:117] "RemoveContainer" containerID="16308c1cf27c46d6fb370c69f46eeb14efd66c0cccd787f67627c36b9cabf42f" Jan 27 19:03:47 crc kubenswrapper[4853]: I0127 19:03:47.726325 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 27 19:03:47 crc kubenswrapper[4853]: I0127 19:03:47.729176 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"2f56570a-76ed-4182-b147-6288fa56d729","Type":"ContainerDied","Data":"bf460227b7821518cff8fa0d310537b5b607be2b8bda55d379477cd3ee83de35"} Jan 27 19:03:47 crc kubenswrapper[4853]: I0127 19:03:47.729259 4853 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 27 19:03:47 crc kubenswrapper[4853]: I0127 19:03:47.783726 4853 scope.go:117] "RemoveContainer" containerID="c73bc95dd91720244c3ddaa31f873d1265c42efb60f1009c6891cbb6af55f779" Jan 27 19:03:47 crc kubenswrapper[4853]: I0127 19:03:47.789212 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 27 19:03:47 crc kubenswrapper[4853]: I0127 19:03:47.816087 4853 scope.go:117] "RemoveContainer" containerID="16308c1cf27c46d6fb370c69f46eeb14efd66c0cccd787f67627c36b9cabf42f" Jan 27 19:03:47 crc kubenswrapper[4853]: E0127 19:03:47.816694 4853 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"16308c1cf27c46d6fb370c69f46eeb14efd66c0cccd787f67627c36b9cabf42f\": container with ID starting with 16308c1cf27c46d6fb370c69f46eeb14efd66c0cccd787f67627c36b9cabf42f not found: ID does not exist" containerID="16308c1cf27c46d6fb370c69f46eeb14efd66c0cccd787f67627c36b9cabf42f" Jan 27 19:03:47 crc kubenswrapper[4853]: I0127 19:03:47.816738 4853 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"16308c1cf27c46d6fb370c69f46eeb14efd66c0cccd787f67627c36b9cabf42f"} err="failed to get container status \"16308c1cf27c46d6fb370c69f46eeb14efd66c0cccd787f67627c36b9cabf42f\": rpc error: code = NotFound desc = could not find container \"16308c1cf27c46d6fb370c69f46eeb14efd66c0cccd787f67627c36b9cabf42f\": container with ID starting with 16308c1cf27c46d6fb370c69f46eeb14efd66c0cccd787f67627c36b9cabf42f not found: ID does not exist" Jan 27 19:03:47 crc kubenswrapper[4853]: I0127 19:03:47.816764 4853 scope.go:117] "RemoveContainer" containerID="c73bc95dd91720244c3ddaa31f873d1265c42efb60f1009c6891cbb6af55f779" Jan 27 19:03:47 crc kubenswrapper[4853]: E0127 19:03:47.821633 4853 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c73bc95dd91720244c3ddaa31f873d1265c42efb60f1009c6891cbb6af55f779\": container with ID starting with c73bc95dd91720244c3ddaa31f873d1265c42efb60f1009c6891cbb6af55f779 not found: ID does not exist" containerID="c73bc95dd91720244c3ddaa31f873d1265c42efb60f1009c6891cbb6af55f779" Jan 27 19:03:47 crc kubenswrapper[4853]: I0127 19:03:47.821703 4853 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c73bc95dd91720244c3ddaa31f873d1265c42efb60f1009c6891cbb6af55f779"} err="failed to get container status \"c73bc95dd91720244c3ddaa31f873d1265c42efb60f1009c6891cbb6af55f779\": rpc error: code = NotFound desc = could not find container \"c73bc95dd91720244c3ddaa31f873d1265c42efb60f1009c6891cbb6af55f779\": container with ID starting with c73bc95dd91720244c3ddaa31f873d1265c42efb60f1009c6891cbb6af55f779 not found: ID does not exist" Jan 27 19:03:47 crc kubenswrapper[4853]: I0127 19:03:47.821743 4853 scope.go:117] "RemoveContainer" containerID="2be1acb70ae90dbe469bc2eed2b8a6a575e3dc6e9f3900754b00c5cf61322054" Jan 27 19:03:47 crc kubenswrapper[4853]: I0127 19:03:47.822852 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 27 19:03:47 crc kubenswrapper[4853]: I0127 19:03:47.839215 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Jan 27 19:03:47 crc kubenswrapper[4853]: E0127 19:03:47.839651 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2f56570a-76ed-4182-b147-6288fa56d729" containerName="rabbitmq" Jan 27 
Jan 27 19:03:47 crc kubenswrapper[4853]: I0127 19:03:47.839665 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="2f56570a-76ed-4182-b147-6288fa56d729" containerName="rabbitmq"
Jan 27 19:03:47 crc kubenswrapper[4853]: E0127 19:03:47.839688 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="525d82bf-e147-429f-8915-365aa48be00b" containerName="rabbitmq"
Jan 27 19:03:47 crc kubenswrapper[4853]: I0127 19:03:47.839693 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="525d82bf-e147-429f-8915-365aa48be00b" containerName="rabbitmq"
Jan 27 19:03:47 crc kubenswrapper[4853]: E0127 19:03:47.839709 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2f56570a-76ed-4182-b147-6288fa56d729" containerName="setup-container"
Jan 27 19:03:47 crc kubenswrapper[4853]: I0127 19:03:47.839715 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="2f56570a-76ed-4182-b147-6288fa56d729" containerName="setup-container"
Jan 27 19:03:47 crc kubenswrapper[4853]: E0127 19:03:47.839730 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="525d82bf-e147-429f-8915-365aa48be00b" containerName="setup-container"
Jan 27 19:03:47 crc kubenswrapper[4853]: I0127 19:03:47.839737 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="525d82bf-e147-429f-8915-365aa48be00b" containerName="setup-container"
Jan 27 19:03:47 crc kubenswrapper[4853]: I0127 19:03:47.839901 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="525d82bf-e147-429f-8915-365aa48be00b" containerName="rabbitmq"
Jan 27 19:03:47 crc kubenswrapper[4853]: I0127 19:03:47.839920 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="2f56570a-76ed-4182-b147-6288fa56d729" containerName="rabbitmq"
Jan 27 19:03:47 crc kubenswrapper[4853]: I0127 19:03:47.840950 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0"
Jan 27 19:03:47 crc kubenswrapper[4853]: I0127 19:03:47.845493 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data"
Jan 27 19:03:47 crc kubenswrapper[4853]: I0127 19:03:47.845618 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie"
Jan 27 19:03:47 crc kubenswrapper[4853]: I0127 19:03:47.848761 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc"
Jan 27 19:03:47 crc kubenswrapper[4853]: I0127 19:03:47.849773 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user"
Jan 27 19:03:47 crc kubenswrapper[4853]: I0127 19:03:47.849926 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf"
Jan 27 19:03:47 crc kubenswrapper[4853]: I0127 19:03:47.849975 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf"
Jan 27 19:03:47 crc kubenswrapper[4853]: I0127 19:03:47.850049 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-rmpnq"
Jan 27 19:03:47 crc kubenswrapper[4853]: I0127 19:03:47.867320 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"]
Jan 27 19:03:47 crc kubenswrapper[4853]: I0127 19:03:47.871055 4853 scope.go:117] "RemoveContainer" containerID="86b479d5fa65aa088a46498e1a0e6a8485fbcdd68b9fc36a9d1790fcd627a629"
Jan 27 19:03:47 crc kubenswrapper[4853]: I0127 19:03:47.892720 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/525d82bf-e147-429f-8915-365aa48be00b-rabbitmq-confd\") pod \"525d82bf-e147-429f-8915-365aa48be00b\" (UID: \"525d82bf-e147-429f-8915-365aa48be00b\") "
Jan 27 19:03:47 crc kubenswrapper[4853]: I0127 19:03:47.892783 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/525d82bf-e147-429f-8915-365aa48be00b-rabbitmq-erlang-cookie\") pod \"525d82bf-e147-429f-8915-365aa48be00b\" (UID: \"525d82bf-e147-429f-8915-365aa48be00b\") "
Jan 27 19:03:47 crc kubenswrapper[4853]: I0127 19:03:47.892893 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/525d82bf-e147-429f-8915-365aa48be00b-rabbitmq-tls\") pod \"525d82bf-e147-429f-8915-365aa48be00b\" (UID: \"525d82bf-e147-429f-8915-365aa48be00b\") "
Jan 27 19:03:47 crc kubenswrapper[4853]: I0127 19:03:47.892926 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/525d82bf-e147-429f-8915-365aa48be00b-server-conf\") pod \"525d82bf-e147-429f-8915-365aa48be00b\" (UID: \"525d82bf-e147-429f-8915-365aa48be00b\") "
Jan 27 19:03:47 crc kubenswrapper[4853]: I0127 19:03:47.892989 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"525d82bf-e147-429f-8915-365aa48be00b\" (UID: \"525d82bf-e147-429f-8915-365aa48be00b\") "
Jan 27 19:03:47 crc kubenswrapper[4853]: I0127 19:03:47.893076 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/525d82bf-e147-429f-8915-365aa48be00b-erlang-cookie-secret\") pod \"525d82bf-e147-429f-8915-365aa48be00b\" (UID: \"525d82bf-e147-429f-8915-365aa48be00b\") "
Jan 27 19:03:47 crc kubenswrapper[4853]: I0127 19:03:47.893152 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dnfq8\" (UniqueName: \"kubernetes.io/projected/525d82bf-e147-429f-8915-365aa48be00b-kube-api-access-dnfq8\") pod \"525d82bf-e147-429f-8915-365aa48be00b\" (UID: \"525d82bf-e147-429f-8915-365aa48be00b\") "
Jan 27 19:03:47 crc kubenswrapper[4853]: I0127 19:03:47.893271 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/525d82bf-e147-429f-8915-365aa48be00b-pod-info\") pod \"525d82bf-e147-429f-8915-365aa48be00b\" (UID: \"525d82bf-e147-429f-8915-365aa48be00b\") "
Jan 27 19:03:47 crc kubenswrapper[4853]: I0127 19:03:47.893350 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/525d82bf-e147-429f-8915-365aa48be00b-config-data\") pod \"525d82bf-e147-429f-8915-365aa48be00b\" (UID: \"525d82bf-e147-429f-8915-365aa48be00b\") "
Jan 27 19:03:47 crc kubenswrapper[4853]: I0127 19:03:47.893397 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/525d82bf-e147-429f-8915-365aa48be00b-plugins-conf\") pod \"525d82bf-e147-429f-8915-365aa48be00b\" (UID: \"525d82bf-e147-429f-8915-365aa48be00b\") "
Jan 27 19:03:47 crc kubenswrapper[4853]: I0127 19:03:47.893427 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/525d82bf-e147-429f-8915-365aa48be00b-rabbitmq-plugins\") pod \"525d82bf-e147-429f-8915-365aa48be00b\" (UID: \"525d82bf-e147-429f-8915-365aa48be00b\") "
Jan 27 19:03:47 crc kubenswrapper[4853]: I0127 19:03:47.895205 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/525d82bf-e147-429f-8915-365aa48be00b-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "525d82bf-e147-429f-8915-365aa48be00b" (UID: "525d82bf-e147-429f-8915-365aa48be00b"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 27 19:03:47 crc kubenswrapper[4853]: I0127 19:03:47.900691 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/525d82bf-e147-429f-8915-365aa48be00b-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "525d82bf-e147-429f-8915-365aa48be00b" (UID: "525d82bf-e147-429f-8915-365aa48be00b"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 27 19:03:47 crc kubenswrapper[4853]: I0127 19:03:47.900756 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/525d82bf-e147-429f-8915-365aa48be00b-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "525d82bf-e147-429f-8915-365aa48be00b" (UID: "525d82bf-e147-429f-8915-365aa48be00b"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 19:03:47 crc kubenswrapper[4853]: I0127 19:03:47.901696 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/525d82bf-e147-429f-8915-365aa48be00b-kube-api-access-dnfq8" (OuterVolumeSpecName: "kube-api-access-dnfq8") pod "525d82bf-e147-429f-8915-365aa48be00b" (UID: "525d82bf-e147-429f-8915-365aa48be00b"). InnerVolumeSpecName "kube-api-access-dnfq8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 19:03:47 crc kubenswrapper[4853]: I0127 19:03:47.901838 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/525d82bf-e147-429f-8915-365aa48be00b-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "525d82bf-e147-429f-8915-365aa48be00b" (UID: "525d82bf-e147-429f-8915-365aa48be00b"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 19:03:47 crc kubenswrapper[4853]: I0127 19:03:47.901967 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/525d82bf-e147-429f-8915-365aa48be00b-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "525d82bf-e147-429f-8915-365aa48be00b" (UID: "525d82bf-e147-429f-8915-365aa48be00b"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 19:03:47 crc kubenswrapper[4853]: I0127 19:03:47.903896 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage10-crc" (OuterVolumeSpecName: "persistence") pod "525d82bf-e147-429f-8915-365aa48be00b" (UID: "525d82bf-e147-429f-8915-365aa48be00b"). InnerVolumeSpecName "local-storage10-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue ""
Jan 27 19:03:47 crc kubenswrapper[4853]: I0127 19:03:47.915917 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/525d82bf-e147-429f-8915-365aa48be00b-pod-info" (OuterVolumeSpecName: "pod-info") pod "525d82bf-e147-429f-8915-365aa48be00b" (UID: "525d82bf-e147-429f-8915-365aa48be00b"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue ""
Jan 27 19:03:47 crc kubenswrapper[4853]: I0127 19:03:47.951791 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/525d82bf-e147-429f-8915-365aa48be00b-config-data" (OuterVolumeSpecName: "config-data") pod "525d82bf-e147-429f-8915-365aa48be00b" (UID: "525d82bf-e147-429f-8915-365aa48be00b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 19:03:47 crc kubenswrapper[4853]: I0127 19:03:47.983469 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/525d82bf-e147-429f-8915-365aa48be00b-server-conf" (OuterVolumeSpecName: "server-conf") pod "525d82bf-e147-429f-8915-365aa48be00b" (UID: "525d82bf-e147-429f-8915-365aa48be00b"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 19:03:47 crc kubenswrapper[4853]: I0127 19:03:47.995894 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/e1ba655b-12d8-4f9d-882f-1d7faeb1f65f-pod-info\") pod \"rabbitmq-server-0\" (UID: \"e1ba655b-12d8-4f9d-882f-1d7faeb1f65f\") " pod="openstack/rabbitmq-server-0"
Jan 27 19:03:47 crc kubenswrapper[4853]: I0127 19:03:47.995950 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/e1ba655b-12d8-4f9d-882f-1d7faeb1f65f-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"e1ba655b-12d8-4f9d-882f-1d7faeb1f65f\") " pod="openstack/rabbitmq-server-0"
Jan 27 19:03:47 crc kubenswrapper[4853]: I0127 19:03:47.996015 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/e1ba655b-12d8-4f9d-882f-1d7faeb1f65f-server-conf\") pod \"rabbitmq-server-0\" (UID: \"e1ba655b-12d8-4f9d-882f-1d7faeb1f65f\") " pod="openstack/rabbitmq-server-0"
Jan 27 19:03:47 crc kubenswrapper[4853]: I0127 19:03:47.996035 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/e1ba655b-12d8-4f9d-882f-1d7faeb1f65f-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"e1ba655b-12d8-4f9d-882f-1d7faeb1f65f\") " pod="openstack/rabbitmq-server-0"
Jan 27 19:03:47 crc kubenswrapper[4853]: I0127 19:03:47.996052 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e1ba655b-12d8-4f9d-882f-1d7faeb1f65f-config-data\") pod \"rabbitmq-server-0\" (UID: \"e1ba655b-12d8-4f9d-882f-1d7faeb1f65f\") " pod="openstack/rabbitmq-server-0"
Jan 27 19:03:47 crc kubenswrapper[4853]: I0127 19:03:47.996076 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/e1ba655b-12d8-4f9d-882f-1d7faeb1f65f-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"e1ba655b-12d8-4f9d-882f-1d7faeb1f65f\") " pod="openstack/rabbitmq-server-0"
Jan 27 19:03:47 crc kubenswrapper[4853]: I0127 19:03:47.996127 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/e1ba655b-12d8-4f9d-882f-1d7faeb1f65f-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"e1ba655b-12d8-4f9d-882f-1d7faeb1f65f\") " pod="openstack/rabbitmq-server-0"
Jan 27 19:03:47 crc kubenswrapper[4853]: I0127 19:03:47.996194 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-server-0\" (UID: \"e1ba655b-12d8-4f9d-882f-1d7faeb1f65f\") " pod="openstack/rabbitmq-server-0"
Jan 27 19:03:47 crc kubenswrapper[4853]: I0127 19:03:47.996217 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xfn9s\" (UniqueName: \"kubernetes.io/projected/e1ba655b-12d8-4f9d-882f-1d7faeb1f65f-kube-api-access-xfn9s\") pod \"rabbitmq-server-0\" (UID: \"e1ba655b-12d8-4f9d-882f-1d7faeb1f65f\") " pod="openstack/rabbitmq-server-0"
Jan 27 19:03:47 crc kubenswrapper[4853]: I0127 19:03:47.996241 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/e1ba655b-12d8-4f9d-882f-1d7faeb1f65f-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"e1ba655b-12d8-4f9d-882f-1d7faeb1f65f\") " pod="openstack/rabbitmq-server-0"
Jan 27 19:03:47 crc kubenswrapper[4853]: I0127 19:03:47.996260 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/e1ba655b-12d8-4f9d-882f-1d7faeb1f65f-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"e1ba655b-12d8-4f9d-882f-1d7faeb1f65f\") " pod="openstack/rabbitmq-server-0"
Jan 27 19:03:47 crc kubenswrapper[4853]: I0127 19:03:47.996305 4853 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/525d82bf-e147-429f-8915-365aa48be00b-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\""
Jan 27 19:03:47 crc kubenswrapper[4853]: I0127 19:03:47.996315 4853 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/525d82bf-e147-429f-8915-365aa48be00b-rabbitmq-tls\") on node \"crc\" DevicePath \"\""
Jan 27 19:03:47 crc kubenswrapper[4853]: I0127 19:03:47.996323 4853 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/525d82bf-e147-429f-8915-365aa48be00b-server-conf\") on node \"crc\" DevicePath \"\""
Jan 27 19:03:47 crc kubenswrapper[4853]: I0127 19:03:47.996342 4853 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" "
Jan 27 19:03:47 crc kubenswrapper[4853]: I0127 19:03:47.996353 4853 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/525d82bf-e147-429f-8915-365aa48be00b-erlang-cookie-secret\") on node \"crc\" DevicePath \"\""
Jan 27 19:03:47 crc kubenswrapper[4853]: I0127 19:03:47.996362 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dnfq8\" (UniqueName: \"kubernetes.io/projected/525d82bf-e147-429f-8915-365aa48be00b-kube-api-access-dnfq8\") on node \"crc\" DevicePath \"\""
Jan 27 19:03:47 crc kubenswrapper[4853]: I0127 19:03:47.996371 4853 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/525d82bf-e147-429f-8915-365aa48be00b-pod-info\") on node \"crc\" DevicePath \"\""
Jan 27 19:03:47 crc kubenswrapper[4853]: I0127 19:03:47.996379 4853 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/525d82bf-e147-429f-8915-365aa48be00b-config-data\") on node \"crc\" DevicePath \"\""
Jan 27 19:03:47 crc kubenswrapper[4853]: I0127 19:03:47.996388 4853 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/525d82bf-e147-429f-8915-365aa48be00b-plugins-conf\") on node \"crc\" DevicePath \"\""
Jan 27 19:03:47 crc kubenswrapper[4853]: I0127 19:03:47.996396 4853 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/525d82bf-e147-429f-8915-365aa48be00b-rabbitmq-plugins\") on node \"crc\" DevicePath \"\""
Jan 27 19:03:48 crc kubenswrapper[4853]: I0127 19:03:48.015610 4853 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage10-crc" (UniqueName: "kubernetes.io/local-volume/local-storage10-crc") on node "crc"
Jan 27 19:03:48 crc kubenswrapper[4853]: I0127 19:03:48.028921 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/525d82bf-e147-429f-8915-365aa48be00b-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "525d82bf-e147-429f-8915-365aa48be00b" (UID: "525d82bf-e147-429f-8915-365aa48be00b"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 19:03:48 crc kubenswrapper[4853]: I0127 19:03:48.098137 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/e1ba655b-12d8-4f9d-882f-1d7faeb1f65f-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"e1ba655b-12d8-4f9d-882f-1d7faeb1f65f\") " pod="openstack/rabbitmq-server-0"
Jan 27 19:03:48 crc kubenswrapper[4853]: I0127 19:03:48.098243 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-server-0\" (UID: \"e1ba655b-12d8-4f9d-882f-1d7faeb1f65f\") " pod="openstack/rabbitmq-server-0"
Jan 27 19:03:48 crc kubenswrapper[4853]: I0127 19:03:48.098282 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xfn9s\" (UniqueName: \"kubernetes.io/projected/e1ba655b-12d8-4f9d-882f-1d7faeb1f65f-kube-api-access-xfn9s\") pod \"rabbitmq-server-0\" (UID: \"e1ba655b-12d8-4f9d-882f-1d7faeb1f65f\") " pod="openstack/rabbitmq-server-0"
Jan 27 19:03:48 crc kubenswrapper[4853]: I0127 19:03:48.098315 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/e1ba655b-12d8-4f9d-882f-1d7faeb1f65f-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"e1ba655b-12d8-4f9d-882f-1d7faeb1f65f\") " pod="openstack/rabbitmq-server-0"
Jan 27 19:03:48 crc kubenswrapper[4853]: I0127 19:03:48.098345 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/e1ba655b-12d8-4f9d-882f-1d7faeb1f65f-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"e1ba655b-12d8-4f9d-882f-1d7faeb1f65f\") " pod="openstack/rabbitmq-server-0"
Jan 27 19:03:48 crc kubenswrapper[4853]: I0127 19:03:48.098366 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/e1ba655b-12d8-4f9d-882f-1d7faeb1f65f-pod-info\") pod \"rabbitmq-server-0\" (UID: \"e1ba655b-12d8-4f9d-882f-1d7faeb1f65f\") " pod="openstack/rabbitmq-server-0"
Jan 27 19:03:48 crc kubenswrapper[4853]: I0127 19:03:48.098447 4853 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-server-0\" (UID: \"e1ba655b-12d8-4f9d-882f-1d7faeb1f65f\") device mount path \"/mnt/openstack/pv03\"" pod="openstack/rabbitmq-server-0"
Jan 27 19:03:48 crc kubenswrapper[4853]: I0127 19:03:48.098450 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/e1ba655b-12d8-4f9d-882f-1d7faeb1f65f-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"e1ba655b-12d8-4f9d-882f-1d7faeb1f65f\") " pod="openstack/rabbitmq-server-0"
Jan 27 19:03:48 crc kubenswrapper[4853]: I0127 19:03:48.098936 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/e1ba655b-12d8-4f9d-882f-1d7faeb1f65f-server-conf\") pod \"rabbitmq-server-0\" (UID: \"e1ba655b-12d8-4f9d-882f-1d7faeb1f65f\") " pod="openstack/rabbitmq-server-0"
Jan 27 19:03:48 crc kubenswrapper[4853]: I0127 19:03:48.098982 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/e1ba655b-12d8-4f9d-882f-1d7faeb1f65f-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"e1ba655b-12d8-4f9d-882f-1d7faeb1f65f\") " pod="openstack/rabbitmq-server-0"
Jan 27 19:03:48 crc kubenswrapper[4853]: I0127 19:03:48.099012 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e1ba655b-12d8-4f9d-882f-1d7faeb1f65f-config-data\") pod \"rabbitmq-server-0\" (UID: \"e1ba655b-12d8-4f9d-882f-1d7faeb1f65f\") " pod="openstack/rabbitmq-server-0"
Jan 27 19:03:48 crc kubenswrapper[4853]: I0127 19:03:48.099056 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/e1ba655b-12d8-4f9d-882f-1d7faeb1f65f-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"e1ba655b-12d8-4f9d-882f-1d7faeb1f65f\") " pod="openstack/rabbitmq-server-0"
Jan 27 19:03:48 crc kubenswrapper[4853]: I0127 19:03:48.099238 4853 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/525d82bf-e147-429f-8915-365aa48be00b-rabbitmq-confd\") on node \"crc\" DevicePath \"\""
Jan 27 19:03:48 crc kubenswrapper[4853]: I0127 19:03:48.099255 4853 reconciler_common.go:293] "Volume detached for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" DevicePath \"\""
Jan 27 19:03:48 crc kubenswrapper[4853]: I0127 19:03:48.099934 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/e1ba655b-12d8-4f9d-882f-1d7faeb1f65f-server-conf\") pod \"rabbitmq-server-0\" (UID: \"e1ba655b-12d8-4f9d-882f-1d7faeb1f65f\") " pod="openstack/rabbitmq-server-0"
Jan 27 19:03:48 crc kubenswrapper[4853]: I0127 19:03:48.100114 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e1ba655b-12d8-4f9d-882f-1d7faeb1f65f-config-data\") pod \"rabbitmq-server-0\" (UID: \"e1ba655b-12d8-4f9d-882f-1d7faeb1f65f\") " pod="openstack/rabbitmq-server-0"
Jan 27 19:03:48 crc kubenswrapper[4853]: I0127 19:03:48.100414 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/e1ba655b-12d8-4f9d-882f-1d7faeb1f65f-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"e1ba655b-12d8-4f9d-882f-1d7faeb1f65f\") " pod="openstack/rabbitmq-server-0"
Jan 27 19:03:48 crc kubenswrapper[4853]: I0127 19:03:48.100636 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/e1ba655b-12d8-4f9d-882f-1d7faeb1f65f-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"e1ba655b-12d8-4f9d-882f-1d7faeb1f65f\") " pod="openstack/rabbitmq-server-0"
Jan 27 19:03:48 crc kubenswrapper[4853]: I0127 19:03:48.100808 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/e1ba655b-12d8-4f9d-882f-1d7faeb1f65f-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"e1ba655b-12d8-4f9d-882f-1d7faeb1f65f\") " pod="openstack/rabbitmq-server-0"
Jan 27 19:03:48 crc kubenswrapper[4853]: I0127 19:03:48.104690 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/e1ba655b-12d8-4f9d-882f-1d7faeb1f65f-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"e1ba655b-12d8-4f9d-882f-1d7faeb1f65f\") " pod="openstack/rabbitmq-server-0"
Jan 27 19:03:48 crc kubenswrapper[4853]: I0127 19:03:48.104862 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/e1ba655b-12d8-4f9d-882f-1d7faeb1f65f-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"e1ba655b-12d8-4f9d-882f-1d7faeb1f65f\") " pod="openstack/rabbitmq-server-0"
Jan 27 19:03:48 crc kubenswrapper[4853]: I0127 19:03:48.106868 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/e1ba655b-12d8-4f9d-882f-1d7faeb1f65f-pod-info\") pod \"rabbitmq-server-0\" (UID: \"e1ba655b-12d8-4f9d-882f-1d7faeb1f65f\") " pod="openstack/rabbitmq-server-0"
Jan 27 19:03:48 crc kubenswrapper[4853]: I0127 19:03:48.112852 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/e1ba655b-12d8-4f9d-882f-1d7faeb1f65f-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"e1ba655b-12d8-4f9d-882f-1d7faeb1f65f\") " pod="openstack/rabbitmq-server-0"
Jan 27 19:03:48 crc kubenswrapper[4853]: I0127 19:03:48.124801 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xfn9s\" (UniqueName: \"kubernetes.io/projected/e1ba655b-12d8-4f9d-882f-1d7faeb1f65f-kube-api-access-xfn9s\") pod \"rabbitmq-server-0\" (UID: \"e1ba655b-12d8-4f9d-882f-1d7faeb1f65f\") " pod="openstack/rabbitmq-server-0"
Jan 27 19:03:48 crc kubenswrapper[4853]: I0127 19:03:48.128501 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2f56570a-76ed-4182-b147-6288fa56d729" path="/var/lib/kubelet/pods/2f56570a-76ed-4182-b147-6288fa56d729/volumes"
Jan 27 19:03:48 crc kubenswrapper[4853]: I0127 19:03:48.149703 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-server-0\" (UID: \"e1ba655b-12d8-4f9d-882f-1d7faeb1f65f\") " pod="openstack/rabbitmq-server-0"
Jan 27 19:03:48 crc kubenswrapper[4853]: I0127 19:03:48.157252 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-79bd4cc8c9-lqmk7"]
Jan 27 19:03:48 crc kubenswrapper[4853]: I0127 19:03:48.161508 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-79bd4cc8c9-lqmk7"
Jan 27 19:03:48 crc kubenswrapper[4853]: I0127 19:03:48.163596 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-edpm-ipam"
Jan 27 19:03:48 crc kubenswrapper[4853]: I0127 19:03:48.172668 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0"
Jan 27 19:03:48 crc kubenswrapper[4853]: I0127 19:03:48.173704 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-79bd4cc8c9-lqmk7"]
Jan 27 19:03:48 crc kubenswrapper[4853]: I0127 19:03:48.215306 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/5132f995-e4d4-4318-8e77-ff3f013fba10-openstack-edpm-ipam\") pod \"dnsmasq-dns-79bd4cc8c9-lqmk7\" (UID: \"5132f995-e4d4-4318-8e77-ff3f013fba10\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-lqmk7"
Jan 27 19:03:48 crc kubenswrapper[4853]: I0127 19:03:48.215395 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5132f995-e4d4-4318-8e77-ff3f013fba10-dns-svc\") pod \"dnsmasq-dns-79bd4cc8c9-lqmk7\" (UID: \"5132f995-e4d4-4318-8e77-ff3f013fba10\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-lqmk7"
Jan 27 19:03:48 crc kubenswrapper[4853]: I0127 19:03:48.215431 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5132f995-e4d4-4318-8e77-ff3f013fba10-dns-swift-storage-0\") pod \"dnsmasq-dns-79bd4cc8c9-lqmk7\" (UID: \"5132f995-e4d4-4318-8e77-ff3f013fba10\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-lqmk7"
Jan 27 19:03:48 crc kubenswrapper[4853]: I0127 19:03:48.215733 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5132f995-e4d4-4318-8e77-ff3f013fba10-ovsdbserver-nb\") pod \"dnsmasq-dns-79bd4cc8c9-lqmk7\" (UID: \"5132f995-e4d4-4318-8e77-ff3f013fba10\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-lqmk7"
Jan 27 19:03:48 crc kubenswrapper[4853]: I0127 19:03:48.216051 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kdgfr\" (UniqueName: \"kubernetes.io/projected/5132f995-e4d4-4318-8e77-ff3f013fba10-kube-api-access-kdgfr\") pod \"dnsmasq-dns-79bd4cc8c9-lqmk7\" (UID: \"5132f995-e4d4-4318-8e77-ff3f013fba10\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-lqmk7"
Jan 27 19:03:48 crc kubenswrapper[4853]: I0127 19:03:48.216157 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5132f995-e4d4-4318-8e77-ff3f013fba10-config\") pod \"dnsmasq-dns-79bd4cc8c9-lqmk7\" (UID: \"5132f995-e4d4-4318-8e77-ff3f013fba10\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-lqmk7"
Jan 27 19:03:48 crc kubenswrapper[4853]: I0127 19:03:48.216278 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5132f995-e4d4-4318-8e77-ff3f013fba10-ovsdbserver-sb\") pod \"dnsmasq-dns-79bd4cc8c9-lqmk7\" (UID: \"5132f995-e4d4-4318-8e77-ff3f013fba10\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-lqmk7"
Jan 27 19:03:48 crc kubenswrapper[4853]: I0127 19:03:48.318018 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/5132f995-e4d4-4318-8e77-ff3f013fba10-openstack-edpm-ipam\") pod \"dnsmasq-dns-79bd4cc8c9-lqmk7\" (UID: \"5132f995-e4d4-4318-8e77-ff3f013fba10\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-lqmk7"
Jan 27 19:03:48 crc kubenswrapper[4853]: I0127 19:03:48.318581 4853
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5132f995-e4d4-4318-8e77-ff3f013fba10-dns-svc\") pod \"dnsmasq-dns-79bd4cc8c9-lqmk7\" (UID: \"5132f995-e4d4-4318-8e77-ff3f013fba10\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-lqmk7" Jan 27 19:03:48 crc kubenswrapper[4853]: I0127 19:03:48.319282 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/5132f995-e4d4-4318-8e77-ff3f013fba10-openstack-edpm-ipam\") pod \"dnsmasq-dns-79bd4cc8c9-lqmk7\" (UID: \"5132f995-e4d4-4318-8e77-ff3f013fba10\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-lqmk7" Jan 27 19:03:48 crc kubenswrapper[4853]: I0127 19:03:48.319488 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5132f995-e4d4-4318-8e77-ff3f013fba10-dns-svc\") pod \"dnsmasq-dns-79bd4cc8c9-lqmk7\" (UID: \"5132f995-e4d4-4318-8e77-ff3f013fba10\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-lqmk7" Jan 27 19:03:48 crc kubenswrapper[4853]: I0127 19:03:48.319730 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5132f995-e4d4-4318-8e77-ff3f013fba10-dns-swift-storage-0\") pod \"dnsmasq-dns-79bd4cc8c9-lqmk7\" (UID: \"5132f995-e4d4-4318-8e77-ff3f013fba10\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-lqmk7" Jan 27 19:03:48 crc kubenswrapper[4853]: I0127 19:03:48.318613 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5132f995-e4d4-4318-8e77-ff3f013fba10-dns-swift-storage-0\") pod \"dnsmasq-dns-79bd4cc8c9-lqmk7\" (UID: \"5132f995-e4d4-4318-8e77-ff3f013fba10\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-lqmk7" Jan 27 19:03:48 crc kubenswrapper[4853]: I0127 19:03:48.320996 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5132f995-e4d4-4318-8e77-ff3f013fba10-ovsdbserver-nb\") pod \"dnsmasq-dns-79bd4cc8c9-lqmk7\" (UID: \"5132f995-e4d4-4318-8e77-ff3f013fba10\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-lqmk7" Jan 27 19:03:48 crc kubenswrapper[4853]: I0127 19:03:48.321137 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kdgfr\" (UniqueName: \"kubernetes.io/projected/5132f995-e4d4-4318-8e77-ff3f013fba10-kube-api-access-kdgfr\") pod \"dnsmasq-dns-79bd4cc8c9-lqmk7\" (UID: \"5132f995-e4d4-4318-8e77-ff3f013fba10\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-lqmk7" Jan 27 19:03:48 crc kubenswrapper[4853]: I0127 19:03:48.321187 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5132f995-e4d4-4318-8e77-ff3f013fba10-config\") pod \"dnsmasq-dns-79bd4cc8c9-lqmk7\" (UID: \"5132f995-e4d4-4318-8e77-ff3f013fba10\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-lqmk7" Jan 27 19:03:48 crc kubenswrapper[4853]: I0127 19:03:48.321240 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5132f995-e4d4-4318-8e77-ff3f013fba10-ovsdbserver-sb\") pod \"dnsmasq-dns-79bd4cc8c9-lqmk7\" (UID: \"5132f995-e4d4-4318-8e77-ff3f013fba10\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-lqmk7" Jan 27 19:03:48 crc kubenswrapper[4853]: I0127 19:03:48.322013 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5132f995-e4d4-4318-8e77-ff3f013fba10-ovsdbserver-sb\") pod \"dnsmasq-dns-79bd4cc8c9-lqmk7\" (UID: \"5132f995-e4d4-4318-8e77-ff3f013fba10\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-lqmk7" Jan 27 19:03:48 crc kubenswrapper[4853]: I0127 19:03:48.322649 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5132f995-e4d4-4318-8e77-ff3f013fba10-ovsdbserver-nb\") pod \"dnsmasq-dns-79bd4cc8c9-lqmk7\" (UID: \"5132f995-e4d4-4318-8e77-ff3f013fba10\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-lqmk7" Jan 27 19:03:48 crc kubenswrapper[4853]: I0127 19:03:48.324038 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5132f995-e4d4-4318-8e77-ff3f013fba10-config\") pod \"dnsmasq-dns-79bd4cc8c9-lqmk7\" (UID: \"5132f995-e4d4-4318-8e77-ff3f013fba10\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-lqmk7" Jan 27 19:03:48 crc kubenswrapper[4853]: I0127 19:03:48.357085 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kdgfr\" (UniqueName: \"kubernetes.io/projected/5132f995-e4d4-4318-8e77-ff3f013fba10-kube-api-access-kdgfr\") pod \"dnsmasq-dns-79bd4cc8c9-lqmk7\" (UID: \"5132f995-e4d4-4318-8e77-ff3f013fba10\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-lqmk7" Jan 27 19:03:48 crc kubenswrapper[4853]: I0127 19:03:48.368585 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 27 19:03:48 crc kubenswrapper[4853]: I0127 19:03:48.385779 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 27 19:03:48 crc kubenswrapper[4853]: I0127 19:03:48.399663 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 27 19:03:48 crc kubenswrapper[4853]: I0127 19:03:48.402107 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 27 19:03:48 crc kubenswrapper[4853]: I0127 19:03:48.406008 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Jan 27 19:03:48 crc kubenswrapper[4853]: I0127 19:03:48.406035 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Jan 27 19:03:48 crc kubenswrapper[4853]: I0127 19:03:48.406205 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Jan 27 19:03:48 crc kubenswrapper[4853]: I0127 19:03:48.406293 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-rxrkc" Jan 27 19:03:48 crc kubenswrapper[4853]: I0127 19:03:48.406367 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Jan 27 19:03:48 crc kubenswrapper[4853]: I0127 19:03:48.406429 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Jan 27 19:03:48 crc kubenswrapper[4853]: I0127 19:03:48.406457 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Jan 27 19:03:48 crc kubenswrapper[4853]: I0127 19:03:48.411971 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 27 19:03:48 crc kubenswrapper[4853]: I0127 19:03:48.423126 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/b6e38e4d-fbc2-4702-9767-e0376655776a-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"b6e38e4d-fbc2-4702-9767-e0376655776a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 19:03:48 crc kubenswrapper[4853]: I0127 19:03:48.423196 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/b6e38e4d-fbc2-4702-9767-e0376655776a-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"b6e38e4d-fbc2-4702-9767-e0376655776a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 19:03:48 crc kubenswrapper[4853]: I0127 19:03:48.423246 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jspzk\" (UniqueName: \"kubernetes.io/projected/b6e38e4d-fbc2-4702-9767-e0376655776a-kube-api-access-jspzk\") pod \"rabbitmq-cell1-server-0\" (UID: \"b6e38e4d-fbc2-4702-9767-e0376655776a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 19:03:48 crc kubenswrapper[4853]: I0127 19:03:48.423566 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/b6e38e4d-fbc2-4702-9767-e0376655776a-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"b6e38e4d-fbc2-4702-9767-e0376655776a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 19:03:48 crc kubenswrapper[4853]: I0127 19:03:48.423621 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/b6e38e4d-fbc2-4702-9767-e0376655776a-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"b6e38e4d-fbc2-4702-9767-e0376655776a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 19:03:48 crc kubenswrapper[4853]: I0127 19:03:48.429589 4853 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b6e38e4d-fbc2-4702-9767-e0376655776a-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"b6e38e4d-fbc2-4702-9767-e0376655776a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 19:03:48 crc kubenswrapper[4853]: I0127 19:03:48.429657 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/b6e38e4d-fbc2-4702-9767-e0376655776a-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"b6e38e4d-fbc2-4702-9767-e0376655776a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 19:03:48 crc kubenswrapper[4853]: I0127 19:03:48.429748 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/b6e38e4d-fbc2-4702-9767-e0376655776a-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"b6e38e4d-fbc2-4702-9767-e0376655776a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 19:03:48 crc kubenswrapper[4853]: I0127 19:03:48.429791 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/b6e38e4d-fbc2-4702-9767-e0376655776a-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"b6e38e4d-fbc2-4702-9767-e0376655776a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 19:03:48 crc kubenswrapper[4853]: I0127 19:03:48.429950 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"b6e38e4d-fbc2-4702-9767-e0376655776a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 19:03:48 crc kubenswrapper[4853]: I0127 19:03:48.430092 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/b6e38e4d-fbc2-4702-9767-e0376655776a-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"b6e38e4d-fbc2-4702-9767-e0376655776a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 19:03:48 crc kubenswrapper[4853]: I0127 19:03:48.479791 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-79bd4cc8c9-lqmk7" Jan 27 19:03:48 crc kubenswrapper[4853]: I0127 19:03:48.531964 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/b6e38e4d-fbc2-4702-9767-e0376655776a-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"b6e38e4d-fbc2-4702-9767-e0376655776a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 19:03:48 crc kubenswrapper[4853]: I0127 19:03:48.532050 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/b6e38e4d-fbc2-4702-9767-e0376655776a-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"b6e38e4d-fbc2-4702-9767-e0376655776a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 19:03:48 crc kubenswrapper[4853]: I0127 19:03:48.532106 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jspzk\" (UniqueName: \"kubernetes.io/projected/b6e38e4d-fbc2-4702-9767-e0376655776a-kube-api-access-jspzk\") pod \"rabbitmq-cell1-server-0\" (UID: \"b6e38e4d-fbc2-4702-9767-e0376655776a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 19:03:48 crc kubenswrapper[4853]: I0127 19:03:48.532232 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/b6e38e4d-fbc2-4702-9767-e0376655776a-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"b6e38e4d-fbc2-4702-9767-e0376655776a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 19:03:48 crc kubenswrapper[4853]: I0127 19:03:48.532260 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/b6e38e4d-fbc2-4702-9767-e0376655776a-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"b6e38e4d-fbc2-4702-9767-e0376655776a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 19:03:48 crc kubenswrapper[4853]: I0127 19:03:48.532347 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b6e38e4d-fbc2-4702-9767-e0376655776a-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"b6e38e4d-fbc2-4702-9767-e0376655776a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 19:03:48 crc kubenswrapper[4853]: I0127 19:03:48.532374 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/b6e38e4d-fbc2-4702-9767-e0376655776a-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"b6e38e4d-fbc2-4702-9767-e0376655776a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 19:03:48 crc kubenswrapper[4853]: I0127 19:03:48.532435 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/b6e38e4d-fbc2-4702-9767-e0376655776a-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"b6e38e4d-fbc2-4702-9767-e0376655776a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 19:03:48 crc kubenswrapper[4853]: I0127 19:03:48.532488 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/b6e38e4d-fbc2-4702-9767-e0376655776a-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"b6e38e4d-fbc2-4702-9767-e0376655776a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 19:03:48 crc kubenswrapper[4853]: I0127 19:03:48.532587 4853 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"b6e38e4d-fbc2-4702-9767-e0376655776a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 19:03:48 crc kubenswrapper[4853]: I0127 19:03:48.532691 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/b6e38e4d-fbc2-4702-9767-e0376655776a-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"b6e38e4d-fbc2-4702-9767-e0376655776a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 19:03:48 crc kubenswrapper[4853]: I0127 19:03:48.535287 4853 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"b6e38e4d-fbc2-4702-9767-e0376655776a\") device mount path \"/mnt/openstack/pv10\"" pod="openstack/rabbitmq-cell1-server-0" Jan 27 19:03:48 crc kubenswrapper[4853]: I0127 19:03:48.535568 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/b6e38e4d-fbc2-4702-9767-e0376655776a-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"b6e38e4d-fbc2-4702-9767-e0376655776a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 19:03:48 crc kubenswrapper[4853]: I0127 19:03:48.535906 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/b6e38e4d-fbc2-4702-9767-e0376655776a-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"b6e38e4d-fbc2-4702-9767-e0376655776a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 19:03:48 crc kubenswrapper[4853]: I0127 19:03:48.536167 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b6e38e4d-fbc2-4702-9767-e0376655776a-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"b6e38e4d-fbc2-4702-9767-e0376655776a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 19:03:48 crc kubenswrapper[4853]: I0127 19:03:48.536397 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/b6e38e4d-fbc2-4702-9767-e0376655776a-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"b6e38e4d-fbc2-4702-9767-e0376655776a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 19:03:48 crc kubenswrapper[4853]: I0127 19:03:48.536554 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/b6e38e4d-fbc2-4702-9767-e0376655776a-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"b6e38e4d-fbc2-4702-9767-e0376655776a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 19:03:48 crc kubenswrapper[4853]: I0127 19:03:48.539911 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/b6e38e4d-fbc2-4702-9767-e0376655776a-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"b6e38e4d-fbc2-4702-9767-e0376655776a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 19:03:48 crc kubenswrapper[4853]: I0127 19:03:48.541724 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/b6e38e4d-fbc2-4702-9767-e0376655776a-rabbitmq-tls\") pod 
\"rabbitmq-cell1-server-0\" (UID: \"b6e38e4d-fbc2-4702-9767-e0376655776a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 19:03:48 crc kubenswrapper[4853]: I0127 19:03:48.554438 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/b6e38e4d-fbc2-4702-9767-e0376655776a-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"b6e38e4d-fbc2-4702-9767-e0376655776a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 19:03:48 crc kubenswrapper[4853]: I0127 19:03:48.554589 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jspzk\" (UniqueName: \"kubernetes.io/projected/b6e38e4d-fbc2-4702-9767-e0376655776a-kube-api-access-jspzk\") pod \"rabbitmq-cell1-server-0\" (UID: \"b6e38e4d-fbc2-4702-9767-e0376655776a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 19:03:48 crc kubenswrapper[4853]: I0127 19:03:48.556129 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/b6e38e4d-fbc2-4702-9767-e0376655776a-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"b6e38e4d-fbc2-4702-9767-e0376655776a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 19:03:48 crc kubenswrapper[4853]: I0127 19:03:48.575591 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"b6e38e4d-fbc2-4702-9767-e0376655776a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 19:03:48 crc kubenswrapper[4853]: I0127 19:03:48.758259 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 27 19:03:48 crc kubenswrapper[4853]: I0127 19:03:48.761382 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 27 19:03:48 crc kubenswrapper[4853]: I0127 19:03:48.852509 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-79bd4cc8c9-lqmk7"] Jan 27 19:03:49 crc kubenswrapper[4853]: W0127 19:03:49.368856 4853 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb6e38e4d_fbc2_4702_9767_e0376655776a.slice/crio-2a6185deaed7c800f9eb9b49f66c6ac86daf91cbc5cb64d7dd5b635b87acdd75 WatchSource:0}: Error finding container 2a6185deaed7c800f9eb9b49f66c6ac86daf91cbc5cb64d7dd5b635b87acdd75: Status 404 returned error can't find the container with id 2a6185deaed7c800f9eb9b49f66c6ac86daf91cbc5cb64d7dd5b635b87acdd75 Jan 27 19:03:49 crc kubenswrapper[4853]: I0127 19:03:49.369236 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 27 19:03:49 crc kubenswrapper[4853]: I0127 19:03:49.824933 4853 generic.go:334] "Generic (PLEG): container finished" podID="5132f995-e4d4-4318-8e77-ff3f013fba10" containerID="29b3384b0a88a46be56431ae90339d69ac8f8a0c4c139fb13c28558d7c9a7e86" exitCode=0 Jan 27 19:03:49 crc kubenswrapper[4853]: I0127 19:03:49.825053 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-79bd4cc8c9-lqmk7" event={"ID":"5132f995-e4d4-4318-8e77-ff3f013fba10","Type":"ContainerDied","Data":"29b3384b0a88a46be56431ae90339d69ac8f8a0c4c139fb13c28558d7c9a7e86"} Jan 27 19:03:49 crc kubenswrapper[4853]: I0127 19:03:49.825135 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-79bd4cc8c9-lqmk7" event={"ID":"5132f995-e4d4-4318-8e77-ff3f013fba10","Type":"ContainerStarted","Data":"bbfdd684de15625d606622d735de39c9a9e8b89ebf7a5f22d26ac9f069a43391"} Jan 27 19:03:49 crc kubenswrapper[4853]: I0127 19:03:49.826337 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"b6e38e4d-fbc2-4702-9767-e0376655776a","Type":"ContainerStarted","Data":"2a6185deaed7c800f9eb9b49f66c6ac86daf91cbc5cb64d7dd5b635b87acdd75"} Jan 27 19:03:49 crc kubenswrapper[4853]: I0127 19:03:49.827955 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"e1ba655b-12d8-4f9d-882f-1d7faeb1f65f","Type":"ContainerStarted","Data":"52f789d394ddf017ab6111469dd3bb56e67e55f210912b7b7fd3edd7b79241ce"} Jan 27 19:03:50 crc kubenswrapper[4853]: I0127 19:03:50.128824 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="525d82bf-e147-429f-8915-365aa48be00b" path="/var/lib/kubelet/pods/525d82bf-e147-429f-8915-365aa48be00b/volumes" Jan 27 19:03:50 crc kubenswrapper[4853]: I0127 19:03:50.838665 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-79bd4cc8c9-lqmk7" event={"ID":"5132f995-e4d4-4318-8e77-ff3f013fba10","Type":"ContainerStarted","Data":"6d143cca44d4b1449b777dab15e559a1b611566279f5d32c6907b1a7a5945c08"} Jan 27 19:03:50 crc kubenswrapper[4853]: I0127 19:03:50.838849 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-79bd4cc8c9-lqmk7" Jan 27 19:03:50 crc kubenswrapper[4853]: I0127 19:03:50.864967 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-79bd4cc8c9-lqmk7" podStartSLOduration=2.864947316 podStartE2EDuration="2.864947316s" podCreationTimestamp="2026-01-27 19:03:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 
00:00:00 +0000 UTC" observedRunningTime="2026-01-27 19:03:50.857064259 +0000 UTC m=+1273.319607152" watchObservedRunningTime="2026-01-27 19:03:50.864947316 +0000 UTC m=+1273.327490199" Jan 27 19:03:51 crc kubenswrapper[4853]: I0127 19:03:51.847429 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"b6e38e4d-fbc2-4702-9767-e0376655776a","Type":"ContainerStarted","Data":"e847dfbf87bee2419e9b8cf57628672871db14dc2dddd71d15481c5c2ac6c19d"} Jan 27 19:03:51 crc kubenswrapper[4853]: I0127 19:03:51.848916 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"e1ba655b-12d8-4f9d-882f-1d7faeb1f65f","Type":"ContainerStarted","Data":"b5c2b729b40100da7f389d1243f31572c968f7a0ad2d40fb4cec16b44d78a48c"} Jan 27 19:03:58 crc kubenswrapper[4853]: I0127 19:03:58.481737 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-79bd4cc8c9-lqmk7" Jan 27 19:03:58 crc kubenswrapper[4853]: I0127 19:03:58.536756 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-89c5cd4d5-gndzd"] Jan 27 19:03:58 crc kubenswrapper[4853]: I0127 19:03:58.537020 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-89c5cd4d5-gndzd" podUID="a296c295-f710-476c-bca3-f75ef11ba83c" containerName="dnsmasq-dns" containerID="cri-o://968f92507be4339009a031d484a1e5fa4513ccd2846c13e642268134e25fa315" gracePeriod=10 Jan 27 19:03:58 crc kubenswrapper[4853]: I0127 19:03:58.703746 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-55478c4467-hjhl4"] Jan 27 19:03:58 crc kubenswrapper[4853]: I0127 19:03:58.758773 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-55478c4467-hjhl4"] Jan 27 19:03:58 crc kubenswrapper[4853]: I0127 19:03:58.758946 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-55478c4467-hjhl4" Jan 27 19:03:58 crc kubenswrapper[4853]: I0127 19:03:58.868327 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7225b878-e91a-4d57-8f13-19de93bd506d-config\") pod \"dnsmasq-dns-55478c4467-hjhl4\" (UID: \"7225b878-e91a-4d57-8f13-19de93bd506d\") " pod="openstack/dnsmasq-dns-55478c4467-hjhl4" Jan 27 19:03:58 crc kubenswrapper[4853]: I0127 19:03:58.868410 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7225b878-e91a-4d57-8f13-19de93bd506d-dns-swift-storage-0\") pod \"dnsmasq-dns-55478c4467-hjhl4\" (UID: \"7225b878-e91a-4d57-8f13-19de93bd506d\") " pod="openstack/dnsmasq-dns-55478c4467-hjhl4" Jan 27 19:03:58 crc kubenswrapper[4853]: I0127 19:03:58.868443 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7225b878-e91a-4d57-8f13-19de93bd506d-ovsdbserver-nb\") pod \"dnsmasq-dns-55478c4467-hjhl4\" (UID: \"7225b878-e91a-4d57-8f13-19de93bd506d\") " pod="openstack/dnsmasq-dns-55478c4467-hjhl4" Jan 27 19:03:58 crc kubenswrapper[4853]: I0127 19:03:58.868725 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-62x2s\" (UniqueName: \"kubernetes.io/projected/7225b878-e91a-4d57-8f13-19de93bd506d-kube-api-access-62x2s\") pod \"dnsmasq-dns-55478c4467-hjhl4\" (UID: \"7225b878-e91a-4d57-8f13-19de93bd506d\") " pod="openstack/dnsmasq-dns-55478c4467-hjhl4" Jan 27 19:03:58 crc kubenswrapper[4853]: I0127 19:03:58.868864 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7225b878-e91a-4d57-8f13-19de93bd506d-ovsdbserver-sb\") pod \"dnsmasq-dns-55478c4467-hjhl4\" (UID: \"7225b878-e91a-4d57-8f13-19de93bd506d\") " pod="openstack/dnsmasq-dns-55478c4467-hjhl4" Jan 27 19:03:58 crc kubenswrapper[4853]: I0127 19:03:58.868925 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7225b878-e91a-4d57-8f13-19de93bd506d-dns-svc\") pod \"dnsmasq-dns-55478c4467-hjhl4\" (UID: \"7225b878-e91a-4d57-8f13-19de93bd506d\") " pod="openstack/dnsmasq-dns-55478c4467-hjhl4" Jan 27 19:03:58 crc kubenswrapper[4853]: I0127 19:03:58.868953 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/7225b878-e91a-4d57-8f13-19de93bd506d-openstack-edpm-ipam\") pod \"dnsmasq-dns-55478c4467-hjhl4\" (UID: \"7225b878-e91a-4d57-8f13-19de93bd506d\") " pod="openstack/dnsmasq-dns-55478c4467-hjhl4" Jan 27 19:03:58 crc kubenswrapper[4853]: I0127 19:03:58.918273 4853 generic.go:334] "Generic (PLEG): container finished" podID="a296c295-f710-476c-bca3-f75ef11ba83c" containerID="968f92507be4339009a031d484a1e5fa4513ccd2846c13e642268134e25fa315" exitCode=0 Jan 27 19:03:58 crc kubenswrapper[4853]: I0127 19:03:58.918320 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-89c5cd4d5-gndzd" event={"ID":"a296c295-f710-476c-bca3-f75ef11ba83c","Type":"ContainerDied","Data":"968f92507be4339009a031d484a1e5fa4513ccd2846c13e642268134e25fa315"} Jan 27 19:03:58 crc kubenswrapper[4853]: I0127 
19:03:58.971553 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7225b878-e91a-4d57-8f13-19de93bd506d-dns-swift-storage-0\") pod \"dnsmasq-dns-55478c4467-hjhl4\" (UID: \"7225b878-e91a-4d57-8f13-19de93bd506d\") " pod="openstack/dnsmasq-dns-55478c4467-hjhl4" Jan 27 19:03:58 crc kubenswrapper[4853]: I0127 19:03:58.971600 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7225b878-e91a-4d57-8f13-19de93bd506d-ovsdbserver-nb\") pod \"dnsmasq-dns-55478c4467-hjhl4\" (UID: \"7225b878-e91a-4d57-8f13-19de93bd506d\") " pod="openstack/dnsmasq-dns-55478c4467-hjhl4" Jan 27 19:03:58 crc kubenswrapper[4853]: I0127 19:03:58.971629 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-62x2s\" (UniqueName: \"kubernetes.io/projected/7225b878-e91a-4d57-8f13-19de93bd506d-kube-api-access-62x2s\") pod \"dnsmasq-dns-55478c4467-hjhl4\" (UID: \"7225b878-e91a-4d57-8f13-19de93bd506d\") " pod="openstack/dnsmasq-dns-55478c4467-hjhl4" Jan 27 19:03:58 crc kubenswrapper[4853]: I0127 19:03:58.971666 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7225b878-e91a-4d57-8f13-19de93bd506d-ovsdbserver-sb\") pod \"dnsmasq-dns-55478c4467-hjhl4\" (UID: \"7225b878-e91a-4d57-8f13-19de93bd506d\") " pod="openstack/dnsmasq-dns-55478c4467-hjhl4" Jan 27 19:03:58 crc kubenswrapper[4853]: I0127 19:03:58.971691 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7225b878-e91a-4d57-8f13-19de93bd506d-dns-svc\") pod \"dnsmasq-dns-55478c4467-hjhl4\" (UID: \"7225b878-e91a-4d57-8f13-19de93bd506d\") " pod="openstack/dnsmasq-dns-55478c4467-hjhl4" Jan 27 19:03:58 crc kubenswrapper[4853]: I0127 19:03:58.971709 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/7225b878-e91a-4d57-8f13-19de93bd506d-openstack-edpm-ipam\") pod \"dnsmasq-dns-55478c4467-hjhl4\" (UID: \"7225b878-e91a-4d57-8f13-19de93bd506d\") " pod="openstack/dnsmasq-dns-55478c4467-hjhl4" Jan 27 19:03:58 crc kubenswrapper[4853]: I0127 19:03:58.975808 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7225b878-e91a-4d57-8f13-19de93bd506d-dns-swift-storage-0\") pod \"dnsmasq-dns-55478c4467-hjhl4\" (UID: \"7225b878-e91a-4d57-8f13-19de93bd506d\") " pod="openstack/dnsmasq-dns-55478c4467-hjhl4" Jan 27 19:03:58 crc kubenswrapper[4853]: I0127 19:03:58.976436 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7225b878-e91a-4d57-8f13-19de93bd506d-config\") pod \"dnsmasq-dns-55478c4467-hjhl4\" (UID: \"7225b878-e91a-4d57-8f13-19de93bd506d\") " pod="openstack/dnsmasq-dns-55478c4467-hjhl4" Jan 27 19:03:58 crc kubenswrapper[4853]: I0127 19:03:58.978150 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7225b878-e91a-4d57-8f13-19de93bd506d-ovsdbserver-nb\") pod \"dnsmasq-dns-55478c4467-hjhl4\" (UID: \"7225b878-e91a-4d57-8f13-19de93bd506d\") " pod="openstack/dnsmasq-dns-55478c4467-hjhl4" Jan 27 19:03:58 crc kubenswrapper[4853]: I0127 19:03:58.978213 4853 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7225b878-e91a-4d57-8f13-19de93bd506d-dns-svc\") pod \"dnsmasq-dns-55478c4467-hjhl4\" (UID: \"7225b878-e91a-4d57-8f13-19de93bd506d\") " pod="openstack/dnsmasq-dns-55478c4467-hjhl4" Jan 27 19:03:58 crc kubenswrapper[4853]: I0127 19:03:58.978297 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/7225b878-e91a-4d57-8f13-19de93bd506d-openstack-edpm-ipam\") pod \"dnsmasq-dns-55478c4467-hjhl4\" (UID: \"7225b878-e91a-4d57-8f13-19de93bd506d\") " pod="openstack/dnsmasq-dns-55478c4467-hjhl4" Jan 27 19:03:58 crc kubenswrapper[4853]: I0127 19:03:58.979488 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7225b878-e91a-4d57-8f13-19de93bd506d-ovsdbserver-sb\") pod \"dnsmasq-dns-55478c4467-hjhl4\" (UID: \"7225b878-e91a-4d57-8f13-19de93bd506d\") " pod="openstack/dnsmasq-dns-55478c4467-hjhl4" Jan 27 19:03:58 crc kubenswrapper[4853]: I0127 19:03:58.979615 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7225b878-e91a-4d57-8f13-19de93bd506d-config\") pod \"dnsmasq-dns-55478c4467-hjhl4\" (UID: \"7225b878-e91a-4d57-8f13-19de93bd506d\") " pod="openstack/dnsmasq-dns-55478c4467-hjhl4" Jan 27 19:03:59 crc kubenswrapper[4853]: I0127 19:03:59.003426 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-62x2s\" (UniqueName: \"kubernetes.io/projected/7225b878-e91a-4d57-8f13-19de93bd506d-kube-api-access-62x2s\") pod \"dnsmasq-dns-55478c4467-hjhl4\" (UID: \"7225b878-e91a-4d57-8f13-19de93bd506d\") " pod="openstack/dnsmasq-dns-55478c4467-hjhl4" Jan 27 19:03:59 crc kubenswrapper[4853]: I0127 19:03:59.078967 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-89c5cd4d5-gndzd" Jan 27 19:03:59 crc kubenswrapper[4853]: I0127 19:03:59.088027 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-55478c4467-hjhl4" Jan 27 19:03:59 crc kubenswrapper[4853]: I0127 19:03:59.181592 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a296c295-f710-476c-bca3-f75ef11ba83c-dns-svc\") pod \"a296c295-f710-476c-bca3-f75ef11ba83c\" (UID: \"a296c295-f710-476c-bca3-f75ef11ba83c\") " Jan 27 19:03:59 crc kubenswrapper[4853]: I0127 19:03:59.182067 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a296c295-f710-476c-bca3-f75ef11ba83c-dns-swift-storage-0\") pod \"a296c295-f710-476c-bca3-f75ef11ba83c\" (UID: \"a296c295-f710-476c-bca3-f75ef11ba83c\") " Jan 27 19:03:59 crc kubenswrapper[4853]: I0127 19:03:59.182156 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a296c295-f710-476c-bca3-f75ef11ba83c-config\") pod \"a296c295-f710-476c-bca3-f75ef11ba83c\" (UID: \"a296c295-f710-476c-bca3-f75ef11ba83c\") " Jan 27 19:03:59 crc kubenswrapper[4853]: I0127 19:03:59.182307 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mf8lp\" (UniqueName: \"kubernetes.io/projected/a296c295-f710-476c-bca3-f75ef11ba83c-kube-api-access-mf8lp\") pod \"a296c295-f710-476c-bca3-f75ef11ba83c\" (UID: \"a296c295-f710-476c-bca3-f75ef11ba83c\") " Jan 27 19:03:59 crc kubenswrapper[4853]: I0127 19:03:59.182325 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a296c295-f710-476c-bca3-f75ef11ba83c-ovsdbserver-sb\") pod \"a296c295-f710-476c-bca3-f75ef11ba83c\" (UID: \"a296c295-f710-476c-bca3-f75ef11ba83c\") " Jan 27 19:03:59 crc kubenswrapper[4853]: I0127 19:03:59.182379 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a296c295-f710-476c-bca3-f75ef11ba83c-ovsdbserver-nb\") pod \"a296c295-f710-476c-bca3-f75ef11ba83c\" (UID: \"a296c295-f710-476c-bca3-f75ef11ba83c\") " Jan 27 19:03:59 crc kubenswrapper[4853]: I0127 19:03:59.189686 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a296c295-f710-476c-bca3-f75ef11ba83c-kube-api-access-mf8lp" (OuterVolumeSpecName: "kube-api-access-mf8lp") pod "a296c295-f710-476c-bca3-f75ef11ba83c" (UID: "a296c295-f710-476c-bca3-f75ef11ba83c"). InnerVolumeSpecName "kube-api-access-mf8lp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 19:03:59 crc kubenswrapper[4853]: I0127 19:03:59.260006 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a296c295-f710-476c-bca3-f75ef11ba83c-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "a296c295-f710-476c-bca3-f75ef11ba83c" (UID: "a296c295-f710-476c-bca3-f75ef11ba83c"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 19:03:59 crc kubenswrapper[4853]: I0127 19:03:59.268363 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a296c295-f710-476c-bca3-f75ef11ba83c-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "a296c295-f710-476c-bca3-f75ef11ba83c" (UID: "a296c295-f710-476c-bca3-f75ef11ba83c"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 19:03:59 crc kubenswrapper[4853]: I0127 19:03:59.268656 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a296c295-f710-476c-bca3-f75ef11ba83c-config" (OuterVolumeSpecName: "config") pod "a296c295-f710-476c-bca3-f75ef11ba83c" (UID: "a296c295-f710-476c-bca3-f75ef11ba83c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 19:03:59 crc kubenswrapper[4853]: I0127 19:03:59.272301 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a296c295-f710-476c-bca3-f75ef11ba83c-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "a296c295-f710-476c-bca3-f75ef11ba83c" (UID: "a296c295-f710-476c-bca3-f75ef11ba83c"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 19:03:59 crc kubenswrapper[4853]: I0127 19:03:59.284916 4853 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a296c295-f710-476c-bca3-f75ef11ba83c-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 27 19:03:59 crc kubenswrapper[4853]: I0127 19:03:59.284944 4853 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a296c295-f710-476c-bca3-f75ef11ba83c-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 27 19:03:59 crc kubenswrapper[4853]: I0127 19:03:59.284975 4853 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a296c295-f710-476c-bca3-f75ef11ba83c-config\") on node \"crc\" DevicePath \"\"" Jan 27 19:03:59 crc kubenswrapper[4853]: I0127 19:03:59.284984 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mf8lp\" (UniqueName: \"kubernetes.io/projected/a296c295-f710-476c-bca3-f75ef11ba83c-kube-api-access-mf8lp\") on node \"crc\" DevicePath \"\"" Jan 27 19:03:59 crc kubenswrapper[4853]: I0127 19:03:59.284994 4853 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a296c295-f710-476c-bca3-f75ef11ba83c-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 27 19:03:59 crc kubenswrapper[4853]: I0127 19:03:59.316065 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a296c295-f710-476c-bca3-f75ef11ba83c-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "a296c295-f710-476c-bca3-f75ef11ba83c" (UID: "a296c295-f710-476c-bca3-f75ef11ba83c"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 19:03:59 crc kubenswrapper[4853]: I0127 19:03:59.388154 4853 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a296c295-f710-476c-bca3-f75ef11ba83c-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 27 19:03:59 crc kubenswrapper[4853]: I0127 19:03:59.569396 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-55478c4467-hjhl4"] Jan 27 19:03:59 crc kubenswrapper[4853]: I0127 19:03:59.929504 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-89c5cd4d5-gndzd" event={"ID":"a296c295-f710-476c-bca3-f75ef11ba83c","Type":"ContainerDied","Data":"ec0ba1fef729e7be90f9d86b865983f5c0b2d17b8d4fe79fe36377152c97d84f"} Jan 27 19:03:59 crc kubenswrapper[4853]: I0127 19:03:59.929573 4853 scope.go:117] "RemoveContainer" containerID="968f92507be4339009a031d484a1e5fa4513ccd2846c13e642268134e25fa315" Jan 27 19:03:59 crc kubenswrapper[4853]: I0127 19:03:59.929604 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-89c5cd4d5-gndzd" Jan 27 19:03:59 crc kubenswrapper[4853]: I0127 19:03:59.931502 4853 generic.go:334] "Generic (PLEG): container finished" podID="7225b878-e91a-4d57-8f13-19de93bd506d" containerID="8bcbc84706a55fc81e8aba8eda80b766332191e9c648fc061fe6446f3cca2f44" exitCode=0 Jan 27 19:03:59 crc kubenswrapper[4853]: I0127 19:03:59.931590 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55478c4467-hjhl4" event={"ID":"7225b878-e91a-4d57-8f13-19de93bd506d","Type":"ContainerDied","Data":"8bcbc84706a55fc81e8aba8eda80b766332191e9c648fc061fe6446f3cca2f44"} Jan 27 19:03:59 crc kubenswrapper[4853]: I0127 19:03:59.931743 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55478c4467-hjhl4" event={"ID":"7225b878-e91a-4d57-8f13-19de93bd506d","Type":"ContainerStarted","Data":"4ecc40795bfb3f614751edfe401e62b1f8d69e0697a74f555ae06b539204fe84"} Jan 27 19:04:00 crc kubenswrapper[4853]: I0127 19:04:00.116152 4853 scope.go:117] "RemoveContainer" containerID="146b0afa7f42b784e0f674978f8a31a27992a23ddddf15cf01461e3c39e20e2f" Jan 27 19:04:00 crc kubenswrapper[4853]: I0127 19:04:00.155936 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-89c5cd4d5-gndzd"] Jan 27 19:04:00 crc kubenswrapper[4853]: I0127 19:04:00.165013 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-89c5cd4d5-gndzd"] Jan 27 19:04:00 crc kubenswrapper[4853]: I0127 19:04:00.943532 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55478c4467-hjhl4" event={"ID":"7225b878-e91a-4d57-8f13-19de93bd506d","Type":"ContainerStarted","Data":"add29f8247fea3266a80a424eac62b4c4ca2ee84dc7919dd13e6d5482c184227"} Jan 27 19:04:00 crc kubenswrapper[4853]: I0127 19:04:00.944287 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-55478c4467-hjhl4" Jan 27 19:04:00 crc kubenswrapper[4853]: I0127 19:04:00.966646 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-55478c4467-hjhl4" podStartSLOduration=2.966623738 podStartE2EDuration="2.966623738s" podCreationTimestamp="2026-01-27 19:03:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 19:04:00.960733029 +0000 UTC m=+1283.423275902" watchObservedRunningTime="2026-01-27 
19:04:00.966623738 +0000 UTC m=+1283.429166621" Jan 27 19:04:02 crc kubenswrapper[4853]: I0127 19:04:02.125940 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a296c295-f710-476c-bca3-f75ef11ba83c" path="/var/lib/kubelet/pods/a296c295-f710-476c-bca3-f75ef11ba83c/volumes" Jan 27 19:04:05 crc kubenswrapper[4853]: I0127 19:04:05.541820 4853 patch_prober.go:28] interesting pod/machine-config-daemon-6gqj2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 19:04:05 crc kubenswrapper[4853]: I0127 19:04:05.542305 4853 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6gqj2" podUID="b8a89b1e-bef8-4cb7-930c-480d3125778c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 19:04:09 crc kubenswrapper[4853]: I0127 19:04:09.090014 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-55478c4467-hjhl4" Jan 27 19:04:09 crc kubenswrapper[4853]: I0127 19:04:09.151290 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-79bd4cc8c9-lqmk7"] Jan 27 19:04:09 crc kubenswrapper[4853]: I0127 19:04:09.151529 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-79bd4cc8c9-lqmk7" podUID="5132f995-e4d4-4318-8e77-ff3f013fba10" containerName="dnsmasq-dns" containerID="cri-o://6d143cca44d4b1449b777dab15e559a1b611566279f5d32c6907b1a7a5945c08" gracePeriod=10 Jan 27 19:04:09 crc kubenswrapper[4853]: I0127 19:04:09.628338 4853 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-79bd4cc8c9-lqmk7" Jan 27 19:04:09 crc kubenswrapper[4853]: I0127 19:04:09.799780 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5132f995-e4d4-4318-8e77-ff3f013fba10-ovsdbserver-sb\") pod \"5132f995-e4d4-4318-8e77-ff3f013fba10\" (UID: \"5132f995-e4d4-4318-8e77-ff3f013fba10\") " Jan 27 19:04:09 crc kubenswrapper[4853]: I0127 19:04:09.799859 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/5132f995-e4d4-4318-8e77-ff3f013fba10-openstack-edpm-ipam\") pod \"5132f995-e4d4-4318-8e77-ff3f013fba10\" (UID: \"5132f995-e4d4-4318-8e77-ff3f013fba10\") " Jan 27 19:04:09 crc kubenswrapper[4853]: I0127 19:04:09.799988 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5132f995-e4d4-4318-8e77-ff3f013fba10-dns-swift-storage-0\") pod \"5132f995-e4d4-4318-8e77-ff3f013fba10\" (UID: \"5132f995-e4d4-4318-8e77-ff3f013fba10\") " Jan 27 19:04:09 crc kubenswrapper[4853]: I0127 19:04:09.800059 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5132f995-e4d4-4318-8e77-ff3f013fba10-ovsdbserver-nb\") pod \"5132f995-e4d4-4318-8e77-ff3f013fba10\" (UID: \"5132f995-e4d4-4318-8e77-ff3f013fba10\") " Jan 27 19:04:09 crc kubenswrapper[4853]: I0127 19:04:09.800080 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5132f995-e4d4-4318-8e77-ff3f013fba10-dns-svc\") pod \"5132f995-e4d4-4318-8e77-ff3f013fba10\" (UID: \"5132f995-e4d4-4318-8e77-ff3f013fba10\") " Jan 27 19:04:09 crc kubenswrapper[4853]: I0127 19:04:09.800239 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5132f995-e4d4-4318-8e77-ff3f013fba10-config\") pod \"5132f995-e4d4-4318-8e77-ff3f013fba10\" (UID: \"5132f995-e4d4-4318-8e77-ff3f013fba10\") " Jan 27 19:04:09 crc kubenswrapper[4853]: I0127 19:04:09.800303 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kdgfr\" (UniqueName: \"kubernetes.io/projected/5132f995-e4d4-4318-8e77-ff3f013fba10-kube-api-access-kdgfr\") pod \"5132f995-e4d4-4318-8e77-ff3f013fba10\" (UID: \"5132f995-e4d4-4318-8e77-ff3f013fba10\") " Jan 27 19:04:09 crc kubenswrapper[4853]: I0127 19:04:09.832402 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5132f995-e4d4-4318-8e77-ff3f013fba10-kube-api-access-kdgfr" (OuterVolumeSpecName: "kube-api-access-kdgfr") pod "5132f995-e4d4-4318-8e77-ff3f013fba10" (UID: "5132f995-e4d4-4318-8e77-ff3f013fba10"). InnerVolumeSpecName "kube-api-access-kdgfr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 19:04:09 crc kubenswrapper[4853]: I0127 19:04:09.905494 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kdgfr\" (UniqueName: \"kubernetes.io/projected/5132f995-e4d4-4318-8e77-ff3f013fba10-kube-api-access-kdgfr\") on node \"crc\" DevicePath \"\"" Jan 27 19:04:09 crc kubenswrapper[4853]: I0127 19:04:09.948140 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5132f995-e4d4-4318-8e77-ff3f013fba10-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "5132f995-e4d4-4318-8e77-ff3f013fba10" (UID: "5132f995-e4d4-4318-8e77-ff3f013fba10"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 19:04:09 crc kubenswrapper[4853]: I0127 19:04:09.952624 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5132f995-e4d4-4318-8e77-ff3f013fba10-config" (OuterVolumeSpecName: "config") pod "5132f995-e4d4-4318-8e77-ff3f013fba10" (UID: "5132f995-e4d4-4318-8e77-ff3f013fba10"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 19:04:09 crc kubenswrapper[4853]: I0127 19:04:09.962415 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5132f995-e4d4-4318-8e77-ff3f013fba10-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "5132f995-e4d4-4318-8e77-ff3f013fba10" (UID: "5132f995-e4d4-4318-8e77-ff3f013fba10"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 19:04:09 crc kubenswrapper[4853]: I0127 19:04:09.974281 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5132f995-e4d4-4318-8e77-ff3f013fba10-openstack-edpm-ipam" (OuterVolumeSpecName: "openstack-edpm-ipam") pod "5132f995-e4d4-4318-8e77-ff3f013fba10" (UID: "5132f995-e4d4-4318-8e77-ff3f013fba10"). InnerVolumeSpecName "openstack-edpm-ipam". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 19:04:09 crc kubenswrapper[4853]: I0127 19:04:09.976916 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5132f995-e4d4-4318-8e77-ff3f013fba10-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "5132f995-e4d4-4318-8e77-ff3f013fba10" (UID: "5132f995-e4d4-4318-8e77-ff3f013fba10"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 19:04:09 crc kubenswrapper[4853]: I0127 19:04:09.980841 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5132f995-e4d4-4318-8e77-ff3f013fba10-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "5132f995-e4d4-4318-8e77-ff3f013fba10" (UID: "5132f995-e4d4-4318-8e77-ff3f013fba10"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 19:04:10 crc kubenswrapper[4853]: I0127 19:04:10.008712 4853 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5132f995-e4d4-4318-8e77-ff3f013fba10-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 27 19:04:10 crc kubenswrapper[4853]: I0127 19:04:10.008765 4853 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/5132f995-e4d4-4318-8e77-ff3f013fba10-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 27 19:04:10 crc kubenswrapper[4853]: I0127 19:04:10.008775 4853 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5132f995-e4d4-4318-8e77-ff3f013fba10-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 27 19:04:10 crc kubenswrapper[4853]: I0127 19:04:10.008786 4853 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5132f995-e4d4-4318-8e77-ff3f013fba10-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 27 19:04:10 crc kubenswrapper[4853]: I0127 19:04:10.008796 4853 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5132f995-e4d4-4318-8e77-ff3f013fba10-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 27 19:04:10 crc kubenswrapper[4853]: I0127 19:04:10.008804 4853 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5132f995-e4d4-4318-8e77-ff3f013fba10-config\") on node \"crc\" DevicePath \"\"" Jan 27 19:04:10 crc kubenswrapper[4853]: I0127 19:04:10.024729 4853 generic.go:334] "Generic (PLEG): container finished" podID="5132f995-e4d4-4318-8e77-ff3f013fba10" containerID="6d143cca44d4b1449b777dab15e559a1b611566279f5d32c6907b1a7a5945c08" exitCode=0 Jan 27 19:04:10 crc kubenswrapper[4853]: I0127 19:04:10.024798 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-79bd4cc8c9-lqmk7" event={"ID":"5132f995-e4d4-4318-8e77-ff3f013fba10","Type":"ContainerDied","Data":"6d143cca44d4b1449b777dab15e559a1b611566279f5d32c6907b1a7a5945c08"} Jan 27 19:04:10 crc kubenswrapper[4853]: I0127 19:04:10.024830 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-79bd4cc8c9-lqmk7" event={"ID":"5132f995-e4d4-4318-8e77-ff3f013fba10","Type":"ContainerDied","Data":"bbfdd684de15625d606622d735de39c9a9e8b89ebf7a5f22d26ac9f069a43391"} Jan 27 19:04:10 crc kubenswrapper[4853]: I0127 19:04:10.024847 4853 scope.go:117] "RemoveContainer" containerID="6d143cca44d4b1449b777dab15e559a1b611566279f5d32c6907b1a7a5945c08" Jan 27 19:04:10 crc kubenswrapper[4853]: I0127 19:04:10.024984 4853 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-79bd4cc8c9-lqmk7" Jan 27 19:04:10 crc kubenswrapper[4853]: I0127 19:04:10.048064 4853 scope.go:117] "RemoveContainer" containerID="29b3384b0a88a46be56431ae90339d69ac8f8a0c4c139fb13c28558d7c9a7e86" Jan 27 19:04:10 crc kubenswrapper[4853]: I0127 19:04:10.061143 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-79bd4cc8c9-lqmk7"] Jan 27 19:04:10 crc kubenswrapper[4853]: I0127 19:04:10.070577 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-79bd4cc8c9-lqmk7"] Jan 27 19:04:10 crc kubenswrapper[4853]: I0127 19:04:10.078200 4853 scope.go:117] "RemoveContainer" containerID="6d143cca44d4b1449b777dab15e559a1b611566279f5d32c6907b1a7a5945c08" Jan 27 19:04:10 crc kubenswrapper[4853]: E0127 19:04:10.078882 4853 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6d143cca44d4b1449b777dab15e559a1b611566279f5d32c6907b1a7a5945c08\": container with ID starting with 6d143cca44d4b1449b777dab15e559a1b611566279f5d32c6907b1a7a5945c08 not found: ID does not exist" containerID="6d143cca44d4b1449b777dab15e559a1b611566279f5d32c6907b1a7a5945c08" Jan 27 19:04:10 crc kubenswrapper[4853]: I0127 19:04:10.078924 4853 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6d143cca44d4b1449b777dab15e559a1b611566279f5d32c6907b1a7a5945c08"} err="failed to get container status \"6d143cca44d4b1449b777dab15e559a1b611566279f5d32c6907b1a7a5945c08\": rpc error: code = NotFound desc = could not find container \"6d143cca44d4b1449b777dab15e559a1b611566279f5d32c6907b1a7a5945c08\": container with ID starting with 6d143cca44d4b1449b777dab15e559a1b611566279f5d32c6907b1a7a5945c08 not found: ID does not exist" Jan 27 19:04:10 crc kubenswrapper[4853]: I0127 19:04:10.078960 4853 scope.go:117] "RemoveContainer" containerID="29b3384b0a88a46be56431ae90339d69ac8f8a0c4c139fb13c28558d7c9a7e86" Jan 27 19:04:10 crc kubenswrapper[4853]: E0127 19:04:10.079336 4853 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"29b3384b0a88a46be56431ae90339d69ac8f8a0c4c139fb13c28558d7c9a7e86\": container with ID starting with 29b3384b0a88a46be56431ae90339d69ac8f8a0c4c139fb13c28558d7c9a7e86 not found: ID does not exist" containerID="29b3384b0a88a46be56431ae90339d69ac8f8a0c4c139fb13c28558d7c9a7e86" Jan 27 19:04:10 crc kubenswrapper[4853]: I0127 19:04:10.079382 4853 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"29b3384b0a88a46be56431ae90339d69ac8f8a0c4c139fb13c28558d7c9a7e86"} err="failed to get container status \"29b3384b0a88a46be56431ae90339d69ac8f8a0c4c139fb13c28558d7c9a7e86\": rpc error: code = NotFound desc = could not find container \"29b3384b0a88a46be56431ae90339d69ac8f8a0c4c139fb13c28558d7c9a7e86\": container with ID starting with 29b3384b0a88a46be56431ae90339d69ac8f8a0c4c139fb13c28558d7c9a7e86 not found: ID does not exist" Jan 27 19:04:10 crc kubenswrapper[4853]: I0127 19:04:10.126025 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5132f995-e4d4-4318-8e77-ff3f013fba10" path="/var/lib/kubelet/pods/5132f995-e4d4-4318-8e77-ff3f013fba10/volumes" Jan 27 19:04:22 crc kubenswrapper[4853]: I0127 19:04:22.522773 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-2xtfg"] Jan 27 19:04:22 crc kubenswrapper[4853]: E0127 19:04:22.523689 4853 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5132f995-e4d4-4318-8e77-ff3f013fba10" containerName="dnsmasq-dns" Jan 27 19:04:22 crc kubenswrapper[4853]: I0127 19:04:22.523703 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="5132f995-e4d4-4318-8e77-ff3f013fba10" containerName="dnsmasq-dns" Jan 27 19:04:22 crc kubenswrapper[4853]: E0127 19:04:22.523721 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a296c295-f710-476c-bca3-f75ef11ba83c" containerName="dnsmasq-dns" Jan 27 19:04:22 crc kubenswrapper[4853]: I0127 19:04:22.523727 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="a296c295-f710-476c-bca3-f75ef11ba83c" containerName="dnsmasq-dns" Jan 27 19:04:22 crc kubenswrapper[4853]: E0127 19:04:22.523738 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a296c295-f710-476c-bca3-f75ef11ba83c" containerName="init" Jan 27 19:04:22 crc kubenswrapper[4853]: I0127 19:04:22.523744 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="a296c295-f710-476c-bca3-f75ef11ba83c" containerName="init" Jan 27 19:04:22 crc kubenswrapper[4853]: E0127 19:04:22.523753 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5132f995-e4d4-4318-8e77-ff3f013fba10" containerName="init" Jan 27 19:04:22 crc kubenswrapper[4853]: I0127 19:04:22.523758 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="5132f995-e4d4-4318-8e77-ff3f013fba10" containerName="init" Jan 27 19:04:22 crc kubenswrapper[4853]: I0127 19:04:22.523932 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="a296c295-f710-476c-bca3-f75ef11ba83c" containerName="dnsmasq-dns" Jan 27 19:04:22 crc kubenswrapper[4853]: I0127 19:04:22.523948 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="5132f995-e4d4-4318-8e77-ff3f013fba10" containerName="dnsmasq-dns" Jan 27 19:04:22 crc kubenswrapper[4853]: I0127 19:04:22.524607 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-2xtfg" Jan 27 19:04:22 crc kubenswrapper[4853]: I0127 19:04:22.527957 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 27 19:04:22 crc kubenswrapper[4853]: I0127 19:04:22.529017 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 27 19:04:22 crc kubenswrapper[4853]: I0127 19:04:22.529032 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 27 19:04:22 crc kubenswrapper[4853]: I0127 19:04:22.529873 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-wn48z" Jan 27 19:04:22 crc kubenswrapper[4853]: I0127 19:04:22.540083 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-2xtfg"] Jan 27 19:04:22 crc kubenswrapper[4853]: I0127 19:04:22.541863 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-724l6\" (UniqueName: \"kubernetes.io/projected/327b1d19-709e-4efa-b5b3-11513e5dbdac-kube-api-access-724l6\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-2xtfg\" (UID: \"327b1d19-709e-4efa-b5b3-11513e5dbdac\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-2xtfg" Jan 27 19:04:22 crc kubenswrapper[4853]: I0127 19:04:22.542042 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/327b1d19-709e-4efa-b5b3-11513e5dbdac-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-2xtfg\" (UID: \"327b1d19-709e-4efa-b5b3-11513e5dbdac\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-2xtfg" Jan 27 19:04:22 crc kubenswrapper[4853]: I0127 19:04:22.542183 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/327b1d19-709e-4efa-b5b3-11513e5dbdac-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-2xtfg\" (UID: \"327b1d19-709e-4efa-b5b3-11513e5dbdac\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-2xtfg" Jan 27 19:04:22 crc kubenswrapper[4853]: I0127 19:04:22.542386 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/327b1d19-709e-4efa-b5b3-11513e5dbdac-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-2xtfg\" (UID: \"327b1d19-709e-4efa-b5b3-11513e5dbdac\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-2xtfg" Jan 27 19:04:22 crc kubenswrapper[4853]: I0127 19:04:22.644033 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-724l6\" (UniqueName: \"kubernetes.io/projected/327b1d19-709e-4efa-b5b3-11513e5dbdac-kube-api-access-724l6\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-2xtfg\" (UID: \"327b1d19-709e-4efa-b5b3-11513e5dbdac\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-2xtfg" Jan 27 19:04:22 crc kubenswrapper[4853]: I0127 19:04:22.644101 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/327b1d19-709e-4efa-b5b3-11513e5dbdac-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-2xtfg\" (UID: \"327b1d19-709e-4efa-b5b3-11513e5dbdac\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-2xtfg" Jan 27 19:04:22 crc kubenswrapper[4853]: I0127 19:04:22.644154 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/327b1d19-709e-4efa-b5b3-11513e5dbdac-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-2xtfg\" (UID: \"327b1d19-709e-4efa-b5b3-11513e5dbdac\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-2xtfg" Jan 27 19:04:22 crc kubenswrapper[4853]: I0127 19:04:22.644229 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/327b1d19-709e-4efa-b5b3-11513e5dbdac-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-2xtfg\" (UID: \"327b1d19-709e-4efa-b5b3-11513e5dbdac\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-2xtfg" Jan 27 19:04:22 crc kubenswrapper[4853]: I0127 19:04:22.649982 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/327b1d19-709e-4efa-b5b3-11513e5dbdac-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-2xtfg\" (UID: \"327b1d19-709e-4efa-b5b3-11513e5dbdac\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-2xtfg" Jan 27 19:04:22 crc kubenswrapper[4853]: I0127 19:04:22.650387 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/327b1d19-709e-4efa-b5b3-11513e5dbdac-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-2xtfg\" (UID: \"327b1d19-709e-4efa-b5b3-11513e5dbdac\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-2xtfg" Jan 27 19:04:22 crc kubenswrapper[4853]: I0127 19:04:22.662065 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/327b1d19-709e-4efa-b5b3-11513e5dbdac-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-2xtfg\" (UID: \"327b1d19-709e-4efa-b5b3-11513e5dbdac\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-2xtfg" Jan 27 19:04:22 crc kubenswrapper[4853]: I0127 19:04:22.669494 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-724l6\" (UniqueName: \"kubernetes.io/projected/327b1d19-709e-4efa-b5b3-11513e5dbdac-kube-api-access-724l6\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-2xtfg\" (UID: \"327b1d19-709e-4efa-b5b3-11513e5dbdac\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-2xtfg" Jan 27 19:04:22 crc kubenswrapper[4853]: I0127 19:04:22.842340 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-2xtfg" Jan 27 19:04:23 crc kubenswrapper[4853]: I0127 19:04:23.150692 4853 generic.go:334] "Generic (PLEG): container finished" podID="e1ba655b-12d8-4f9d-882f-1d7faeb1f65f" containerID="b5c2b729b40100da7f389d1243f31572c968f7a0ad2d40fb4cec16b44d78a48c" exitCode=0 Jan 27 19:04:23 crc kubenswrapper[4853]: I0127 19:04:23.150804 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"e1ba655b-12d8-4f9d-882f-1d7faeb1f65f","Type":"ContainerDied","Data":"b5c2b729b40100da7f389d1243f31572c968f7a0ad2d40fb4cec16b44d78a48c"} Jan 27 19:04:23 crc kubenswrapper[4853]: I0127 19:04:23.153962 4853 generic.go:334] "Generic (PLEG): container finished" podID="b6e38e4d-fbc2-4702-9767-e0376655776a" containerID="e847dfbf87bee2419e9b8cf57628672871db14dc2dddd71d15481c5c2ac6c19d" exitCode=0 Jan 27 19:04:23 crc kubenswrapper[4853]: I0127 19:04:23.154029 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"b6e38e4d-fbc2-4702-9767-e0376655776a","Type":"ContainerDied","Data":"e847dfbf87bee2419e9b8cf57628672871db14dc2dddd71d15481c5c2ac6c19d"} Jan 27 19:04:23 crc kubenswrapper[4853]: I0127 19:04:23.430825 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-2xtfg"] Jan 27 19:04:23 crc kubenswrapper[4853]: I0127 19:04:23.488476 4853 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 27 19:04:24 crc kubenswrapper[4853]: I0127 19:04:24.166836 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"b6e38e4d-fbc2-4702-9767-e0376655776a","Type":"ContainerStarted","Data":"11bc651871eca1bbf3e7f0856d86e186e33f3bf11304e064dfbffbe23572184b"} Jan 27 19:04:24 crc kubenswrapper[4853]: I0127 19:04:24.167080 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Jan 27 19:04:24 crc kubenswrapper[4853]: I0127 19:04:24.169732 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-2xtfg" event={"ID":"327b1d19-709e-4efa-b5b3-11513e5dbdac","Type":"ContainerStarted","Data":"32cf1dcfa79731eb40324d0386f49cd59ea9d7496e229ba0eef71a75c74820e5"} Jan 27 19:04:24 crc kubenswrapper[4853]: I0127 19:04:24.172105 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"e1ba655b-12d8-4f9d-882f-1d7faeb1f65f","Type":"ContainerStarted","Data":"ade0884746dea0711b426c6ec530ce0de1e8af27f98522f1e6828d41e4f04cb6"} Jan 27 19:04:24 crc kubenswrapper[4853]: I0127 19:04:24.172468 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Jan 27 19:04:24 crc kubenswrapper[4853]: I0127 19:04:24.210791 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=36.210768846 podStartE2EDuration="36.210768846s" podCreationTimestamp="2026-01-27 19:03:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 19:04:24.193292444 +0000 UTC m=+1306.655835327" watchObservedRunningTime="2026-01-27 19:04:24.210768846 +0000 UTC m=+1306.673311729" Jan 27 19:04:24 crc kubenswrapper[4853]: I0127 19:04:24.216446 4853 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=37.216427908 podStartE2EDuration="37.216427908s" podCreationTimestamp="2026-01-27 19:03:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 19:04:24.216251273 +0000 UTC m=+1306.678794166" watchObservedRunningTime="2026-01-27 19:04:24.216427908 +0000 UTC m=+1306.678970791" Jan 27 19:04:33 crc kubenswrapper[4853]: I0127 19:04:33.279370 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-2xtfg" event={"ID":"327b1d19-709e-4efa-b5b3-11513e5dbdac","Type":"ContainerStarted","Data":"5502b78e00d0f86327cabd31a0cae96f3bcf1818adab10c94ab29da7f412453e"} Jan 27 19:04:33 crc kubenswrapper[4853]: I0127 19:04:33.306635 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-2xtfg" podStartSLOduration=1.773524826 podStartE2EDuration="11.306616164s" podCreationTimestamp="2026-01-27 19:04:22 +0000 UTC" firstStartedPulling="2026-01-27 19:04:23.488013868 +0000 UTC m=+1305.950556761" lastFinishedPulling="2026-01-27 19:04:33.021105216 +0000 UTC m=+1315.483648099" observedRunningTime="2026-01-27 19:04:33.301201049 +0000 UTC m=+1315.763743932" watchObservedRunningTime="2026-01-27 19:04:33.306616164 +0000 UTC m=+1315.769159047" Jan 27 19:04:35 crc kubenswrapper[4853]: I0127 19:04:35.541731 4853 patch_prober.go:28] interesting pod/machine-config-daemon-6gqj2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 19:04:35 crc kubenswrapper[4853]: I0127 19:04:35.542196 4853 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6gqj2" podUID="b8a89b1e-bef8-4cb7-930c-480d3125778c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 19:04:38 crc kubenswrapper[4853]: I0127 19:04:38.176333 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Jan 27 19:04:38 crc kubenswrapper[4853]: I0127 19:04:38.764357 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Jan 27 19:04:45 crc kubenswrapper[4853]: I0127 19:04:45.398474 4853 generic.go:334] "Generic (PLEG): container finished" podID="327b1d19-709e-4efa-b5b3-11513e5dbdac" containerID="5502b78e00d0f86327cabd31a0cae96f3bcf1818adab10c94ab29da7f412453e" exitCode=0 Jan 27 19:04:45 crc kubenswrapper[4853]: I0127 19:04:45.398556 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-2xtfg" event={"ID":"327b1d19-709e-4efa-b5b3-11513e5dbdac","Type":"ContainerDied","Data":"5502b78e00d0f86327cabd31a0cae96f3bcf1818adab10c94ab29da7f412453e"} Jan 27 19:04:46 crc kubenswrapper[4853]: I0127 19:04:46.848731 4853 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-2xtfg" Jan 27 19:04:46 crc kubenswrapper[4853]: I0127 19:04:46.986805 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/327b1d19-709e-4efa-b5b3-11513e5dbdac-repo-setup-combined-ca-bundle\") pod \"327b1d19-709e-4efa-b5b3-11513e5dbdac\" (UID: \"327b1d19-709e-4efa-b5b3-11513e5dbdac\") " Jan 27 19:04:46 crc kubenswrapper[4853]: I0127 19:04:46.986900 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/327b1d19-709e-4efa-b5b3-11513e5dbdac-ssh-key-openstack-edpm-ipam\") pod \"327b1d19-709e-4efa-b5b3-11513e5dbdac\" (UID: \"327b1d19-709e-4efa-b5b3-11513e5dbdac\") " Jan 27 19:04:46 crc kubenswrapper[4853]: I0127 19:04:46.986937 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/327b1d19-709e-4efa-b5b3-11513e5dbdac-inventory\") pod \"327b1d19-709e-4efa-b5b3-11513e5dbdac\" (UID: \"327b1d19-709e-4efa-b5b3-11513e5dbdac\") " Jan 27 19:04:46 crc kubenswrapper[4853]: I0127 19:04:46.987054 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-724l6\" (UniqueName: \"kubernetes.io/projected/327b1d19-709e-4efa-b5b3-11513e5dbdac-kube-api-access-724l6\") pod \"327b1d19-709e-4efa-b5b3-11513e5dbdac\" (UID: \"327b1d19-709e-4efa-b5b3-11513e5dbdac\") " Jan 27 19:04:46 crc kubenswrapper[4853]: I0127 19:04:46.994699 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/327b1d19-709e-4efa-b5b3-11513e5dbdac-kube-api-access-724l6" (OuterVolumeSpecName: "kube-api-access-724l6") pod "327b1d19-709e-4efa-b5b3-11513e5dbdac" (UID: "327b1d19-709e-4efa-b5b3-11513e5dbdac"). InnerVolumeSpecName "kube-api-access-724l6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 19:04:46 crc kubenswrapper[4853]: I0127 19:04:46.995044 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/327b1d19-709e-4efa-b5b3-11513e5dbdac-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "327b1d19-709e-4efa-b5b3-11513e5dbdac" (UID: "327b1d19-709e-4efa-b5b3-11513e5dbdac"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 19:04:47 crc kubenswrapper[4853]: I0127 19:04:47.035785 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/327b1d19-709e-4efa-b5b3-11513e5dbdac-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "327b1d19-709e-4efa-b5b3-11513e5dbdac" (UID: "327b1d19-709e-4efa-b5b3-11513e5dbdac"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 19:04:47 crc kubenswrapper[4853]: I0127 19:04:47.040512 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/327b1d19-709e-4efa-b5b3-11513e5dbdac-inventory" (OuterVolumeSpecName: "inventory") pod "327b1d19-709e-4efa-b5b3-11513e5dbdac" (UID: "327b1d19-709e-4efa-b5b3-11513e5dbdac"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 19:04:47 crc kubenswrapper[4853]: I0127 19:04:47.089182 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-724l6\" (UniqueName: \"kubernetes.io/projected/327b1d19-709e-4efa-b5b3-11513e5dbdac-kube-api-access-724l6\") on node \"crc\" DevicePath \"\"" Jan 27 19:04:47 crc kubenswrapper[4853]: I0127 19:04:47.089218 4853 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/327b1d19-709e-4efa-b5b3-11513e5dbdac-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 19:04:47 crc kubenswrapper[4853]: I0127 19:04:47.089232 4853 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/327b1d19-709e-4efa-b5b3-11513e5dbdac-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 27 19:04:47 crc kubenswrapper[4853]: I0127 19:04:47.089244 4853 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/327b1d19-709e-4efa-b5b3-11513e5dbdac-inventory\") on node \"crc\" DevicePath \"\"" Jan 27 19:04:47 crc kubenswrapper[4853]: I0127 19:04:47.416928 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-2xtfg" event={"ID":"327b1d19-709e-4efa-b5b3-11513e5dbdac","Type":"ContainerDied","Data":"32cf1dcfa79731eb40324d0386f49cd59ea9d7496e229ba0eef71a75c74820e5"} Jan 27 19:04:47 crc kubenswrapper[4853]: I0127 19:04:47.416973 4853 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="32cf1dcfa79731eb40324d0386f49cd59ea9d7496e229ba0eef71a75c74820e5" Jan 27 19:04:47 crc kubenswrapper[4853]: I0127 19:04:47.417038 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-2xtfg" Jan 27 19:04:47 crc kubenswrapper[4853]: I0127 19:04:47.498459 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-9hbp9"] Jan 27 19:04:47 crc kubenswrapper[4853]: E0127 19:04:47.499035 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="327b1d19-709e-4efa-b5b3-11513e5dbdac" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Jan 27 19:04:47 crc kubenswrapper[4853]: I0127 19:04:47.499061 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="327b1d19-709e-4efa-b5b3-11513e5dbdac" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Jan 27 19:04:47 crc kubenswrapper[4853]: I0127 19:04:47.499352 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="327b1d19-709e-4efa-b5b3-11513e5dbdac" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Jan 27 19:04:47 crc kubenswrapper[4853]: I0127 19:04:47.500143 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-9hbp9" Jan 27 19:04:47 crc kubenswrapper[4853]: I0127 19:04:47.503618 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 27 19:04:47 crc kubenswrapper[4853]: I0127 19:04:47.503862 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-wn48z" Jan 27 19:04:47 crc kubenswrapper[4853]: I0127 19:04:47.503977 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 27 19:04:47 crc kubenswrapper[4853]: I0127 19:04:47.504236 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 27 19:04:47 crc kubenswrapper[4853]: I0127 19:04:47.507293 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-9hbp9"] Jan 27 19:04:47 crc kubenswrapper[4853]: I0127 19:04:47.598232 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c936a14f-519a-4f53-a09b-f7cb85bcdd6b-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-9hbp9\" (UID: \"c936a14f-519a-4f53-a09b-f7cb85bcdd6b\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-9hbp9" Jan 27 19:04:47 crc kubenswrapper[4853]: I0127 19:04:47.598286 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c936a14f-519a-4f53-a09b-f7cb85bcdd6b-ssh-key-openstack-edpm-ipam\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-9hbp9\" (UID: \"c936a14f-519a-4f53-a09b-f7cb85bcdd6b\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-9hbp9" Jan 27 19:04:47 crc kubenswrapper[4853]: I0127 19:04:47.598375 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rr46f\" (UniqueName: \"kubernetes.io/projected/c936a14f-519a-4f53-a09b-f7cb85bcdd6b-kube-api-access-rr46f\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-9hbp9\" (UID: \"c936a14f-519a-4f53-a09b-f7cb85bcdd6b\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-9hbp9" Jan 27 19:04:47 crc kubenswrapper[4853]: I0127 19:04:47.699886 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rr46f\" (UniqueName: \"kubernetes.io/projected/c936a14f-519a-4f53-a09b-f7cb85bcdd6b-kube-api-access-rr46f\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-9hbp9\" (UID: \"c936a14f-519a-4f53-a09b-f7cb85bcdd6b\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-9hbp9" Jan 27 19:04:47 crc kubenswrapper[4853]: I0127 19:04:47.700020 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c936a14f-519a-4f53-a09b-f7cb85bcdd6b-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-9hbp9\" (UID: \"c936a14f-519a-4f53-a09b-f7cb85bcdd6b\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-9hbp9" Jan 27 19:04:47 crc kubenswrapper[4853]: I0127 19:04:47.700047 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c936a14f-519a-4f53-a09b-f7cb85bcdd6b-ssh-key-openstack-edpm-ipam\") pod 
\"redhat-edpm-deployment-openstack-edpm-ipam-9hbp9\" (UID: \"c936a14f-519a-4f53-a09b-f7cb85bcdd6b\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-9hbp9" Jan 27 19:04:47 crc kubenswrapper[4853]: I0127 19:04:47.704204 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c936a14f-519a-4f53-a09b-f7cb85bcdd6b-ssh-key-openstack-edpm-ipam\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-9hbp9\" (UID: \"c936a14f-519a-4f53-a09b-f7cb85bcdd6b\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-9hbp9" Jan 27 19:04:47 crc kubenswrapper[4853]: I0127 19:04:47.706578 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c936a14f-519a-4f53-a09b-f7cb85bcdd6b-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-9hbp9\" (UID: \"c936a14f-519a-4f53-a09b-f7cb85bcdd6b\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-9hbp9" Jan 27 19:04:47 crc kubenswrapper[4853]: I0127 19:04:47.725472 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rr46f\" (UniqueName: \"kubernetes.io/projected/c936a14f-519a-4f53-a09b-f7cb85bcdd6b-kube-api-access-rr46f\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-9hbp9\" (UID: \"c936a14f-519a-4f53-a09b-f7cb85bcdd6b\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-9hbp9" Jan 27 19:04:47 crc kubenswrapper[4853]: I0127 19:04:47.833219 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-9hbp9" Jan 27 19:04:48 crc kubenswrapper[4853]: I0127 19:04:48.402561 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-9hbp9"] Jan 27 19:04:48 crc kubenswrapper[4853]: I0127 19:04:48.426367 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-9hbp9" event={"ID":"c936a14f-519a-4f53-a09b-f7cb85bcdd6b","Type":"ContainerStarted","Data":"cbb85b6178d3989f4040c26d50093cdf15ea131fc0dc4522938321658872c1d6"} Jan 27 19:04:49 crc kubenswrapper[4853]: I0127 19:04:49.440343 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-9hbp9" event={"ID":"c936a14f-519a-4f53-a09b-f7cb85bcdd6b","Type":"ContainerStarted","Data":"08f2b3224298178d18d2c34d3d27a07ababa0a932aa2bb628ae239f2cb1d0c15"} Jan 27 19:04:49 crc kubenswrapper[4853]: I0127 19:04:49.464547 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-9hbp9" podStartSLOduration=2.057767064 podStartE2EDuration="2.464524419s" podCreationTimestamp="2026-01-27 19:04:47 +0000 UTC" firstStartedPulling="2026-01-27 19:04:48.403979145 +0000 UTC m=+1330.866522018" lastFinishedPulling="2026-01-27 19:04:48.81073648 +0000 UTC m=+1331.273279373" observedRunningTime="2026-01-27 19:04:49.458778534 +0000 UTC m=+1331.921321427" watchObservedRunningTime="2026-01-27 19:04:49.464524419 +0000 UTC m=+1331.927067302" Jan 27 19:04:52 crc kubenswrapper[4853]: I0127 19:04:52.473106 4853 generic.go:334] "Generic (PLEG): container finished" podID="c936a14f-519a-4f53-a09b-f7cb85bcdd6b" containerID="08f2b3224298178d18d2c34d3d27a07ababa0a932aa2bb628ae239f2cb1d0c15" exitCode=0 Jan 27 19:04:52 crc kubenswrapper[4853]: I0127 19:04:52.473175 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-9hbp9" event={"ID":"c936a14f-519a-4f53-a09b-f7cb85bcdd6b","Type":"ContainerDied","Data":"08f2b3224298178d18d2c34d3d27a07ababa0a932aa2bb628ae239f2cb1d0c15"} Jan 27 19:04:53 crc kubenswrapper[4853]: I0127 19:04:53.897174 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-9hbp9" Jan 27 19:04:54 crc kubenswrapper[4853]: I0127 19:04:54.026709 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c936a14f-519a-4f53-a09b-f7cb85bcdd6b-ssh-key-openstack-edpm-ipam\") pod \"c936a14f-519a-4f53-a09b-f7cb85bcdd6b\" (UID: \"c936a14f-519a-4f53-a09b-f7cb85bcdd6b\") " Jan 27 19:04:54 crc kubenswrapper[4853]: I0127 19:04:54.026876 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rr46f\" (UniqueName: \"kubernetes.io/projected/c936a14f-519a-4f53-a09b-f7cb85bcdd6b-kube-api-access-rr46f\") pod \"c936a14f-519a-4f53-a09b-f7cb85bcdd6b\" (UID: \"c936a14f-519a-4f53-a09b-f7cb85bcdd6b\") " Jan 27 19:04:54 crc kubenswrapper[4853]: I0127 19:04:54.026906 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c936a14f-519a-4f53-a09b-f7cb85bcdd6b-inventory\") pod \"c936a14f-519a-4f53-a09b-f7cb85bcdd6b\" (UID: \"c936a14f-519a-4f53-a09b-f7cb85bcdd6b\") " Jan 27 19:04:54 crc kubenswrapper[4853]: I0127 19:04:54.032485 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c936a14f-519a-4f53-a09b-f7cb85bcdd6b-kube-api-access-rr46f" (OuterVolumeSpecName: "kube-api-access-rr46f") pod "c936a14f-519a-4f53-a09b-f7cb85bcdd6b" (UID: "c936a14f-519a-4f53-a09b-f7cb85bcdd6b"). InnerVolumeSpecName "kube-api-access-rr46f". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 19:04:54 crc kubenswrapper[4853]: I0127 19:04:54.057476 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c936a14f-519a-4f53-a09b-f7cb85bcdd6b-inventory" (OuterVolumeSpecName: "inventory") pod "c936a14f-519a-4f53-a09b-f7cb85bcdd6b" (UID: "c936a14f-519a-4f53-a09b-f7cb85bcdd6b"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 19:04:54 crc kubenswrapper[4853]: I0127 19:04:54.059654 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c936a14f-519a-4f53-a09b-f7cb85bcdd6b-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "c936a14f-519a-4f53-a09b-f7cb85bcdd6b" (UID: "c936a14f-519a-4f53-a09b-f7cb85bcdd6b"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 19:04:54 crc kubenswrapper[4853]: I0127 19:04:54.128965 4853 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c936a14f-519a-4f53-a09b-f7cb85bcdd6b-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 27 19:04:54 crc kubenswrapper[4853]: I0127 19:04:54.129010 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rr46f\" (UniqueName: \"kubernetes.io/projected/c936a14f-519a-4f53-a09b-f7cb85bcdd6b-kube-api-access-rr46f\") on node \"crc\" DevicePath \"\"" Jan 27 19:04:54 crc kubenswrapper[4853]: I0127 19:04:54.129023 4853 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c936a14f-519a-4f53-a09b-f7cb85bcdd6b-inventory\") on node \"crc\" DevicePath \"\"" Jan 27 19:04:54 crc kubenswrapper[4853]: I0127 19:04:54.494606 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-9hbp9" event={"ID":"c936a14f-519a-4f53-a09b-f7cb85bcdd6b","Type":"ContainerDied","Data":"cbb85b6178d3989f4040c26d50093cdf15ea131fc0dc4522938321658872c1d6"} Jan 27 19:04:54 crc kubenswrapper[4853]: I0127 19:04:54.494644 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-9hbp9" Jan 27 19:04:54 crc kubenswrapper[4853]: I0127 19:04:54.494658 4853 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cbb85b6178d3989f4040c26d50093cdf15ea131fc0dc4522938321658872c1d6" Jan 27 19:04:54 crc kubenswrapper[4853]: I0127 19:04:54.586669 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-28nw4"] Jan 27 19:04:54 crc kubenswrapper[4853]: E0127 19:04:54.587292 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c936a14f-519a-4f53-a09b-f7cb85bcdd6b" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Jan 27 19:04:54 crc kubenswrapper[4853]: I0127 19:04:54.587317 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="c936a14f-519a-4f53-a09b-f7cb85bcdd6b" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Jan 27 19:04:54 crc kubenswrapper[4853]: I0127 19:04:54.587629 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="c936a14f-519a-4f53-a09b-f7cb85bcdd6b" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Jan 27 19:04:54 crc kubenswrapper[4853]: I0127 19:04:54.588492 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-28nw4" Jan 27 19:04:54 crc kubenswrapper[4853]: I0127 19:04:54.591879 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 27 19:04:54 crc kubenswrapper[4853]: I0127 19:04:54.591925 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 27 19:04:54 crc kubenswrapper[4853]: I0127 19:04:54.592099 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-wn48z" Jan 27 19:04:54 crc kubenswrapper[4853]: I0127 19:04:54.592439 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 27 19:04:54 crc kubenswrapper[4853]: I0127 19:04:54.609996 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-28nw4"] Jan 27 19:04:54 crc kubenswrapper[4853]: I0127 19:04:54.639608 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ctwc9\" (UniqueName: \"kubernetes.io/projected/7f4e6043-7a79-455d-97be-20aff374a38d-kube-api-access-ctwc9\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-28nw4\" (UID: \"7f4e6043-7a79-455d-97be-20aff374a38d\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-28nw4" Jan 27 19:04:54 crc kubenswrapper[4853]: I0127 19:04:54.639679 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7f4e6043-7a79-455d-97be-20aff374a38d-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-28nw4\" (UID: \"7f4e6043-7a79-455d-97be-20aff374a38d\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-28nw4" Jan 27 19:04:54 crc kubenswrapper[4853]: I0127 19:04:54.639769 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7f4e6043-7a79-455d-97be-20aff374a38d-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-28nw4\" (UID: \"7f4e6043-7a79-455d-97be-20aff374a38d\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-28nw4" Jan 27 19:04:54 crc kubenswrapper[4853]: I0127 19:04:54.639870 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7f4e6043-7a79-455d-97be-20aff374a38d-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-28nw4\" (UID: \"7f4e6043-7a79-455d-97be-20aff374a38d\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-28nw4" Jan 27 19:04:54 crc kubenswrapper[4853]: I0127 19:04:54.741394 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ctwc9\" (UniqueName: \"kubernetes.io/projected/7f4e6043-7a79-455d-97be-20aff374a38d-kube-api-access-ctwc9\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-28nw4\" (UID: \"7f4e6043-7a79-455d-97be-20aff374a38d\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-28nw4" Jan 27 19:04:54 crc kubenswrapper[4853]: I0127 19:04:54.741466 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/7f4e6043-7a79-455d-97be-20aff374a38d-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-28nw4\" (UID: \"7f4e6043-7a79-455d-97be-20aff374a38d\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-28nw4" Jan 27 19:04:54 crc kubenswrapper[4853]: I0127 19:04:54.741532 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7f4e6043-7a79-455d-97be-20aff374a38d-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-28nw4\" (UID: \"7f4e6043-7a79-455d-97be-20aff374a38d\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-28nw4" Jan 27 19:04:54 crc kubenswrapper[4853]: I0127 19:04:54.741614 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7f4e6043-7a79-455d-97be-20aff374a38d-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-28nw4\" (UID: \"7f4e6043-7a79-455d-97be-20aff374a38d\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-28nw4" Jan 27 19:04:54 crc kubenswrapper[4853]: I0127 19:04:54.746279 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7f4e6043-7a79-455d-97be-20aff374a38d-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-28nw4\" (UID: \"7f4e6043-7a79-455d-97be-20aff374a38d\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-28nw4" Jan 27 19:04:54 crc kubenswrapper[4853]: I0127 19:04:54.746627 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7f4e6043-7a79-455d-97be-20aff374a38d-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-28nw4\" (UID: \"7f4e6043-7a79-455d-97be-20aff374a38d\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-28nw4" Jan 27 19:04:54 crc kubenswrapper[4853]: I0127 19:04:54.746728 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7f4e6043-7a79-455d-97be-20aff374a38d-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-28nw4\" (UID: \"7f4e6043-7a79-455d-97be-20aff374a38d\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-28nw4" Jan 27 19:04:54 crc kubenswrapper[4853]: I0127 19:04:54.760712 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ctwc9\" (UniqueName: \"kubernetes.io/projected/7f4e6043-7a79-455d-97be-20aff374a38d-kube-api-access-ctwc9\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-28nw4\" (UID: \"7f4e6043-7a79-455d-97be-20aff374a38d\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-28nw4" Jan 27 19:04:54 crc kubenswrapper[4853]: I0127 19:04:54.905945 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-28nw4" Jan 27 19:04:55 crc kubenswrapper[4853]: I0127 19:04:55.437759 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-28nw4"] Jan 27 19:04:55 crc kubenswrapper[4853]: I0127 19:04:55.505372 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-28nw4" event={"ID":"7f4e6043-7a79-455d-97be-20aff374a38d","Type":"ContainerStarted","Data":"10ee16ef75eec5412905549b1b1a3a02c66944928c1bcae430f88079d288f51e"} Jan 27 19:04:56 crc kubenswrapper[4853]: I0127 19:04:56.516275 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-28nw4" event={"ID":"7f4e6043-7a79-455d-97be-20aff374a38d","Type":"ContainerStarted","Data":"bdfb9c0a9082da85ac91e10b00711efa10e949876e703c9694222def1ca08677"} Jan 27 19:04:56 crc kubenswrapper[4853]: I0127 19:04:56.533767 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-28nw4" podStartSLOduration=2.032364739 podStartE2EDuration="2.533746408s" podCreationTimestamp="2026-01-27 19:04:54 +0000 UTC" firstStartedPulling="2026-01-27 19:04:55.441451773 +0000 UTC m=+1337.903994656" lastFinishedPulling="2026-01-27 19:04:55.942833442 +0000 UTC m=+1338.405376325" observedRunningTime="2026-01-27 19:04:56.533434819 +0000 UTC m=+1338.995977732" watchObservedRunningTime="2026-01-27 19:04:56.533746408 +0000 UTC m=+1338.996289291" Jan 27 19:05:05 crc kubenswrapper[4853]: I0127 19:05:05.541192 4853 patch_prober.go:28] interesting pod/machine-config-daemon-6gqj2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 19:05:05 crc kubenswrapper[4853]: I0127 19:05:05.541486 4853 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6gqj2" podUID="b8a89b1e-bef8-4cb7-930c-480d3125778c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 19:05:05 crc kubenswrapper[4853]: I0127 19:05:05.541535 4853 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-6gqj2" Jan 27 19:05:05 crc kubenswrapper[4853]: I0127 19:05:05.542292 4853 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"2ebd72a88c0d92677bf2c3606656647e62120a28bd35c3672caa1084df04a23b"} pod="openshift-machine-config-operator/machine-config-daemon-6gqj2" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 27 19:05:05 crc kubenswrapper[4853]: I0127 19:05:05.542357 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-6gqj2" podUID="b8a89b1e-bef8-4cb7-930c-480d3125778c" containerName="machine-config-daemon" containerID="cri-o://2ebd72a88c0d92677bf2c3606656647e62120a28bd35c3672caa1084df04a23b" gracePeriod=600 Jan 27 19:05:06 crc kubenswrapper[4853]: I0127 19:05:06.613609 4853 generic.go:334] "Generic (PLEG): container finished" podID="b8a89b1e-bef8-4cb7-930c-480d3125778c" 
containerID="2ebd72a88c0d92677bf2c3606656647e62120a28bd35c3672caa1084df04a23b" exitCode=0 Jan 27 19:05:06 crc kubenswrapper[4853]: I0127 19:05:06.613700 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6gqj2" event={"ID":"b8a89b1e-bef8-4cb7-930c-480d3125778c","Type":"ContainerDied","Data":"2ebd72a88c0d92677bf2c3606656647e62120a28bd35c3672caa1084df04a23b"} Jan 27 19:05:06 crc kubenswrapper[4853]: I0127 19:05:06.614427 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6gqj2" event={"ID":"b8a89b1e-bef8-4cb7-930c-480d3125778c","Type":"ContainerStarted","Data":"9543b1260c6f66dc5f3c337d6a1dde47109f864fb66397b4f5b0356952eb44b1"} Jan 27 19:05:06 crc kubenswrapper[4853]: I0127 19:05:06.614464 4853 scope.go:117] "RemoveContainer" containerID="a0719d2d74e31dba5f0b13e64100839f15049069456c6041563b2a237f331790" Jan 27 19:05:20 crc kubenswrapper[4853]: I0127 19:05:20.483634 4853 scope.go:117] "RemoveContainer" containerID="bc103aaefc798f810fe57738521fc83593f035690bd59b57f63c73f3bbb18fff" Jan 27 19:05:20 crc kubenswrapper[4853]: I0127 19:05:20.515928 4853 scope.go:117] "RemoveContainer" containerID="481fa2bce2050b36791d91e8cd48702c369a31e0ccd7feed47ff91320ce00792" Jan 27 19:05:20 crc kubenswrapper[4853]: I0127 19:05:20.590630 4853 scope.go:117] "RemoveContainer" containerID="4c6da6e067f7205a7c767e08ae62eef284372400b9513006b3eb8e7dea609f41" Jan 27 19:06:20 crc kubenswrapper[4853]: I0127 19:06:20.724820 4853 scope.go:117] "RemoveContainer" containerID="95f7f716d21983ac00fd2ad48c591221d3265f4a4551f121bc2eb408014d170e" Jan 27 19:07:05 crc kubenswrapper[4853]: I0127 19:07:05.541378 4853 patch_prober.go:28] interesting pod/machine-config-daemon-6gqj2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 19:07:05 crc kubenswrapper[4853]: I0127 19:07:05.541960 4853 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6gqj2" podUID="b8a89b1e-bef8-4cb7-930c-480d3125778c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 19:07:35 crc kubenswrapper[4853]: I0127 19:07:35.541882 4853 patch_prober.go:28] interesting pod/machine-config-daemon-6gqj2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 19:07:35 crc kubenswrapper[4853]: I0127 19:07:35.542474 4853 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6gqj2" podUID="b8a89b1e-bef8-4cb7-930c-480d3125778c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 19:08:05 crc kubenswrapper[4853]: I0127 19:08:05.541949 4853 patch_prober.go:28] interesting pod/machine-config-daemon-6gqj2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 19:08:05 crc kubenswrapper[4853]: I0127 
19:08:05.543653 4853 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6gqj2" podUID="b8a89b1e-bef8-4cb7-930c-480d3125778c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 19:08:05 crc kubenswrapper[4853]: I0127 19:08:05.543792 4853 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-6gqj2" Jan 27 19:08:05 crc kubenswrapper[4853]: I0127 19:08:05.544658 4853 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"9543b1260c6f66dc5f3c337d6a1dde47109f864fb66397b4f5b0356952eb44b1"} pod="openshift-machine-config-operator/machine-config-daemon-6gqj2" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 27 19:08:05 crc kubenswrapper[4853]: I0127 19:08:05.544807 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-6gqj2" podUID="b8a89b1e-bef8-4cb7-930c-480d3125778c" containerName="machine-config-daemon" containerID="cri-o://9543b1260c6f66dc5f3c337d6a1dde47109f864fb66397b4f5b0356952eb44b1" gracePeriod=600 Jan 27 19:08:05 crc kubenswrapper[4853]: E0127 19:08:05.668456 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6gqj2_openshift-machine-config-operator(b8a89b1e-bef8-4cb7-930c-480d3125778c)\"" pod="openshift-machine-config-operator/machine-config-daemon-6gqj2" podUID="b8a89b1e-bef8-4cb7-930c-480d3125778c" Jan 27 19:08:06 crc kubenswrapper[4853]: I0127 19:08:06.215097 4853 generic.go:334] "Generic (PLEG): container finished" podID="b8a89b1e-bef8-4cb7-930c-480d3125778c" containerID="9543b1260c6f66dc5f3c337d6a1dde47109f864fb66397b4f5b0356952eb44b1" exitCode=0 Jan 27 19:08:06 crc kubenswrapper[4853]: I0127 19:08:06.215157 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6gqj2" event={"ID":"b8a89b1e-bef8-4cb7-930c-480d3125778c","Type":"ContainerDied","Data":"9543b1260c6f66dc5f3c337d6a1dde47109f864fb66397b4f5b0356952eb44b1"} Jan 27 19:08:06 crc kubenswrapper[4853]: I0127 19:08:06.215241 4853 scope.go:117] "RemoveContainer" containerID="2ebd72a88c0d92677bf2c3606656647e62120a28bd35c3672caa1084df04a23b" Jan 27 19:08:06 crc kubenswrapper[4853]: I0127 19:08:06.216046 4853 scope.go:117] "RemoveContainer" containerID="9543b1260c6f66dc5f3c337d6a1dde47109f864fb66397b4f5b0356952eb44b1" Jan 27 19:08:06 crc kubenswrapper[4853]: E0127 19:08:06.216424 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6gqj2_openshift-machine-config-operator(b8a89b1e-bef8-4cb7-930c-480d3125778c)\"" pod="openshift-machine-config-operator/machine-config-daemon-6gqj2" podUID="b8a89b1e-bef8-4cb7-930c-480d3125778c" Jan 27 19:08:09 crc kubenswrapper[4853]: I0127 19:08:09.250780 4853 generic.go:334] "Generic (PLEG): container finished" podID="7f4e6043-7a79-455d-97be-20aff374a38d" 
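The machine-config-daemon sequence shows the full liveness-failure path: the probe against 127.0.0.1:8798 fails, kubelet kills the container with gracePeriod=600 (presumably the pod's terminationGracePeriodSeconds), restartPolicy restarts it, and once restarts repeat, StartContainer is held back by the crash-loop backoff, hence "back-off 5m0s restarting failed container". A toy stdlib-Go sketch of that backoff; the 10s initial delay and 5m cap are kubelet defaults as commonly documented, an assumption here rather than configuration read from this cluster:

package main

import (
	"fmt"
	"time"
)

func main() {
	const (
		initial  = 10 * time.Second // assumed kubelet default
		maxDelay = 5 * time.Minute  // matches "back-off 5m0s" in the log
	)
	delay := initial
	for restart := 1; restart <= 7; restart++ {
		// Each failed restart doubles the wait before the next attempt,
		// until the cap is reached and every retry waits the full 5m.
		fmt.Printf("restart %d: wait %v\n", restart, delay)
		delay *= 2
		if delay > maxDelay {
			delay = maxDelay
		}
	}
}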
containerID="bdfb9c0a9082da85ac91e10b00711efa10e949876e703c9694222def1ca08677" exitCode=0 Jan 27 19:08:09 crc kubenswrapper[4853]: I0127 19:08:09.250881 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-28nw4" event={"ID":"7f4e6043-7a79-455d-97be-20aff374a38d","Type":"ContainerDied","Data":"bdfb9c0a9082da85ac91e10b00711efa10e949876e703c9694222def1ca08677"} Jan 27 19:08:10 crc kubenswrapper[4853]: I0127 19:08:10.792810 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-28nw4" Jan 27 19:08:10 crc kubenswrapper[4853]: I0127 19:08:10.820091 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ctwc9\" (UniqueName: \"kubernetes.io/projected/7f4e6043-7a79-455d-97be-20aff374a38d-kube-api-access-ctwc9\") pod \"7f4e6043-7a79-455d-97be-20aff374a38d\" (UID: \"7f4e6043-7a79-455d-97be-20aff374a38d\") " Jan 27 19:08:10 crc kubenswrapper[4853]: I0127 19:08:10.820233 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7f4e6043-7a79-455d-97be-20aff374a38d-bootstrap-combined-ca-bundle\") pod \"7f4e6043-7a79-455d-97be-20aff374a38d\" (UID: \"7f4e6043-7a79-455d-97be-20aff374a38d\") " Jan 27 19:08:10 crc kubenswrapper[4853]: I0127 19:08:10.820311 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7f4e6043-7a79-455d-97be-20aff374a38d-ssh-key-openstack-edpm-ipam\") pod \"7f4e6043-7a79-455d-97be-20aff374a38d\" (UID: \"7f4e6043-7a79-455d-97be-20aff374a38d\") " Jan 27 19:08:10 crc kubenswrapper[4853]: I0127 19:08:10.820410 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7f4e6043-7a79-455d-97be-20aff374a38d-inventory\") pod \"7f4e6043-7a79-455d-97be-20aff374a38d\" (UID: \"7f4e6043-7a79-455d-97be-20aff374a38d\") " Jan 27 19:08:10 crc kubenswrapper[4853]: I0127 19:08:10.826496 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7f4e6043-7a79-455d-97be-20aff374a38d-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "7f4e6043-7a79-455d-97be-20aff374a38d" (UID: "7f4e6043-7a79-455d-97be-20aff374a38d"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 19:08:10 crc kubenswrapper[4853]: I0127 19:08:10.826574 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7f4e6043-7a79-455d-97be-20aff374a38d-kube-api-access-ctwc9" (OuterVolumeSpecName: "kube-api-access-ctwc9") pod "7f4e6043-7a79-455d-97be-20aff374a38d" (UID: "7f4e6043-7a79-455d-97be-20aff374a38d"). InnerVolumeSpecName "kube-api-access-ctwc9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 19:08:10 crc kubenswrapper[4853]: I0127 19:08:10.850255 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7f4e6043-7a79-455d-97be-20aff374a38d-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "7f4e6043-7a79-455d-97be-20aff374a38d" (UID: "7f4e6043-7a79-455d-97be-20aff374a38d"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 19:08:10 crc kubenswrapper[4853]: I0127 19:08:10.852518 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7f4e6043-7a79-455d-97be-20aff374a38d-inventory" (OuterVolumeSpecName: "inventory") pod "7f4e6043-7a79-455d-97be-20aff374a38d" (UID: "7f4e6043-7a79-455d-97be-20aff374a38d"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 19:08:10 crc kubenswrapper[4853]: I0127 19:08:10.921601 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ctwc9\" (UniqueName: \"kubernetes.io/projected/7f4e6043-7a79-455d-97be-20aff374a38d-kube-api-access-ctwc9\") on node \"crc\" DevicePath \"\"" Jan 27 19:08:10 crc kubenswrapper[4853]: I0127 19:08:10.921819 4853 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7f4e6043-7a79-455d-97be-20aff374a38d-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 19:08:10 crc kubenswrapper[4853]: I0127 19:08:10.921873 4853 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7f4e6043-7a79-455d-97be-20aff374a38d-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 27 19:08:10 crc kubenswrapper[4853]: I0127 19:08:10.921923 4853 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7f4e6043-7a79-455d-97be-20aff374a38d-inventory\") on node \"crc\" DevicePath \"\"" Jan 27 19:08:11 crc kubenswrapper[4853]: I0127 19:08:11.289778 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-28nw4" event={"ID":"7f4e6043-7a79-455d-97be-20aff374a38d","Type":"ContainerDied","Data":"10ee16ef75eec5412905549b1b1a3a02c66944928c1bcae430f88079d288f51e"} Jan 27 19:08:11 crc kubenswrapper[4853]: I0127 19:08:11.289848 4853 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="10ee16ef75eec5412905549b1b1a3a02c66944928c1bcae430f88079d288f51e" Jan 27 19:08:11 crc kubenswrapper[4853]: I0127 19:08:11.289944 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-28nw4" Jan 27 19:08:11 crc kubenswrapper[4853]: I0127 19:08:11.429307 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-szg9m"] Jan 27 19:08:11 crc kubenswrapper[4853]: E0127 19:08:11.429738 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7f4e6043-7a79-455d-97be-20aff374a38d" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Jan 27 19:08:11 crc kubenswrapper[4853]: I0127 19:08:11.429754 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="7f4e6043-7a79-455d-97be-20aff374a38d" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Jan 27 19:08:11 crc kubenswrapper[4853]: I0127 19:08:11.429914 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="7f4e6043-7a79-455d-97be-20aff374a38d" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Jan 27 19:08:11 crc kubenswrapper[4853]: I0127 19:08:11.430549 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-szg9m" Jan 27 19:08:11 crc kubenswrapper[4853]: I0127 19:08:11.432569 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-wn48z" Jan 27 19:08:11 crc kubenswrapper[4853]: I0127 19:08:11.432960 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 27 19:08:11 crc kubenswrapper[4853]: I0127 19:08:11.433896 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 27 19:08:11 crc kubenswrapper[4853]: I0127 19:08:11.441740 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-szg9m"] Jan 27 19:08:11 crc kubenswrapper[4853]: I0127 19:08:11.441759 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 27 19:08:11 crc kubenswrapper[4853]: I0127 19:08:11.532837 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e4809563-3f03-4361-9794-87f5705115b8-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-szg9m\" (UID: \"e4809563-3f03-4361-9794-87f5705115b8\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-szg9m" Jan 27 19:08:11 crc kubenswrapper[4853]: I0127 19:08:11.533037 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e4809563-3f03-4361-9794-87f5705115b8-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-szg9m\" (UID: \"e4809563-3f03-4361-9794-87f5705115b8\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-szg9m" Jan 27 19:08:11 crc kubenswrapper[4853]: I0127 19:08:11.533066 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wsxdw\" (UniqueName: \"kubernetes.io/projected/e4809563-3f03-4361-9794-87f5705115b8-kube-api-access-wsxdw\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-szg9m\" (UID: \"e4809563-3f03-4361-9794-87f5705115b8\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-szg9m" Jan 27 19:08:11 crc kubenswrapper[4853]: I0127 19:08:11.635348 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e4809563-3f03-4361-9794-87f5705115b8-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-szg9m\" (UID: \"e4809563-3f03-4361-9794-87f5705115b8\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-szg9m" Jan 27 19:08:11 crc kubenswrapper[4853]: I0127 19:08:11.635398 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wsxdw\" (UniqueName: \"kubernetes.io/projected/e4809563-3f03-4361-9794-87f5705115b8-kube-api-access-wsxdw\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-szg9m\" (UID: \"e4809563-3f03-4361-9794-87f5705115b8\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-szg9m" Jan 27 19:08:11 crc kubenswrapper[4853]: I0127 19:08:11.635470 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/e4809563-3f03-4361-9794-87f5705115b8-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-szg9m\" (UID: \"e4809563-3f03-4361-9794-87f5705115b8\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-szg9m" Jan 27 19:08:11 crc kubenswrapper[4853]: I0127 19:08:11.639089 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e4809563-3f03-4361-9794-87f5705115b8-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-szg9m\" (UID: \"e4809563-3f03-4361-9794-87f5705115b8\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-szg9m" Jan 27 19:08:11 crc kubenswrapper[4853]: I0127 19:08:11.639305 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e4809563-3f03-4361-9794-87f5705115b8-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-szg9m\" (UID: \"e4809563-3f03-4361-9794-87f5705115b8\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-szg9m" Jan 27 19:08:11 crc kubenswrapper[4853]: I0127 19:08:11.652787 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wsxdw\" (UniqueName: \"kubernetes.io/projected/e4809563-3f03-4361-9794-87f5705115b8-kube-api-access-wsxdw\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-szg9m\" (UID: \"e4809563-3f03-4361-9794-87f5705115b8\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-szg9m" Jan 27 19:08:11 crc kubenswrapper[4853]: I0127 19:08:11.756038 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-szg9m" Jan 27 19:08:12 crc kubenswrapper[4853]: I0127 19:08:12.254231 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-szg9m"] Jan 27 19:08:12 crc kubenswrapper[4853]: I0127 19:08:12.298774 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-szg9m" event={"ID":"e4809563-3f03-4361-9794-87f5705115b8","Type":"ContainerStarted","Data":"d190dcb4b4279d0100dbd868bd042b91d31f7e4f87a62e8b62060be26f520218"} Jan 27 19:08:13 crc kubenswrapper[4853]: I0127 19:08:13.313041 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-szg9m" event={"ID":"e4809563-3f03-4361-9794-87f5705115b8","Type":"ContainerStarted","Data":"579fbe6fce2ff88da2f4e594aa9a2d25e52be349162cbee2e9c41b17bb45b5ad"} Jan 27 19:08:13 crc kubenswrapper[4853]: I0127 19:08:13.332060 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-szg9m" podStartSLOduration=1.865702136 podStartE2EDuration="2.332042284s" podCreationTimestamp="2026-01-27 19:08:11 +0000 UTC" firstStartedPulling="2026-01-27 19:08:12.263307364 +0000 UTC m=+1534.725850247" lastFinishedPulling="2026-01-27 19:08:12.729647512 +0000 UTC m=+1535.192190395" observedRunningTime="2026-01-27 19:08:13.330796288 +0000 UTC m=+1535.793339191" watchObservedRunningTime="2026-01-27 19:08:13.332042284 +0000 UTC m=+1535.794585187" Jan 27 19:08:21 crc kubenswrapper[4853]: I0127 19:08:21.114175 4853 scope.go:117] "RemoveContainer" containerID="9543b1260c6f66dc5f3c337d6a1dde47109f864fb66397b4f5b0356952eb44b1" Jan 27 19:08:21 
crc kubenswrapper[4853]: E0127 19:08:21.115514 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6gqj2_openshift-machine-config-operator(b8a89b1e-bef8-4cb7-930c-480d3125778c)\"" pod="openshift-machine-config-operator/machine-config-daemon-6gqj2" podUID="b8a89b1e-bef8-4cb7-930c-480d3125778c" Jan 27 19:08:30 crc kubenswrapper[4853]: I0127 19:08:30.447763 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-fxzbn"] Jan 27 19:08:30 crc kubenswrapper[4853]: I0127 19:08:30.451055 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-fxzbn" Jan 27 19:08:30 crc kubenswrapper[4853]: I0127 19:08:30.459054 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-fxzbn"] Jan 27 19:08:30 crc kubenswrapper[4853]: I0127 19:08:30.518681 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c9ca414b-3a80-4219-8b1f-e7781322e59f-catalog-content\") pod \"certified-operators-fxzbn\" (UID: \"c9ca414b-3a80-4219-8b1f-e7781322e59f\") " pod="openshift-marketplace/certified-operators-fxzbn" Jan 27 19:08:30 crc kubenswrapper[4853]: I0127 19:08:30.518839 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c9ca414b-3a80-4219-8b1f-e7781322e59f-utilities\") pod \"certified-operators-fxzbn\" (UID: \"c9ca414b-3a80-4219-8b1f-e7781322e59f\") " pod="openshift-marketplace/certified-operators-fxzbn" Jan 27 19:08:30 crc kubenswrapper[4853]: I0127 19:08:30.518910 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hvqfd\" (UniqueName: \"kubernetes.io/projected/c9ca414b-3a80-4219-8b1f-e7781322e59f-kube-api-access-hvqfd\") pod \"certified-operators-fxzbn\" (UID: \"c9ca414b-3a80-4219-8b1f-e7781322e59f\") " pod="openshift-marketplace/certified-operators-fxzbn" Jan 27 19:08:30 crc kubenswrapper[4853]: I0127 19:08:30.620212 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c9ca414b-3a80-4219-8b1f-e7781322e59f-utilities\") pod \"certified-operators-fxzbn\" (UID: \"c9ca414b-3a80-4219-8b1f-e7781322e59f\") " pod="openshift-marketplace/certified-operators-fxzbn" Jan 27 19:08:30 crc kubenswrapper[4853]: I0127 19:08:30.620523 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hvqfd\" (UniqueName: \"kubernetes.io/projected/c9ca414b-3a80-4219-8b1f-e7781322e59f-kube-api-access-hvqfd\") pod \"certified-operators-fxzbn\" (UID: \"c9ca414b-3a80-4219-8b1f-e7781322e59f\") " pod="openshift-marketplace/certified-operators-fxzbn" Jan 27 19:08:30 crc kubenswrapper[4853]: I0127 19:08:30.620711 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c9ca414b-3a80-4219-8b1f-e7781322e59f-catalog-content\") pod \"certified-operators-fxzbn\" (UID: \"c9ca414b-3a80-4219-8b1f-e7781322e59f\") " pod="openshift-marketplace/certified-operators-fxzbn" Jan 27 19:08:30 crc kubenswrapper[4853]: I0127 19:08:30.620792 4853 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c9ca414b-3a80-4219-8b1f-e7781322e59f-utilities\") pod \"certified-operators-fxzbn\" (UID: \"c9ca414b-3a80-4219-8b1f-e7781322e59f\") " pod="openshift-marketplace/certified-operators-fxzbn" Jan 27 19:08:30 crc kubenswrapper[4853]: I0127 19:08:30.621068 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c9ca414b-3a80-4219-8b1f-e7781322e59f-catalog-content\") pod \"certified-operators-fxzbn\" (UID: \"c9ca414b-3a80-4219-8b1f-e7781322e59f\") " pod="openshift-marketplace/certified-operators-fxzbn" Jan 27 19:08:30 crc kubenswrapper[4853]: I0127 19:08:30.675696 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hvqfd\" (UniqueName: \"kubernetes.io/projected/c9ca414b-3a80-4219-8b1f-e7781322e59f-kube-api-access-hvqfd\") pod \"certified-operators-fxzbn\" (UID: \"c9ca414b-3a80-4219-8b1f-e7781322e59f\") " pod="openshift-marketplace/certified-operators-fxzbn" Jan 27 19:08:30 crc kubenswrapper[4853]: I0127 19:08:30.770769 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-fxzbn" Jan 27 19:08:31 crc kubenswrapper[4853]: I0127 19:08:31.295766 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-fxzbn"] Jan 27 19:08:31 crc kubenswrapper[4853]: I0127 19:08:31.469131 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fxzbn" event={"ID":"c9ca414b-3a80-4219-8b1f-e7781322e59f","Type":"ContainerStarted","Data":"ee30a26f44ec20b60316f1fc0d2a6d1a1197374fda864f48599666dc266acb69"} Jan 27 19:08:32 crc kubenswrapper[4853]: I0127 19:08:32.481555 4853 generic.go:334] "Generic (PLEG): container finished" podID="c9ca414b-3a80-4219-8b1f-e7781322e59f" containerID="231c3a78d5e5192583eae0025515fdfb5c39802fa1bd56f5a143be367366351a" exitCode=0 Jan 27 19:08:32 crc kubenswrapper[4853]: I0127 19:08:32.481601 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fxzbn" event={"ID":"c9ca414b-3a80-4219-8b1f-e7781322e59f","Type":"ContainerDied","Data":"231c3a78d5e5192583eae0025515fdfb5c39802fa1bd56f5a143be367366351a"} Jan 27 19:08:34 crc kubenswrapper[4853]: I0127 19:08:34.522241 4853 generic.go:334] "Generic (PLEG): container finished" podID="c9ca414b-3a80-4219-8b1f-e7781322e59f" containerID="3254a38fe43f8e795a99f0b3f2c32bf188b8cd0d97ecf4da69b002b7b0fffa74" exitCode=0 Jan 27 19:08:34 crc kubenswrapper[4853]: I0127 19:08:34.523244 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fxzbn" event={"ID":"c9ca414b-3a80-4219-8b1f-e7781322e59f","Type":"ContainerDied","Data":"3254a38fe43f8e795a99f0b3f2c32bf188b8cd0d97ecf4da69b002b7b0fffa74"} Jan 27 19:08:34 crc kubenswrapper[4853]: I0127 19:08:34.979240 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-92fwn"] Jan 27 19:08:34 crc kubenswrapper[4853]: I0127 19:08:34.981583 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-92fwn" Jan 27 19:08:35 crc kubenswrapper[4853]: I0127 19:08:34.997442 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-92fwn"] Jan 27 19:08:35 crc kubenswrapper[4853]: I0127 19:08:35.112549 4853 scope.go:117] "RemoveContainer" containerID="9543b1260c6f66dc5f3c337d6a1dde47109f864fb66397b4f5b0356952eb44b1" Jan 27 19:08:35 crc kubenswrapper[4853]: E0127 19:08:35.112938 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6gqj2_openshift-machine-config-operator(b8a89b1e-bef8-4cb7-930c-480d3125778c)\"" pod="openshift-machine-config-operator/machine-config-daemon-6gqj2" podUID="b8a89b1e-bef8-4cb7-930c-480d3125778c" Jan 27 19:08:35 crc kubenswrapper[4853]: I0127 19:08:35.133494 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l7pgz\" (UniqueName: \"kubernetes.io/projected/8fc5cbb5-9fce-4fd7-a102-d0f42a81b0f5-kube-api-access-l7pgz\") pod \"community-operators-92fwn\" (UID: \"8fc5cbb5-9fce-4fd7-a102-d0f42a81b0f5\") " pod="openshift-marketplace/community-operators-92fwn" Jan 27 19:08:35 crc kubenswrapper[4853]: I0127 19:08:35.133984 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8fc5cbb5-9fce-4fd7-a102-d0f42a81b0f5-catalog-content\") pod \"community-operators-92fwn\" (UID: \"8fc5cbb5-9fce-4fd7-a102-d0f42a81b0f5\") " pod="openshift-marketplace/community-operators-92fwn" Jan 27 19:08:35 crc kubenswrapper[4853]: I0127 19:08:35.134023 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8fc5cbb5-9fce-4fd7-a102-d0f42a81b0f5-utilities\") pod \"community-operators-92fwn\" (UID: \"8fc5cbb5-9fce-4fd7-a102-d0f42a81b0f5\") " pod="openshift-marketplace/community-operators-92fwn" Jan 27 19:08:35 crc kubenswrapper[4853]: I0127 19:08:35.235904 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8fc5cbb5-9fce-4fd7-a102-d0f42a81b0f5-catalog-content\") pod \"community-operators-92fwn\" (UID: \"8fc5cbb5-9fce-4fd7-a102-d0f42a81b0f5\") " pod="openshift-marketplace/community-operators-92fwn" Jan 27 19:08:35 crc kubenswrapper[4853]: I0127 19:08:35.236008 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8fc5cbb5-9fce-4fd7-a102-d0f42a81b0f5-utilities\") pod \"community-operators-92fwn\" (UID: \"8fc5cbb5-9fce-4fd7-a102-d0f42a81b0f5\") " pod="openshift-marketplace/community-operators-92fwn" Jan 27 19:08:35 crc kubenswrapper[4853]: I0127 19:08:35.236168 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l7pgz\" (UniqueName: \"kubernetes.io/projected/8fc5cbb5-9fce-4fd7-a102-d0f42a81b0f5-kube-api-access-l7pgz\") pod \"community-operators-92fwn\" (UID: \"8fc5cbb5-9fce-4fd7-a102-d0f42a81b0f5\") " pod="openshift-marketplace/community-operators-92fwn" Jan 27 19:08:35 crc kubenswrapper[4853]: I0127 19:08:35.237809 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/8fc5cbb5-9fce-4fd7-a102-d0f42a81b0f5-utilities\") pod \"community-operators-92fwn\" (UID: \"8fc5cbb5-9fce-4fd7-a102-d0f42a81b0f5\") " pod="openshift-marketplace/community-operators-92fwn" Jan 27 19:08:35 crc kubenswrapper[4853]: I0127 19:08:35.238253 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8fc5cbb5-9fce-4fd7-a102-d0f42a81b0f5-catalog-content\") pod \"community-operators-92fwn\" (UID: \"8fc5cbb5-9fce-4fd7-a102-d0f42a81b0f5\") " pod="openshift-marketplace/community-operators-92fwn" Jan 27 19:08:35 crc kubenswrapper[4853]: I0127 19:08:35.260047 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l7pgz\" (UniqueName: \"kubernetes.io/projected/8fc5cbb5-9fce-4fd7-a102-d0f42a81b0f5-kube-api-access-l7pgz\") pod \"community-operators-92fwn\" (UID: \"8fc5cbb5-9fce-4fd7-a102-d0f42a81b0f5\") " pod="openshift-marketplace/community-operators-92fwn" Jan 27 19:08:35 crc kubenswrapper[4853]: I0127 19:08:35.320181 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-92fwn" Jan 27 19:08:35 crc kubenswrapper[4853]: I0127 19:08:35.557280 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fxzbn" event={"ID":"c9ca414b-3a80-4219-8b1f-e7781322e59f","Type":"ContainerStarted","Data":"abf47df06e92bd2b2bf084a93c5544cdcc272b5d643fde299727fe534d55e50f"} Jan 27 19:08:35 crc kubenswrapper[4853]: I0127 19:08:35.593792 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-fxzbn" podStartSLOduration=3.06030382 podStartE2EDuration="5.593765932s" podCreationTimestamp="2026-01-27 19:08:30 +0000 UTC" firstStartedPulling="2026-01-27 19:08:32.486258925 +0000 UTC m=+1554.948801808" lastFinishedPulling="2026-01-27 19:08:35.019721037 +0000 UTC m=+1557.482263920" observedRunningTime="2026-01-27 19:08:35.580687847 +0000 UTC m=+1558.043230730" watchObservedRunningTime="2026-01-27 19:08:35.593765932 +0000 UTC m=+1558.056308815" Jan 27 19:08:35 crc kubenswrapper[4853]: I0127 19:08:35.901515 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-92fwn"] Jan 27 19:08:35 crc kubenswrapper[4853]: W0127 19:08:35.903362 4853 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8fc5cbb5_9fce_4fd7_a102_d0f42a81b0f5.slice/crio-72e08fbf6f8a5992bee9371e9d99ced94211def219eb804cab4888f940c1e7ca WatchSource:0}: Error finding container 72e08fbf6f8a5992bee9371e9d99ced94211def219eb804cab4888f940c1e7ca: Status 404 returned error can't find the container with id 72e08fbf6f8a5992bee9371e9d99ced94211def219eb804cab4888f940c1e7ca Jan 27 19:08:36 crc kubenswrapper[4853]: I0127 19:08:36.570643 4853 generic.go:334] "Generic (PLEG): container finished" podID="8fc5cbb5-9fce-4fd7-a102-d0f42a81b0f5" containerID="9beb64281c2dc152f41c1f7f0d171bcaaf2af44999cb7e539bd447a843ea5a0b" exitCode=0 Jan 27 19:08:36 crc kubenswrapper[4853]: I0127 19:08:36.570802 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-92fwn" event={"ID":"8fc5cbb5-9fce-4fd7-a102-d0f42a81b0f5","Type":"ContainerDied","Data":"9beb64281c2dc152f41c1f7f0d171bcaaf2af44999cb7e539bd447a843ea5a0b"} Jan 27 19:08:36 crc kubenswrapper[4853]: I0127 19:08:36.571326 4853 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/community-operators-92fwn" event={"ID":"8fc5cbb5-9fce-4fd7-a102-d0f42a81b0f5","Type":"ContainerStarted","Data":"72e08fbf6f8a5992bee9371e9d99ced94211def219eb804cab4888f940c1e7ca"} Jan 27 19:08:37 crc kubenswrapper[4853]: I0127 19:08:37.584087 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-92fwn" event={"ID":"8fc5cbb5-9fce-4fd7-a102-d0f42a81b0f5","Type":"ContainerStarted","Data":"27784d63f85e45c3f0f8a0eb27a91cfb3b614d23768eefb37e2ec4fa21886d25"} Jan 27 19:08:38 crc kubenswrapper[4853]: I0127 19:08:38.595433 4853 generic.go:334] "Generic (PLEG): container finished" podID="8fc5cbb5-9fce-4fd7-a102-d0f42a81b0f5" containerID="27784d63f85e45c3f0f8a0eb27a91cfb3b614d23768eefb37e2ec4fa21886d25" exitCode=0 Jan 27 19:08:38 crc kubenswrapper[4853]: I0127 19:08:38.595653 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-92fwn" event={"ID":"8fc5cbb5-9fce-4fd7-a102-d0f42a81b0f5","Type":"ContainerDied","Data":"27784d63f85e45c3f0f8a0eb27a91cfb3b614d23768eefb37e2ec4fa21886d25"} Jan 27 19:08:39 crc kubenswrapper[4853]: I0127 19:08:39.607878 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-92fwn" event={"ID":"8fc5cbb5-9fce-4fd7-a102-d0f42a81b0f5","Type":"ContainerStarted","Data":"da24e1377c851ff57f87db5b5584c5960492712df5be0d260801b0f66d6b6d88"} Jan 27 19:08:39 crc kubenswrapper[4853]: I0127 19:08:39.630357 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-92fwn" podStartSLOduration=2.8965424459999998 podStartE2EDuration="5.630330871s" podCreationTimestamp="2026-01-27 19:08:34 +0000 UTC" firstStartedPulling="2026-01-27 19:08:36.573991457 +0000 UTC m=+1559.036534340" lastFinishedPulling="2026-01-27 19:08:39.307779882 +0000 UTC m=+1561.770322765" observedRunningTime="2026-01-27 19:08:39.62718217 +0000 UTC m=+1562.089725073" watchObservedRunningTime="2026-01-27 19:08:39.630330871 +0000 UTC m=+1562.092873764" Jan 27 19:08:40 crc kubenswrapper[4853]: I0127 19:08:40.772299 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-fxzbn" Jan 27 19:08:40 crc kubenswrapper[4853]: I0127 19:08:40.772699 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-fxzbn" Jan 27 19:08:40 crc kubenswrapper[4853]: I0127 19:08:40.823196 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-fxzbn" Jan 27 19:08:41 crc kubenswrapper[4853]: I0127 19:08:41.668556 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-fxzbn" Jan 27 19:08:41 crc kubenswrapper[4853]: I0127 19:08:41.972287 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-fxzbn"] Jan 27 19:08:43 crc kubenswrapper[4853]: I0127 19:08:43.641245 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-fxzbn" podUID="c9ca414b-3a80-4219-8b1f-e7781322e59f" containerName="registry-server" containerID="cri-o://abf47df06e92bd2b2bf084a93c5544cdcc272b5d643fde299727fe534d55e50f" gracePeriod=2 Jan 27 19:08:44 crc kubenswrapper[4853]: I0127 19:08:44.654884 4853 generic.go:334] "Generic (PLEG): container finished" podID="c9ca414b-3a80-4219-8b1f-e7781322e59f" 
containerID="abf47df06e92bd2b2bf084a93c5544cdcc272b5d643fde299727fe534d55e50f" exitCode=0 Jan 27 19:08:44 crc kubenswrapper[4853]: I0127 19:08:44.654973 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fxzbn" event={"ID":"c9ca414b-3a80-4219-8b1f-e7781322e59f","Type":"ContainerDied","Data":"abf47df06e92bd2b2bf084a93c5544cdcc272b5d643fde299727fe534d55e50f"} Jan 27 19:08:44 crc kubenswrapper[4853]: I0127 19:08:44.655697 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fxzbn" event={"ID":"c9ca414b-3a80-4219-8b1f-e7781322e59f","Type":"ContainerDied","Data":"ee30a26f44ec20b60316f1fc0d2a6d1a1197374fda864f48599666dc266acb69"} Jan 27 19:08:44 crc kubenswrapper[4853]: I0127 19:08:44.655717 4853 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ee30a26f44ec20b60316f1fc0d2a6d1a1197374fda864f48599666dc266acb69" Jan 27 19:08:44 crc kubenswrapper[4853]: I0127 19:08:44.717973 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-fxzbn" Jan 27 19:08:44 crc kubenswrapper[4853]: I0127 19:08:44.892565 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hvqfd\" (UniqueName: \"kubernetes.io/projected/c9ca414b-3a80-4219-8b1f-e7781322e59f-kube-api-access-hvqfd\") pod \"c9ca414b-3a80-4219-8b1f-e7781322e59f\" (UID: \"c9ca414b-3a80-4219-8b1f-e7781322e59f\") " Jan 27 19:08:44 crc kubenswrapper[4853]: I0127 19:08:44.892704 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c9ca414b-3a80-4219-8b1f-e7781322e59f-utilities\") pod \"c9ca414b-3a80-4219-8b1f-e7781322e59f\" (UID: \"c9ca414b-3a80-4219-8b1f-e7781322e59f\") " Jan 27 19:08:44 crc kubenswrapper[4853]: I0127 19:08:44.892787 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c9ca414b-3a80-4219-8b1f-e7781322e59f-catalog-content\") pod \"c9ca414b-3a80-4219-8b1f-e7781322e59f\" (UID: \"c9ca414b-3a80-4219-8b1f-e7781322e59f\") " Jan 27 19:08:44 crc kubenswrapper[4853]: I0127 19:08:44.893982 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c9ca414b-3a80-4219-8b1f-e7781322e59f-utilities" (OuterVolumeSpecName: "utilities") pod "c9ca414b-3a80-4219-8b1f-e7781322e59f" (UID: "c9ca414b-3a80-4219-8b1f-e7781322e59f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 19:08:44 crc kubenswrapper[4853]: I0127 19:08:44.899557 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c9ca414b-3a80-4219-8b1f-e7781322e59f-kube-api-access-hvqfd" (OuterVolumeSpecName: "kube-api-access-hvqfd") pod "c9ca414b-3a80-4219-8b1f-e7781322e59f" (UID: "c9ca414b-3a80-4219-8b1f-e7781322e59f"). InnerVolumeSpecName "kube-api-access-hvqfd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 19:08:44 crc kubenswrapper[4853]: I0127 19:08:44.938599 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c9ca414b-3a80-4219-8b1f-e7781322e59f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c9ca414b-3a80-4219-8b1f-e7781322e59f" (UID: "c9ca414b-3a80-4219-8b1f-e7781322e59f"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 19:08:44 crc kubenswrapper[4853]: I0127 19:08:44.995667 4853 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c9ca414b-3a80-4219-8b1f-e7781322e59f-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 19:08:44 crc kubenswrapper[4853]: I0127 19:08:44.995711 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hvqfd\" (UniqueName: \"kubernetes.io/projected/c9ca414b-3a80-4219-8b1f-e7781322e59f-kube-api-access-hvqfd\") on node \"crc\" DevicePath \"\"" Jan 27 19:08:44 crc kubenswrapper[4853]: I0127 19:08:44.995728 4853 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c9ca414b-3a80-4219-8b1f-e7781322e59f-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 19:08:45 crc kubenswrapper[4853]: I0127 19:08:45.320486 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-92fwn" Jan 27 19:08:45 crc kubenswrapper[4853]: I0127 19:08:45.320961 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-92fwn" Jan 27 19:08:45 crc kubenswrapper[4853]: I0127 19:08:45.386548 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-92fwn" Jan 27 19:08:45 crc kubenswrapper[4853]: I0127 19:08:45.664332 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-fxzbn" Jan 27 19:08:45 crc kubenswrapper[4853]: I0127 19:08:45.697502 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-fxzbn"] Jan 27 19:08:45 crc kubenswrapper[4853]: I0127 19:08:45.709186 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-fxzbn"] Jan 27 19:08:45 crc kubenswrapper[4853]: I0127 19:08:45.726526 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-92fwn" Jan 27 19:08:46 crc kubenswrapper[4853]: I0127 19:08:46.123238 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c9ca414b-3a80-4219-8b1f-e7781322e59f" path="/var/lib/kubelet/pods/c9ca414b-3a80-4219-8b1f-e7781322e59f/volumes" Jan 27 19:08:47 crc kubenswrapper[4853]: I0127 19:08:47.112319 4853 scope.go:117] "RemoveContainer" containerID="9543b1260c6f66dc5f3c337d6a1dde47109f864fb66397b4f5b0356952eb44b1" Jan 27 19:08:47 crc kubenswrapper[4853]: E0127 19:08:47.112816 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6gqj2_openshift-machine-config-operator(b8a89b1e-bef8-4cb7-930c-480d3125778c)\"" pod="openshift-machine-config-operator/machine-config-daemon-6gqj2" podUID="b8a89b1e-bef8-4cb7-930c-480d3125778c" Jan 27 19:08:47 crc kubenswrapper[4853]: I0127 19:08:47.369862 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-92fwn"] Jan 27 19:08:47 crc kubenswrapper[4853]: I0127 19:08:47.681488 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-92fwn" podUID="8fc5cbb5-9fce-4fd7-a102-d0f42a81b0f5" containerName="registry-server" 
containerID="cri-o://da24e1377c851ff57f87db5b5584c5960492712df5be0d260801b0f66d6b6d88" gracePeriod=2 Jan 27 19:08:48 crc kubenswrapper[4853]: I0127 19:08:48.138202 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-c6b2-account-create-update-56lhn"] Jan 27 19:08:48 crc kubenswrapper[4853]: I0127 19:08:48.165338 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-c6b2-account-create-update-56lhn"] Jan 27 19:08:48 crc kubenswrapper[4853]: I0127 19:08:48.209191 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-create-chjgp"] Jan 27 19:08:48 crc kubenswrapper[4853]: I0127 19:08:48.231025 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-create-chjgp"] Jan 27 19:08:48 crc kubenswrapper[4853]: I0127 19:08:48.272331 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-92fwn" Jan 27 19:08:48 crc kubenswrapper[4853]: I0127 19:08:48.378245 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8fc5cbb5-9fce-4fd7-a102-d0f42a81b0f5-utilities\") pod \"8fc5cbb5-9fce-4fd7-a102-d0f42a81b0f5\" (UID: \"8fc5cbb5-9fce-4fd7-a102-d0f42a81b0f5\") " Jan 27 19:08:48 crc kubenswrapper[4853]: I0127 19:08:48.379260 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8fc5cbb5-9fce-4fd7-a102-d0f42a81b0f5-utilities" (OuterVolumeSpecName: "utilities") pod "8fc5cbb5-9fce-4fd7-a102-d0f42a81b0f5" (UID: "8fc5cbb5-9fce-4fd7-a102-d0f42a81b0f5"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 19:08:48 crc kubenswrapper[4853]: I0127 19:08:48.379397 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l7pgz\" (UniqueName: \"kubernetes.io/projected/8fc5cbb5-9fce-4fd7-a102-d0f42a81b0f5-kube-api-access-l7pgz\") pod \"8fc5cbb5-9fce-4fd7-a102-d0f42a81b0f5\" (UID: \"8fc5cbb5-9fce-4fd7-a102-d0f42a81b0f5\") " Jan 27 19:08:48 crc kubenswrapper[4853]: I0127 19:08:48.380767 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8fc5cbb5-9fce-4fd7-a102-d0f42a81b0f5-catalog-content\") pod \"8fc5cbb5-9fce-4fd7-a102-d0f42a81b0f5\" (UID: \"8fc5cbb5-9fce-4fd7-a102-d0f42a81b0f5\") " Jan 27 19:08:48 crc kubenswrapper[4853]: I0127 19:08:48.381577 4853 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8fc5cbb5-9fce-4fd7-a102-d0f42a81b0f5-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 19:08:48 crc kubenswrapper[4853]: I0127 19:08:48.389390 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8fc5cbb5-9fce-4fd7-a102-d0f42a81b0f5-kube-api-access-l7pgz" (OuterVolumeSpecName: "kube-api-access-l7pgz") pod "8fc5cbb5-9fce-4fd7-a102-d0f42a81b0f5" (UID: "8fc5cbb5-9fce-4fd7-a102-d0f42a81b0f5"). InnerVolumeSpecName "kube-api-access-l7pgz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 19:08:48 crc kubenswrapper[4853]: I0127 19:08:48.436891 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8fc5cbb5-9fce-4fd7-a102-d0f42a81b0f5-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8fc5cbb5-9fce-4fd7-a102-d0f42a81b0f5" (UID: "8fc5cbb5-9fce-4fd7-a102-d0f42a81b0f5"). 
InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 19:08:48 crc kubenswrapper[4853]: I0127 19:08:48.484102 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l7pgz\" (UniqueName: \"kubernetes.io/projected/8fc5cbb5-9fce-4fd7-a102-d0f42a81b0f5-kube-api-access-l7pgz\") on node \"crc\" DevicePath \"\"" Jan 27 19:08:48 crc kubenswrapper[4853]: I0127 19:08:48.484157 4853 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8fc5cbb5-9fce-4fd7-a102-d0f42a81b0f5-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 19:08:48 crc kubenswrapper[4853]: I0127 19:08:48.693922 4853 generic.go:334] "Generic (PLEG): container finished" podID="8fc5cbb5-9fce-4fd7-a102-d0f42a81b0f5" containerID="da24e1377c851ff57f87db5b5584c5960492712df5be0d260801b0f66d6b6d88" exitCode=0 Jan 27 19:08:48 crc kubenswrapper[4853]: I0127 19:08:48.693977 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-92fwn" event={"ID":"8fc5cbb5-9fce-4fd7-a102-d0f42a81b0f5","Type":"ContainerDied","Data":"da24e1377c851ff57f87db5b5584c5960492712df5be0d260801b0f66d6b6d88"} Jan 27 19:08:48 crc kubenswrapper[4853]: I0127 19:08:48.694016 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-92fwn" event={"ID":"8fc5cbb5-9fce-4fd7-a102-d0f42a81b0f5","Type":"ContainerDied","Data":"72e08fbf6f8a5992bee9371e9d99ced94211def219eb804cab4888f940c1e7ca"} Jan 27 19:08:48 crc kubenswrapper[4853]: I0127 19:08:48.694040 4853 scope.go:117] "RemoveContainer" containerID="da24e1377c851ff57f87db5b5584c5960492712df5be0d260801b0f66d6b6d88" Jan 27 19:08:48 crc kubenswrapper[4853]: I0127 19:08:48.694458 4853 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-92fwn" Jan 27 19:08:48 crc kubenswrapper[4853]: I0127 19:08:48.718529 4853 scope.go:117] "RemoveContainer" containerID="27784d63f85e45c3f0f8a0eb27a91cfb3b614d23768eefb37e2ec4fa21886d25" Jan 27 19:08:48 crc kubenswrapper[4853]: I0127 19:08:48.734611 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-92fwn"] Jan 27 19:08:48 crc kubenswrapper[4853]: I0127 19:08:48.747075 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-92fwn"] Jan 27 19:08:48 crc kubenswrapper[4853]: I0127 19:08:48.764561 4853 scope.go:117] "RemoveContainer" containerID="9beb64281c2dc152f41c1f7f0d171bcaaf2af44999cb7e539bd447a843ea5a0b" Jan 27 19:08:48 crc kubenswrapper[4853]: I0127 19:08:48.804174 4853 scope.go:117] "RemoveContainer" containerID="da24e1377c851ff57f87db5b5584c5960492712df5be0d260801b0f66d6b6d88" Jan 27 19:08:48 crc kubenswrapper[4853]: E0127 19:08:48.804804 4853 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"da24e1377c851ff57f87db5b5584c5960492712df5be0d260801b0f66d6b6d88\": container with ID starting with da24e1377c851ff57f87db5b5584c5960492712df5be0d260801b0f66d6b6d88 not found: ID does not exist" containerID="da24e1377c851ff57f87db5b5584c5960492712df5be0d260801b0f66d6b6d88" Jan 27 19:08:48 crc kubenswrapper[4853]: I0127 19:08:48.804904 4853 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"da24e1377c851ff57f87db5b5584c5960492712df5be0d260801b0f66d6b6d88"} err="failed to get container status \"da24e1377c851ff57f87db5b5584c5960492712df5be0d260801b0f66d6b6d88\": rpc error: code = NotFound desc = could not find container \"da24e1377c851ff57f87db5b5584c5960492712df5be0d260801b0f66d6b6d88\": container with ID starting with da24e1377c851ff57f87db5b5584c5960492712df5be0d260801b0f66d6b6d88 not found: ID does not exist" Jan 27 19:08:48 crc kubenswrapper[4853]: I0127 19:08:48.804947 4853 scope.go:117] "RemoveContainer" containerID="27784d63f85e45c3f0f8a0eb27a91cfb3b614d23768eefb37e2ec4fa21886d25" Jan 27 19:08:48 crc kubenswrapper[4853]: E0127 19:08:48.805505 4853 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"27784d63f85e45c3f0f8a0eb27a91cfb3b614d23768eefb37e2ec4fa21886d25\": container with ID starting with 27784d63f85e45c3f0f8a0eb27a91cfb3b614d23768eefb37e2ec4fa21886d25 not found: ID does not exist" containerID="27784d63f85e45c3f0f8a0eb27a91cfb3b614d23768eefb37e2ec4fa21886d25" Jan 27 19:08:48 crc kubenswrapper[4853]: I0127 19:08:48.805533 4853 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"27784d63f85e45c3f0f8a0eb27a91cfb3b614d23768eefb37e2ec4fa21886d25"} err="failed to get container status \"27784d63f85e45c3f0f8a0eb27a91cfb3b614d23768eefb37e2ec4fa21886d25\": rpc error: code = NotFound desc = could not find container \"27784d63f85e45c3f0f8a0eb27a91cfb3b614d23768eefb37e2ec4fa21886d25\": container with ID starting with 27784d63f85e45c3f0f8a0eb27a91cfb3b614d23768eefb37e2ec4fa21886d25 not found: ID does not exist" Jan 27 19:08:48 crc kubenswrapper[4853]: I0127 19:08:48.805548 4853 scope.go:117] "RemoveContainer" containerID="9beb64281c2dc152f41c1f7f0d171bcaaf2af44999cb7e539bd447a843ea5a0b" Jan 27 19:08:48 crc kubenswrapper[4853]: E0127 19:08:48.806003 4853 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"9beb64281c2dc152f41c1f7f0d171bcaaf2af44999cb7e539bd447a843ea5a0b\": container with ID starting with 9beb64281c2dc152f41c1f7f0d171bcaaf2af44999cb7e539bd447a843ea5a0b not found: ID does not exist" containerID="9beb64281c2dc152f41c1f7f0d171bcaaf2af44999cb7e539bd447a843ea5a0b" Jan 27 19:08:48 crc kubenswrapper[4853]: I0127 19:08:48.806069 4853 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9beb64281c2dc152f41c1f7f0d171bcaaf2af44999cb7e539bd447a843ea5a0b"} err="failed to get container status \"9beb64281c2dc152f41c1f7f0d171bcaaf2af44999cb7e539bd447a843ea5a0b\": rpc error: code = NotFound desc = could not find container \"9beb64281c2dc152f41c1f7f0d171bcaaf2af44999cb7e539bd447a843ea5a0b\": container with ID starting with 9beb64281c2dc152f41c1f7f0d171bcaaf2af44999cb7e539bd447a843ea5a0b not found: ID does not exist" Jan 27 19:08:50 crc kubenswrapper[4853]: I0127 19:08:50.123269 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3bd47d73-75f6-4d7b-92be-e6efc1a44297" path="/var/lib/kubelet/pods/3bd47d73-75f6-4d7b-92be-e6efc1a44297/volumes" Jan 27 19:08:50 crc kubenswrapper[4853]: I0127 19:08:50.123942 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6e2ce04c-5d13-464f-9018-29c34c1b5d35" path="/var/lib/kubelet/pods/6e2ce04c-5d13-464f-9018-29c34c1b5d35/volumes" Jan 27 19:08:50 crc kubenswrapper[4853]: I0127 19:08:50.124478 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8fc5cbb5-9fce-4fd7-a102-d0f42a81b0f5" path="/var/lib/kubelet/pods/8fc5cbb5-9fce-4fd7-a102-d0f42a81b0f5/volumes" Jan 27 19:08:53 crc kubenswrapper[4853]: I0127 19:08:53.034936 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-4881-account-create-update-vqvqb"] Jan 27 19:08:53 crc kubenswrapper[4853]: I0127 19:08:53.046084 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-d00f-account-create-update-rrkdb"] Jan 27 19:08:53 crc kubenswrapper[4853]: I0127 19:08:53.060975 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-create-t8t9z"] Jan 27 19:08:53 crc kubenswrapper[4853]: I0127 19:08:53.077334 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-create-rcgrf"] Jan 27 19:08:53 crc kubenswrapper[4853]: I0127 19:08:53.097394 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-d00f-account-create-update-rrkdb"] Jan 27 19:08:53 crc kubenswrapper[4853]: I0127 19:08:53.110288 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-4881-account-create-update-vqvqb"] Jan 27 19:08:53 crc kubenswrapper[4853]: I0127 19:08:53.120643 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-create-t8t9z"] Jan 27 19:08:53 crc kubenswrapper[4853]: I0127 19:08:53.131100 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-create-rcgrf"] Jan 27 19:08:54 crc kubenswrapper[4853]: I0127 19:08:54.124055 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="829117d5-2c78-4874-bea6-5d66f13b1f39" path="/var/lib/kubelet/pods/829117d5-2c78-4874-bea6-5d66f13b1f39/volumes" Jan 27 19:08:54 crc kubenswrapper[4853]: I0127 19:08:54.124653 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a9358d10-5bdb-4f99-96d1-907990452ad6" path="/var/lib/kubelet/pods/a9358d10-5bdb-4f99-96d1-907990452ad6/volumes" Jan 27 
19:08:54 crc kubenswrapper[4853]: I0127 19:08:54.125215 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c3c82666-fbb7-47cd-9aa8-51fc2f3196cb" path="/var/lib/kubelet/pods/c3c82666-fbb7-47cd-9aa8-51fc2f3196cb/volumes" Jan 27 19:08:54 crc kubenswrapper[4853]: I0127 19:08:54.125755 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fb347c8c-cafd-4b44-9862-d69103d33fb7" path="/var/lib/kubelet/pods/fb347c8c-cafd-4b44-9862-d69103d33fb7/volumes" Jan 27 19:08:56 crc kubenswrapper[4853]: I0127 19:08:56.832794 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-6wgqn"] Jan 27 19:08:56 crc kubenswrapper[4853]: E0127 19:08:56.833591 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8fc5cbb5-9fce-4fd7-a102-d0f42a81b0f5" containerName="registry-server" Jan 27 19:08:56 crc kubenswrapper[4853]: I0127 19:08:56.833604 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="8fc5cbb5-9fce-4fd7-a102-d0f42a81b0f5" containerName="registry-server" Jan 27 19:08:56 crc kubenswrapper[4853]: E0127 19:08:56.833625 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c9ca414b-3a80-4219-8b1f-e7781322e59f" containerName="extract-content" Jan 27 19:08:56 crc kubenswrapper[4853]: I0127 19:08:56.833631 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="c9ca414b-3a80-4219-8b1f-e7781322e59f" containerName="extract-content" Jan 27 19:08:56 crc kubenswrapper[4853]: E0127 19:08:56.833648 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8fc5cbb5-9fce-4fd7-a102-d0f42a81b0f5" containerName="extract-utilities" Jan 27 19:08:56 crc kubenswrapper[4853]: I0127 19:08:56.833655 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="8fc5cbb5-9fce-4fd7-a102-d0f42a81b0f5" containerName="extract-utilities" Jan 27 19:08:56 crc kubenswrapper[4853]: E0127 19:08:56.833679 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c9ca414b-3a80-4219-8b1f-e7781322e59f" containerName="extract-utilities" Jan 27 19:08:56 crc kubenswrapper[4853]: I0127 19:08:56.833684 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="c9ca414b-3a80-4219-8b1f-e7781322e59f" containerName="extract-utilities" Jan 27 19:08:56 crc kubenswrapper[4853]: E0127 19:08:56.833709 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8fc5cbb5-9fce-4fd7-a102-d0f42a81b0f5" containerName="extract-content" Jan 27 19:08:56 crc kubenswrapper[4853]: I0127 19:08:56.833715 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="8fc5cbb5-9fce-4fd7-a102-d0f42a81b0f5" containerName="extract-content" Jan 27 19:08:56 crc kubenswrapper[4853]: E0127 19:08:56.833724 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c9ca414b-3a80-4219-8b1f-e7781322e59f" containerName="registry-server" Jan 27 19:08:56 crc kubenswrapper[4853]: I0127 19:08:56.833731 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="c9ca414b-3a80-4219-8b1f-e7781322e59f" containerName="registry-server" Jan 27 19:08:56 crc kubenswrapper[4853]: I0127 19:08:56.833914 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="c9ca414b-3a80-4219-8b1f-e7781322e59f" containerName="registry-server" Jan 27 19:08:56 crc kubenswrapper[4853]: I0127 19:08:56.833930 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="8fc5cbb5-9fce-4fd7-a102-d0f42a81b0f5" containerName="registry-server" Jan 27 19:08:56 crc kubenswrapper[4853]: I0127 19:08:56.835398 4853 util.go:30] "No sandbox for 
pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6wgqn" Jan 27 19:08:56 crc kubenswrapper[4853]: I0127 19:08:56.868348 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-6wgqn"] Jan 27 19:08:56 crc kubenswrapper[4853]: I0127 19:08:56.939536 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8cccb8de-6a44-4cc2-b805-b5c9a04732d5-utilities\") pod \"redhat-marketplace-6wgqn\" (UID: \"8cccb8de-6a44-4cc2-b805-b5c9a04732d5\") " pod="openshift-marketplace/redhat-marketplace-6wgqn" Jan 27 19:08:56 crc kubenswrapper[4853]: I0127 19:08:56.940344 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5rv96\" (UniqueName: \"kubernetes.io/projected/8cccb8de-6a44-4cc2-b805-b5c9a04732d5-kube-api-access-5rv96\") pod \"redhat-marketplace-6wgqn\" (UID: \"8cccb8de-6a44-4cc2-b805-b5c9a04732d5\") " pod="openshift-marketplace/redhat-marketplace-6wgqn" Jan 27 19:08:56 crc kubenswrapper[4853]: I0127 19:08:56.940477 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8cccb8de-6a44-4cc2-b805-b5c9a04732d5-catalog-content\") pod \"redhat-marketplace-6wgqn\" (UID: \"8cccb8de-6a44-4cc2-b805-b5c9a04732d5\") " pod="openshift-marketplace/redhat-marketplace-6wgqn" Jan 27 19:08:57 crc kubenswrapper[4853]: I0127 19:08:57.042908 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5rv96\" (UniqueName: \"kubernetes.io/projected/8cccb8de-6a44-4cc2-b805-b5c9a04732d5-kube-api-access-5rv96\") pod \"redhat-marketplace-6wgqn\" (UID: \"8cccb8de-6a44-4cc2-b805-b5c9a04732d5\") " pod="openshift-marketplace/redhat-marketplace-6wgqn" Jan 27 19:08:57 crc kubenswrapper[4853]: I0127 19:08:57.043014 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8cccb8de-6a44-4cc2-b805-b5c9a04732d5-catalog-content\") pod \"redhat-marketplace-6wgqn\" (UID: \"8cccb8de-6a44-4cc2-b805-b5c9a04732d5\") " pod="openshift-marketplace/redhat-marketplace-6wgqn" Jan 27 19:08:57 crc kubenswrapper[4853]: I0127 19:08:57.043166 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8cccb8de-6a44-4cc2-b805-b5c9a04732d5-utilities\") pod \"redhat-marketplace-6wgqn\" (UID: \"8cccb8de-6a44-4cc2-b805-b5c9a04732d5\") " pod="openshift-marketplace/redhat-marketplace-6wgqn" Jan 27 19:08:57 crc kubenswrapper[4853]: I0127 19:08:57.043517 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8cccb8de-6a44-4cc2-b805-b5c9a04732d5-catalog-content\") pod \"redhat-marketplace-6wgqn\" (UID: \"8cccb8de-6a44-4cc2-b805-b5c9a04732d5\") " pod="openshift-marketplace/redhat-marketplace-6wgqn" Jan 27 19:08:57 crc kubenswrapper[4853]: I0127 19:08:57.043611 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8cccb8de-6a44-4cc2-b805-b5c9a04732d5-utilities\") pod \"redhat-marketplace-6wgqn\" (UID: \"8cccb8de-6a44-4cc2-b805-b5c9a04732d5\") " pod="openshift-marketplace/redhat-marketplace-6wgqn" Jan 27 19:08:57 crc kubenswrapper[4853]: I0127 19:08:57.063463 4853 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-5rv96\" (UniqueName: \"kubernetes.io/projected/8cccb8de-6a44-4cc2-b805-b5c9a04732d5-kube-api-access-5rv96\") pod \"redhat-marketplace-6wgqn\" (UID: \"8cccb8de-6a44-4cc2-b805-b5c9a04732d5\") " pod="openshift-marketplace/redhat-marketplace-6wgqn" Jan 27 19:08:57 crc kubenswrapper[4853]: I0127 19:08:57.175405 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6wgqn" Jan 27 19:08:57 crc kubenswrapper[4853]: I0127 19:08:57.659254 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-6wgqn"] Jan 27 19:08:57 crc kubenswrapper[4853]: I0127 19:08:57.784663 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6wgqn" event={"ID":"8cccb8de-6a44-4cc2-b805-b5c9a04732d5","Type":"ContainerStarted","Data":"03cc3928d454be764b324e194df924c5a1ae9262071f56041d19518dddea804f"} Jan 27 19:08:58 crc kubenswrapper[4853]: I0127 19:08:58.799430 4853 generic.go:334] "Generic (PLEG): container finished" podID="8cccb8de-6a44-4cc2-b805-b5c9a04732d5" containerID="a6684950e27eb12db971c4927ec348e57f9ab8936904705b260dba80ce986e9c" exitCode=0 Jan 27 19:08:58 crc kubenswrapper[4853]: I0127 19:08:58.800241 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6wgqn" event={"ID":"8cccb8de-6a44-4cc2-b805-b5c9a04732d5","Type":"ContainerDied","Data":"a6684950e27eb12db971c4927ec348e57f9ab8936904705b260dba80ce986e9c"} Jan 27 19:09:00 crc kubenswrapper[4853]: I0127 19:09:00.818157 4853 generic.go:334] "Generic (PLEG): container finished" podID="8cccb8de-6a44-4cc2-b805-b5c9a04732d5" containerID="afba3c29a74f3ac54b0be1bee9cb55b14bf3d2c89f33f316c6fdb0eec3b10199" exitCode=0 Jan 27 19:09:00 crc kubenswrapper[4853]: I0127 19:09:00.818277 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6wgqn" event={"ID":"8cccb8de-6a44-4cc2-b805-b5c9a04732d5","Type":"ContainerDied","Data":"afba3c29a74f3ac54b0be1bee9cb55b14bf3d2c89f33f316c6fdb0eec3b10199"} Jan 27 19:09:01 crc kubenswrapper[4853]: I0127 19:09:01.112754 4853 scope.go:117] "RemoveContainer" containerID="9543b1260c6f66dc5f3c337d6a1dde47109f864fb66397b4f5b0356952eb44b1" Jan 27 19:09:01 crc kubenswrapper[4853]: E0127 19:09:01.113044 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6gqj2_openshift-machine-config-operator(b8a89b1e-bef8-4cb7-930c-480d3125778c)\"" pod="openshift-machine-config-operator/machine-config-daemon-6gqj2" podUID="b8a89b1e-bef8-4cb7-930c-480d3125778c" Jan 27 19:09:01 crc kubenswrapper[4853]: I0127 19:09:01.831290 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6wgqn" event={"ID":"8cccb8de-6a44-4cc2-b805-b5c9a04732d5","Type":"ContainerStarted","Data":"0ca1b863ca68d756a0fcebb14ba369f65f3f638e89859d332bdd288500d5fc9c"} Jan 27 19:09:01 crc kubenswrapper[4853]: I0127 19:09:01.853266 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-6wgqn" podStartSLOduration=3.410814311 podStartE2EDuration="5.853248824s" podCreationTimestamp="2026-01-27 19:08:56 +0000 UTC" firstStartedPulling="2026-01-27 19:08:58.801910594 +0000 UTC m=+1581.264453467" 
Jan 27 19:09:07 crc kubenswrapper[4853]: I0127 19:09:07.176443 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-6wgqn"
Jan 27 19:09:07 crc kubenswrapper[4853]: I0127 19:09:07.177093 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-6wgqn"
Jan 27 19:09:07 crc kubenswrapper[4853]: I0127 19:09:07.225395 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-6wgqn"
Jan 27 19:09:07 crc kubenswrapper[4853]: I0127 19:09:07.935517 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-6wgqn"
Jan 27 19:09:07 crc kubenswrapper[4853]: I0127 19:09:07.981740 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-6wgqn"]
Jan 27 19:09:09 crc kubenswrapper[4853]: I0127 19:09:09.915841 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-6wgqn" podUID="8cccb8de-6a44-4cc2-b805-b5c9a04732d5" containerName="registry-server" containerID="cri-o://0ca1b863ca68d756a0fcebb14ba369f65f3f638e89859d332bdd288500d5fc9c" gracePeriod=2
Jan 27 19:09:10 crc kubenswrapper[4853]: I0127 19:09:10.401679 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6wgqn"
Jan 27 19:09:10 crc kubenswrapper[4853]: I0127 19:09:10.514588 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8cccb8de-6a44-4cc2-b805-b5c9a04732d5-utilities\") pod \"8cccb8de-6a44-4cc2-b805-b5c9a04732d5\" (UID: \"8cccb8de-6a44-4cc2-b805-b5c9a04732d5\") "
Jan 27 19:09:10 crc kubenswrapper[4853]: I0127 19:09:10.514726 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5rv96\" (UniqueName: \"kubernetes.io/projected/8cccb8de-6a44-4cc2-b805-b5c9a04732d5-kube-api-access-5rv96\") pod \"8cccb8de-6a44-4cc2-b805-b5c9a04732d5\" (UID: \"8cccb8de-6a44-4cc2-b805-b5c9a04732d5\") "
Jan 27 19:09:10 crc kubenswrapper[4853]: I0127 19:09:10.514832 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8cccb8de-6a44-4cc2-b805-b5c9a04732d5-catalog-content\") pod \"8cccb8de-6a44-4cc2-b805-b5c9a04732d5\" (UID: \"8cccb8de-6a44-4cc2-b805-b5c9a04732d5\") "
Jan 27 19:09:10 crc kubenswrapper[4853]: I0127 19:09:10.516977 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8cccb8de-6a44-4cc2-b805-b5c9a04732d5-utilities" (OuterVolumeSpecName: "utilities") pod "8cccb8de-6a44-4cc2-b805-b5c9a04732d5" (UID: "8cccb8de-6a44-4cc2-b805-b5c9a04732d5"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
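[Editor's note] "Killing container with a grace period" with gracePeriod=2 means the runtime gives registry-server two seconds after SIGTERM before it is killed. Through the CRI this becomes a StopContainer call with a timeout; a minimal sketch against the cri-api client interface, assuming CRI-O's usual socket path on the node (error handling kept minimal on purpose):

```go
package main

import (
	"context"
	"log"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// CRI-O's default runtime endpoint on an OpenShift node.
	conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	rt := runtimeapi.NewRuntimeServiceClient(conn)
	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
	defer cancel()

	// Timeout: 2 mirrors gracePeriod=2 in the log: SIGTERM first,
	// SIGKILL if the container is still running after two seconds.
	_, err = rt.StopContainer(ctx, &runtimeapi.StopContainerRequest{
		ContainerId: "0ca1b863ca68d756a0fcebb14ba369f65f3f638e89859d332bdd288500d5fc9c",
		Timeout:     2,
	})
	if err != nil {
		log.Fatal(err)
	}
}
```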
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 19:09:10 crc kubenswrapper[4853]: I0127 19:09:10.524114 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cccb8de-6a44-4cc2-b805-b5c9a04732d5-kube-api-access-5rv96" (OuterVolumeSpecName: "kube-api-access-5rv96") pod "8cccb8de-6a44-4cc2-b805-b5c9a04732d5" (UID: "8cccb8de-6a44-4cc2-b805-b5c9a04732d5"). InnerVolumeSpecName "kube-api-access-5rv96". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 19:09:10 crc kubenswrapper[4853]: I0127 19:09:10.537426 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8cccb8de-6a44-4cc2-b805-b5c9a04732d5-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8cccb8de-6a44-4cc2-b805-b5c9a04732d5" (UID: "8cccb8de-6a44-4cc2-b805-b5c9a04732d5"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 19:09:10 crc kubenswrapper[4853]: I0127 19:09:10.617356 4853 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8cccb8de-6a44-4cc2-b805-b5c9a04732d5-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 19:09:10 crc kubenswrapper[4853]: I0127 19:09:10.617413 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5rv96\" (UniqueName: \"kubernetes.io/projected/8cccb8de-6a44-4cc2-b805-b5c9a04732d5-kube-api-access-5rv96\") on node \"crc\" DevicePath \"\"" Jan 27 19:09:10 crc kubenswrapper[4853]: I0127 19:09:10.617434 4853 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8cccb8de-6a44-4cc2-b805-b5c9a04732d5-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 19:09:10 crc kubenswrapper[4853]: I0127 19:09:10.931296 4853 generic.go:334] "Generic (PLEG): container finished" podID="8cccb8de-6a44-4cc2-b805-b5c9a04732d5" containerID="0ca1b863ca68d756a0fcebb14ba369f65f3f638e89859d332bdd288500d5fc9c" exitCode=0 Jan 27 19:09:10 crc kubenswrapper[4853]: I0127 19:09:10.931362 4853 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6wgqn" Jan 27 19:09:10 crc kubenswrapper[4853]: I0127 19:09:10.931402 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6wgqn" event={"ID":"8cccb8de-6a44-4cc2-b805-b5c9a04732d5","Type":"ContainerDied","Data":"0ca1b863ca68d756a0fcebb14ba369f65f3f638e89859d332bdd288500d5fc9c"} Jan 27 19:09:10 crc kubenswrapper[4853]: I0127 19:09:10.931469 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6wgqn" event={"ID":"8cccb8de-6a44-4cc2-b805-b5c9a04732d5","Type":"ContainerDied","Data":"03cc3928d454be764b324e194df924c5a1ae9262071f56041d19518dddea804f"} Jan 27 19:09:10 crc kubenswrapper[4853]: I0127 19:09:10.931509 4853 scope.go:117] "RemoveContainer" containerID="0ca1b863ca68d756a0fcebb14ba369f65f3f638e89859d332bdd288500d5fc9c" Jan 27 19:09:10 crc kubenswrapper[4853]: I0127 19:09:10.970367 4853 scope.go:117] "RemoveContainer" containerID="afba3c29a74f3ac54b0be1bee9cb55b14bf3d2c89f33f316c6fdb0eec3b10199" Jan 27 19:09:10 crc kubenswrapper[4853]: I0127 19:09:10.974029 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-6wgqn"] Jan 27 19:09:10 crc kubenswrapper[4853]: I0127 19:09:10.993554 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-6wgqn"] Jan 27 19:09:11 crc kubenswrapper[4853]: I0127 19:09:11.014842 4853 scope.go:117] "RemoveContainer" containerID="a6684950e27eb12db971c4927ec348e57f9ab8936904705b260dba80ce986e9c" Jan 27 19:09:11 crc kubenswrapper[4853]: I0127 19:09:11.063865 4853 scope.go:117] "RemoveContainer" containerID="0ca1b863ca68d756a0fcebb14ba369f65f3f638e89859d332bdd288500d5fc9c" Jan 27 19:09:11 crc kubenswrapper[4853]: E0127 19:09:11.065163 4853 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0ca1b863ca68d756a0fcebb14ba369f65f3f638e89859d332bdd288500d5fc9c\": container with ID starting with 0ca1b863ca68d756a0fcebb14ba369f65f3f638e89859d332bdd288500d5fc9c not found: ID does not exist" containerID="0ca1b863ca68d756a0fcebb14ba369f65f3f638e89859d332bdd288500d5fc9c" Jan 27 19:09:11 crc kubenswrapper[4853]: I0127 19:09:11.065222 4853 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0ca1b863ca68d756a0fcebb14ba369f65f3f638e89859d332bdd288500d5fc9c"} err="failed to get container status \"0ca1b863ca68d756a0fcebb14ba369f65f3f638e89859d332bdd288500d5fc9c\": rpc error: code = NotFound desc = could not find container \"0ca1b863ca68d756a0fcebb14ba369f65f3f638e89859d332bdd288500d5fc9c\": container with ID starting with 0ca1b863ca68d756a0fcebb14ba369f65f3f638e89859d332bdd288500d5fc9c not found: ID does not exist" Jan 27 19:09:11 crc kubenswrapper[4853]: I0127 19:09:11.065267 4853 scope.go:117] "RemoveContainer" containerID="afba3c29a74f3ac54b0be1bee9cb55b14bf3d2c89f33f316c6fdb0eec3b10199" Jan 27 19:09:11 crc kubenswrapper[4853]: E0127 19:09:11.065651 4853 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"afba3c29a74f3ac54b0be1bee9cb55b14bf3d2c89f33f316c6fdb0eec3b10199\": container with ID starting with afba3c29a74f3ac54b0be1bee9cb55b14bf3d2c89f33f316c6fdb0eec3b10199 not found: ID does not exist" containerID="afba3c29a74f3ac54b0be1bee9cb55b14bf3d2c89f33f316c6fdb0eec3b10199" Jan 27 19:09:11 crc kubenswrapper[4853]: I0127 19:09:11.065697 4853 
Jan 27 19:09:11 crc kubenswrapper[4853]: I0127 19:09:11.065720 4853 scope.go:117] "RemoveContainer" containerID="a6684950e27eb12db971c4927ec348e57f9ab8936904705b260dba80ce986e9c"
Jan 27 19:09:11 crc kubenswrapper[4853]: E0127 19:09:11.065973 4853 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a6684950e27eb12db971c4927ec348e57f9ab8936904705b260dba80ce986e9c\": container with ID starting with a6684950e27eb12db971c4927ec348e57f9ab8936904705b260dba80ce986e9c not found: ID does not exist" containerID="a6684950e27eb12db971c4927ec348e57f9ab8936904705b260dba80ce986e9c"
Jan 27 19:09:11 crc kubenswrapper[4853]: I0127 19:09:11.066028 4853 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a6684950e27eb12db971c4927ec348e57f9ab8936904705b260dba80ce986e9c"} err="failed to get container status \"a6684950e27eb12db971c4927ec348e57f9ab8936904705b260dba80ce986e9c\": rpc error: code = NotFound desc = could not find container \"a6684950e27eb12db971c4927ec348e57f9ab8936904705b260dba80ce986e9c\": container with ID starting with a6684950e27eb12db971c4927ec348e57f9ab8936904705b260dba80ce986e9c not found: ID does not exist"
Jan 27 19:09:12 crc kubenswrapper[4853]: I0127 19:09:12.030848 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-djxcp"]
Jan 27 19:09:12 crc kubenswrapper[4853]: I0127 19:09:12.041294 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-djxcp"]
Jan 27 19:09:12 crc kubenswrapper[4853]: I0127 19:09:12.123303 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cccb8de-6a44-4cc2-b805-b5c9a04732d5" path="/var/lib/kubelet/pods/8cccb8de-6a44-4cc2-b805-b5c9a04732d5/volumes"
Jan 27 19:09:12 crc kubenswrapper[4853]: I0127 19:09:12.124052 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bb1fd930-d712-4e54-be4d-2a30c3c7436d" path="/var/lib/kubelet/pods/bb1fd930-d712-4e54-be4d-2a30c3c7436d/volumes"
Jan 27 19:09:16 crc kubenswrapper[4853]: I0127 19:09:16.114197 4853 scope.go:117] "RemoveContainer" containerID="9543b1260c6f66dc5f3c337d6a1dde47109f864fb66397b4f5b0356952eb44b1"
Jan 27 19:09:16 crc kubenswrapper[4853]: E0127 19:09:16.115551 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6gqj2_openshift-machine-config-operator(b8a89b1e-bef8-4cb7-930c-480d3125778c)\"" pod="openshift-machine-config-operator/machine-config-daemon-6gqj2" podUID="b8a89b1e-bef8-4cb7-930c-480d3125778c"
Jan 27 19:09:20 crc kubenswrapper[4853]: I0127 19:09:20.889497 4853 scope.go:117] "RemoveContainer" containerID="16c3663a978e5477c1fc0d929c63a993c9707b0c6ff1d9fa7ecb30fbb2a733dd"
Jan 27 19:09:20 crc kubenswrapper[4853]: I0127 19:09:20.923602 4853 scope.go:117] "RemoveContainer" containerID="c4b899717fd995b71b389851a51450ee1094f74a599172e789a19798e9ceb25c"
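[Editor's note] machine-config-daemon-6gqj2 keeps reappearing with "back-off 5m0s restarting failed container": the kubelet doubles the restart delay after each failed run and caps it at five minutes, so the same error repeats on every sync for as long as the container keeps crashing. A sketch of that schedule, assuming the kubelet's standard 10s initial delay and 2x factor (the function name is illustrative):

```go
package main

import (
	"fmt"
	"time"
)

// crashLoopDelay returns the restart back-off after n consecutive
// failures: 10s doubled each time, capped at 5m0s (the cap quoted in
// the "back-off 5m0s restarting failed container" errors above).
func crashLoopDelay(n int) time.Duration {
	const (
		initial = 10 * time.Second
		cap     = 5 * time.Minute
	)
	d := initial
	for i := 1; i < n; i++ {
		d *= 2
		if d >= cap {
			return cap
		}
	}
	return d
}

func main() {
	for n := 1; n <= 7; n++ {
		fmt.Printf("failure %d -> wait %s\n", n, crashLoopDelay(n))
	}
	// Failure 6 and beyond print 5m0s: the plateau the log keeps hitting.
}
```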
containerID="c4b899717fd995b71b389851a51450ee1094f74a599172e789a19798e9ceb25c" Jan 27 19:09:20 crc kubenswrapper[4853]: I0127 19:09:20.976497 4853 scope.go:117] "RemoveContainer" containerID="89a451e237295d0aa9c4883b984f339bb349533a253e48a3437f57c72447cfe1" Jan 27 19:09:21 crc kubenswrapper[4853]: I0127 19:09:21.012690 4853 scope.go:117] "RemoveContainer" containerID="053d14de0c4cdbebf2f1ff2859a14c8ff7183e2a07ace1835f0e55a6aa02ec1a" Jan 27 19:09:21 crc kubenswrapper[4853]: I0127 19:09:21.044015 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-sync-bpr58"] Jan 27 19:09:21 crc kubenswrapper[4853]: I0127 19:09:21.064790 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-sync-bpr58"] Jan 27 19:09:21 crc kubenswrapper[4853]: I0127 19:09:21.102819 4853 scope.go:117] "RemoveContainer" containerID="3d97c0b027163cf31d180c7276e19f5ec785eed80a0b1c800115600da12685f8" Jan 27 19:09:21 crc kubenswrapper[4853]: I0127 19:09:21.137312 4853 scope.go:117] "RemoveContainer" containerID="d8f4b99d43a1c4307efc38e93556f351c97343e0d7d996e6e801ac943bc4d921" Jan 27 19:09:21 crc kubenswrapper[4853]: I0127 19:09:21.171795 4853 scope.go:117] "RemoveContainer" containerID="f5d8c9f5ef1ab9dba0ebbc08a6ce55683e77f8a727ceaf1c22f32497f98a9c62" Jan 27 19:09:22 crc kubenswrapper[4853]: I0127 19:09:22.124202 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1715d18d-b411-407d-9b52-d7b0bbd850f4" path="/var/lib/kubelet/pods/1715d18d-b411-407d-9b52-d7b0bbd850f4/volumes" Jan 27 19:09:28 crc kubenswrapper[4853]: I0127 19:09:28.119001 4853 scope.go:117] "RemoveContainer" containerID="9543b1260c6f66dc5f3c337d6a1dde47109f864fb66397b4f5b0356952eb44b1" Jan 27 19:09:28 crc kubenswrapper[4853]: E0127 19:09:28.119903 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6gqj2_openshift-machine-config-operator(b8a89b1e-bef8-4cb7-930c-480d3125778c)\"" pod="openshift-machine-config-operator/machine-config-daemon-6gqj2" podUID="b8a89b1e-bef8-4cb7-930c-480d3125778c" Jan 27 19:09:31 crc kubenswrapper[4853]: I0127 19:09:31.040543 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-3de5-account-create-update-dnzb9"] Jan 27 19:09:31 crc kubenswrapper[4853]: I0127 19:09:31.055147 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-create-9z979"] Jan 27 19:09:31 crc kubenswrapper[4853]: I0127 19:09:31.065856 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-create-tnf7r"] Jan 27 19:09:31 crc kubenswrapper[4853]: I0127 19:09:31.075007 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-create-74qwr"] Jan 27 19:09:31 crc kubenswrapper[4853]: I0127 19:09:31.083221 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-219d-account-create-update-5qm8g"] Jan 27 19:09:31 crc kubenswrapper[4853]: I0127 19:09:31.092376 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-8c34-account-create-update-mrws4"] Jan 27 19:09:31 crc kubenswrapper[4853]: I0127 19:09:31.102860 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-create-9z979"] Jan 27 19:09:31 crc kubenswrapper[4853]: I0127 19:09:31.113674 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-create-74qwr"] Jan 27 19:09:31 crc 
kubenswrapper[4853]: I0127 19:09:31.133625 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-create-tnf7r"] Jan 27 19:09:31 crc kubenswrapper[4853]: I0127 19:09:31.133880 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-3de5-account-create-update-dnzb9"] Jan 27 19:09:31 crc kubenswrapper[4853]: I0127 19:09:31.164178 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-219d-account-create-update-5qm8g"] Jan 27 19:09:31 crc kubenswrapper[4853]: I0127 19:09:31.179387 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-8c34-account-create-update-mrws4"] Jan 27 19:09:32 crc kubenswrapper[4853]: I0127 19:09:32.124680 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="15035d73-a434-4ff6-9ec0-ecdf17c78d5d" path="/var/lib/kubelet/pods/15035d73-a434-4ff6-9ec0-ecdf17c78d5d/volumes" Jan 27 19:09:32 crc kubenswrapper[4853]: I0127 19:09:32.125581 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3d31d088-3d59-4f93-bd69-5c656de66500" path="/var/lib/kubelet/pods/3d31d088-3d59-4f93-bd69-5c656de66500/volumes" Jan 27 19:09:32 crc kubenswrapper[4853]: I0127 19:09:32.126444 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f709328-85ea-42f5-8d5b-d302b907bee3" path="/var/lib/kubelet/pods/8f709328-85ea-42f5-8d5b-d302b907bee3/volumes" Jan 27 19:09:32 crc kubenswrapper[4853]: I0127 19:09:32.127230 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ca0f2528-c8dc-4170-b594-94945759de99" path="/var/lib/kubelet/pods/ca0f2528-c8dc-4170-b594-94945759de99/volumes" Jan 27 19:09:32 crc kubenswrapper[4853]: I0127 19:09:32.127748 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ce113b00-48db-4a7d-b15a-fafe0a932311" path="/var/lib/kubelet/pods/ce113b00-48db-4a7d-b15a-fafe0a932311/volumes" Jan 27 19:09:32 crc kubenswrapper[4853]: I0127 19:09:32.128747 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ed915364-aeae-47d1-9cc2-b9e14ce361e7" path="/var/lib/kubelet/pods/ed915364-aeae-47d1-9cc2-b9e14ce361e7/volumes" Jan 27 19:09:36 crc kubenswrapper[4853]: I0127 19:09:36.032050 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-sync-2pwl2"] Jan 27 19:09:36 crc kubenswrapper[4853]: I0127 19:09:36.040331 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-sync-2pwl2"] Jan 27 19:09:36 crc kubenswrapper[4853]: I0127 19:09:36.132780 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0d107577-39f6-4463-80b0-374fc14e89e8" path="/var/lib/kubelet/pods/0d107577-39f6-4463-80b0-374fc14e89e8/volumes" Jan 27 19:09:37 crc kubenswrapper[4853]: I0127 19:09:37.615052 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-qfpzj"] Jan 27 19:09:37 crc kubenswrapper[4853]: E0127 19:09:37.615627 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8cccb8de-6a44-4cc2-b805-b5c9a04732d5" containerName="extract-utilities" Jan 27 19:09:37 crc kubenswrapper[4853]: I0127 19:09:37.615643 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="8cccb8de-6a44-4cc2-b805-b5c9a04732d5" containerName="extract-utilities" Jan 27 19:09:37 crc kubenswrapper[4853]: E0127 19:09:37.615654 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8cccb8de-6a44-4cc2-b805-b5c9a04732d5" containerName="registry-server" Jan 27 19:09:37 crc kubenswrapper[4853]: I0127 
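[Editor's note] The recurring kubelet_volumes.go:163 lines are the kubelet's periodic orphan sweep: once a pod has been removed from the API and its volumes torn down, the per-pod volumes directory under /var/lib/kubelet/pods is deleted. A rough sketch of the scan, assuming known-pod UIDs come from the pod manager; the real kubelet additionally verifies that no volumes remain mounted before removing anything:

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// cleanupOrphanedPodDirs removes /var/lib/kubelet/pods/<uid>/volumes
// for every on-disk pod UID that is no longer a known pod, matching
// the "Cleaned up orphaned pod volumes dir" lines in the journal.
func cleanupOrphanedPodDirs(podsDir string, known map[string]bool) error {
	entries, err := os.ReadDir(podsDir)
	if err != nil {
		return err
	}
	for _, e := range entries {
		uid := e.Name()
		if !e.IsDir() || known[uid] {
			continue
		}
		volumes := filepath.Join(podsDir, uid, "volumes")
		if err := os.RemoveAll(volumes); err != nil {
			return err
		}
		fmt.Printf("Cleaned up orphaned pod volumes dir podUID=%q path=%q\n", uid, volumes)
	}
	return nil
}

func main() {
	_ = cleanupOrphanedPodDirs("/var/lib/kubelet/pods", map[string]bool{})
}
```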
Jan 27 19:09:37 crc kubenswrapper[4853]: E0127 19:09:37.615674 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8cccb8de-6a44-4cc2-b805-b5c9a04732d5" containerName="extract-content"
Jan 27 19:09:37 crc kubenswrapper[4853]: I0127 19:09:37.615679 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="8cccb8de-6a44-4cc2-b805-b5c9a04732d5" containerName="extract-content"
Jan 27 19:09:37 crc kubenswrapper[4853]: I0127 19:09:37.615892 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="8cccb8de-6a44-4cc2-b805-b5c9a04732d5" containerName="registry-server"
Jan 27 19:09:37 crc kubenswrapper[4853]: I0127 19:09:37.617378 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-qfpzj"
Jan 27 19:09:37 crc kubenswrapper[4853]: I0127 19:09:37.635319 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-qfpzj"]
Jan 27 19:09:37 crc kubenswrapper[4853]: I0127 19:09:37.699846 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1b71cb17-4465-411d-8183-52a15577dbec-utilities\") pod \"redhat-operators-qfpzj\" (UID: \"1b71cb17-4465-411d-8183-52a15577dbec\") " pod="openshift-marketplace/redhat-operators-qfpzj"
Jan 27 19:09:37 crc kubenswrapper[4853]: I0127 19:09:37.700081 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1b71cb17-4465-411d-8183-52a15577dbec-catalog-content\") pod \"redhat-operators-qfpzj\" (UID: \"1b71cb17-4465-411d-8183-52a15577dbec\") " pod="openshift-marketplace/redhat-operators-qfpzj"
Jan 27 19:09:37 crc kubenswrapper[4853]: I0127 19:09:37.700269 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4tsdg\" (UniqueName: \"kubernetes.io/projected/1b71cb17-4465-411d-8183-52a15577dbec-kube-api-access-4tsdg\") pod \"redhat-operators-qfpzj\" (UID: \"1b71cb17-4465-411d-8183-52a15577dbec\") " pod="openshift-marketplace/redhat-operators-qfpzj"
Jan 27 19:09:37 crc kubenswrapper[4853]: I0127 19:09:37.801624 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1b71cb17-4465-411d-8183-52a15577dbec-utilities\") pod \"redhat-operators-qfpzj\" (UID: \"1b71cb17-4465-411d-8183-52a15577dbec\") " pod="openshift-marketplace/redhat-operators-qfpzj"
Jan 27 19:09:37 crc kubenswrapper[4853]: I0127 19:09:37.801794 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1b71cb17-4465-411d-8183-52a15577dbec-catalog-content\") pod \"redhat-operators-qfpzj\" (UID: \"1b71cb17-4465-411d-8183-52a15577dbec\") " pod="openshift-marketplace/redhat-operators-qfpzj"
Jan 27 19:09:37 crc kubenswrapper[4853]: I0127 19:09:37.801840 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4tsdg\" (UniqueName: \"kubernetes.io/projected/1b71cb17-4465-411d-8183-52a15577dbec-kube-api-access-4tsdg\") pod \"redhat-operators-qfpzj\" (UID: \"1b71cb17-4465-411d-8183-52a15577dbec\") " pod="openshift-marketplace/redhat-operators-qfpzj"
Jan 27 19:09:37 crc kubenswrapper[4853]: I0127 19:09:37.802205 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1b71cb17-4465-411d-8183-52a15577dbec-utilities\") pod \"redhat-operators-qfpzj\" (UID: \"1b71cb17-4465-411d-8183-52a15577dbec\") " pod="openshift-marketplace/redhat-operators-qfpzj"
Jan 27 19:09:37 crc kubenswrapper[4853]: I0127 19:09:37.802305 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1b71cb17-4465-411d-8183-52a15577dbec-catalog-content\") pod \"redhat-operators-qfpzj\" (UID: \"1b71cb17-4465-411d-8183-52a15577dbec\") " pod="openshift-marketplace/redhat-operators-qfpzj"
Jan 27 19:09:37 crc kubenswrapper[4853]: I0127 19:09:37.823392 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4tsdg\" (UniqueName: \"kubernetes.io/projected/1b71cb17-4465-411d-8183-52a15577dbec-kube-api-access-4tsdg\") pod \"redhat-operators-qfpzj\" (UID: \"1b71cb17-4465-411d-8183-52a15577dbec\") " pod="openshift-marketplace/redhat-operators-qfpzj"
Jan 27 19:09:37 crc kubenswrapper[4853]: I0127 19:09:37.955539 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-qfpzj"
Jan 27 19:09:38 crc kubenswrapper[4853]: I0127 19:09:38.443991 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-qfpzj"]
Jan 27 19:09:39 crc kubenswrapper[4853]: I0127 19:09:39.258517 4853 generic.go:334] "Generic (PLEG): container finished" podID="1b71cb17-4465-411d-8183-52a15577dbec" containerID="d028c1c6b28a97ef3864e1bd2f1d574d1ca05179855ccd5859b1b5c796d92680" exitCode=0
Jan 27 19:09:39 crc kubenswrapper[4853]: I0127 19:09:39.258652 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qfpzj" event={"ID":"1b71cb17-4465-411d-8183-52a15577dbec","Type":"ContainerDied","Data":"d028c1c6b28a97ef3864e1bd2f1d574d1ca05179855ccd5859b1b5c796d92680"}
Jan 27 19:09:39 crc kubenswrapper[4853]: I0127 19:09:39.259036 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qfpzj" event={"ID":"1b71cb17-4465-411d-8183-52a15577dbec","Type":"ContainerStarted","Data":"af572791ff0b528640fb178e21700ea9e94e90747a20b3ef359bfad120c4c6b5"}
Jan 27 19:09:39 crc kubenswrapper[4853]: I0127 19:09:39.262767 4853 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Jan 27 19:09:40 crc kubenswrapper[4853]: I0127 19:09:40.269979 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qfpzj" event={"ID":"1b71cb17-4465-411d-8183-52a15577dbec","Type":"ContainerStarted","Data":"ae52c4cc948f1ee8fd82bfc35b894a6d8a14dc0f14b78486909c2a7e11d49347"}
Jan 27 19:09:41 crc kubenswrapper[4853]: I0127 19:09:41.282137 4853 generic.go:334] "Generic (PLEG): container finished" podID="1b71cb17-4465-411d-8183-52a15577dbec" containerID="ae52c4cc948f1ee8fd82bfc35b894a6d8a14dc0f14b78486909c2a7e11d49347" exitCode=0
Jan 27 19:09:41 crc kubenswrapper[4853]: I0127 19:09:41.282260 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qfpzj" event={"ID":"1b71cb17-4465-411d-8183-52a15577dbec","Type":"ContainerDied","Data":"ae52c4cc948f1ee8fd82bfc35b894a6d8a14dc0f14b78486909c2a7e11d49347"}
Jan 27 19:09:42 crc kubenswrapper[4853]: I0127 19:09:42.113073 4853 scope.go:117] "RemoveContainer" containerID="9543b1260c6f66dc5f3c337d6a1dde47109f864fb66397b4f5b0356952eb44b1"
Jan 27 19:09:42 crc kubenswrapper[4853]: E0127 19:09:42.113805 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6gqj2_openshift-machine-config-operator(b8a89b1e-bef8-4cb7-930c-480d3125778c)\"" pod="openshift-machine-config-operator/machine-config-daemon-6gqj2" podUID="b8a89b1e-bef8-4cb7-930c-480d3125778c"
Jan 27 19:09:42 crc kubenswrapper[4853]: I0127 19:09:42.295952 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qfpzj" event={"ID":"1b71cb17-4465-411d-8183-52a15577dbec","Type":"ContainerStarted","Data":"daef601c0664e338a868bdeba8515ce149793e2e4a5fb8939326f02bf2250f1a"}
Jan 27 19:09:42 crc kubenswrapper[4853]: I0127 19:09:42.334003 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-qfpzj" podStartSLOduration=2.883326145 podStartE2EDuration="5.333972873s" podCreationTimestamp="2026-01-27 19:09:37 +0000 UTC" firstStartedPulling="2026-01-27 19:09:39.262405483 +0000 UTC m=+1621.724948366" lastFinishedPulling="2026-01-27 19:09:41.713052211 +0000 UTC m=+1624.175595094" observedRunningTime="2026-01-27 19:09:42.32689617 +0000 UTC m=+1624.789439053" watchObservedRunningTime="2026-01-27 19:09:42.333972873 +0000 UTC m=+1624.796515756"
Jan 27 19:09:47 crc kubenswrapper[4853]: I0127 19:09:47.956293 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-qfpzj"
Jan 27 19:09:47 crc kubenswrapper[4853]: I0127 19:09:47.956887 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-qfpzj"
Jan 27 19:09:48 crc kubenswrapper[4853]: I0127 19:09:48.001756 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-qfpzj"
Jan 27 19:09:48 crc kubenswrapper[4853]: I0127 19:09:48.385917 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-qfpzj"
Jan 27 19:09:48 crc kubenswrapper[4853]: I0127 19:09:48.428301 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-qfpzj"]
Jan 27 19:09:50 crc kubenswrapper[4853]: I0127 19:09:50.362153 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-qfpzj" podUID="1b71cb17-4465-411d-8183-52a15577dbec" containerName="registry-server" containerID="cri-o://daef601c0664e338a868bdeba8515ce149793e2e4a5fb8939326f02bf2250f1a" gracePeriod=2
Jan 27 19:09:50 crc kubenswrapper[4853]: I0127 19:09:50.874787 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-qfpzj"
Need to start a new one" pod="openshift-marketplace/redhat-operators-qfpzj" Jan 27 19:09:50 crc kubenswrapper[4853]: I0127 19:09:50.972521 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1b71cb17-4465-411d-8183-52a15577dbec-utilities\") pod \"1b71cb17-4465-411d-8183-52a15577dbec\" (UID: \"1b71cb17-4465-411d-8183-52a15577dbec\") " Jan 27 19:09:50 crc kubenswrapper[4853]: I0127 19:09:50.972691 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1b71cb17-4465-411d-8183-52a15577dbec-catalog-content\") pod \"1b71cb17-4465-411d-8183-52a15577dbec\" (UID: \"1b71cb17-4465-411d-8183-52a15577dbec\") " Jan 27 19:09:50 crc kubenswrapper[4853]: I0127 19:09:50.972769 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4tsdg\" (UniqueName: \"kubernetes.io/projected/1b71cb17-4465-411d-8183-52a15577dbec-kube-api-access-4tsdg\") pod \"1b71cb17-4465-411d-8183-52a15577dbec\" (UID: \"1b71cb17-4465-411d-8183-52a15577dbec\") " Jan 27 19:09:50 crc kubenswrapper[4853]: I0127 19:09:50.973915 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1b71cb17-4465-411d-8183-52a15577dbec-utilities" (OuterVolumeSpecName: "utilities") pod "1b71cb17-4465-411d-8183-52a15577dbec" (UID: "1b71cb17-4465-411d-8183-52a15577dbec"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 19:09:50 crc kubenswrapper[4853]: I0127 19:09:50.983379 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1b71cb17-4465-411d-8183-52a15577dbec-kube-api-access-4tsdg" (OuterVolumeSpecName: "kube-api-access-4tsdg") pod "1b71cb17-4465-411d-8183-52a15577dbec" (UID: "1b71cb17-4465-411d-8183-52a15577dbec"). InnerVolumeSpecName "kube-api-access-4tsdg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 19:09:51 crc kubenswrapper[4853]: I0127 19:09:51.075360 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4tsdg\" (UniqueName: \"kubernetes.io/projected/1b71cb17-4465-411d-8183-52a15577dbec-kube-api-access-4tsdg\") on node \"crc\" DevicePath \"\"" Jan 27 19:09:51 crc kubenswrapper[4853]: I0127 19:09:51.075400 4853 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1b71cb17-4465-411d-8183-52a15577dbec-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 19:09:51 crc kubenswrapper[4853]: I0127 19:09:51.102115 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1b71cb17-4465-411d-8183-52a15577dbec-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1b71cb17-4465-411d-8183-52a15577dbec" (UID: "1b71cb17-4465-411d-8183-52a15577dbec"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 19:09:51 crc kubenswrapper[4853]: I0127 19:09:51.177402 4853 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1b71cb17-4465-411d-8183-52a15577dbec-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 19:09:51 crc kubenswrapper[4853]: I0127 19:09:51.372184 4853 generic.go:334] "Generic (PLEG): container finished" podID="1b71cb17-4465-411d-8183-52a15577dbec" containerID="daef601c0664e338a868bdeba8515ce149793e2e4a5fb8939326f02bf2250f1a" exitCode=0 Jan 27 19:09:51 crc kubenswrapper[4853]: I0127 19:09:51.372206 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-qfpzj" Jan 27 19:09:51 crc kubenswrapper[4853]: I0127 19:09:51.372247 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qfpzj" event={"ID":"1b71cb17-4465-411d-8183-52a15577dbec","Type":"ContainerDied","Data":"daef601c0664e338a868bdeba8515ce149793e2e4a5fb8939326f02bf2250f1a"} Jan 27 19:09:51 crc kubenswrapper[4853]: I0127 19:09:51.372311 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qfpzj" event={"ID":"1b71cb17-4465-411d-8183-52a15577dbec","Type":"ContainerDied","Data":"af572791ff0b528640fb178e21700ea9e94e90747a20b3ef359bfad120c4c6b5"} Jan 27 19:09:51 crc kubenswrapper[4853]: I0127 19:09:51.372340 4853 scope.go:117] "RemoveContainer" containerID="daef601c0664e338a868bdeba8515ce149793e2e4a5fb8939326f02bf2250f1a" Jan 27 19:09:51 crc kubenswrapper[4853]: I0127 19:09:51.399454 4853 scope.go:117] "RemoveContainer" containerID="ae52c4cc948f1ee8fd82bfc35b894a6d8a14dc0f14b78486909c2a7e11d49347" Jan 27 19:09:51 crc kubenswrapper[4853]: I0127 19:09:51.409691 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-qfpzj"] Jan 27 19:09:51 crc kubenswrapper[4853]: I0127 19:09:51.422096 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-qfpzj"] Jan 27 19:09:51 crc kubenswrapper[4853]: I0127 19:09:51.429789 4853 scope.go:117] "RemoveContainer" containerID="d028c1c6b28a97ef3864e1bd2f1d574d1ca05179855ccd5859b1b5c796d92680" Jan 27 19:09:51 crc kubenswrapper[4853]: I0127 19:09:51.469274 4853 scope.go:117] "RemoveContainer" containerID="daef601c0664e338a868bdeba8515ce149793e2e4a5fb8939326f02bf2250f1a" Jan 27 19:09:51 crc kubenswrapper[4853]: E0127 19:09:51.471234 4853 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"daef601c0664e338a868bdeba8515ce149793e2e4a5fb8939326f02bf2250f1a\": container with ID starting with daef601c0664e338a868bdeba8515ce149793e2e4a5fb8939326f02bf2250f1a not found: ID does not exist" containerID="daef601c0664e338a868bdeba8515ce149793e2e4a5fb8939326f02bf2250f1a" Jan 27 19:09:51 crc kubenswrapper[4853]: I0127 19:09:51.471349 4853 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"daef601c0664e338a868bdeba8515ce149793e2e4a5fb8939326f02bf2250f1a"} err="failed to get container status \"daef601c0664e338a868bdeba8515ce149793e2e4a5fb8939326f02bf2250f1a\": rpc error: code = NotFound desc = could not find container \"daef601c0664e338a868bdeba8515ce149793e2e4a5fb8939326f02bf2250f1a\": container with ID starting with daef601c0664e338a868bdeba8515ce149793e2e4a5fb8939326f02bf2250f1a not found: ID does not exist" Jan 27 19:09:51 crc 
kubenswrapper[4853]: I0127 19:09:51.471426 4853 scope.go:117] "RemoveContainer" containerID="ae52c4cc948f1ee8fd82bfc35b894a6d8a14dc0f14b78486909c2a7e11d49347" Jan 27 19:09:51 crc kubenswrapper[4853]: E0127 19:09:51.471821 4853 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ae52c4cc948f1ee8fd82bfc35b894a6d8a14dc0f14b78486909c2a7e11d49347\": container with ID starting with ae52c4cc948f1ee8fd82bfc35b894a6d8a14dc0f14b78486909c2a7e11d49347 not found: ID does not exist" containerID="ae52c4cc948f1ee8fd82bfc35b894a6d8a14dc0f14b78486909c2a7e11d49347" Jan 27 19:09:51 crc kubenswrapper[4853]: I0127 19:09:51.471850 4853 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ae52c4cc948f1ee8fd82bfc35b894a6d8a14dc0f14b78486909c2a7e11d49347"} err="failed to get container status \"ae52c4cc948f1ee8fd82bfc35b894a6d8a14dc0f14b78486909c2a7e11d49347\": rpc error: code = NotFound desc = could not find container \"ae52c4cc948f1ee8fd82bfc35b894a6d8a14dc0f14b78486909c2a7e11d49347\": container with ID starting with ae52c4cc948f1ee8fd82bfc35b894a6d8a14dc0f14b78486909c2a7e11d49347 not found: ID does not exist" Jan 27 19:09:51 crc kubenswrapper[4853]: I0127 19:09:51.471871 4853 scope.go:117] "RemoveContainer" containerID="d028c1c6b28a97ef3864e1bd2f1d574d1ca05179855ccd5859b1b5c796d92680" Jan 27 19:09:51 crc kubenswrapper[4853]: E0127 19:09:51.472194 4853 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d028c1c6b28a97ef3864e1bd2f1d574d1ca05179855ccd5859b1b5c796d92680\": container with ID starting with d028c1c6b28a97ef3864e1bd2f1d574d1ca05179855ccd5859b1b5c796d92680 not found: ID does not exist" containerID="d028c1c6b28a97ef3864e1bd2f1d574d1ca05179855ccd5859b1b5c796d92680" Jan 27 19:09:51 crc kubenswrapper[4853]: I0127 19:09:51.472248 4853 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d028c1c6b28a97ef3864e1bd2f1d574d1ca05179855ccd5859b1b5c796d92680"} err="failed to get container status \"d028c1c6b28a97ef3864e1bd2f1d574d1ca05179855ccd5859b1b5c796d92680\": rpc error: code = NotFound desc = could not find container \"d028c1c6b28a97ef3864e1bd2f1d574d1ca05179855ccd5859b1b5c796d92680\": container with ID starting with d028c1c6b28a97ef3864e1bd2f1d574d1ca05179855ccd5859b1b5c796d92680 not found: ID does not exist" Jan 27 19:09:52 crc kubenswrapper[4853]: I0127 19:09:52.123673 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1b71cb17-4465-411d-8183-52a15577dbec" path="/var/lib/kubelet/pods/1b71cb17-4465-411d-8183-52a15577dbec/volumes" Jan 27 19:09:55 crc kubenswrapper[4853]: I0127 19:09:55.112514 4853 scope.go:117] "RemoveContainer" containerID="9543b1260c6f66dc5f3c337d6a1dde47109f864fb66397b4f5b0356952eb44b1" Jan 27 19:09:55 crc kubenswrapper[4853]: E0127 19:09:55.113073 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6gqj2_openshift-machine-config-operator(b8a89b1e-bef8-4cb7-930c-480d3125778c)\"" pod="openshift-machine-config-operator/machine-config-daemon-6gqj2" podUID="b8a89b1e-bef8-4cb7-930c-480d3125778c" Jan 27 19:09:58 crc kubenswrapper[4853]: I0127 19:09:58.434854 4853 generic.go:334] "Generic (PLEG): container finished" podID="e4809563-3f03-4361-9794-87f5705115b8" 
containerID="579fbe6fce2ff88da2f4e594aa9a2d25e52be349162cbee2e9c41b17bb45b5ad" exitCode=0 Jan 27 19:09:58 crc kubenswrapper[4853]: I0127 19:09:58.434942 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-szg9m" event={"ID":"e4809563-3f03-4361-9794-87f5705115b8","Type":"ContainerDied","Data":"579fbe6fce2ff88da2f4e594aa9a2d25e52be349162cbee2e9c41b17bb45b5ad"} Jan 27 19:09:59 crc kubenswrapper[4853]: I0127 19:09:59.953627 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-szg9m" Jan 27 19:10:00 crc kubenswrapper[4853]: I0127 19:10:00.052746 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e4809563-3f03-4361-9794-87f5705115b8-inventory\") pod \"e4809563-3f03-4361-9794-87f5705115b8\" (UID: \"e4809563-3f03-4361-9794-87f5705115b8\") " Jan 27 19:10:00 crc kubenswrapper[4853]: I0127 19:10:00.052854 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e4809563-3f03-4361-9794-87f5705115b8-ssh-key-openstack-edpm-ipam\") pod \"e4809563-3f03-4361-9794-87f5705115b8\" (UID: \"e4809563-3f03-4361-9794-87f5705115b8\") " Jan 27 19:10:00 crc kubenswrapper[4853]: I0127 19:10:00.052896 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wsxdw\" (UniqueName: \"kubernetes.io/projected/e4809563-3f03-4361-9794-87f5705115b8-kube-api-access-wsxdw\") pod \"e4809563-3f03-4361-9794-87f5705115b8\" (UID: \"e4809563-3f03-4361-9794-87f5705115b8\") " Jan 27 19:10:00 crc kubenswrapper[4853]: I0127 19:10:00.060418 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e4809563-3f03-4361-9794-87f5705115b8-kube-api-access-wsxdw" (OuterVolumeSpecName: "kube-api-access-wsxdw") pod "e4809563-3f03-4361-9794-87f5705115b8" (UID: "e4809563-3f03-4361-9794-87f5705115b8"). InnerVolumeSpecName "kube-api-access-wsxdw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 19:10:00 crc kubenswrapper[4853]: I0127 19:10:00.083143 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e4809563-3f03-4361-9794-87f5705115b8-inventory" (OuterVolumeSpecName: "inventory") pod "e4809563-3f03-4361-9794-87f5705115b8" (UID: "e4809563-3f03-4361-9794-87f5705115b8"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 19:10:00 crc kubenswrapper[4853]: I0127 19:10:00.083161 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e4809563-3f03-4361-9794-87f5705115b8-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "e4809563-3f03-4361-9794-87f5705115b8" (UID: "e4809563-3f03-4361-9794-87f5705115b8"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 19:10:00 crc kubenswrapper[4853]: I0127 19:10:00.155324 4853 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e4809563-3f03-4361-9794-87f5705115b8-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 27 19:10:00 crc kubenswrapper[4853]: I0127 19:10:00.155358 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wsxdw\" (UniqueName: \"kubernetes.io/projected/e4809563-3f03-4361-9794-87f5705115b8-kube-api-access-wsxdw\") on node \"crc\" DevicePath \"\"" Jan 27 19:10:00 crc kubenswrapper[4853]: I0127 19:10:00.155368 4853 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e4809563-3f03-4361-9794-87f5705115b8-inventory\") on node \"crc\" DevicePath \"\"" Jan 27 19:10:00 crc kubenswrapper[4853]: I0127 19:10:00.454448 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-szg9m" event={"ID":"e4809563-3f03-4361-9794-87f5705115b8","Type":"ContainerDied","Data":"d190dcb4b4279d0100dbd868bd042b91d31f7e4f87a62e8b62060be26f520218"} Jan 27 19:10:00 crc kubenswrapper[4853]: I0127 19:10:00.454500 4853 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d190dcb4b4279d0100dbd868bd042b91d31f7e4f87a62e8b62060be26f520218" Jan 27 19:10:00 crc kubenswrapper[4853]: I0127 19:10:00.454514 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-szg9m" Jan 27 19:10:00 crc kubenswrapper[4853]: I0127 19:10:00.548517 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-t84cd"] Jan 27 19:10:00 crc kubenswrapper[4853]: E0127 19:10:00.548941 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1b71cb17-4465-411d-8183-52a15577dbec" containerName="extract-utilities" Jan 27 19:10:00 crc kubenswrapper[4853]: I0127 19:10:00.548961 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="1b71cb17-4465-411d-8183-52a15577dbec" containerName="extract-utilities" Jan 27 19:10:00 crc kubenswrapper[4853]: E0127 19:10:00.548981 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1b71cb17-4465-411d-8183-52a15577dbec" containerName="registry-server" Jan 27 19:10:00 crc kubenswrapper[4853]: I0127 19:10:00.548988 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="1b71cb17-4465-411d-8183-52a15577dbec" containerName="registry-server" Jan 27 19:10:00 crc kubenswrapper[4853]: E0127 19:10:00.548996 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e4809563-3f03-4361-9794-87f5705115b8" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Jan 27 19:10:00 crc kubenswrapper[4853]: I0127 19:10:00.549003 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="e4809563-3f03-4361-9794-87f5705115b8" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Jan 27 19:10:00 crc kubenswrapper[4853]: E0127 19:10:00.549017 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1b71cb17-4465-411d-8183-52a15577dbec" containerName="extract-content" Jan 27 19:10:00 crc kubenswrapper[4853]: I0127 19:10:00.549024 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="1b71cb17-4465-411d-8183-52a15577dbec" containerName="extract-content" Jan 27 19:10:00 crc kubenswrapper[4853]: 
I0127 19:10:00.549228 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="e4809563-3f03-4361-9794-87f5705115b8" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Jan 27 19:10:00 crc kubenswrapper[4853]: I0127 19:10:00.549259 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="1b71cb17-4465-411d-8183-52a15577dbec" containerName="registry-server" Jan 27 19:10:00 crc kubenswrapper[4853]: I0127 19:10:00.549872 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-t84cd" Jan 27 19:10:00 crc kubenswrapper[4853]: I0127 19:10:00.554570 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 27 19:10:00 crc kubenswrapper[4853]: I0127 19:10:00.554626 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 27 19:10:00 crc kubenswrapper[4853]: I0127 19:10:00.554799 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-wn48z" Jan 27 19:10:00 crc kubenswrapper[4853]: I0127 19:10:00.554826 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 27 19:10:00 crc kubenswrapper[4853]: I0127 19:10:00.577431 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-t84cd"] Jan 27 19:10:00 crc kubenswrapper[4853]: I0127 19:10:00.665377 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gfnnk\" (UniqueName: \"kubernetes.io/projected/b48e7be3-8341-4d63-bb9e-3b665b27591b-kube-api-access-gfnnk\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-t84cd\" (UID: \"b48e7be3-8341-4d63-bb9e-3b665b27591b\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-t84cd" Jan 27 19:10:00 crc kubenswrapper[4853]: I0127 19:10:00.665435 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b48e7be3-8341-4d63-bb9e-3b665b27591b-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-t84cd\" (UID: \"b48e7be3-8341-4d63-bb9e-3b665b27591b\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-t84cd" Jan 27 19:10:00 crc kubenswrapper[4853]: I0127 19:10:00.665661 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b48e7be3-8341-4d63-bb9e-3b665b27591b-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-t84cd\" (UID: \"b48e7be3-8341-4d63-bb9e-3b665b27591b\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-t84cd" Jan 27 19:10:00 crc kubenswrapper[4853]: I0127 19:10:00.767695 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b48e7be3-8341-4d63-bb9e-3b665b27591b-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-t84cd\" (UID: \"b48e7be3-8341-4d63-bb9e-3b665b27591b\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-t84cd" Jan 27 19:10:00 crc kubenswrapper[4853]: I0127 19:10:00.767830 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-gfnnk\" (UniqueName: \"kubernetes.io/projected/b48e7be3-8341-4d63-bb9e-3b665b27591b-kube-api-access-gfnnk\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-t84cd\" (UID: \"b48e7be3-8341-4d63-bb9e-3b665b27591b\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-t84cd" Jan 27 19:10:00 crc kubenswrapper[4853]: I0127 19:10:00.767865 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b48e7be3-8341-4d63-bb9e-3b665b27591b-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-t84cd\" (UID: \"b48e7be3-8341-4d63-bb9e-3b665b27591b\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-t84cd" Jan 27 19:10:00 crc kubenswrapper[4853]: I0127 19:10:00.778025 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b48e7be3-8341-4d63-bb9e-3b665b27591b-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-t84cd\" (UID: \"b48e7be3-8341-4d63-bb9e-3b665b27591b\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-t84cd" Jan 27 19:10:00 crc kubenswrapper[4853]: I0127 19:10:00.778026 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b48e7be3-8341-4d63-bb9e-3b665b27591b-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-t84cd\" (UID: \"b48e7be3-8341-4d63-bb9e-3b665b27591b\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-t84cd" Jan 27 19:10:00 crc kubenswrapper[4853]: I0127 19:10:00.787723 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gfnnk\" (UniqueName: \"kubernetes.io/projected/b48e7be3-8341-4d63-bb9e-3b665b27591b-kube-api-access-gfnnk\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-t84cd\" (UID: \"b48e7be3-8341-4d63-bb9e-3b665b27591b\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-t84cd" Jan 27 19:10:00 crc kubenswrapper[4853]: I0127 19:10:00.875416 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-t84cd" Jan 27 19:10:01 crc kubenswrapper[4853]: I0127 19:10:01.434183 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-t84cd"] Jan 27 19:10:01 crc kubenswrapper[4853]: I0127 19:10:01.464686 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-t84cd" event={"ID":"b48e7be3-8341-4d63-bb9e-3b665b27591b","Type":"ContainerStarted","Data":"0a2542a9f05e4d2b84574083fdf05b00994ccf5dff1836be76f6df193e720aa6"} Jan 27 19:10:02 crc kubenswrapper[4853]: I0127 19:10:02.474300 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-t84cd" event={"ID":"b48e7be3-8341-4d63-bb9e-3b665b27591b","Type":"ContainerStarted","Data":"3f3a648a32c865c7cf06fad75870555d0bff6db1202127c29a7bd5a639f36767"} Jan 27 19:10:02 crc kubenswrapper[4853]: I0127 19:10:02.507466 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-t84cd" podStartSLOduration=2.044492386 podStartE2EDuration="2.507445894s" podCreationTimestamp="2026-01-27 19:10:00 +0000 UTC" firstStartedPulling="2026-01-27 19:10:01.443938319 +0000 UTC m=+1643.906481202" lastFinishedPulling="2026-01-27 19:10:01.906891827 +0000 UTC m=+1644.369434710" observedRunningTime="2026-01-27 19:10:02.487854362 +0000 UTC m=+1644.950397235" watchObservedRunningTime="2026-01-27 19:10:02.507445894 +0000 UTC m=+1644.969988777" Jan 27 19:10:10 crc kubenswrapper[4853]: I0127 19:10:10.112895 4853 scope.go:117] "RemoveContainer" containerID="9543b1260c6f66dc5f3c337d6a1dde47109f864fb66397b4f5b0356952eb44b1" Jan 27 19:10:10 crc kubenswrapper[4853]: E0127 19:10:10.113880 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6gqj2_openshift-machine-config-operator(b8a89b1e-bef8-4cb7-930c-480d3125778c)\"" pod="openshift-machine-config-operator/machine-config-daemon-6gqj2" podUID="b8a89b1e-bef8-4cb7-930c-480d3125778c" Jan 27 19:10:12 crc kubenswrapper[4853]: I0127 19:10:12.046249 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-sync-9bclj"] Jan 27 19:10:12 crc kubenswrapper[4853]: I0127 19:10:12.056083 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-sync-9bclj"] Jan 27 19:10:12 crc kubenswrapper[4853]: I0127 19:10:12.123442 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8e865945-20c8-4b2d-a52b-62dd1450181b" path="/var/lib/kubelet/pods/8e865945-20c8-4b2d-a52b-62dd1450181b/volumes" Jan 27 19:10:21 crc kubenswrapper[4853]: I0127 19:10:21.113026 4853 scope.go:117] "RemoveContainer" containerID="9543b1260c6f66dc5f3c337d6a1dde47109f864fb66397b4f5b0356952eb44b1" Jan 27 19:10:21 crc kubenswrapper[4853]: E0127 19:10:21.114487 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6gqj2_openshift-machine-config-operator(b8a89b1e-bef8-4cb7-930c-480d3125778c)\"" pod="openshift-machine-config-operator/machine-config-daemon-6gqj2" podUID="b8a89b1e-bef8-4cb7-930c-480d3125778c" 
Jan 27 19:10:21 crc kubenswrapper[4853]: I0127 19:10:21.394421 4853 scope.go:117] "RemoveContainer" containerID="e27540d77a4365a2d956743cf62b9896a8be95c214116ad9409a586b4f68e375" Jan 27 19:10:21 crc kubenswrapper[4853]: I0127 19:10:21.416839 4853 scope.go:117] "RemoveContainer" containerID="72f8704515accc7e94f5ee597aea700d0822e5bfa077b1dfd32254b94ad59eac" Jan 27 19:10:21 crc kubenswrapper[4853]: I0127 19:10:21.487169 4853 scope.go:117] "RemoveContainer" containerID="7977e4339b32d6f810f532e9e707fe5b75af47a06d4dfa33f7dba4b304dc2ccb" Jan 27 19:10:21 crc kubenswrapper[4853]: I0127 19:10:21.522418 4853 scope.go:117] "RemoveContainer" containerID="e7ef370f5c0148d0a484beb9a0a87fcffadf5e7c6eee89e34ce7f539d156a007" Jan 27 19:10:21 crc kubenswrapper[4853]: I0127 19:10:21.577066 4853 scope.go:117] "RemoveContainer" containerID="324972faa9f884817eb56c41d9903a1f462a8f87b8c7bdc2c90efba6f4a3ca77" Jan 27 19:10:21 crc kubenswrapper[4853]: I0127 19:10:21.600389 4853 scope.go:117] "RemoveContainer" containerID="a61ddbb1693f2925428b27552e31b531d897249ecdc1762a39ad6bbc9f380d5d" Jan 27 19:10:21 crc kubenswrapper[4853]: I0127 19:10:21.643472 4853 scope.go:117] "RemoveContainer" containerID="d0e7ae2912174648fc97eb8d5d5d8b251b773709670a0698a55f108ee079f000" Jan 27 19:10:21 crc kubenswrapper[4853]: I0127 19:10:21.679394 4853 scope.go:117] "RemoveContainer" containerID="a0a9992d6dddcf961c34e75bb4eb48ef1eda0ccbd08b79240ae3c8208b4b7c55" Jan 27 19:10:21 crc kubenswrapper[4853]: I0127 19:10:21.701350 4853 scope.go:117] "RemoveContainer" containerID="ab2dfdc4e62727303feef23135af719101525667d4c3d0f07869b9d505457de4" Jan 27 19:10:24 crc kubenswrapper[4853]: I0127 19:10:24.048709 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-sync-nzrdc"] Jan 27 19:10:24 crc kubenswrapper[4853]: I0127 19:10:24.057892 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-sync-nzrdc"] Jan 27 19:10:24 crc kubenswrapper[4853]: I0127 19:10:24.124072 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3bb20c48-23bc-4c0d-92de-f87015fac932" path="/var/lib/kubelet/pods/3bb20c48-23bc-4c0d-92de-f87015fac932/volumes" Jan 27 19:10:29 crc kubenswrapper[4853]: I0127 19:10:29.032085 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-928cv"] Jan 27 19:10:29 crc kubenswrapper[4853]: I0127 19:10:29.042921 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-sync-b7gbn"] Jan 27 19:10:29 crc kubenswrapper[4853]: I0127 19:10:29.054227 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-928cv"] Jan 27 19:10:29 crc kubenswrapper[4853]: I0127 19:10:29.062688 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-sync-b7gbn"] Jan 27 19:10:30 crc kubenswrapper[4853]: I0127 19:10:30.123628 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8ae89dc3-4a08-42bd-a234-b5e8f948dc23" path="/var/lib/kubelet/pods/8ae89dc3-4a08-42bd-a234-b5e8f948dc23/volumes" Jan 27 19:10:30 crc kubenswrapper[4853]: I0127 19:10:30.124873 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e0dddcf5-0747-4132-b14f-f67160ca5f27" path="/var/lib/kubelet/pods/e0dddcf5-0747-4132-b14f-f67160ca5f27/volumes" Jan 27 19:10:36 crc kubenswrapper[4853]: I0127 19:10:36.112543 4853 scope.go:117] "RemoveContainer" containerID="9543b1260c6f66dc5f3c337d6a1dde47109f864fb66397b4f5b0356952eb44b1" Jan 27 19:10:36 crc kubenswrapper[4853]: E0127 
19:10:36.113341 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6gqj2_openshift-machine-config-operator(b8a89b1e-bef8-4cb7-930c-480d3125778c)\"" pod="openshift-machine-config-operator/machine-config-daemon-6gqj2" podUID="b8a89b1e-bef8-4cb7-930c-480d3125778c" Jan 27 19:10:47 crc kubenswrapper[4853]: I0127 19:10:47.112904 4853 scope.go:117] "RemoveContainer" containerID="9543b1260c6f66dc5f3c337d6a1dde47109f864fb66397b4f5b0356952eb44b1" Jan 27 19:10:47 crc kubenswrapper[4853]: E0127 19:10:47.113620 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6gqj2_openshift-machine-config-operator(b8a89b1e-bef8-4cb7-930c-480d3125778c)\"" pod="openshift-machine-config-operator/machine-config-daemon-6gqj2" podUID="b8a89b1e-bef8-4cb7-930c-480d3125778c" Jan 27 19:10:49 crc kubenswrapper[4853]: I0127 19:10:49.046336 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-sync-rfjdk"] Jan 27 19:10:49 crc kubenswrapper[4853]: I0127 19:10:49.058413 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-sync-rfjdk"] Jan 27 19:10:50 crc kubenswrapper[4853]: I0127 19:10:50.125562 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b1d33900-476d-4c86-a501-4490c01000ca" path="/var/lib/kubelet/pods/b1d33900-476d-4c86-a501-4490c01000ca/volumes" Jan 27 19:11:01 crc kubenswrapper[4853]: I0127 19:11:01.112534 4853 scope.go:117] "RemoveContainer" containerID="9543b1260c6f66dc5f3c337d6a1dde47109f864fb66397b4f5b0356952eb44b1" Jan 27 19:11:01 crc kubenswrapper[4853]: E0127 19:11:01.113221 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6gqj2_openshift-machine-config-operator(b8a89b1e-bef8-4cb7-930c-480d3125778c)\"" pod="openshift-machine-config-operator/machine-config-daemon-6gqj2" podUID="b8a89b1e-bef8-4cb7-930c-480d3125778c" Jan 27 19:11:14 crc kubenswrapper[4853]: I0127 19:11:14.112416 4853 scope.go:117] "RemoveContainer" containerID="9543b1260c6f66dc5f3c337d6a1dde47109f864fb66397b4f5b0356952eb44b1" Jan 27 19:11:14 crc kubenswrapper[4853]: E0127 19:11:14.113733 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6gqj2_openshift-machine-config-operator(b8a89b1e-bef8-4cb7-930c-480d3125778c)\"" pod="openshift-machine-config-operator/machine-config-daemon-6gqj2" podUID="b8a89b1e-bef8-4cb7-930c-480d3125778c" Jan 27 19:11:17 crc kubenswrapper[4853]: I0127 19:11:17.135203 4853 generic.go:334] "Generic (PLEG): container finished" podID="b48e7be3-8341-4d63-bb9e-3b665b27591b" containerID="3f3a648a32c865c7cf06fad75870555d0bff6db1202127c29a7bd5a639f36767" exitCode=0 Jan 27 19:11:17 crc kubenswrapper[4853]: I0127 19:11:17.135283 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-t84cd" 
event={"ID":"b48e7be3-8341-4d63-bb9e-3b665b27591b","Type":"ContainerDied","Data":"3f3a648a32c865c7cf06fad75870555d0bff6db1202127c29a7bd5a639f36767"} Jan 27 19:11:18 crc kubenswrapper[4853]: I0127 19:11:18.555916 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-t84cd" Jan 27 19:11:18 crc kubenswrapper[4853]: I0127 19:11:18.685930 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b48e7be3-8341-4d63-bb9e-3b665b27591b-ssh-key-openstack-edpm-ipam\") pod \"b48e7be3-8341-4d63-bb9e-3b665b27591b\" (UID: \"b48e7be3-8341-4d63-bb9e-3b665b27591b\") " Jan 27 19:11:18 crc kubenswrapper[4853]: I0127 19:11:18.686043 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gfnnk\" (UniqueName: \"kubernetes.io/projected/b48e7be3-8341-4d63-bb9e-3b665b27591b-kube-api-access-gfnnk\") pod \"b48e7be3-8341-4d63-bb9e-3b665b27591b\" (UID: \"b48e7be3-8341-4d63-bb9e-3b665b27591b\") " Jan 27 19:11:18 crc kubenswrapper[4853]: I0127 19:11:18.686237 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b48e7be3-8341-4d63-bb9e-3b665b27591b-inventory\") pod \"b48e7be3-8341-4d63-bb9e-3b665b27591b\" (UID: \"b48e7be3-8341-4d63-bb9e-3b665b27591b\") " Jan 27 19:11:18 crc kubenswrapper[4853]: I0127 19:11:18.705519 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b48e7be3-8341-4d63-bb9e-3b665b27591b-kube-api-access-gfnnk" (OuterVolumeSpecName: "kube-api-access-gfnnk") pod "b48e7be3-8341-4d63-bb9e-3b665b27591b" (UID: "b48e7be3-8341-4d63-bb9e-3b665b27591b"). InnerVolumeSpecName "kube-api-access-gfnnk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 19:11:18 crc kubenswrapper[4853]: I0127 19:11:18.751175 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b48e7be3-8341-4d63-bb9e-3b665b27591b-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "b48e7be3-8341-4d63-bb9e-3b665b27591b" (UID: "b48e7be3-8341-4d63-bb9e-3b665b27591b"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 19:11:18 crc kubenswrapper[4853]: I0127 19:11:18.773930 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b48e7be3-8341-4d63-bb9e-3b665b27591b-inventory" (OuterVolumeSpecName: "inventory") pod "b48e7be3-8341-4d63-bb9e-3b665b27591b" (UID: "b48e7be3-8341-4d63-bb9e-3b665b27591b"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 19:11:18 crc kubenswrapper[4853]: I0127 19:11:18.788883 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gfnnk\" (UniqueName: \"kubernetes.io/projected/b48e7be3-8341-4d63-bb9e-3b665b27591b-kube-api-access-gfnnk\") on node \"crc\" DevicePath \"\"" Jan 27 19:11:18 crc kubenswrapper[4853]: I0127 19:11:18.788914 4853 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b48e7be3-8341-4d63-bb9e-3b665b27591b-inventory\") on node \"crc\" DevicePath \"\"" Jan 27 19:11:18 crc kubenswrapper[4853]: I0127 19:11:18.788925 4853 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b48e7be3-8341-4d63-bb9e-3b665b27591b-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 27 19:11:19 crc kubenswrapper[4853]: I0127 19:11:19.154236 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-t84cd" event={"ID":"b48e7be3-8341-4d63-bb9e-3b665b27591b","Type":"ContainerDied","Data":"0a2542a9f05e4d2b84574083fdf05b00994ccf5dff1836be76f6df193e720aa6"} Jan 27 19:11:19 crc kubenswrapper[4853]: I0127 19:11:19.154280 4853 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0a2542a9f05e4d2b84574083fdf05b00994ccf5dff1836be76f6df193e720aa6" Jan 27 19:11:19 crc kubenswrapper[4853]: I0127 19:11:19.154345 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-t84cd" Jan 27 19:11:19 crc kubenswrapper[4853]: I0127 19:11:19.237684 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-vwklk"] Jan 27 19:11:19 crc kubenswrapper[4853]: E0127 19:11:19.238188 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b48e7be3-8341-4d63-bb9e-3b665b27591b" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Jan 27 19:11:19 crc kubenswrapper[4853]: I0127 19:11:19.238205 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="b48e7be3-8341-4d63-bb9e-3b665b27591b" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Jan 27 19:11:19 crc kubenswrapper[4853]: I0127 19:11:19.238386 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="b48e7be3-8341-4d63-bb9e-3b665b27591b" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Jan 27 19:11:19 crc kubenswrapper[4853]: I0127 19:11:19.239369 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-vwklk" Jan 27 19:11:19 crc kubenswrapper[4853]: I0127 19:11:19.242044 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 27 19:11:19 crc kubenswrapper[4853]: I0127 19:11:19.242844 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 27 19:11:19 crc kubenswrapper[4853]: I0127 19:11:19.243735 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 27 19:11:19 crc kubenswrapper[4853]: I0127 19:11:19.244371 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-wn48z" Jan 27 19:11:19 crc kubenswrapper[4853]: I0127 19:11:19.263450 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-vwklk"] Jan 27 19:11:19 crc kubenswrapper[4853]: I0127 19:11:19.301835 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/51db77a7-69eb-4145-b87c-abfbb514f2c7-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-vwklk\" (UID: \"51db77a7-69eb-4145-b87c-abfbb514f2c7\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-vwklk" Jan 27 19:11:19 crc kubenswrapper[4853]: I0127 19:11:19.301908 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kdc5k\" (UniqueName: \"kubernetes.io/projected/51db77a7-69eb-4145-b87c-abfbb514f2c7-kube-api-access-kdc5k\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-vwklk\" (UID: \"51db77a7-69eb-4145-b87c-abfbb514f2c7\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-vwklk" Jan 27 19:11:19 crc kubenswrapper[4853]: I0127 19:11:19.302315 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/51db77a7-69eb-4145-b87c-abfbb514f2c7-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-vwklk\" (UID: \"51db77a7-69eb-4145-b87c-abfbb514f2c7\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-vwklk" Jan 27 19:11:19 crc kubenswrapper[4853]: I0127 19:11:19.403676 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/51db77a7-69eb-4145-b87c-abfbb514f2c7-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-vwklk\" (UID: \"51db77a7-69eb-4145-b87c-abfbb514f2c7\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-vwklk" Jan 27 19:11:19 crc kubenswrapper[4853]: I0127 19:11:19.403726 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kdc5k\" (UniqueName: \"kubernetes.io/projected/51db77a7-69eb-4145-b87c-abfbb514f2c7-kube-api-access-kdc5k\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-vwklk\" (UID: \"51db77a7-69eb-4145-b87c-abfbb514f2c7\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-vwklk" Jan 27 19:11:19 crc kubenswrapper[4853]: I0127 19:11:19.403847 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/51db77a7-69eb-4145-b87c-abfbb514f2c7-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-vwklk\" (UID: \"51db77a7-69eb-4145-b87c-abfbb514f2c7\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-vwklk" Jan 27 19:11:19 crc kubenswrapper[4853]: I0127 19:11:19.408275 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/51db77a7-69eb-4145-b87c-abfbb514f2c7-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-vwklk\" (UID: \"51db77a7-69eb-4145-b87c-abfbb514f2c7\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-vwklk" Jan 27 19:11:19 crc kubenswrapper[4853]: I0127 19:11:19.408434 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/51db77a7-69eb-4145-b87c-abfbb514f2c7-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-vwklk\" (UID: \"51db77a7-69eb-4145-b87c-abfbb514f2c7\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-vwklk" Jan 27 19:11:19 crc kubenswrapper[4853]: I0127 19:11:19.427146 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kdc5k\" (UniqueName: \"kubernetes.io/projected/51db77a7-69eb-4145-b87c-abfbb514f2c7-kube-api-access-kdc5k\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-vwklk\" (UID: \"51db77a7-69eb-4145-b87c-abfbb514f2c7\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-vwklk" Jan 27 19:11:19 crc kubenswrapper[4853]: I0127 19:11:19.563875 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-vwklk" Jan 27 19:11:20 crc kubenswrapper[4853]: I0127 19:11:20.076742 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-vwklk"] Jan 27 19:11:20 crc kubenswrapper[4853]: I0127 19:11:20.166057 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-vwklk" event={"ID":"51db77a7-69eb-4145-b87c-abfbb514f2c7","Type":"ContainerStarted","Data":"f3e3b5b2c8773e9bf4365e9bebcd221302760d966e37096f05d2ed6555289e8e"} Jan 27 19:11:21 crc kubenswrapper[4853]: I0127 19:11:21.175911 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-vwklk" event={"ID":"51db77a7-69eb-4145-b87c-abfbb514f2c7","Type":"ContainerStarted","Data":"2a414a305244541f106eeda57d22f3d7d627147f57d39eacc44831241a7b9b8e"} Jan 27 19:11:21 crc kubenswrapper[4853]: I0127 19:11:21.203551 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-vwklk" podStartSLOduration=1.73458208 podStartE2EDuration="2.203530785s" podCreationTimestamp="2026-01-27 19:11:19 +0000 UTC" firstStartedPulling="2026-01-27 19:11:20.083912387 +0000 UTC m=+1722.546455270" lastFinishedPulling="2026-01-27 19:11:20.552861092 +0000 UTC m=+1723.015403975" observedRunningTime="2026-01-27 19:11:21.193143935 +0000 UTC m=+1723.655686838" watchObservedRunningTime="2026-01-27 19:11:21.203530785 +0000 UTC m=+1723.666073668" Jan 27 19:11:21 crc kubenswrapper[4853]: I0127 19:11:21.883716 4853 scope.go:117] "RemoveContainer" containerID="42c545f9f78b908ce08838b23cd42d672651aed1a85a7c0cb36a4907f5cc18d2" Jan 27 
19:11:21 crc kubenswrapper[4853]: I0127 19:11:21.934014 4853 scope.go:117] "RemoveContainer" containerID="91ec58dd7d60be51dc68e2c014bbea8e37215eaabc39edde7c5169a3427f51fa" Jan 27 19:11:22 crc kubenswrapper[4853]: I0127 19:11:22.007900 4853 scope.go:117] "RemoveContainer" containerID="cc8940826dd753b567ef87e5a883ab069a721e6a3fd23c08bae813a1448c0845" Jan 27 19:11:22 crc kubenswrapper[4853]: I0127 19:11:22.044909 4853 scope.go:117] "RemoveContainer" containerID="e4505d967429000bedd61103c44aeb6c70797166282a33d15cab46e18f4ac744" Jan 27 19:11:24 crc kubenswrapper[4853]: I0127 19:11:24.104085 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-4157-account-create-update-nqmcn"] Jan 27 19:11:24 crc kubenswrapper[4853]: I0127 19:11:24.125491 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-4157-account-create-update-nqmcn"] Jan 27 19:11:25 crc kubenswrapper[4853]: I0127 19:11:25.034175 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-db-create-cj6jf"] Jan 27 19:11:25 crc kubenswrapper[4853]: I0127 19:11:25.042910 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-db-create-cj6jf"] Jan 27 19:11:25 crc kubenswrapper[4853]: I0127 19:11:25.051000 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-b463-account-create-update-4drwb"] Jan 27 19:11:25 crc kubenswrapper[4853]: I0127 19:11:25.058793 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-27f9-account-create-update-wt8ts"] Jan 27 19:11:25 crc kubenswrapper[4853]: I0127 19:11:25.067063 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-b463-account-create-update-4drwb"] Jan 27 19:11:25 crc kubenswrapper[4853]: I0127 19:11:25.078170 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-27f9-account-create-update-wt8ts"] Jan 27 19:11:25 crc kubenswrapper[4853]: I0127 19:11:25.087630 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-db-create-h4s4v"] Jan 27 19:11:25 crc kubenswrapper[4853]: I0127 19:11:25.097860 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-db-create-h4s4v"] Jan 27 19:11:25 crc kubenswrapper[4853]: I0127 19:11:25.104933 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-db-create-xgbwz"] Jan 27 19:11:25 crc kubenswrapper[4853]: I0127 19:11:25.112410 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-db-create-xgbwz"] Jan 27 19:11:26 crc kubenswrapper[4853]: I0127 19:11:26.124007 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1169617c-cfd9-438b-ac93-a636384abe7c" path="/var/lib/kubelet/pods/1169617c-cfd9-438b-ac93-a636384abe7c/volumes" Jan 27 19:11:26 crc kubenswrapper[4853]: I0127 19:11:26.125039 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="15ceb016-348f-4b14-9f21-11d533ad51ee" path="/var/lib/kubelet/pods/15ceb016-348f-4b14-9f21-11d533ad51ee/volumes" Jan 27 19:11:26 crc kubenswrapper[4853]: I0127 19:11:26.125732 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="63cae008-ec5c-4e56-907b-84e3dfa274e2" path="/var/lib/kubelet/pods/63cae008-ec5c-4e56-907b-84e3dfa274e2/volumes" Jan 27 19:11:26 crc kubenswrapper[4853]: I0127 19:11:26.126460 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6edf7e9f-ea9b-4044-8aaf-ad9de78dcab6" 
path="/var/lib/kubelet/pods/6edf7e9f-ea9b-4044-8aaf-ad9de78dcab6/volumes" Jan 27 19:11:26 crc kubenswrapper[4853]: I0127 19:11:26.127672 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b989c118-b790-4364-8452-a6f3e2fa75d5" path="/var/lib/kubelet/pods/b989c118-b790-4364-8452-a6f3e2fa75d5/volumes" Jan 27 19:11:26 crc kubenswrapper[4853]: I0127 19:11:26.128313 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cbee157b-ef42-498f-97a0-e8159be13fef" path="/var/lib/kubelet/pods/cbee157b-ef42-498f-97a0-e8159be13fef/volumes" Jan 27 19:11:26 crc kubenswrapper[4853]: I0127 19:11:26.237450 4853 generic.go:334] "Generic (PLEG): container finished" podID="51db77a7-69eb-4145-b87c-abfbb514f2c7" containerID="2a414a305244541f106eeda57d22f3d7d627147f57d39eacc44831241a7b9b8e" exitCode=0 Jan 27 19:11:26 crc kubenswrapper[4853]: I0127 19:11:26.237494 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-vwklk" event={"ID":"51db77a7-69eb-4145-b87c-abfbb514f2c7","Type":"ContainerDied","Data":"2a414a305244541f106eeda57d22f3d7d627147f57d39eacc44831241a7b9b8e"} Jan 27 19:11:27 crc kubenswrapper[4853]: I0127 19:11:27.112807 4853 scope.go:117] "RemoveContainer" containerID="9543b1260c6f66dc5f3c337d6a1dde47109f864fb66397b4f5b0356952eb44b1" Jan 27 19:11:27 crc kubenswrapper[4853]: E0127 19:11:27.113448 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6gqj2_openshift-machine-config-operator(b8a89b1e-bef8-4cb7-930c-480d3125778c)\"" pod="openshift-machine-config-operator/machine-config-daemon-6gqj2" podUID="b8a89b1e-bef8-4cb7-930c-480d3125778c" Jan 27 19:11:27 crc kubenswrapper[4853]: I0127 19:11:27.660703 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-vwklk" Jan 27 19:11:27 crc kubenswrapper[4853]: I0127 19:11:27.776103 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kdc5k\" (UniqueName: \"kubernetes.io/projected/51db77a7-69eb-4145-b87c-abfbb514f2c7-kube-api-access-kdc5k\") pod \"51db77a7-69eb-4145-b87c-abfbb514f2c7\" (UID: \"51db77a7-69eb-4145-b87c-abfbb514f2c7\") " Jan 27 19:11:27 crc kubenswrapper[4853]: I0127 19:11:27.776313 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/51db77a7-69eb-4145-b87c-abfbb514f2c7-ssh-key-openstack-edpm-ipam\") pod \"51db77a7-69eb-4145-b87c-abfbb514f2c7\" (UID: \"51db77a7-69eb-4145-b87c-abfbb514f2c7\") " Jan 27 19:11:27 crc kubenswrapper[4853]: I0127 19:11:27.776591 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/51db77a7-69eb-4145-b87c-abfbb514f2c7-inventory\") pod \"51db77a7-69eb-4145-b87c-abfbb514f2c7\" (UID: \"51db77a7-69eb-4145-b87c-abfbb514f2c7\") " Jan 27 19:11:27 crc kubenswrapper[4853]: I0127 19:11:27.781720 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/51db77a7-69eb-4145-b87c-abfbb514f2c7-kube-api-access-kdc5k" (OuterVolumeSpecName: "kube-api-access-kdc5k") pod "51db77a7-69eb-4145-b87c-abfbb514f2c7" (UID: "51db77a7-69eb-4145-b87c-abfbb514f2c7"). 
InnerVolumeSpecName "kube-api-access-kdc5k". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 19:11:27 crc kubenswrapper[4853]: I0127 19:11:27.802905 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/51db77a7-69eb-4145-b87c-abfbb514f2c7-inventory" (OuterVolumeSpecName: "inventory") pod "51db77a7-69eb-4145-b87c-abfbb514f2c7" (UID: "51db77a7-69eb-4145-b87c-abfbb514f2c7"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 19:11:27 crc kubenswrapper[4853]: I0127 19:11:27.805389 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/51db77a7-69eb-4145-b87c-abfbb514f2c7-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "51db77a7-69eb-4145-b87c-abfbb514f2c7" (UID: "51db77a7-69eb-4145-b87c-abfbb514f2c7"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 19:11:27 crc kubenswrapper[4853]: I0127 19:11:27.879666 4853 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/51db77a7-69eb-4145-b87c-abfbb514f2c7-inventory\") on node \"crc\" DevicePath \"\"" Jan 27 19:11:27 crc kubenswrapper[4853]: I0127 19:11:27.879720 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kdc5k\" (UniqueName: \"kubernetes.io/projected/51db77a7-69eb-4145-b87c-abfbb514f2c7-kube-api-access-kdc5k\") on node \"crc\" DevicePath \"\"" Jan 27 19:11:27 crc kubenswrapper[4853]: I0127 19:11:27.879732 4853 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/51db77a7-69eb-4145-b87c-abfbb514f2c7-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 27 19:11:28 crc kubenswrapper[4853]: I0127 19:11:28.256103 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-vwklk" event={"ID":"51db77a7-69eb-4145-b87c-abfbb514f2c7","Type":"ContainerDied","Data":"f3e3b5b2c8773e9bf4365e9bebcd221302760d966e37096f05d2ed6555289e8e"} Jan 27 19:11:28 crc kubenswrapper[4853]: I0127 19:11:28.256162 4853 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f3e3b5b2c8773e9bf4365e9bebcd221302760d966e37096f05d2ed6555289e8e" Jan 27 19:11:28 crc kubenswrapper[4853]: I0127 19:11:28.256191 4853 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-vwklk" Jan 27 19:11:28 crc kubenswrapper[4853]: I0127 19:11:28.327308 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-dcnnf"] Jan 27 19:11:28 crc kubenswrapper[4853]: E0127 19:11:28.327841 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="51db77a7-69eb-4145-b87c-abfbb514f2c7" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Jan 27 19:11:28 crc kubenswrapper[4853]: I0127 19:11:28.327868 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="51db77a7-69eb-4145-b87c-abfbb514f2c7" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Jan 27 19:11:28 crc kubenswrapper[4853]: I0127 19:11:28.328203 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="51db77a7-69eb-4145-b87c-abfbb514f2c7" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Jan 27 19:11:28 crc kubenswrapper[4853]: I0127 19:11:28.328973 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-dcnnf" Jan 27 19:11:28 crc kubenswrapper[4853]: I0127 19:11:28.331907 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-wn48z" Jan 27 19:11:28 crc kubenswrapper[4853]: I0127 19:11:28.332111 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 27 19:11:28 crc kubenswrapper[4853]: I0127 19:11:28.332258 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 27 19:11:28 crc kubenswrapper[4853]: I0127 19:11:28.332370 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 27 19:11:28 crc kubenswrapper[4853]: I0127 19:11:28.342180 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-dcnnf"] Jan 27 19:11:28 crc kubenswrapper[4853]: I0127 19:11:28.491033 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f23ab0fa-bd1a-4494-a7fe-428f0b8ea536-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-dcnnf\" (UID: \"f23ab0fa-bd1a-4494-a7fe-428f0b8ea536\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-dcnnf" Jan 27 19:11:28 crc kubenswrapper[4853]: I0127 19:11:28.491577 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lrt4k\" (UniqueName: \"kubernetes.io/projected/f23ab0fa-bd1a-4494-a7fe-428f0b8ea536-kube-api-access-lrt4k\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-dcnnf\" (UID: \"f23ab0fa-bd1a-4494-a7fe-428f0b8ea536\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-dcnnf" Jan 27 19:11:28 crc kubenswrapper[4853]: I0127 19:11:28.491659 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f23ab0fa-bd1a-4494-a7fe-428f0b8ea536-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-dcnnf\" (UID: \"f23ab0fa-bd1a-4494-a7fe-428f0b8ea536\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-dcnnf" Jan 27 19:11:28 crc kubenswrapper[4853]: I0127 
19:11:28.593465 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lrt4k\" (UniqueName: \"kubernetes.io/projected/f23ab0fa-bd1a-4494-a7fe-428f0b8ea536-kube-api-access-lrt4k\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-dcnnf\" (UID: \"f23ab0fa-bd1a-4494-a7fe-428f0b8ea536\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-dcnnf" Jan 27 19:11:28 crc kubenswrapper[4853]: I0127 19:11:28.593534 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f23ab0fa-bd1a-4494-a7fe-428f0b8ea536-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-dcnnf\" (UID: \"f23ab0fa-bd1a-4494-a7fe-428f0b8ea536\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-dcnnf" Jan 27 19:11:28 crc kubenswrapper[4853]: I0127 19:11:28.593623 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f23ab0fa-bd1a-4494-a7fe-428f0b8ea536-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-dcnnf\" (UID: \"f23ab0fa-bd1a-4494-a7fe-428f0b8ea536\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-dcnnf" Jan 27 19:11:28 crc kubenswrapper[4853]: I0127 19:11:28.597448 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f23ab0fa-bd1a-4494-a7fe-428f0b8ea536-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-dcnnf\" (UID: \"f23ab0fa-bd1a-4494-a7fe-428f0b8ea536\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-dcnnf" Jan 27 19:11:28 crc kubenswrapper[4853]: I0127 19:11:28.597984 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f23ab0fa-bd1a-4494-a7fe-428f0b8ea536-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-dcnnf\" (UID: \"f23ab0fa-bd1a-4494-a7fe-428f0b8ea536\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-dcnnf" Jan 27 19:11:28 crc kubenswrapper[4853]: I0127 19:11:28.611452 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lrt4k\" (UniqueName: \"kubernetes.io/projected/f23ab0fa-bd1a-4494-a7fe-428f0b8ea536-kube-api-access-lrt4k\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-dcnnf\" (UID: \"f23ab0fa-bd1a-4494-a7fe-428f0b8ea536\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-dcnnf" Jan 27 19:11:28 crc kubenswrapper[4853]: I0127 19:11:28.645996 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-dcnnf" Jan 27 19:11:29 crc kubenswrapper[4853]: I0127 19:11:29.403918 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-dcnnf"] Jan 27 19:11:30 crc kubenswrapper[4853]: I0127 19:11:30.277407 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-dcnnf" event={"ID":"f23ab0fa-bd1a-4494-a7fe-428f0b8ea536","Type":"ContainerStarted","Data":"1af709d4967281f3589977ead1fefed84d84936c3f5f6f468af7f71251686209"} Jan 27 19:11:30 crc kubenswrapper[4853]: I0127 19:11:30.277849 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-dcnnf" event={"ID":"f23ab0fa-bd1a-4494-a7fe-428f0b8ea536","Type":"ContainerStarted","Data":"8b3742ba54081f3344f292fcd47cae42107e919d0d8a46ea1b102eaf652ab795"} Jan 27 19:11:30 crc kubenswrapper[4853]: I0127 19:11:30.300716 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-dcnnf" podStartSLOduration=1.908790834 podStartE2EDuration="2.300693364s" podCreationTimestamp="2026-01-27 19:11:28 +0000 UTC" firstStartedPulling="2026-01-27 19:11:29.415299571 +0000 UTC m=+1731.877842454" lastFinishedPulling="2026-01-27 19:11:29.807202101 +0000 UTC m=+1732.269744984" observedRunningTime="2026-01-27 19:11:30.293953659 +0000 UTC m=+1732.756496562" watchObservedRunningTime="2026-01-27 19:11:30.300693364 +0000 UTC m=+1732.763236247" Jan 27 19:11:40 crc kubenswrapper[4853]: I0127 19:11:40.113198 4853 scope.go:117] "RemoveContainer" containerID="9543b1260c6f66dc5f3c337d6a1dde47109f864fb66397b4f5b0356952eb44b1" Jan 27 19:11:40 crc kubenswrapper[4853]: E0127 19:11:40.113992 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6gqj2_openshift-machine-config-operator(b8a89b1e-bef8-4cb7-930c-480d3125778c)\"" pod="openshift-machine-config-operator/machine-config-daemon-6gqj2" podUID="b8a89b1e-bef8-4cb7-930c-480d3125778c" Jan 27 19:11:51 crc kubenswrapper[4853]: I0127 19:11:51.112453 4853 scope.go:117] "RemoveContainer" containerID="9543b1260c6f66dc5f3c337d6a1dde47109f864fb66397b4f5b0356952eb44b1" Jan 27 19:11:51 crc kubenswrapper[4853]: E0127 19:11:51.113072 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6gqj2_openshift-machine-config-operator(b8a89b1e-bef8-4cb7-930c-480d3125778c)\"" pod="openshift-machine-config-operator/machine-config-daemon-6gqj2" podUID="b8a89b1e-bef8-4cb7-930c-480d3125778c" Jan 27 19:11:56 crc kubenswrapper[4853]: I0127 19:11:56.040418 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-qsnwz"] Jan 27 19:11:56 crc kubenswrapper[4853]: I0127 19:11:56.046683 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-qsnwz"] Jan 27 19:11:56 crc kubenswrapper[4853]: I0127 19:11:56.125630 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5e6fa082-0473-46ce-815c-bee7d4d2903a" path="/var/lib/kubelet/pods/5e6fa082-0473-46ce-815c-bee7d4d2903a/volumes" Jan 27 
19:12:03 crc kubenswrapper[4853]: I0127 19:12:03.118967 4853 scope.go:117] "RemoveContainer" containerID="9543b1260c6f66dc5f3c337d6a1dde47109f864fb66397b4f5b0356952eb44b1" Jan 27 19:12:03 crc kubenswrapper[4853]: E0127 19:12:03.121002 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6gqj2_openshift-machine-config-operator(b8a89b1e-bef8-4cb7-930c-480d3125778c)\"" pod="openshift-machine-config-operator/machine-config-daemon-6gqj2" podUID="b8a89b1e-bef8-4cb7-930c-480d3125778c" Jan 27 19:12:11 crc kubenswrapper[4853]: I0127 19:12:11.636472 4853 generic.go:334] "Generic (PLEG): container finished" podID="f23ab0fa-bd1a-4494-a7fe-428f0b8ea536" containerID="1af709d4967281f3589977ead1fefed84d84936c3f5f6f468af7f71251686209" exitCode=0 Jan 27 19:12:11 crc kubenswrapper[4853]: I0127 19:12:11.636580 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-dcnnf" event={"ID":"f23ab0fa-bd1a-4494-a7fe-428f0b8ea536","Type":"ContainerDied","Data":"1af709d4967281f3589977ead1fefed84d84936c3f5f6f468af7f71251686209"} Jan 27 19:12:13 crc kubenswrapper[4853]: I0127 19:12:13.104979 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-dcnnf" Jan 27 19:12:13 crc kubenswrapper[4853]: I0127 19:12:13.207604 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f23ab0fa-bd1a-4494-a7fe-428f0b8ea536-inventory\") pod \"f23ab0fa-bd1a-4494-a7fe-428f0b8ea536\" (UID: \"f23ab0fa-bd1a-4494-a7fe-428f0b8ea536\") " Jan 27 19:12:13 crc kubenswrapper[4853]: I0127 19:12:13.207858 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f23ab0fa-bd1a-4494-a7fe-428f0b8ea536-ssh-key-openstack-edpm-ipam\") pod \"f23ab0fa-bd1a-4494-a7fe-428f0b8ea536\" (UID: \"f23ab0fa-bd1a-4494-a7fe-428f0b8ea536\") " Jan 27 19:12:13 crc kubenswrapper[4853]: I0127 19:12:13.207893 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lrt4k\" (UniqueName: \"kubernetes.io/projected/f23ab0fa-bd1a-4494-a7fe-428f0b8ea536-kube-api-access-lrt4k\") pod \"f23ab0fa-bd1a-4494-a7fe-428f0b8ea536\" (UID: \"f23ab0fa-bd1a-4494-a7fe-428f0b8ea536\") " Jan 27 19:12:13 crc kubenswrapper[4853]: I0127 19:12:13.214864 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f23ab0fa-bd1a-4494-a7fe-428f0b8ea536-kube-api-access-lrt4k" (OuterVolumeSpecName: "kube-api-access-lrt4k") pod "f23ab0fa-bd1a-4494-a7fe-428f0b8ea536" (UID: "f23ab0fa-bd1a-4494-a7fe-428f0b8ea536"). InnerVolumeSpecName "kube-api-access-lrt4k". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 19:12:13 crc kubenswrapper[4853]: I0127 19:12:13.240249 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f23ab0fa-bd1a-4494-a7fe-428f0b8ea536-inventory" (OuterVolumeSpecName: "inventory") pod "f23ab0fa-bd1a-4494-a7fe-428f0b8ea536" (UID: "f23ab0fa-bd1a-4494-a7fe-428f0b8ea536"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 19:12:13 crc kubenswrapper[4853]: I0127 19:12:13.252755 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f23ab0fa-bd1a-4494-a7fe-428f0b8ea536-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "f23ab0fa-bd1a-4494-a7fe-428f0b8ea536" (UID: "f23ab0fa-bd1a-4494-a7fe-428f0b8ea536"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 19:12:13 crc kubenswrapper[4853]: I0127 19:12:13.312468 4853 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f23ab0fa-bd1a-4494-a7fe-428f0b8ea536-inventory\") on node \"crc\" DevicePath \"\"" Jan 27 19:12:13 crc kubenswrapper[4853]: I0127 19:12:13.312517 4853 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f23ab0fa-bd1a-4494-a7fe-428f0b8ea536-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 27 19:12:13 crc kubenswrapper[4853]: I0127 19:12:13.312537 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lrt4k\" (UniqueName: \"kubernetes.io/projected/f23ab0fa-bd1a-4494-a7fe-428f0b8ea536-kube-api-access-lrt4k\") on node \"crc\" DevicePath \"\"" Jan 27 19:12:13 crc kubenswrapper[4853]: I0127 19:12:13.662837 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-dcnnf" event={"ID":"f23ab0fa-bd1a-4494-a7fe-428f0b8ea536","Type":"ContainerDied","Data":"8b3742ba54081f3344f292fcd47cae42107e919d0d8a46ea1b102eaf652ab795"} Jan 27 19:12:13 crc kubenswrapper[4853]: I0127 19:12:13.663375 4853 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8b3742ba54081f3344f292fcd47cae42107e919d0d8a46ea1b102eaf652ab795" Jan 27 19:12:13 crc kubenswrapper[4853]: I0127 19:12:13.662919 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-dcnnf" Jan 27 19:12:13 crc kubenswrapper[4853]: I0127 19:12:13.775298 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-7n2cz"] Jan 27 19:12:13 crc kubenswrapper[4853]: E0127 19:12:13.776373 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f23ab0fa-bd1a-4494-a7fe-428f0b8ea536" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Jan 27 19:12:13 crc kubenswrapper[4853]: I0127 19:12:13.776422 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="f23ab0fa-bd1a-4494-a7fe-428f0b8ea536" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Jan 27 19:12:13 crc kubenswrapper[4853]: I0127 19:12:13.777097 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="f23ab0fa-bd1a-4494-a7fe-428f0b8ea536" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Jan 27 19:12:13 crc kubenswrapper[4853]: I0127 19:12:13.778839 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-7n2cz" Jan 27 19:12:13 crc kubenswrapper[4853]: I0127 19:12:13.786399 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-7n2cz"] Jan 27 19:12:13 crc kubenswrapper[4853]: I0127 19:12:13.814688 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-wn48z" Jan 27 19:12:13 crc kubenswrapper[4853]: I0127 19:12:13.816929 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 27 19:12:13 crc kubenswrapper[4853]: I0127 19:12:13.817000 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 27 19:12:13 crc kubenswrapper[4853]: I0127 19:12:13.817329 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 27 19:12:13 crc kubenswrapper[4853]: I0127 19:12:13.928435 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d3496093-310a-422a-a09c-d796470ad2c0-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-7n2cz\" (UID: \"d3496093-310a-422a-a09c-d796470ad2c0\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-7n2cz" Jan 27 19:12:13 crc kubenswrapper[4853]: I0127 19:12:13.928561 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d3496093-310a-422a-a09c-d796470ad2c0-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-7n2cz\" (UID: \"d3496093-310a-422a-a09c-d796470ad2c0\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-7n2cz" Jan 27 19:12:13 crc kubenswrapper[4853]: I0127 19:12:13.928660 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7p2bb\" (UniqueName: \"kubernetes.io/projected/d3496093-310a-422a-a09c-d796470ad2c0-kube-api-access-7p2bb\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-7n2cz\" (UID: \"d3496093-310a-422a-a09c-d796470ad2c0\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-7n2cz" Jan 27 19:12:14 crc kubenswrapper[4853]: I0127 19:12:14.032377 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d3496093-310a-422a-a09c-d796470ad2c0-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-7n2cz\" (UID: \"d3496093-310a-422a-a09c-d796470ad2c0\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-7n2cz" Jan 27 19:12:14 crc kubenswrapper[4853]: I0127 19:12:14.032481 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d3496093-310a-422a-a09c-d796470ad2c0-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-7n2cz\" (UID: \"d3496093-310a-422a-a09c-d796470ad2c0\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-7n2cz" Jan 27 19:12:14 crc kubenswrapper[4853]: I0127 19:12:14.032595 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7p2bb\" (UniqueName: 
\"kubernetes.io/projected/d3496093-310a-422a-a09c-d796470ad2c0-kube-api-access-7p2bb\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-7n2cz\" (UID: \"d3496093-310a-422a-a09c-d796470ad2c0\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-7n2cz" Jan 27 19:12:14 crc kubenswrapper[4853]: I0127 19:12:14.041686 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d3496093-310a-422a-a09c-d796470ad2c0-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-7n2cz\" (UID: \"d3496093-310a-422a-a09c-d796470ad2c0\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-7n2cz" Jan 27 19:12:14 crc kubenswrapper[4853]: I0127 19:12:14.042725 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d3496093-310a-422a-a09c-d796470ad2c0-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-7n2cz\" (UID: \"d3496093-310a-422a-a09c-d796470ad2c0\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-7n2cz" Jan 27 19:12:14 crc kubenswrapper[4853]: I0127 19:12:14.056674 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7p2bb\" (UniqueName: \"kubernetes.io/projected/d3496093-310a-422a-a09c-d796470ad2c0-kube-api-access-7p2bb\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-7n2cz\" (UID: \"d3496093-310a-422a-a09c-d796470ad2c0\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-7n2cz" Jan 27 19:12:14 crc kubenswrapper[4853]: I0127 19:12:14.116617 4853 scope.go:117] "RemoveContainer" containerID="9543b1260c6f66dc5f3c337d6a1dde47109f864fb66397b4f5b0356952eb44b1" Jan 27 19:12:14 crc kubenswrapper[4853]: E0127 19:12:14.117522 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6gqj2_openshift-machine-config-operator(b8a89b1e-bef8-4cb7-930c-480d3125778c)\"" pod="openshift-machine-config-operator/machine-config-daemon-6gqj2" podUID="b8a89b1e-bef8-4cb7-930c-480d3125778c" Jan 27 19:12:14 crc kubenswrapper[4853]: I0127 19:12:14.138180 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-7n2cz" Jan 27 19:12:14 crc kubenswrapper[4853]: I0127 19:12:14.664430 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-7n2cz"] Jan 27 19:12:14 crc kubenswrapper[4853]: I0127 19:12:14.674752 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-7n2cz" event={"ID":"d3496093-310a-422a-a09c-d796470ad2c0","Type":"ContainerStarted","Data":"39b7e82593aba640653d080fcebcf3aaaa1d9aed8ef489878a3fc780b24166e6"} Jan 27 19:12:15 crc kubenswrapper[4853]: I0127 19:12:15.696663 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-7n2cz" event={"ID":"d3496093-310a-422a-a09c-d796470ad2c0","Type":"ContainerStarted","Data":"44324a458faba3a1268f03c58c037db438303f454d10255c18097621cb72448c"} Jan 27 19:12:15 crc kubenswrapper[4853]: I0127 19:12:15.719303 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-7n2cz" podStartSLOduration=2.163375506 podStartE2EDuration="2.719273082s" podCreationTimestamp="2026-01-27 19:12:13 +0000 UTC" firstStartedPulling="2026-01-27 19:12:14.670090719 +0000 UTC m=+1777.132633612" lastFinishedPulling="2026-01-27 19:12:15.225988305 +0000 UTC m=+1777.688531188" observedRunningTime="2026-01-27 19:12:15.719112628 +0000 UTC m=+1778.181655511" watchObservedRunningTime="2026-01-27 19:12:15.719273082 +0000 UTC m=+1778.181815985" Jan 27 19:12:18 crc kubenswrapper[4853]: I0127 19:12:18.075796 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-cell-mapping-x47j6"] Jan 27 19:12:18 crc kubenswrapper[4853]: I0127 19:12:18.090261 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-cell-mapping-x47j6"] Jan 27 19:12:18 crc kubenswrapper[4853]: I0127 19:12:18.125729 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d74daf6-98b2-437c-8415-3053a40cedef" path="/var/lib/kubelet/pods/1d74daf6-98b2-437c-8415-3053a40cedef/volumes" Jan 27 19:12:21 crc kubenswrapper[4853]: I0127 19:12:21.049529 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-8nwnw"] Jan 27 19:12:21 crc kubenswrapper[4853]: I0127 19:12:21.060609 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-8nwnw"] Jan 27 19:12:22 crc kubenswrapper[4853]: I0127 19:12:22.125873 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e89f7e94-9d6b-4279-b0b8-a91c47c904c8" path="/var/lib/kubelet/pods/e89f7e94-9d6b-4279-b0b8-a91c47c904c8/volumes" Jan 27 19:12:22 crc kubenswrapper[4853]: I0127 19:12:22.187471 4853 scope.go:117] "RemoveContainer" containerID="7a9369de3a9d7cd8e7d1727e1e8b80da0db3ff0d5b8ee042a34f3d55e11cc52e" Jan 27 19:12:22 crc kubenswrapper[4853]: I0127 19:12:22.213682 4853 scope.go:117] "RemoveContainer" containerID="9dc4ee7658a4827b9f14f6839c7045d8c86db3a483e3c4c1d0bc1896f940cf2c" Jan 27 19:12:22 crc kubenswrapper[4853]: I0127 19:12:22.267057 4853 scope.go:117] "RemoveContainer" containerID="a98bd289b32697a16dcaded12814feb0a0d8d55feea0b3374af72e06573f88d1" Jan 27 19:12:22 crc kubenswrapper[4853]: I0127 19:12:22.315973 4853 scope.go:117] "RemoveContainer" containerID="781e6770921c04691dfc5dcee9c4ded83169edfe2b71a6aa6e7c0413eb09d83d" Jan 27 19:12:22 crc kubenswrapper[4853]: I0127 
19:12:22.362199 4853 scope.go:117] "RemoveContainer" containerID="9188f3bd1cadb16ee740a9cb756693c3b2a35b30a5d57553b81c9033d291ad75" Jan 27 19:12:22 crc kubenswrapper[4853]: I0127 19:12:22.450720 4853 scope.go:117] "RemoveContainer" containerID="52317a64e68a16c8b047ca13cad075818d9744ad71bd88af6176cdc349b71664" Jan 27 19:12:22 crc kubenswrapper[4853]: I0127 19:12:22.471602 4853 scope.go:117] "RemoveContainer" containerID="859b82117b3e0c722a80c5823aaa9a5e09579b4f7d3db5fa9a9e3ef29e92e982" Jan 27 19:12:22 crc kubenswrapper[4853]: I0127 19:12:22.501454 4853 scope.go:117] "RemoveContainer" containerID="e9081957ee7b8bfae736045631f2e30d73ff9f49beb4c6f1154fbcfcb2d3aaba" Jan 27 19:12:22 crc kubenswrapper[4853]: I0127 19:12:22.557600 4853 scope.go:117] "RemoveContainer" containerID="1d04d40a9e612f7b6bf82fe559b7182c47fa40be83ffd7fcf2a1b937dc51b61b" Jan 27 19:12:29 crc kubenswrapper[4853]: I0127 19:12:29.114547 4853 scope.go:117] "RemoveContainer" containerID="9543b1260c6f66dc5f3c337d6a1dde47109f864fb66397b4f5b0356952eb44b1" Jan 27 19:12:29 crc kubenswrapper[4853]: E0127 19:12:29.116387 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6gqj2_openshift-machine-config-operator(b8a89b1e-bef8-4cb7-930c-480d3125778c)\"" pod="openshift-machine-config-operator/machine-config-daemon-6gqj2" podUID="b8a89b1e-bef8-4cb7-930c-480d3125778c" Jan 27 19:12:44 crc kubenswrapper[4853]: I0127 19:12:44.112544 4853 scope.go:117] "RemoveContainer" containerID="9543b1260c6f66dc5f3c337d6a1dde47109f864fb66397b4f5b0356952eb44b1" Jan 27 19:12:44 crc kubenswrapper[4853]: E0127 19:12:44.113535 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6gqj2_openshift-machine-config-operator(b8a89b1e-bef8-4cb7-930c-480d3125778c)\"" pod="openshift-machine-config-operator/machine-config-daemon-6gqj2" podUID="b8a89b1e-bef8-4cb7-930c-480d3125778c" Jan 27 19:12:58 crc kubenswrapper[4853]: I0127 19:12:58.141377 4853 scope.go:117] "RemoveContainer" containerID="9543b1260c6f66dc5f3c337d6a1dde47109f864fb66397b4f5b0356952eb44b1" Jan 27 19:12:58 crc kubenswrapper[4853]: E0127 19:12:58.143079 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6gqj2_openshift-machine-config-operator(b8a89b1e-bef8-4cb7-930c-480d3125778c)\"" pod="openshift-machine-config-operator/machine-config-daemon-6gqj2" podUID="b8a89b1e-bef8-4cb7-930c-480d3125778c" Jan 27 19:13:03 crc kubenswrapper[4853]: I0127 19:13:03.053723 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-cell-mapping-8l4gk"] Jan 27 19:13:03 crc kubenswrapper[4853]: I0127 19:13:03.066735 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-cell-mapping-8l4gk"] Jan 27 19:13:04 crc kubenswrapper[4853]: I0127 19:13:04.129993 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8dc1a2d0-3e4a-4d4e-988e-96762f754b6a" path="/var/lib/kubelet/pods/8dc1a2d0-3e4a-4d4e-988e-96762f754b6a/volumes" Jan 27 19:13:10 crc kubenswrapper[4853]: I0127 19:13:10.113705 4853 scope.go:117] "RemoveContainer" 
containerID="9543b1260c6f66dc5f3c337d6a1dde47109f864fb66397b4f5b0356952eb44b1" Jan 27 19:13:11 crc kubenswrapper[4853]: I0127 19:13:11.283410 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6gqj2" event={"ID":"b8a89b1e-bef8-4cb7-930c-480d3125778c","Type":"ContainerStarted","Data":"9860de66bebfe136e22c65a35152f3d07b6273ddbf8d7d2c458f3a87d024b6f1"} Jan 27 19:13:12 crc kubenswrapper[4853]: I0127 19:13:12.300907 4853 generic.go:334] "Generic (PLEG): container finished" podID="d3496093-310a-422a-a09c-d796470ad2c0" containerID="44324a458faba3a1268f03c58c037db438303f454d10255c18097621cb72448c" exitCode=0 Jan 27 19:13:12 crc kubenswrapper[4853]: I0127 19:13:12.301016 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-7n2cz" event={"ID":"d3496093-310a-422a-a09c-d796470ad2c0","Type":"ContainerDied","Data":"44324a458faba3a1268f03c58c037db438303f454d10255c18097621cb72448c"} Jan 27 19:13:13 crc kubenswrapper[4853]: I0127 19:13:13.758659 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-7n2cz" Jan 27 19:13:13 crc kubenswrapper[4853]: I0127 19:13:13.846242 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d3496093-310a-422a-a09c-d796470ad2c0-inventory\") pod \"d3496093-310a-422a-a09c-d796470ad2c0\" (UID: \"d3496093-310a-422a-a09c-d796470ad2c0\") " Jan 27 19:13:13 crc kubenswrapper[4853]: I0127 19:13:13.846344 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d3496093-310a-422a-a09c-d796470ad2c0-ssh-key-openstack-edpm-ipam\") pod \"d3496093-310a-422a-a09c-d796470ad2c0\" (UID: \"d3496093-310a-422a-a09c-d796470ad2c0\") " Jan 27 19:13:13 crc kubenswrapper[4853]: I0127 19:13:13.846438 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7p2bb\" (UniqueName: \"kubernetes.io/projected/d3496093-310a-422a-a09c-d796470ad2c0-kube-api-access-7p2bb\") pod \"d3496093-310a-422a-a09c-d796470ad2c0\" (UID: \"d3496093-310a-422a-a09c-d796470ad2c0\") " Jan 27 19:13:13 crc kubenswrapper[4853]: I0127 19:13:13.855630 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d3496093-310a-422a-a09c-d796470ad2c0-kube-api-access-7p2bb" (OuterVolumeSpecName: "kube-api-access-7p2bb") pod "d3496093-310a-422a-a09c-d796470ad2c0" (UID: "d3496093-310a-422a-a09c-d796470ad2c0"). InnerVolumeSpecName "kube-api-access-7p2bb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 19:13:13 crc kubenswrapper[4853]: I0127 19:13:13.888495 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d3496093-310a-422a-a09c-d796470ad2c0-inventory" (OuterVolumeSpecName: "inventory") pod "d3496093-310a-422a-a09c-d796470ad2c0" (UID: "d3496093-310a-422a-a09c-d796470ad2c0"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 19:13:13 crc kubenswrapper[4853]: I0127 19:13:13.898925 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d3496093-310a-422a-a09c-d796470ad2c0-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "d3496093-310a-422a-a09c-d796470ad2c0" (UID: "d3496093-310a-422a-a09c-d796470ad2c0"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 19:13:13 crc kubenswrapper[4853]: I0127 19:13:13.949075 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7p2bb\" (UniqueName: \"kubernetes.io/projected/d3496093-310a-422a-a09c-d796470ad2c0-kube-api-access-7p2bb\") on node \"crc\" DevicePath \"\"" Jan 27 19:13:13 crc kubenswrapper[4853]: I0127 19:13:13.949178 4853 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d3496093-310a-422a-a09c-d796470ad2c0-inventory\") on node \"crc\" DevicePath \"\"" Jan 27 19:13:13 crc kubenswrapper[4853]: I0127 19:13:13.949190 4853 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d3496093-310a-422a-a09c-d796470ad2c0-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 27 19:13:14 crc kubenswrapper[4853]: I0127 19:13:14.329409 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-7n2cz" event={"ID":"d3496093-310a-422a-a09c-d796470ad2c0","Type":"ContainerDied","Data":"39b7e82593aba640653d080fcebcf3aaaa1d9aed8ef489878a3fc780b24166e6"} Jan 27 19:13:14 crc kubenswrapper[4853]: I0127 19:13:14.329464 4853 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="39b7e82593aba640653d080fcebcf3aaaa1d9aed8ef489878a3fc780b24166e6" Jan 27 19:13:14 crc kubenswrapper[4853]: I0127 19:13:14.329528 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-7n2cz" Jan 27 19:13:14 crc kubenswrapper[4853]: I0127 19:13:14.430097 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-p45vd"] Jan 27 19:13:14 crc kubenswrapper[4853]: E0127 19:13:14.430799 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d3496093-310a-422a-a09c-d796470ad2c0" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Jan 27 19:13:14 crc kubenswrapper[4853]: I0127 19:13:14.430826 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="d3496093-310a-422a-a09c-d796470ad2c0" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Jan 27 19:13:14 crc kubenswrapper[4853]: I0127 19:13:14.431038 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="d3496093-310a-422a-a09c-d796470ad2c0" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Jan 27 19:13:14 crc kubenswrapper[4853]: I0127 19:13:14.432042 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-p45vd" Jan 27 19:13:14 crc kubenswrapper[4853]: I0127 19:13:14.435788 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 27 19:13:14 crc kubenswrapper[4853]: I0127 19:13:14.436176 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 27 19:13:14 crc kubenswrapper[4853]: I0127 19:13:14.436313 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-wn48z" Jan 27 19:13:14 crc kubenswrapper[4853]: I0127 19:13:14.450702 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 27 19:13:14 crc kubenswrapper[4853]: I0127 19:13:14.488475 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-p45vd"] Jan 27 19:13:14 crc kubenswrapper[4853]: I0127 19:13:14.570671 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-56bdp\" (UniqueName: \"kubernetes.io/projected/eb152926-dd69-4634-9220-0074823b049b-kube-api-access-56bdp\") pod \"ssh-known-hosts-edpm-deployment-p45vd\" (UID: \"eb152926-dd69-4634-9220-0074823b049b\") " pod="openstack/ssh-known-hosts-edpm-deployment-p45vd" Jan 27 19:13:14 crc kubenswrapper[4853]: I0127 19:13:14.570793 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/eb152926-dd69-4634-9220-0074823b049b-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-p45vd\" (UID: \"eb152926-dd69-4634-9220-0074823b049b\") " pod="openstack/ssh-known-hosts-edpm-deployment-p45vd" Jan 27 19:13:14 crc kubenswrapper[4853]: I0127 19:13:14.571054 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/eb152926-dd69-4634-9220-0074823b049b-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-p45vd\" (UID: \"eb152926-dd69-4634-9220-0074823b049b\") " pod="openstack/ssh-known-hosts-edpm-deployment-p45vd" Jan 27 19:13:14 crc kubenswrapper[4853]: I0127 19:13:14.673308 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-56bdp\" (UniqueName: \"kubernetes.io/projected/eb152926-dd69-4634-9220-0074823b049b-kube-api-access-56bdp\") pod \"ssh-known-hosts-edpm-deployment-p45vd\" (UID: \"eb152926-dd69-4634-9220-0074823b049b\") " pod="openstack/ssh-known-hosts-edpm-deployment-p45vd" Jan 27 19:13:14 crc kubenswrapper[4853]: I0127 19:13:14.673382 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/eb152926-dd69-4634-9220-0074823b049b-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-p45vd\" (UID: \"eb152926-dd69-4634-9220-0074823b049b\") " pod="openstack/ssh-known-hosts-edpm-deployment-p45vd" Jan 27 19:13:14 crc kubenswrapper[4853]: I0127 19:13:14.673471 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/eb152926-dd69-4634-9220-0074823b049b-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-p45vd\" (UID: \"eb152926-dd69-4634-9220-0074823b049b\") " pod="openstack/ssh-known-hosts-edpm-deployment-p45vd" Jan 27 19:13:14 crc 
kubenswrapper[4853]: I0127 19:13:14.681154 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/eb152926-dd69-4634-9220-0074823b049b-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-p45vd\" (UID: \"eb152926-dd69-4634-9220-0074823b049b\") " pod="openstack/ssh-known-hosts-edpm-deployment-p45vd" Jan 27 19:13:14 crc kubenswrapper[4853]: I0127 19:13:14.681601 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/eb152926-dd69-4634-9220-0074823b049b-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-p45vd\" (UID: \"eb152926-dd69-4634-9220-0074823b049b\") " pod="openstack/ssh-known-hosts-edpm-deployment-p45vd" Jan 27 19:13:14 crc kubenswrapper[4853]: I0127 19:13:14.694829 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-56bdp\" (UniqueName: \"kubernetes.io/projected/eb152926-dd69-4634-9220-0074823b049b-kube-api-access-56bdp\") pod \"ssh-known-hosts-edpm-deployment-p45vd\" (UID: \"eb152926-dd69-4634-9220-0074823b049b\") " pod="openstack/ssh-known-hosts-edpm-deployment-p45vd" Jan 27 19:13:14 crc kubenswrapper[4853]: I0127 19:13:14.754000 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-p45vd" Jan 27 19:13:15 crc kubenswrapper[4853]: I0127 19:13:15.376426 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-p45vd"] Jan 27 19:13:16 crc kubenswrapper[4853]: I0127 19:13:16.353930 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-p45vd" event={"ID":"eb152926-dd69-4634-9220-0074823b049b","Type":"ContainerStarted","Data":"14808feb0ce73fc36cccb96cdacf8fce7013901810a44fc64a000a969e684083"} Jan 27 19:13:16 crc kubenswrapper[4853]: I0127 19:13:16.354423 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-p45vd" event={"ID":"eb152926-dd69-4634-9220-0074823b049b","Type":"ContainerStarted","Data":"7ce15e395f6f1b34255045a2466267622a03877d27a969679aea5ea46d57801d"} Jan 27 19:13:16 crc kubenswrapper[4853]: I0127 19:13:16.381962 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ssh-known-hosts-edpm-deployment-p45vd" podStartSLOduration=1.815917284 podStartE2EDuration="2.381924832s" podCreationTimestamp="2026-01-27 19:13:14 +0000 UTC" firstStartedPulling="2026-01-27 19:13:15.377578364 +0000 UTC m=+1837.840121247" lastFinishedPulling="2026-01-27 19:13:15.943585912 +0000 UTC m=+1838.406128795" observedRunningTime="2026-01-27 19:13:16.374858088 +0000 UTC m=+1838.837401011" watchObservedRunningTime="2026-01-27 19:13:16.381924832 +0000 UTC m=+1838.844467705" Jan 27 19:13:22 crc kubenswrapper[4853]: I0127 19:13:22.757624 4853 scope.go:117] "RemoveContainer" containerID="bacbe340baa0e6dab1d9ce942e2fa624357095e9948c34ca5b78385fe14abebf" Jan 27 19:13:24 crc kubenswrapper[4853]: I0127 19:13:24.454323 4853 generic.go:334] "Generic (PLEG): container finished" podID="eb152926-dd69-4634-9220-0074823b049b" containerID="14808feb0ce73fc36cccb96cdacf8fce7013901810a44fc64a000a969e684083" exitCode=0 Jan 27 19:13:24 crc kubenswrapper[4853]: I0127 19:13:24.454386 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-p45vd" 
event={"ID":"eb152926-dd69-4634-9220-0074823b049b","Type":"ContainerDied","Data":"14808feb0ce73fc36cccb96cdacf8fce7013901810a44fc64a000a969e684083"} Jan 27 19:13:25 crc kubenswrapper[4853]: I0127 19:13:25.963528 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-p45vd" Jan 27 19:13:26 crc kubenswrapper[4853]: I0127 19:13:26.062842 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/eb152926-dd69-4634-9220-0074823b049b-ssh-key-openstack-edpm-ipam\") pod \"eb152926-dd69-4634-9220-0074823b049b\" (UID: \"eb152926-dd69-4634-9220-0074823b049b\") " Jan 27 19:13:26 crc kubenswrapper[4853]: I0127 19:13:26.064326 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/eb152926-dd69-4634-9220-0074823b049b-inventory-0\") pod \"eb152926-dd69-4634-9220-0074823b049b\" (UID: \"eb152926-dd69-4634-9220-0074823b049b\") " Jan 27 19:13:26 crc kubenswrapper[4853]: I0127 19:13:26.064742 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-56bdp\" (UniqueName: \"kubernetes.io/projected/eb152926-dd69-4634-9220-0074823b049b-kube-api-access-56bdp\") pod \"eb152926-dd69-4634-9220-0074823b049b\" (UID: \"eb152926-dd69-4634-9220-0074823b049b\") " Jan 27 19:13:26 crc kubenswrapper[4853]: I0127 19:13:26.075436 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eb152926-dd69-4634-9220-0074823b049b-kube-api-access-56bdp" (OuterVolumeSpecName: "kube-api-access-56bdp") pod "eb152926-dd69-4634-9220-0074823b049b" (UID: "eb152926-dd69-4634-9220-0074823b049b"). InnerVolumeSpecName "kube-api-access-56bdp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 19:13:26 crc kubenswrapper[4853]: I0127 19:13:26.098958 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eb152926-dd69-4634-9220-0074823b049b-inventory-0" (OuterVolumeSpecName: "inventory-0") pod "eb152926-dd69-4634-9220-0074823b049b" (UID: "eb152926-dd69-4634-9220-0074823b049b"). InnerVolumeSpecName "inventory-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 19:13:26 crc kubenswrapper[4853]: I0127 19:13:26.103865 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eb152926-dd69-4634-9220-0074823b049b-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "eb152926-dd69-4634-9220-0074823b049b" (UID: "eb152926-dd69-4634-9220-0074823b049b"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 19:13:26 crc kubenswrapper[4853]: I0127 19:13:26.168067 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-56bdp\" (UniqueName: \"kubernetes.io/projected/eb152926-dd69-4634-9220-0074823b049b-kube-api-access-56bdp\") on node \"crc\" DevicePath \"\"" Jan 27 19:13:26 crc kubenswrapper[4853]: I0127 19:13:26.168114 4853 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/eb152926-dd69-4634-9220-0074823b049b-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 27 19:13:26 crc kubenswrapper[4853]: I0127 19:13:26.168136 4853 reconciler_common.go:293] "Volume detached for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/eb152926-dd69-4634-9220-0074823b049b-inventory-0\") on node \"crc\" DevicePath \"\"" Jan 27 19:13:26 crc kubenswrapper[4853]: I0127 19:13:26.477954 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-p45vd" event={"ID":"eb152926-dd69-4634-9220-0074823b049b","Type":"ContainerDied","Data":"7ce15e395f6f1b34255045a2466267622a03877d27a969679aea5ea46d57801d"} Jan 27 19:13:26 crc kubenswrapper[4853]: I0127 19:13:26.478038 4853 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7ce15e395f6f1b34255045a2466267622a03877d27a969679aea5ea46d57801d" Jan 27 19:13:26 crc kubenswrapper[4853]: I0127 19:13:26.478074 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-p45vd" Jan 27 19:13:26 crc kubenswrapper[4853]: I0127 19:13:26.570068 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-qq4jx"] Jan 27 19:13:26 crc kubenswrapper[4853]: E0127 19:13:26.572266 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb152926-dd69-4634-9220-0074823b049b" containerName="ssh-known-hosts-edpm-deployment" Jan 27 19:13:26 crc kubenswrapper[4853]: I0127 19:13:26.572293 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb152926-dd69-4634-9220-0074823b049b" containerName="ssh-known-hosts-edpm-deployment" Jan 27 19:13:26 crc kubenswrapper[4853]: I0127 19:13:26.572558 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="eb152926-dd69-4634-9220-0074823b049b" containerName="ssh-known-hosts-edpm-deployment" Jan 27 19:13:26 crc kubenswrapper[4853]: I0127 19:13:26.576784 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-qq4jx" Jan 27 19:13:26 crc kubenswrapper[4853]: I0127 19:13:26.579817 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 27 19:13:26 crc kubenswrapper[4853]: I0127 19:13:26.579964 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 27 19:13:26 crc kubenswrapper[4853]: I0127 19:13:26.581255 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-wn48z" Jan 27 19:13:26 crc kubenswrapper[4853]: I0127 19:13:26.581377 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 27 19:13:26 crc kubenswrapper[4853]: I0127 19:13:26.590910 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-qq4jx"] Jan 27 19:13:26 crc kubenswrapper[4853]: I0127 19:13:26.680561 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/42b00b77-5a5c-4880-a0f0-2556bab179fd-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-qq4jx\" (UID: \"42b00b77-5a5c-4880-a0f0-2556bab179fd\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-qq4jx" Jan 27 19:13:26 crc kubenswrapper[4853]: I0127 19:13:26.680662 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/42b00b77-5a5c-4880-a0f0-2556bab179fd-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-qq4jx\" (UID: \"42b00b77-5a5c-4880-a0f0-2556bab179fd\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-qq4jx" Jan 27 19:13:26 crc kubenswrapper[4853]: I0127 19:13:26.680723 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rw4w9\" (UniqueName: \"kubernetes.io/projected/42b00b77-5a5c-4880-a0f0-2556bab179fd-kube-api-access-rw4w9\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-qq4jx\" (UID: \"42b00b77-5a5c-4880-a0f0-2556bab179fd\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-qq4jx" Jan 27 19:13:26 crc kubenswrapper[4853]: I0127 19:13:26.782603 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/42b00b77-5a5c-4880-a0f0-2556bab179fd-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-qq4jx\" (UID: \"42b00b77-5a5c-4880-a0f0-2556bab179fd\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-qq4jx" Jan 27 19:13:26 crc kubenswrapper[4853]: I0127 19:13:26.782690 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/42b00b77-5a5c-4880-a0f0-2556bab179fd-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-qq4jx\" (UID: \"42b00b77-5a5c-4880-a0f0-2556bab179fd\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-qq4jx" Jan 27 19:13:26 crc kubenswrapper[4853]: I0127 19:13:26.782731 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rw4w9\" (UniqueName: \"kubernetes.io/projected/42b00b77-5a5c-4880-a0f0-2556bab179fd-kube-api-access-rw4w9\") pod 
\"run-os-edpm-deployment-openstack-edpm-ipam-qq4jx\" (UID: \"42b00b77-5a5c-4880-a0f0-2556bab179fd\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-qq4jx" Jan 27 19:13:26 crc kubenswrapper[4853]: I0127 19:13:26.787366 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/42b00b77-5a5c-4880-a0f0-2556bab179fd-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-qq4jx\" (UID: \"42b00b77-5a5c-4880-a0f0-2556bab179fd\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-qq4jx" Jan 27 19:13:26 crc kubenswrapper[4853]: I0127 19:13:26.792579 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/42b00b77-5a5c-4880-a0f0-2556bab179fd-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-qq4jx\" (UID: \"42b00b77-5a5c-4880-a0f0-2556bab179fd\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-qq4jx" Jan 27 19:13:26 crc kubenswrapper[4853]: I0127 19:13:26.804622 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rw4w9\" (UniqueName: \"kubernetes.io/projected/42b00b77-5a5c-4880-a0f0-2556bab179fd-kube-api-access-rw4w9\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-qq4jx\" (UID: \"42b00b77-5a5c-4880-a0f0-2556bab179fd\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-qq4jx" Jan 27 19:13:26 crc kubenswrapper[4853]: I0127 19:13:26.898730 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-qq4jx" Jan 27 19:13:27 crc kubenswrapper[4853]: I0127 19:13:27.519156 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-qq4jx"] Jan 27 19:13:28 crc kubenswrapper[4853]: I0127 19:13:28.500337 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-qq4jx" event={"ID":"42b00b77-5a5c-4880-a0f0-2556bab179fd","Type":"ContainerStarted","Data":"5f19e59489ce5b909dfd0bec5c5d66031a6bdc3896315a93026a68991fdb7abc"} Jan 27 19:13:28 crc kubenswrapper[4853]: I0127 19:13:28.501369 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-qq4jx" event={"ID":"42b00b77-5a5c-4880-a0f0-2556bab179fd","Type":"ContainerStarted","Data":"2c9cf49e7a48edf119750ec466c9dc212ae6d981534222dc104da662afdc17eb"} Jan 27 19:13:37 crc kubenswrapper[4853]: I0127 19:13:37.578595 4853 generic.go:334] "Generic (PLEG): container finished" podID="42b00b77-5a5c-4880-a0f0-2556bab179fd" containerID="5f19e59489ce5b909dfd0bec5c5d66031a6bdc3896315a93026a68991fdb7abc" exitCode=0 Jan 27 19:13:37 crc kubenswrapper[4853]: I0127 19:13:37.579191 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-qq4jx" event={"ID":"42b00b77-5a5c-4880-a0f0-2556bab179fd","Type":"ContainerDied","Data":"5f19e59489ce5b909dfd0bec5c5d66031a6bdc3896315a93026a68991fdb7abc"} Jan 27 19:13:39 crc kubenswrapper[4853]: I0127 19:13:39.058475 4853 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-qq4jx" Jan 27 19:13:39 crc kubenswrapper[4853]: I0127 19:13:39.159456 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/42b00b77-5a5c-4880-a0f0-2556bab179fd-inventory\") pod \"42b00b77-5a5c-4880-a0f0-2556bab179fd\" (UID: \"42b00b77-5a5c-4880-a0f0-2556bab179fd\") " Jan 27 19:13:39 crc kubenswrapper[4853]: I0127 19:13:39.159675 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/42b00b77-5a5c-4880-a0f0-2556bab179fd-ssh-key-openstack-edpm-ipam\") pod \"42b00b77-5a5c-4880-a0f0-2556bab179fd\" (UID: \"42b00b77-5a5c-4880-a0f0-2556bab179fd\") " Jan 27 19:13:39 crc kubenswrapper[4853]: I0127 19:13:39.159814 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rw4w9\" (UniqueName: \"kubernetes.io/projected/42b00b77-5a5c-4880-a0f0-2556bab179fd-kube-api-access-rw4w9\") pod \"42b00b77-5a5c-4880-a0f0-2556bab179fd\" (UID: \"42b00b77-5a5c-4880-a0f0-2556bab179fd\") " Jan 27 19:13:39 crc kubenswrapper[4853]: I0127 19:13:39.193799 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/42b00b77-5a5c-4880-a0f0-2556bab179fd-kube-api-access-rw4w9" (OuterVolumeSpecName: "kube-api-access-rw4w9") pod "42b00b77-5a5c-4880-a0f0-2556bab179fd" (UID: "42b00b77-5a5c-4880-a0f0-2556bab179fd"). InnerVolumeSpecName "kube-api-access-rw4w9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 19:13:39 crc kubenswrapper[4853]: I0127 19:13:39.285494 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rw4w9\" (UniqueName: \"kubernetes.io/projected/42b00b77-5a5c-4880-a0f0-2556bab179fd-kube-api-access-rw4w9\") on node \"crc\" DevicePath \"\"" Jan 27 19:13:39 crc kubenswrapper[4853]: I0127 19:13:39.365414 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/42b00b77-5a5c-4880-a0f0-2556bab179fd-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "42b00b77-5a5c-4880-a0f0-2556bab179fd" (UID: "42b00b77-5a5c-4880-a0f0-2556bab179fd"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 19:13:39 crc kubenswrapper[4853]: I0127 19:13:39.387432 4853 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/42b00b77-5a5c-4880-a0f0-2556bab179fd-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 27 19:13:39 crc kubenswrapper[4853]: I0127 19:13:39.390828 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/42b00b77-5a5c-4880-a0f0-2556bab179fd-inventory" (OuterVolumeSpecName: "inventory") pod "42b00b77-5a5c-4880-a0f0-2556bab179fd" (UID: "42b00b77-5a5c-4880-a0f0-2556bab179fd"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 19:13:39 crc kubenswrapper[4853]: I0127 19:13:39.489559 4853 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/42b00b77-5a5c-4880-a0f0-2556bab179fd-inventory\") on node \"crc\" DevicePath \"\"" Jan 27 19:13:39 crc kubenswrapper[4853]: I0127 19:13:39.602287 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-qq4jx" event={"ID":"42b00b77-5a5c-4880-a0f0-2556bab179fd","Type":"ContainerDied","Data":"2c9cf49e7a48edf119750ec466c9dc212ae6d981534222dc104da662afdc17eb"} Jan 27 19:13:39 crc kubenswrapper[4853]: I0127 19:13:39.602342 4853 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2c9cf49e7a48edf119750ec466c9dc212ae6d981534222dc104da662afdc17eb" Jan 27 19:13:39 crc kubenswrapper[4853]: I0127 19:13:39.602412 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-qq4jx" Jan 27 19:13:39 crc kubenswrapper[4853]: I0127 19:13:39.667346 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-2lpzk"] Jan 27 19:13:39 crc kubenswrapper[4853]: E0127 19:13:39.667799 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="42b00b77-5a5c-4880-a0f0-2556bab179fd" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Jan 27 19:13:39 crc kubenswrapper[4853]: I0127 19:13:39.667820 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="42b00b77-5a5c-4880-a0f0-2556bab179fd" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Jan 27 19:13:39 crc kubenswrapper[4853]: I0127 19:13:39.668035 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="42b00b77-5a5c-4880-a0f0-2556bab179fd" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Jan 27 19:13:39 crc kubenswrapper[4853]: I0127 19:13:39.668715 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-2lpzk" Jan 27 19:13:39 crc kubenswrapper[4853]: I0127 19:13:39.670984 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 27 19:13:39 crc kubenswrapper[4853]: I0127 19:13:39.671682 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 27 19:13:39 crc kubenswrapper[4853]: I0127 19:13:39.671686 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 27 19:13:39 crc kubenswrapper[4853]: I0127 19:13:39.676045 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-2lpzk"] Jan 27 19:13:39 crc kubenswrapper[4853]: I0127 19:13:39.678268 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-wn48z" Jan 27 19:13:39 crc kubenswrapper[4853]: I0127 19:13:39.794478 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/57fa2fc5-a6d4-444b-8e22-e4e9064ad6d7-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-2lpzk\" (UID: \"57fa2fc5-a6d4-444b-8e22-e4e9064ad6d7\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-2lpzk" Jan 27 19:13:39 crc kubenswrapper[4853]: I0127 19:13:39.794551 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s8m75\" (UniqueName: \"kubernetes.io/projected/57fa2fc5-a6d4-444b-8e22-e4e9064ad6d7-kube-api-access-s8m75\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-2lpzk\" (UID: \"57fa2fc5-a6d4-444b-8e22-e4e9064ad6d7\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-2lpzk" Jan 27 19:13:39 crc kubenswrapper[4853]: I0127 19:13:39.794599 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/57fa2fc5-a6d4-444b-8e22-e4e9064ad6d7-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-2lpzk\" (UID: \"57fa2fc5-a6d4-444b-8e22-e4e9064ad6d7\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-2lpzk" Jan 27 19:13:39 crc kubenswrapper[4853]: I0127 19:13:39.896745 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/57fa2fc5-a6d4-444b-8e22-e4e9064ad6d7-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-2lpzk\" (UID: \"57fa2fc5-a6d4-444b-8e22-e4e9064ad6d7\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-2lpzk" Jan 27 19:13:39 crc kubenswrapper[4853]: I0127 19:13:39.896828 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s8m75\" (UniqueName: \"kubernetes.io/projected/57fa2fc5-a6d4-444b-8e22-e4e9064ad6d7-kube-api-access-s8m75\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-2lpzk\" (UID: \"57fa2fc5-a6d4-444b-8e22-e4e9064ad6d7\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-2lpzk" Jan 27 19:13:39 crc kubenswrapper[4853]: I0127 19:13:39.896866 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/57fa2fc5-a6d4-444b-8e22-e4e9064ad6d7-inventory\") pod 
\"reboot-os-edpm-deployment-openstack-edpm-ipam-2lpzk\" (UID: \"57fa2fc5-a6d4-444b-8e22-e4e9064ad6d7\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-2lpzk" Jan 27 19:13:39 crc kubenswrapper[4853]: I0127 19:13:39.900439 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/57fa2fc5-a6d4-444b-8e22-e4e9064ad6d7-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-2lpzk\" (UID: \"57fa2fc5-a6d4-444b-8e22-e4e9064ad6d7\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-2lpzk" Jan 27 19:13:39 crc kubenswrapper[4853]: I0127 19:13:39.900616 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/57fa2fc5-a6d4-444b-8e22-e4e9064ad6d7-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-2lpzk\" (UID: \"57fa2fc5-a6d4-444b-8e22-e4e9064ad6d7\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-2lpzk" Jan 27 19:13:39 crc kubenswrapper[4853]: I0127 19:13:39.914091 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s8m75\" (UniqueName: \"kubernetes.io/projected/57fa2fc5-a6d4-444b-8e22-e4e9064ad6d7-kube-api-access-s8m75\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-2lpzk\" (UID: \"57fa2fc5-a6d4-444b-8e22-e4e9064ad6d7\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-2lpzk" Jan 27 19:13:40 crc kubenswrapper[4853]: I0127 19:13:40.001292 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-2lpzk" Jan 27 19:13:40 crc kubenswrapper[4853]: I0127 19:13:40.606863 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-2lpzk"] Jan 27 19:13:41 crc kubenswrapper[4853]: I0127 19:13:41.628794 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-2lpzk" event={"ID":"57fa2fc5-a6d4-444b-8e22-e4e9064ad6d7","Type":"ContainerStarted","Data":"3f3a8e349649008f08e81588a85ad774669206dd01fde452c33ab3c7c06ec98b"} Jan 27 19:13:41 crc kubenswrapper[4853]: I0127 19:13:41.629531 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-2lpzk" event={"ID":"57fa2fc5-a6d4-444b-8e22-e4e9064ad6d7","Type":"ContainerStarted","Data":"702ecee86af957d13abce0c501c4a1603d2c97f4c382e763a45127e0edcb79af"} Jan 27 19:13:41 crc kubenswrapper[4853]: I0127 19:13:41.652857 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-2lpzk" podStartSLOduration=2.161226943 podStartE2EDuration="2.652837432s" podCreationTimestamp="2026-01-27 19:13:39 +0000 UTC" firstStartedPulling="2026-01-27 19:13:40.611003591 +0000 UTC m=+1863.073546474" lastFinishedPulling="2026-01-27 19:13:41.10261408 +0000 UTC m=+1863.565156963" observedRunningTime="2026-01-27 19:13:41.648020303 +0000 UTC m=+1864.110563176" watchObservedRunningTime="2026-01-27 19:13:41.652837432 +0000 UTC m=+1864.115380315" Jan 27 19:13:51 crc kubenswrapper[4853]: I0127 19:13:51.729151 4853 generic.go:334] "Generic (PLEG): container finished" podID="57fa2fc5-a6d4-444b-8e22-e4e9064ad6d7" containerID="3f3a8e349649008f08e81588a85ad774669206dd01fde452c33ab3c7c06ec98b" exitCode=0 Jan 27 19:13:51 crc kubenswrapper[4853]: I0127 19:13:51.729929 4853 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-2lpzk" event={"ID":"57fa2fc5-a6d4-444b-8e22-e4e9064ad6d7","Type":"ContainerDied","Data":"3f3a8e349649008f08e81588a85ad774669206dd01fde452c33ab3c7c06ec98b"} Jan 27 19:13:53 crc kubenswrapper[4853]: I0127 19:13:53.139137 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-2lpzk" Jan 27 19:13:53 crc kubenswrapper[4853]: I0127 19:13:53.299819 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s8m75\" (UniqueName: \"kubernetes.io/projected/57fa2fc5-a6d4-444b-8e22-e4e9064ad6d7-kube-api-access-s8m75\") pod \"57fa2fc5-a6d4-444b-8e22-e4e9064ad6d7\" (UID: \"57fa2fc5-a6d4-444b-8e22-e4e9064ad6d7\") " Jan 27 19:13:53 crc kubenswrapper[4853]: I0127 19:13:53.299947 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/57fa2fc5-a6d4-444b-8e22-e4e9064ad6d7-inventory\") pod \"57fa2fc5-a6d4-444b-8e22-e4e9064ad6d7\" (UID: \"57fa2fc5-a6d4-444b-8e22-e4e9064ad6d7\") " Jan 27 19:13:53 crc kubenswrapper[4853]: I0127 19:13:53.300101 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/57fa2fc5-a6d4-444b-8e22-e4e9064ad6d7-ssh-key-openstack-edpm-ipam\") pod \"57fa2fc5-a6d4-444b-8e22-e4e9064ad6d7\" (UID: \"57fa2fc5-a6d4-444b-8e22-e4e9064ad6d7\") " Jan 27 19:13:53 crc kubenswrapper[4853]: I0127 19:13:53.306663 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57fa2fc5-a6d4-444b-8e22-e4e9064ad6d7-kube-api-access-s8m75" (OuterVolumeSpecName: "kube-api-access-s8m75") pod "57fa2fc5-a6d4-444b-8e22-e4e9064ad6d7" (UID: "57fa2fc5-a6d4-444b-8e22-e4e9064ad6d7"). InnerVolumeSpecName "kube-api-access-s8m75". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 19:13:53 crc kubenswrapper[4853]: I0127 19:13:53.328719 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/57fa2fc5-a6d4-444b-8e22-e4e9064ad6d7-inventory" (OuterVolumeSpecName: "inventory") pod "57fa2fc5-a6d4-444b-8e22-e4e9064ad6d7" (UID: "57fa2fc5-a6d4-444b-8e22-e4e9064ad6d7"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 19:13:53 crc kubenswrapper[4853]: I0127 19:13:53.350380 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/57fa2fc5-a6d4-444b-8e22-e4e9064ad6d7-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "57fa2fc5-a6d4-444b-8e22-e4e9064ad6d7" (UID: "57fa2fc5-a6d4-444b-8e22-e4e9064ad6d7"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 19:13:53 crc kubenswrapper[4853]: I0127 19:13:53.402215 4853 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/57fa2fc5-a6d4-444b-8e22-e4e9064ad6d7-inventory\") on node \"crc\" DevicePath \"\"" Jan 27 19:13:53 crc kubenswrapper[4853]: I0127 19:13:53.402249 4853 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/57fa2fc5-a6d4-444b-8e22-e4e9064ad6d7-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 27 19:13:53 crc kubenswrapper[4853]: I0127 19:13:53.402266 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s8m75\" (UniqueName: \"kubernetes.io/projected/57fa2fc5-a6d4-444b-8e22-e4e9064ad6d7-kube-api-access-s8m75\") on node \"crc\" DevicePath \"\"" Jan 27 19:13:53 crc kubenswrapper[4853]: I0127 19:13:53.747863 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-2lpzk" event={"ID":"57fa2fc5-a6d4-444b-8e22-e4e9064ad6d7","Type":"ContainerDied","Data":"702ecee86af957d13abce0c501c4a1603d2c97f4c382e763a45127e0edcb79af"} Jan 27 19:13:53 crc kubenswrapper[4853]: I0127 19:13:53.747907 4853 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="702ecee86af957d13abce0c501c4a1603d2c97f4c382e763a45127e0edcb79af" Jan 27 19:13:53 crc kubenswrapper[4853]: I0127 19:13:53.747934 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-2lpzk" Jan 27 19:13:53 crc kubenswrapper[4853]: I0127 19:13:53.928656 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-s4g5b"] Jan 27 19:13:53 crc kubenswrapper[4853]: E0127 19:13:53.929411 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="57fa2fc5-a6d4-444b-8e22-e4e9064ad6d7" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Jan 27 19:13:53 crc kubenswrapper[4853]: I0127 19:13:53.929436 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="57fa2fc5-a6d4-444b-8e22-e4e9064ad6d7" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Jan 27 19:13:53 crc kubenswrapper[4853]: I0127 19:13:53.929709 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="57fa2fc5-a6d4-444b-8e22-e4e9064ad6d7" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Jan 27 19:13:53 crc kubenswrapper[4853]: I0127 19:13:53.930413 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-s4g5b" Jan 27 19:13:53 crc kubenswrapper[4853]: I0127 19:13:53.933825 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 27 19:13:53 crc kubenswrapper[4853]: I0127 19:13:53.935923 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 27 19:13:53 crc kubenswrapper[4853]: I0127 19:13:53.935975 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-telemetry-default-certs-0" Jan 27 19:13:53 crc kubenswrapper[4853]: I0127 19:13:53.936076 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-wn48z" Jan 27 19:13:53 crc kubenswrapper[4853]: I0127 19:13:53.936486 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-libvirt-default-certs-0" Jan 27 19:13:53 crc kubenswrapper[4853]: I0127 19:13:53.936523 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-ovn-default-certs-0" Jan 27 19:13:53 crc kubenswrapper[4853]: I0127 19:13:53.936633 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 27 19:13:53 crc kubenswrapper[4853]: I0127 19:13:53.939262 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-neutron-metadata-default-certs-0" Jan 27 19:13:53 crc kubenswrapper[4853]: I0127 19:13:53.944827 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-s4g5b"] Jan 27 19:13:54 crc kubenswrapper[4853]: I0127 19:13:54.114551 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/bbb5fe03-6098-4e03-ab85-5a28e090f13c-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-s4g5b\" (UID: \"bbb5fe03-6098-4e03-ab85-5a28e090f13c\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-s4g5b" Jan 27 19:13:54 crc kubenswrapper[4853]: I0127 19:13:54.114832 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bbb5fe03-6098-4e03-ab85-5a28e090f13c-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-s4g5b\" (UID: \"bbb5fe03-6098-4e03-ab85-5a28e090f13c\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-s4g5b" Jan 27 19:13:54 crc kubenswrapper[4853]: I0127 19:13:54.114919 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bbb5fe03-6098-4e03-ab85-5a28e090f13c-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-s4g5b\" (UID: \"bbb5fe03-6098-4e03-ab85-5a28e090f13c\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-s4g5b" Jan 27 19:13:54 crc kubenswrapper[4853]: I0127 19:13:54.114978 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bbb5fe03-6098-4e03-ab85-5a28e090f13c-libvirt-combined-ca-bundle\") pod 
\"install-certs-edpm-deployment-openstack-edpm-ipam-s4g5b\" (UID: \"bbb5fe03-6098-4e03-ab85-5a28e090f13c\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-s4g5b" Jan 27 19:13:54 crc kubenswrapper[4853]: I0127 19:13:54.115070 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/bbb5fe03-6098-4e03-ab85-5a28e090f13c-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-s4g5b\" (UID: \"bbb5fe03-6098-4e03-ab85-5a28e090f13c\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-s4g5b" Jan 27 19:13:54 crc kubenswrapper[4853]: I0127 19:13:54.115214 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/bbb5fe03-6098-4e03-ab85-5a28e090f13c-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-s4g5b\" (UID: \"bbb5fe03-6098-4e03-ab85-5a28e090f13c\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-s4g5b" Jan 27 19:13:54 crc kubenswrapper[4853]: I0127 19:13:54.115265 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bptbz\" (UniqueName: \"kubernetes.io/projected/bbb5fe03-6098-4e03-ab85-5a28e090f13c-kube-api-access-bptbz\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-s4g5b\" (UID: \"bbb5fe03-6098-4e03-ab85-5a28e090f13c\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-s4g5b" Jan 27 19:13:54 crc kubenswrapper[4853]: I0127 19:13:54.115292 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/bbb5fe03-6098-4e03-ab85-5a28e090f13c-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-s4g5b\" (UID: \"bbb5fe03-6098-4e03-ab85-5a28e090f13c\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-s4g5b" Jan 27 19:13:54 crc kubenswrapper[4853]: I0127 19:13:54.115403 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bbb5fe03-6098-4e03-ab85-5a28e090f13c-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-s4g5b\" (UID: \"bbb5fe03-6098-4e03-ab85-5a28e090f13c\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-s4g5b" Jan 27 19:13:54 crc kubenswrapper[4853]: I0127 19:13:54.115492 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bbb5fe03-6098-4e03-ab85-5a28e090f13c-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-s4g5b\" (UID: \"bbb5fe03-6098-4e03-ab85-5a28e090f13c\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-s4g5b" Jan 27 19:13:54 crc kubenswrapper[4853]: I0127 19:13:54.115572 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bbb5fe03-6098-4e03-ab85-5a28e090f13c-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-s4g5b\" (UID: \"bbb5fe03-6098-4e03-ab85-5a28e090f13c\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-s4g5b" 
Jan 27 19:13:54 crc kubenswrapper[4853]: I0127 19:13:54.115658 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/bbb5fe03-6098-4e03-ab85-5a28e090f13c-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-s4g5b\" (UID: \"bbb5fe03-6098-4e03-ab85-5a28e090f13c\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-s4g5b" Jan 27 19:13:54 crc kubenswrapper[4853]: I0127 19:13:54.115697 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bbb5fe03-6098-4e03-ab85-5a28e090f13c-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-s4g5b\" (UID: \"bbb5fe03-6098-4e03-ab85-5a28e090f13c\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-s4g5b" Jan 27 19:13:54 crc kubenswrapper[4853]: I0127 19:13:54.115758 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/bbb5fe03-6098-4e03-ab85-5a28e090f13c-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-s4g5b\" (UID: \"bbb5fe03-6098-4e03-ab85-5a28e090f13c\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-s4g5b" Jan 27 19:13:54 crc kubenswrapper[4853]: I0127 19:13:54.218533 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bbb5fe03-6098-4e03-ab85-5a28e090f13c-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-s4g5b\" (UID: \"bbb5fe03-6098-4e03-ab85-5a28e090f13c\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-s4g5b" Jan 27 19:13:54 crc kubenswrapper[4853]: I0127 19:13:54.218650 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/bbb5fe03-6098-4e03-ab85-5a28e090f13c-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-s4g5b\" (UID: \"bbb5fe03-6098-4e03-ab85-5a28e090f13c\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-s4g5b" Jan 27 19:13:54 crc kubenswrapper[4853]: I0127 19:13:54.218703 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/bbb5fe03-6098-4e03-ab85-5a28e090f13c-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-s4g5b\" (UID: \"bbb5fe03-6098-4e03-ab85-5a28e090f13c\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-s4g5b" Jan 27 19:13:54 crc kubenswrapper[4853]: I0127 19:13:54.218731 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bptbz\" (UniqueName: \"kubernetes.io/projected/bbb5fe03-6098-4e03-ab85-5a28e090f13c-kube-api-access-bptbz\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-s4g5b\" (UID: \"bbb5fe03-6098-4e03-ab85-5a28e090f13c\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-s4g5b" Jan 27 19:13:54 crc kubenswrapper[4853]: I0127 19:13:54.218754 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/bbb5fe03-6098-4e03-ab85-5a28e090f13c-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-s4g5b\" (UID: \"bbb5fe03-6098-4e03-ab85-5a28e090f13c\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-s4g5b" Jan 27 19:13:54 crc kubenswrapper[4853]: I0127 19:13:54.218774 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bbb5fe03-6098-4e03-ab85-5a28e090f13c-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-s4g5b\" (UID: \"bbb5fe03-6098-4e03-ab85-5a28e090f13c\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-s4g5b" Jan 27 19:13:54 crc kubenswrapper[4853]: I0127 19:13:54.218799 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bbb5fe03-6098-4e03-ab85-5a28e090f13c-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-s4g5b\" (UID: \"bbb5fe03-6098-4e03-ab85-5a28e090f13c\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-s4g5b" Jan 27 19:13:54 crc kubenswrapper[4853]: I0127 19:13:54.218843 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bbb5fe03-6098-4e03-ab85-5a28e090f13c-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-s4g5b\" (UID: \"bbb5fe03-6098-4e03-ab85-5a28e090f13c\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-s4g5b" Jan 27 19:13:54 crc kubenswrapper[4853]: I0127 19:13:54.218884 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/bbb5fe03-6098-4e03-ab85-5a28e090f13c-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-s4g5b\" (UID: \"bbb5fe03-6098-4e03-ab85-5a28e090f13c\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-s4g5b" Jan 27 19:13:54 crc kubenswrapper[4853]: I0127 19:13:54.218919 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bbb5fe03-6098-4e03-ab85-5a28e090f13c-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-s4g5b\" (UID: \"bbb5fe03-6098-4e03-ab85-5a28e090f13c\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-s4g5b" Jan 27 19:13:54 crc kubenswrapper[4853]: I0127 19:13:54.218957 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/bbb5fe03-6098-4e03-ab85-5a28e090f13c-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-s4g5b\" (UID: \"bbb5fe03-6098-4e03-ab85-5a28e090f13c\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-s4g5b" Jan 27 19:13:54 crc kubenswrapper[4853]: I0127 19:13:54.218998 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: 
\"kubernetes.io/projected/bbb5fe03-6098-4e03-ab85-5a28e090f13c-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-s4g5b\" (UID: \"bbb5fe03-6098-4e03-ab85-5a28e090f13c\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-s4g5b" Jan 27 19:13:54 crc kubenswrapper[4853]: I0127 19:13:54.219027 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bbb5fe03-6098-4e03-ab85-5a28e090f13c-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-s4g5b\" (UID: \"bbb5fe03-6098-4e03-ab85-5a28e090f13c\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-s4g5b" Jan 27 19:13:54 crc kubenswrapper[4853]: I0127 19:13:54.219100 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bbb5fe03-6098-4e03-ab85-5a28e090f13c-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-s4g5b\" (UID: \"bbb5fe03-6098-4e03-ab85-5a28e090f13c\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-s4g5b" Jan 27 19:13:54 crc kubenswrapper[4853]: I0127 19:13:54.224478 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bbb5fe03-6098-4e03-ab85-5a28e090f13c-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-s4g5b\" (UID: \"bbb5fe03-6098-4e03-ab85-5a28e090f13c\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-s4g5b" Jan 27 19:13:54 crc kubenswrapper[4853]: I0127 19:13:54.224861 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bbb5fe03-6098-4e03-ab85-5a28e090f13c-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-s4g5b\" (UID: \"bbb5fe03-6098-4e03-ab85-5a28e090f13c\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-s4g5b" Jan 27 19:13:54 crc kubenswrapper[4853]: I0127 19:13:54.225052 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bbb5fe03-6098-4e03-ab85-5a28e090f13c-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-s4g5b\" (UID: \"bbb5fe03-6098-4e03-ab85-5a28e090f13c\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-s4g5b" Jan 27 19:13:54 crc kubenswrapper[4853]: I0127 19:13:54.225786 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bbb5fe03-6098-4e03-ab85-5a28e090f13c-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-s4g5b\" (UID: \"bbb5fe03-6098-4e03-ab85-5a28e090f13c\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-s4g5b" Jan 27 19:13:54 crc kubenswrapper[4853]: I0127 19:13:54.225868 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/bbb5fe03-6098-4e03-ab85-5a28e090f13c-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-s4g5b\" (UID: \"bbb5fe03-6098-4e03-ab85-5a28e090f13c\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-s4g5b" Jan 27 19:13:54 crc 
kubenswrapper[4853]: I0127 19:13:54.226388 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bbb5fe03-6098-4e03-ab85-5a28e090f13c-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-s4g5b\" (UID: \"bbb5fe03-6098-4e03-ab85-5a28e090f13c\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-s4g5b" Jan 27 19:13:54 crc kubenswrapper[4853]: I0127 19:13:54.226821 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bbb5fe03-6098-4e03-ab85-5a28e090f13c-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-s4g5b\" (UID: \"bbb5fe03-6098-4e03-ab85-5a28e090f13c\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-s4g5b" Jan 27 19:13:54 crc kubenswrapper[4853]: I0127 19:13:54.226830 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bbb5fe03-6098-4e03-ab85-5a28e090f13c-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-s4g5b\" (UID: \"bbb5fe03-6098-4e03-ab85-5a28e090f13c\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-s4g5b" Jan 27 19:13:54 crc kubenswrapper[4853]: I0127 19:13:54.227342 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/bbb5fe03-6098-4e03-ab85-5a28e090f13c-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-s4g5b\" (UID: \"bbb5fe03-6098-4e03-ab85-5a28e090f13c\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-s4g5b" Jan 27 19:13:54 crc kubenswrapper[4853]: I0127 19:13:54.228520 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/bbb5fe03-6098-4e03-ab85-5a28e090f13c-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-s4g5b\" (UID: \"bbb5fe03-6098-4e03-ab85-5a28e090f13c\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-s4g5b" Jan 27 19:13:54 crc kubenswrapper[4853]: I0127 19:13:54.228793 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/bbb5fe03-6098-4e03-ab85-5a28e090f13c-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-s4g5b\" (UID: \"bbb5fe03-6098-4e03-ab85-5a28e090f13c\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-s4g5b" Jan 27 19:13:54 crc kubenswrapper[4853]: I0127 19:13:54.229096 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/bbb5fe03-6098-4e03-ab85-5a28e090f13c-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-s4g5b\" (UID: \"bbb5fe03-6098-4e03-ab85-5a28e090f13c\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-s4g5b" Jan 27 19:13:54 crc kubenswrapper[4853]: I0127 19:13:54.236248 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/bbb5fe03-6098-4e03-ab85-5a28e090f13c-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-s4g5b\" (UID: \"bbb5fe03-6098-4e03-ab85-5a28e090f13c\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-s4g5b" Jan 27 19:13:54 crc kubenswrapper[4853]: I0127 19:13:54.242584 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bptbz\" (UniqueName: \"kubernetes.io/projected/bbb5fe03-6098-4e03-ab85-5a28e090f13c-kube-api-access-bptbz\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-s4g5b\" (UID: \"bbb5fe03-6098-4e03-ab85-5a28e090f13c\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-s4g5b" Jan 27 19:13:54 crc kubenswrapper[4853]: I0127 19:13:54.246658 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-s4g5b" Jan 27 19:13:54 crc kubenswrapper[4853]: I0127 19:13:54.553281 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-s4g5b"] Jan 27 19:13:54 crc kubenswrapper[4853]: I0127 19:13:54.756539 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-s4g5b" event={"ID":"bbb5fe03-6098-4e03-ab85-5a28e090f13c","Type":"ContainerStarted","Data":"6140468b609daad7769812b37f67091381ba4d310a8192218d44387f8c94ebb1"} Jan 27 19:13:55 crc kubenswrapper[4853]: I0127 19:13:55.765343 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-s4g5b" event={"ID":"bbb5fe03-6098-4e03-ab85-5a28e090f13c","Type":"ContainerStarted","Data":"11364be238f18ef269811244bded25536ce5a7b15aa1be2484bb67ba4169f38d"} Jan 27 19:13:55 crc kubenswrapper[4853]: I0127 19:13:55.790964 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-s4g5b" podStartSLOduration=2.300420397 podStartE2EDuration="2.790944305s" podCreationTimestamp="2026-01-27 19:13:53 +0000 UTC" firstStartedPulling="2026-01-27 19:13:54.565659455 +0000 UTC m=+1877.028202338" lastFinishedPulling="2026-01-27 19:13:55.056183323 +0000 UTC m=+1877.518726246" observedRunningTime="2026-01-27 19:13:55.788871135 +0000 UTC m=+1878.251414038" watchObservedRunningTime="2026-01-27 19:13:55.790944305 +0000 UTC m=+1878.253487188" Jan 27 19:14:36 crc kubenswrapper[4853]: I0127 19:14:36.167669 4853 generic.go:334] "Generic (PLEG): container finished" podID="bbb5fe03-6098-4e03-ab85-5a28e090f13c" containerID="11364be238f18ef269811244bded25536ce5a7b15aa1be2484bb67ba4169f38d" exitCode=0 Jan 27 19:14:36 crc kubenswrapper[4853]: I0127 19:14:36.167760 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-s4g5b" event={"ID":"bbb5fe03-6098-4e03-ab85-5a28e090f13c","Type":"ContainerDied","Data":"11364be238f18ef269811244bded25536ce5a7b15aa1be2484bb67ba4169f38d"} Jan 27 19:14:37 crc kubenswrapper[4853]: I0127 19:14:37.559941 4853 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-s4g5b" Jan 27 19:14:37 crc kubenswrapper[4853]: I0127 19:14:37.727595 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/bbb5fe03-6098-4e03-ab85-5a28e090f13c-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"bbb5fe03-6098-4e03-ab85-5a28e090f13c\" (UID: \"bbb5fe03-6098-4e03-ab85-5a28e090f13c\") " Jan 27 19:14:37 crc kubenswrapper[4853]: I0127 19:14:37.727652 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/bbb5fe03-6098-4e03-ab85-5a28e090f13c-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"bbb5fe03-6098-4e03-ab85-5a28e090f13c\" (UID: \"bbb5fe03-6098-4e03-ab85-5a28e090f13c\") " Jan 27 19:14:37 crc kubenswrapper[4853]: I0127 19:14:37.727710 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bptbz\" (UniqueName: \"kubernetes.io/projected/bbb5fe03-6098-4e03-ab85-5a28e090f13c-kube-api-access-bptbz\") pod \"bbb5fe03-6098-4e03-ab85-5a28e090f13c\" (UID: \"bbb5fe03-6098-4e03-ab85-5a28e090f13c\") " Jan 27 19:14:37 crc kubenswrapper[4853]: I0127 19:14:37.727732 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/bbb5fe03-6098-4e03-ab85-5a28e090f13c-openstack-edpm-ipam-ovn-default-certs-0\") pod \"bbb5fe03-6098-4e03-ab85-5a28e090f13c\" (UID: \"bbb5fe03-6098-4e03-ab85-5a28e090f13c\") " Jan 27 19:14:37 crc kubenswrapper[4853]: I0127 19:14:37.727759 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bbb5fe03-6098-4e03-ab85-5a28e090f13c-repo-setup-combined-ca-bundle\") pod \"bbb5fe03-6098-4e03-ab85-5a28e090f13c\" (UID: \"bbb5fe03-6098-4e03-ab85-5a28e090f13c\") " Jan 27 19:14:37 crc kubenswrapper[4853]: I0127 19:14:37.727793 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bbb5fe03-6098-4e03-ab85-5a28e090f13c-neutron-metadata-combined-ca-bundle\") pod \"bbb5fe03-6098-4e03-ab85-5a28e090f13c\" (UID: \"bbb5fe03-6098-4e03-ab85-5a28e090f13c\") " Jan 27 19:14:37 crc kubenswrapper[4853]: I0127 19:14:37.727826 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/bbb5fe03-6098-4e03-ab85-5a28e090f13c-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"bbb5fe03-6098-4e03-ab85-5a28e090f13c\" (UID: \"bbb5fe03-6098-4e03-ab85-5a28e090f13c\") " Jan 27 19:14:37 crc kubenswrapper[4853]: I0127 19:14:37.727855 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/bbb5fe03-6098-4e03-ab85-5a28e090f13c-ssh-key-openstack-edpm-ipam\") pod \"bbb5fe03-6098-4e03-ab85-5a28e090f13c\" (UID: \"bbb5fe03-6098-4e03-ab85-5a28e090f13c\") " Jan 27 19:14:37 crc kubenswrapper[4853]: I0127 19:14:37.727895 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/bbb5fe03-6098-4e03-ab85-5a28e090f13c-telemetry-combined-ca-bundle\") pod \"bbb5fe03-6098-4e03-ab85-5a28e090f13c\" (UID: \"bbb5fe03-6098-4e03-ab85-5a28e090f13c\") " Jan 27 19:14:37 crc kubenswrapper[4853]: I0127 19:14:37.727939 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bbb5fe03-6098-4e03-ab85-5a28e090f13c-libvirt-combined-ca-bundle\") pod \"bbb5fe03-6098-4e03-ab85-5a28e090f13c\" (UID: \"bbb5fe03-6098-4e03-ab85-5a28e090f13c\") " Jan 27 19:14:37 crc kubenswrapper[4853]: I0127 19:14:37.727987 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bbb5fe03-6098-4e03-ab85-5a28e090f13c-nova-combined-ca-bundle\") pod \"bbb5fe03-6098-4e03-ab85-5a28e090f13c\" (UID: \"bbb5fe03-6098-4e03-ab85-5a28e090f13c\") " Jan 27 19:14:37 crc kubenswrapper[4853]: I0127 19:14:37.728015 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/bbb5fe03-6098-4e03-ab85-5a28e090f13c-inventory\") pod \"bbb5fe03-6098-4e03-ab85-5a28e090f13c\" (UID: \"bbb5fe03-6098-4e03-ab85-5a28e090f13c\") " Jan 27 19:14:37 crc kubenswrapper[4853]: I0127 19:14:37.728041 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bbb5fe03-6098-4e03-ab85-5a28e090f13c-bootstrap-combined-ca-bundle\") pod \"bbb5fe03-6098-4e03-ab85-5a28e090f13c\" (UID: \"bbb5fe03-6098-4e03-ab85-5a28e090f13c\") " Jan 27 19:14:37 crc kubenswrapper[4853]: I0127 19:14:37.728108 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bbb5fe03-6098-4e03-ab85-5a28e090f13c-ovn-combined-ca-bundle\") pod \"bbb5fe03-6098-4e03-ab85-5a28e090f13c\" (UID: \"bbb5fe03-6098-4e03-ab85-5a28e090f13c\") " Jan 27 19:14:37 crc kubenswrapper[4853]: I0127 19:14:37.734338 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bbb5fe03-6098-4e03-ab85-5a28e090f13c-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "bbb5fe03-6098-4e03-ab85-5a28e090f13c" (UID: "bbb5fe03-6098-4e03-ab85-5a28e090f13c"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 19:14:37 crc kubenswrapper[4853]: I0127 19:14:37.734436 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bbb5fe03-6098-4e03-ab85-5a28e090f13c-openstack-edpm-ipam-neutron-metadata-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-neutron-metadata-default-certs-0") pod "bbb5fe03-6098-4e03-ab85-5a28e090f13c" (UID: "bbb5fe03-6098-4e03-ab85-5a28e090f13c"). InnerVolumeSpecName "openstack-edpm-ipam-neutron-metadata-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 19:14:37 crc kubenswrapper[4853]: I0127 19:14:37.734887 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bbb5fe03-6098-4e03-ab85-5a28e090f13c-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "bbb5fe03-6098-4e03-ab85-5a28e090f13c" (UID: "bbb5fe03-6098-4e03-ab85-5a28e090f13c"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 19:14:37 crc kubenswrapper[4853]: I0127 19:14:37.734943 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bbb5fe03-6098-4e03-ab85-5a28e090f13c-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "bbb5fe03-6098-4e03-ab85-5a28e090f13c" (UID: "bbb5fe03-6098-4e03-ab85-5a28e090f13c"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 19:14:37 crc kubenswrapper[4853]: I0127 19:14:37.735150 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bbb5fe03-6098-4e03-ab85-5a28e090f13c-kube-api-access-bptbz" (OuterVolumeSpecName: "kube-api-access-bptbz") pod "bbb5fe03-6098-4e03-ab85-5a28e090f13c" (UID: "bbb5fe03-6098-4e03-ab85-5a28e090f13c"). InnerVolumeSpecName "kube-api-access-bptbz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 19:14:37 crc kubenswrapper[4853]: I0127 19:14:37.735572 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bbb5fe03-6098-4e03-ab85-5a28e090f13c-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "bbb5fe03-6098-4e03-ab85-5a28e090f13c" (UID: "bbb5fe03-6098-4e03-ab85-5a28e090f13c"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 19:14:37 crc kubenswrapper[4853]: I0127 19:14:37.736783 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bbb5fe03-6098-4e03-ab85-5a28e090f13c-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "bbb5fe03-6098-4e03-ab85-5a28e090f13c" (UID: "bbb5fe03-6098-4e03-ab85-5a28e090f13c"). InnerVolumeSpecName "telemetry-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 19:14:37 crc kubenswrapper[4853]: I0127 19:14:37.737185 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bbb5fe03-6098-4e03-ab85-5a28e090f13c-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "bbb5fe03-6098-4e03-ab85-5a28e090f13c" (UID: "bbb5fe03-6098-4e03-ab85-5a28e090f13c"). InnerVolumeSpecName "nova-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 19:14:37 crc kubenswrapper[4853]: I0127 19:14:37.737303 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bbb5fe03-6098-4e03-ab85-5a28e090f13c-openstack-edpm-ipam-telemetry-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-telemetry-default-certs-0") pod "bbb5fe03-6098-4e03-ab85-5a28e090f13c" (UID: "bbb5fe03-6098-4e03-ab85-5a28e090f13c"). InnerVolumeSpecName "openstack-edpm-ipam-telemetry-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 19:14:37 crc kubenswrapper[4853]: I0127 19:14:37.737824 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bbb5fe03-6098-4e03-ab85-5a28e090f13c-openstack-edpm-ipam-libvirt-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-libvirt-default-certs-0") pod "bbb5fe03-6098-4e03-ab85-5a28e090f13c" (UID: "bbb5fe03-6098-4e03-ab85-5a28e090f13c"). InnerVolumeSpecName "openstack-edpm-ipam-libvirt-default-certs-0". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 19:14:37 crc kubenswrapper[4853]: I0127 19:14:37.741240 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bbb5fe03-6098-4e03-ab85-5a28e090f13c-openstack-edpm-ipam-ovn-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-ovn-default-certs-0") pod "bbb5fe03-6098-4e03-ab85-5a28e090f13c" (UID: "bbb5fe03-6098-4e03-ab85-5a28e090f13c"). InnerVolumeSpecName "openstack-edpm-ipam-ovn-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 19:14:37 crc kubenswrapper[4853]: I0127 19:14:37.741265 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bbb5fe03-6098-4e03-ab85-5a28e090f13c-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "bbb5fe03-6098-4e03-ab85-5a28e090f13c" (UID: "bbb5fe03-6098-4e03-ab85-5a28e090f13c"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 19:14:37 crc kubenswrapper[4853]: I0127 19:14:37.762783 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bbb5fe03-6098-4e03-ab85-5a28e090f13c-inventory" (OuterVolumeSpecName: "inventory") pod "bbb5fe03-6098-4e03-ab85-5a28e090f13c" (UID: "bbb5fe03-6098-4e03-ab85-5a28e090f13c"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 19:14:37 crc kubenswrapper[4853]: I0127 19:14:37.778481 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bbb5fe03-6098-4e03-ab85-5a28e090f13c-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "bbb5fe03-6098-4e03-ab85-5a28e090f13c" (UID: "bbb5fe03-6098-4e03-ab85-5a28e090f13c"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 19:14:37 crc kubenswrapper[4853]: I0127 19:14:37.831043 4853 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bbb5fe03-6098-4e03-ab85-5a28e090f13c-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 19:14:37 crc kubenswrapper[4853]: I0127 19:14:37.831086 4853 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/bbb5fe03-6098-4e03-ab85-5a28e090f13c-openstack-edpm-ipam-telemetry-default-certs-0\") on node \"crc\" DevicePath \"\"" Jan 27 19:14:37 crc kubenswrapper[4853]: I0127 19:14:37.831145 4853 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/bbb5fe03-6098-4e03-ab85-5a28e090f13c-openstack-edpm-ipam-neutron-metadata-default-certs-0\") on node \"crc\" DevicePath \"\"" Jan 27 19:14:37 crc kubenswrapper[4853]: I0127 19:14:37.831157 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bptbz\" (UniqueName: \"kubernetes.io/projected/bbb5fe03-6098-4e03-ab85-5a28e090f13c-kube-api-access-bptbz\") on node \"crc\" DevicePath \"\"" Jan 27 19:14:37 crc kubenswrapper[4853]: I0127 19:14:37.831169 4853 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/bbb5fe03-6098-4e03-ab85-5a28e090f13c-openstack-edpm-ipam-ovn-default-certs-0\") on node \"crc\" DevicePath \"\"" Jan 27 19:14:37 crc kubenswrapper[4853]: I0127 19:14:37.831182 4853 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bbb5fe03-6098-4e03-ab85-5a28e090f13c-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 19:14:37 crc kubenswrapper[4853]: I0127 19:14:37.831195 4853 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bbb5fe03-6098-4e03-ab85-5a28e090f13c-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 19:14:37 crc kubenswrapper[4853]: I0127 19:14:37.831207 4853 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/bbb5fe03-6098-4e03-ab85-5a28e090f13c-openstack-edpm-ipam-libvirt-default-certs-0\") on node \"crc\" DevicePath \"\"" Jan 27 19:14:37 crc kubenswrapper[4853]: I0127 19:14:37.831218 4853 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/bbb5fe03-6098-4e03-ab85-5a28e090f13c-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 27 19:14:37 crc kubenswrapper[4853]: I0127 19:14:37.831227 4853 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bbb5fe03-6098-4e03-ab85-5a28e090f13c-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 19:14:37 crc kubenswrapper[4853]: I0127 19:14:37.831235 4853 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bbb5fe03-6098-4e03-ab85-5a28e090f13c-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 19:14:37 crc kubenswrapper[4853]: I0127 19:14:37.831243 4853 reconciler_common.go:293] "Volume detached for volume 
\"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bbb5fe03-6098-4e03-ab85-5a28e090f13c-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 19:14:37 crc kubenswrapper[4853]: I0127 19:14:37.831252 4853 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/bbb5fe03-6098-4e03-ab85-5a28e090f13c-inventory\") on node \"crc\" DevicePath \"\"" Jan 27 19:14:37 crc kubenswrapper[4853]: I0127 19:14:37.831259 4853 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bbb5fe03-6098-4e03-ab85-5a28e090f13c-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 19:14:38 crc kubenswrapper[4853]: I0127 19:14:38.187073 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-s4g5b" event={"ID":"bbb5fe03-6098-4e03-ab85-5a28e090f13c","Type":"ContainerDied","Data":"6140468b609daad7769812b37f67091381ba4d310a8192218d44387f8c94ebb1"} Jan 27 19:14:38 crc kubenswrapper[4853]: I0127 19:14:38.187174 4853 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6140468b609daad7769812b37f67091381ba4d310a8192218d44387f8c94ebb1" Jan 27 19:14:38 crc kubenswrapper[4853]: I0127 19:14:38.187198 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-s4g5b" Jan 27 19:14:38 crc kubenswrapper[4853]: I0127 19:14:38.282692 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-49xvk"] Jan 27 19:14:38 crc kubenswrapper[4853]: E0127 19:14:38.283216 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bbb5fe03-6098-4e03-ab85-5a28e090f13c" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Jan 27 19:14:38 crc kubenswrapper[4853]: I0127 19:14:38.283243 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="bbb5fe03-6098-4e03-ab85-5a28e090f13c" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Jan 27 19:14:38 crc kubenswrapper[4853]: I0127 19:14:38.283551 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="bbb5fe03-6098-4e03-ab85-5a28e090f13c" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Jan 27 19:14:38 crc kubenswrapper[4853]: I0127 19:14:38.284272 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-49xvk" Jan 27 19:14:38 crc kubenswrapper[4853]: I0127 19:14:38.288883 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 27 19:14:38 crc kubenswrapper[4853]: I0127 19:14:38.289616 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-wn48z" Jan 27 19:14:38 crc kubenswrapper[4853]: I0127 19:14:38.289660 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 27 19:14:38 crc kubenswrapper[4853]: I0127 19:14:38.290010 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 27 19:14:38 crc kubenswrapper[4853]: I0127 19:14:38.290361 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-config" Jan 27 19:14:38 crc kubenswrapper[4853]: I0127 19:14:38.295024 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-49xvk"] Jan 27 19:14:38 crc kubenswrapper[4853]: I0127 19:14:38.441432 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a8e00930-5920-4f3f-9f05-62da3fdcdd88-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-49xvk\" (UID: \"a8e00930-5920-4f3f-9f05-62da3fdcdd88\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-49xvk" Jan 27 19:14:38 crc kubenswrapper[4853]: I0127 19:14:38.441501 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/a8e00930-5920-4f3f-9f05-62da3fdcdd88-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-49xvk\" (UID: \"a8e00930-5920-4f3f-9f05-62da3fdcdd88\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-49xvk" Jan 27 19:14:38 crc kubenswrapper[4853]: I0127 19:14:38.441757 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a8e00930-5920-4f3f-9f05-62da3fdcdd88-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-49xvk\" (UID: \"a8e00930-5920-4f3f-9f05-62da3fdcdd88\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-49xvk" Jan 27 19:14:38 crc kubenswrapper[4853]: I0127 19:14:38.441932 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pwtnq\" (UniqueName: \"kubernetes.io/projected/a8e00930-5920-4f3f-9f05-62da3fdcdd88-kube-api-access-pwtnq\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-49xvk\" (UID: \"a8e00930-5920-4f3f-9f05-62da3fdcdd88\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-49xvk" Jan 27 19:14:38 crc kubenswrapper[4853]: I0127 19:14:38.442056 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a8e00930-5920-4f3f-9f05-62da3fdcdd88-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-49xvk\" (UID: \"a8e00930-5920-4f3f-9f05-62da3fdcdd88\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-49xvk" Jan 27 19:14:38 crc kubenswrapper[4853]: I0127 19:14:38.543951 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a8e00930-5920-4f3f-9f05-62da3fdcdd88-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-49xvk\" (UID: \"a8e00930-5920-4f3f-9f05-62da3fdcdd88\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-49xvk" Jan 27 19:14:38 crc kubenswrapper[4853]: I0127 19:14:38.544335 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/a8e00930-5920-4f3f-9f05-62da3fdcdd88-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-49xvk\" (UID: \"a8e00930-5920-4f3f-9f05-62da3fdcdd88\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-49xvk" Jan 27 19:14:38 crc kubenswrapper[4853]: I0127 19:14:38.544427 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a8e00930-5920-4f3f-9f05-62da3fdcdd88-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-49xvk\" (UID: \"a8e00930-5920-4f3f-9f05-62da3fdcdd88\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-49xvk" Jan 27 19:14:38 crc kubenswrapper[4853]: I0127 19:14:38.544508 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pwtnq\" (UniqueName: \"kubernetes.io/projected/a8e00930-5920-4f3f-9f05-62da3fdcdd88-kube-api-access-pwtnq\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-49xvk\" (UID: \"a8e00930-5920-4f3f-9f05-62da3fdcdd88\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-49xvk" Jan 27 19:14:38 crc kubenswrapper[4853]: I0127 19:14:38.544571 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a8e00930-5920-4f3f-9f05-62da3fdcdd88-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-49xvk\" (UID: \"a8e00930-5920-4f3f-9f05-62da3fdcdd88\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-49xvk" Jan 27 19:14:38 crc kubenswrapper[4853]: I0127 19:14:38.545252 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/a8e00930-5920-4f3f-9f05-62da3fdcdd88-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-49xvk\" (UID: \"a8e00930-5920-4f3f-9f05-62da3fdcdd88\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-49xvk" Jan 27 19:14:38 crc kubenswrapper[4853]: I0127 19:14:38.548011 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a8e00930-5920-4f3f-9f05-62da3fdcdd88-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-49xvk\" (UID: \"a8e00930-5920-4f3f-9f05-62da3fdcdd88\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-49xvk" Jan 27 19:14:38 crc kubenswrapper[4853]: I0127 19:14:38.548610 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a8e00930-5920-4f3f-9f05-62da3fdcdd88-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-49xvk\" (UID: \"a8e00930-5920-4f3f-9f05-62da3fdcdd88\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-49xvk" Jan 27 19:14:38 crc kubenswrapper[4853]: I0127 19:14:38.549181 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/a8e00930-5920-4f3f-9f05-62da3fdcdd88-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-49xvk\" (UID: \"a8e00930-5920-4f3f-9f05-62da3fdcdd88\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-49xvk" Jan 27 19:14:38 crc kubenswrapper[4853]: I0127 19:14:38.561961 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pwtnq\" (UniqueName: \"kubernetes.io/projected/a8e00930-5920-4f3f-9f05-62da3fdcdd88-kube-api-access-pwtnq\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-49xvk\" (UID: \"a8e00930-5920-4f3f-9f05-62da3fdcdd88\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-49xvk" Jan 27 19:14:38 crc kubenswrapper[4853]: I0127 19:14:38.601865 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-49xvk" Jan 27 19:14:39 crc kubenswrapper[4853]: I0127 19:14:39.109303 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-49xvk"] Jan 27 19:14:39 crc kubenswrapper[4853]: I0127 19:14:39.196866 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-49xvk" event={"ID":"a8e00930-5920-4f3f-9f05-62da3fdcdd88","Type":"ContainerStarted","Data":"549584e85f58c8b015c41c19e74290f5fb63dc7959b4e4c46d0bf1b41eeb72a6"} Jan 27 19:14:40 crc kubenswrapper[4853]: E0127 19:14:40.124027 4853 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbbb5fe03_6098_4e03_ab85_5a28e090f13c.slice\": RecentStats: unable to find data in memory cache]" Jan 27 19:14:40 crc kubenswrapper[4853]: I0127 19:14:40.206804 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-49xvk" event={"ID":"a8e00930-5920-4f3f-9f05-62da3fdcdd88","Type":"ContainerStarted","Data":"4df0a7c24e359b941bfeb8c9f248f2254b41b29cd92a6b43d82ba0a1a0747653"} Jan 27 19:14:40 crc kubenswrapper[4853]: I0127 19:14:40.230663 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-49xvk" podStartSLOduration=1.6761038080000001 podStartE2EDuration="2.230642996s" podCreationTimestamp="2026-01-27 19:14:38 +0000 UTC" firstStartedPulling="2026-01-27 19:14:39.116101644 +0000 UTC m=+1921.578644527" lastFinishedPulling="2026-01-27 19:14:39.670640832 +0000 UTC m=+1922.133183715" observedRunningTime="2026-01-27 19:14:40.221897993 +0000 UTC m=+1922.684440886" watchObservedRunningTime="2026-01-27 19:14:40.230642996 +0000 UTC m=+1922.693185879" Jan 27 19:14:50 crc kubenswrapper[4853]: E0127 19:14:50.356672 4853 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbbb5fe03_6098_4e03_ab85_5a28e090f13c.slice\": RecentStats: unable to find data in memory cache]" Jan 27 19:15:00 crc kubenswrapper[4853]: I0127 19:15:00.147292 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492355-ghvdq"] Jan 27 19:15:00 crc kubenswrapper[4853]: I0127 19:15:00.150682 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492355-ghvdq" Jan 27 19:15:00 crc kubenswrapper[4853]: I0127 19:15:00.153112 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 27 19:15:00 crc kubenswrapper[4853]: I0127 19:15:00.157742 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492355-ghvdq"] Jan 27 19:15:00 crc kubenswrapper[4853]: I0127 19:15:00.159848 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 27 19:15:00 crc kubenswrapper[4853]: I0127 19:15:00.279134 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c4d7699f-899e-4206-bf04-a8c097c52d9e-config-volume\") pod \"collect-profiles-29492355-ghvdq\" (UID: \"c4d7699f-899e-4206-bf04-a8c097c52d9e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492355-ghvdq" Jan 27 19:15:00 crc kubenswrapper[4853]: I0127 19:15:00.279718 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fr7q5\" (UniqueName: \"kubernetes.io/projected/c4d7699f-899e-4206-bf04-a8c097c52d9e-kube-api-access-fr7q5\") pod \"collect-profiles-29492355-ghvdq\" (UID: \"c4d7699f-899e-4206-bf04-a8c097c52d9e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492355-ghvdq" Jan 27 19:15:00 crc kubenswrapper[4853]: I0127 19:15:00.279885 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c4d7699f-899e-4206-bf04-a8c097c52d9e-secret-volume\") pod \"collect-profiles-29492355-ghvdq\" (UID: \"c4d7699f-899e-4206-bf04-a8c097c52d9e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492355-ghvdq" Jan 27 19:15:00 crc kubenswrapper[4853]: I0127 19:15:00.381816 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fr7q5\" (UniqueName: \"kubernetes.io/projected/c4d7699f-899e-4206-bf04-a8c097c52d9e-kube-api-access-fr7q5\") pod \"collect-profiles-29492355-ghvdq\" (UID: \"c4d7699f-899e-4206-bf04-a8c097c52d9e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492355-ghvdq" Jan 27 19:15:00 crc kubenswrapper[4853]: I0127 19:15:00.381919 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c4d7699f-899e-4206-bf04-a8c097c52d9e-secret-volume\") pod \"collect-profiles-29492355-ghvdq\" (UID: \"c4d7699f-899e-4206-bf04-a8c097c52d9e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492355-ghvdq" Jan 27 19:15:00 crc kubenswrapper[4853]: I0127 19:15:00.381993 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c4d7699f-899e-4206-bf04-a8c097c52d9e-config-volume\") pod \"collect-profiles-29492355-ghvdq\" (UID: \"c4d7699f-899e-4206-bf04-a8c097c52d9e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492355-ghvdq" Jan 27 19:15:00 crc kubenswrapper[4853]: I0127 19:15:00.383304 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c4d7699f-899e-4206-bf04-a8c097c52d9e-config-volume\") pod 
\"collect-profiles-29492355-ghvdq\" (UID: \"c4d7699f-899e-4206-bf04-a8c097c52d9e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492355-ghvdq" Jan 27 19:15:00 crc kubenswrapper[4853]: I0127 19:15:00.392725 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c4d7699f-899e-4206-bf04-a8c097c52d9e-secret-volume\") pod \"collect-profiles-29492355-ghvdq\" (UID: \"c4d7699f-899e-4206-bf04-a8c097c52d9e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492355-ghvdq" Jan 27 19:15:00 crc kubenswrapper[4853]: I0127 19:15:00.411923 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fr7q5\" (UniqueName: \"kubernetes.io/projected/c4d7699f-899e-4206-bf04-a8c097c52d9e-kube-api-access-fr7q5\") pod \"collect-profiles-29492355-ghvdq\" (UID: \"c4d7699f-899e-4206-bf04-a8c097c52d9e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492355-ghvdq" Jan 27 19:15:00 crc kubenswrapper[4853]: I0127 19:15:00.480389 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492355-ghvdq" Jan 27 19:15:00 crc kubenswrapper[4853]: E0127 19:15:00.636535 4853 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbbb5fe03_6098_4e03_ab85_5a28e090f13c.slice\": RecentStats: unable to find data in memory cache]" Jan 27 19:15:00 crc kubenswrapper[4853]: I0127 19:15:00.936493 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492355-ghvdq"] Jan 27 19:15:01 crc kubenswrapper[4853]: I0127 19:15:01.396495 4853 generic.go:334] "Generic (PLEG): container finished" podID="c4d7699f-899e-4206-bf04-a8c097c52d9e" containerID="070382a5f686c1eed5efed57a1820698f8dff6a672337ed453792e3cf7cdf030" exitCode=0 Jan 27 19:15:01 crc kubenswrapper[4853]: I0127 19:15:01.396676 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29492355-ghvdq" event={"ID":"c4d7699f-899e-4206-bf04-a8c097c52d9e","Type":"ContainerDied","Data":"070382a5f686c1eed5efed57a1820698f8dff6a672337ed453792e3cf7cdf030"} Jan 27 19:15:01 crc kubenswrapper[4853]: I0127 19:15:01.396947 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29492355-ghvdq" event={"ID":"c4d7699f-899e-4206-bf04-a8c097c52d9e","Type":"ContainerStarted","Data":"c4e9dd59929f5b5472e75a5d1dee480066cb63244b41062fa9d8c983192434fb"} Jan 27 19:15:02 crc kubenswrapper[4853]: I0127 19:15:02.758764 4853 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492355-ghvdq" Jan 27 19:15:02 crc kubenswrapper[4853]: I0127 19:15:02.838805 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c4d7699f-899e-4206-bf04-a8c097c52d9e-config-volume\") pod \"c4d7699f-899e-4206-bf04-a8c097c52d9e\" (UID: \"c4d7699f-899e-4206-bf04-a8c097c52d9e\") " Jan 27 19:15:02 crc kubenswrapper[4853]: I0127 19:15:02.838929 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fr7q5\" (UniqueName: \"kubernetes.io/projected/c4d7699f-899e-4206-bf04-a8c097c52d9e-kube-api-access-fr7q5\") pod \"c4d7699f-899e-4206-bf04-a8c097c52d9e\" (UID: \"c4d7699f-899e-4206-bf04-a8c097c52d9e\") " Jan 27 19:15:02 crc kubenswrapper[4853]: I0127 19:15:02.838965 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c4d7699f-899e-4206-bf04-a8c097c52d9e-secret-volume\") pod \"c4d7699f-899e-4206-bf04-a8c097c52d9e\" (UID: \"c4d7699f-899e-4206-bf04-a8c097c52d9e\") " Jan 27 19:15:02 crc kubenswrapper[4853]: I0127 19:15:02.839332 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c4d7699f-899e-4206-bf04-a8c097c52d9e-config-volume" (OuterVolumeSpecName: "config-volume") pod "c4d7699f-899e-4206-bf04-a8c097c52d9e" (UID: "c4d7699f-899e-4206-bf04-a8c097c52d9e"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 19:15:02 crc kubenswrapper[4853]: I0127 19:15:02.839877 4853 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c4d7699f-899e-4206-bf04-a8c097c52d9e-config-volume\") on node \"crc\" DevicePath \"\"" Jan 27 19:15:02 crc kubenswrapper[4853]: I0127 19:15:02.844914 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c4d7699f-899e-4206-bf04-a8c097c52d9e-kube-api-access-fr7q5" (OuterVolumeSpecName: "kube-api-access-fr7q5") pod "c4d7699f-899e-4206-bf04-a8c097c52d9e" (UID: "c4d7699f-899e-4206-bf04-a8c097c52d9e"). InnerVolumeSpecName "kube-api-access-fr7q5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 19:15:02 crc kubenswrapper[4853]: I0127 19:15:02.845348 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c4d7699f-899e-4206-bf04-a8c097c52d9e-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "c4d7699f-899e-4206-bf04-a8c097c52d9e" (UID: "c4d7699f-899e-4206-bf04-a8c097c52d9e"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 19:15:02 crc kubenswrapper[4853]: I0127 19:15:02.942008 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fr7q5\" (UniqueName: \"kubernetes.io/projected/c4d7699f-899e-4206-bf04-a8c097c52d9e-kube-api-access-fr7q5\") on node \"crc\" DevicePath \"\"" Jan 27 19:15:02 crc kubenswrapper[4853]: I0127 19:15:02.942047 4853 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c4d7699f-899e-4206-bf04-a8c097c52d9e-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 27 19:15:03 crc kubenswrapper[4853]: I0127 19:15:03.438291 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29492355-ghvdq" event={"ID":"c4d7699f-899e-4206-bf04-a8c097c52d9e","Type":"ContainerDied","Data":"c4e9dd59929f5b5472e75a5d1dee480066cb63244b41062fa9d8c983192434fb"} Jan 27 19:15:03 crc kubenswrapper[4853]: I0127 19:15:03.438372 4853 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c4e9dd59929f5b5472e75a5d1dee480066cb63244b41062fa9d8c983192434fb" Jan 27 19:15:03 crc kubenswrapper[4853]: I0127 19:15:03.438459 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492355-ghvdq" Jan 27 19:15:10 crc kubenswrapper[4853]: E0127 19:15:10.856607 4853 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbbb5fe03_6098_4e03_ab85_5a28e090f13c.slice\": RecentStats: unable to find data in memory cache]" Jan 27 19:15:21 crc kubenswrapper[4853]: E0127 19:15:21.064591 4853 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbbb5fe03_6098_4e03_ab85_5a28e090f13c.slice\": RecentStats: unable to find data in memory cache]" Jan 27 19:15:22 crc kubenswrapper[4853]: I0127 19:15:22.860740 4853 scope.go:117] "RemoveContainer" containerID="3254a38fe43f8e795a99f0b3f2c32bf188b8cd0d97ecf4da69b002b7b0fffa74" Jan 27 19:15:22 crc kubenswrapper[4853]: I0127 19:15:22.899506 4853 scope.go:117] "RemoveContainer" containerID="231c3a78d5e5192583eae0025515fdfb5c39802fa1bd56f5a143be367366351a" Jan 27 19:15:22 crc kubenswrapper[4853]: I0127 19:15:22.946625 4853 scope.go:117] "RemoveContainer" containerID="abf47df06e92bd2b2bf084a93c5544cdcc272b5d643fde299727fe534d55e50f" Jan 27 19:15:31 crc kubenswrapper[4853]: E0127 19:15:31.322323 4853 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbbb5fe03_6098_4e03_ab85_5a28e090f13c.slice\": RecentStats: unable to find data in memory cache]" Jan 27 19:15:35 crc kubenswrapper[4853]: I0127 19:15:35.541699 4853 patch_prober.go:28] interesting pod/machine-config-daemon-6gqj2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 19:15:35 crc kubenswrapper[4853]: I0127 19:15:35.542358 4853 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6gqj2" podUID="b8a89b1e-bef8-4cb7-930c-480d3125778c" 
containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 19:15:48 crc kubenswrapper[4853]: I0127 19:15:48.848485 4853 generic.go:334] "Generic (PLEG): container finished" podID="a8e00930-5920-4f3f-9f05-62da3fdcdd88" containerID="4df0a7c24e359b941bfeb8c9f248f2254b41b29cd92a6b43d82ba0a1a0747653" exitCode=0 Jan 27 19:15:48 crc kubenswrapper[4853]: I0127 19:15:48.848586 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-49xvk" event={"ID":"a8e00930-5920-4f3f-9f05-62da3fdcdd88","Type":"ContainerDied","Data":"4df0a7c24e359b941bfeb8c9f248f2254b41b29cd92a6b43d82ba0a1a0747653"} Jan 27 19:15:50 crc kubenswrapper[4853]: I0127 19:15:50.318800 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-49xvk" Jan 27 19:15:50 crc kubenswrapper[4853]: I0127 19:15:50.509746 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/a8e00930-5920-4f3f-9f05-62da3fdcdd88-ovncontroller-config-0\") pod \"a8e00930-5920-4f3f-9f05-62da3fdcdd88\" (UID: \"a8e00930-5920-4f3f-9f05-62da3fdcdd88\") " Jan 27 19:15:50 crc kubenswrapper[4853]: I0127 19:15:50.509877 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a8e00930-5920-4f3f-9f05-62da3fdcdd88-inventory\") pod \"a8e00930-5920-4f3f-9f05-62da3fdcdd88\" (UID: \"a8e00930-5920-4f3f-9f05-62da3fdcdd88\") " Jan 27 19:15:50 crc kubenswrapper[4853]: I0127 19:15:50.509932 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pwtnq\" (UniqueName: \"kubernetes.io/projected/a8e00930-5920-4f3f-9f05-62da3fdcdd88-kube-api-access-pwtnq\") pod \"a8e00930-5920-4f3f-9f05-62da3fdcdd88\" (UID: \"a8e00930-5920-4f3f-9f05-62da3fdcdd88\") " Jan 27 19:15:50 crc kubenswrapper[4853]: I0127 19:15:50.509970 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a8e00930-5920-4f3f-9f05-62da3fdcdd88-ovn-combined-ca-bundle\") pod \"a8e00930-5920-4f3f-9f05-62da3fdcdd88\" (UID: \"a8e00930-5920-4f3f-9f05-62da3fdcdd88\") " Jan 27 19:15:50 crc kubenswrapper[4853]: I0127 19:15:50.510051 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a8e00930-5920-4f3f-9f05-62da3fdcdd88-ssh-key-openstack-edpm-ipam\") pod \"a8e00930-5920-4f3f-9f05-62da3fdcdd88\" (UID: \"a8e00930-5920-4f3f-9f05-62da3fdcdd88\") " Jan 27 19:15:50 crc kubenswrapper[4853]: I0127 19:15:50.520357 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a8e00930-5920-4f3f-9f05-62da3fdcdd88-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "a8e00930-5920-4f3f-9f05-62da3fdcdd88" (UID: "a8e00930-5920-4f3f-9f05-62da3fdcdd88"). InnerVolumeSpecName "ovn-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 19:15:50 crc kubenswrapper[4853]: I0127 19:15:50.520672 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a8e00930-5920-4f3f-9f05-62da3fdcdd88-kube-api-access-pwtnq" (OuterVolumeSpecName: "kube-api-access-pwtnq") pod "a8e00930-5920-4f3f-9f05-62da3fdcdd88" (UID: "a8e00930-5920-4f3f-9f05-62da3fdcdd88"). InnerVolumeSpecName "kube-api-access-pwtnq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 19:15:50 crc kubenswrapper[4853]: I0127 19:15:50.545800 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a8e00930-5920-4f3f-9f05-62da3fdcdd88-ovncontroller-config-0" (OuterVolumeSpecName: "ovncontroller-config-0") pod "a8e00930-5920-4f3f-9f05-62da3fdcdd88" (UID: "a8e00930-5920-4f3f-9f05-62da3fdcdd88"). InnerVolumeSpecName "ovncontroller-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 19:15:50 crc kubenswrapper[4853]: I0127 19:15:50.548711 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a8e00930-5920-4f3f-9f05-62da3fdcdd88-inventory" (OuterVolumeSpecName: "inventory") pod "a8e00930-5920-4f3f-9f05-62da3fdcdd88" (UID: "a8e00930-5920-4f3f-9f05-62da3fdcdd88"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 19:15:50 crc kubenswrapper[4853]: I0127 19:15:50.552064 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a8e00930-5920-4f3f-9f05-62da3fdcdd88-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "a8e00930-5920-4f3f-9f05-62da3fdcdd88" (UID: "a8e00930-5920-4f3f-9f05-62da3fdcdd88"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 19:15:50 crc kubenswrapper[4853]: I0127 19:15:50.612761 4853 reconciler_common.go:293] "Volume detached for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/a8e00930-5920-4f3f-9f05-62da3fdcdd88-ovncontroller-config-0\") on node \"crc\" DevicePath \"\"" Jan 27 19:15:50 crc kubenswrapper[4853]: I0127 19:15:50.612820 4853 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a8e00930-5920-4f3f-9f05-62da3fdcdd88-inventory\") on node \"crc\" DevicePath \"\"" Jan 27 19:15:50 crc kubenswrapper[4853]: I0127 19:15:50.612837 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pwtnq\" (UniqueName: \"kubernetes.io/projected/a8e00930-5920-4f3f-9f05-62da3fdcdd88-kube-api-access-pwtnq\") on node \"crc\" DevicePath \"\"" Jan 27 19:15:50 crc kubenswrapper[4853]: I0127 19:15:50.612848 4853 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a8e00930-5920-4f3f-9f05-62da3fdcdd88-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 19:15:50 crc kubenswrapper[4853]: I0127 19:15:50.612858 4853 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a8e00930-5920-4f3f-9f05-62da3fdcdd88-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 27 19:15:50 crc kubenswrapper[4853]: I0127 19:15:50.926919 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-49xvk" event={"ID":"a8e00930-5920-4f3f-9f05-62da3fdcdd88","Type":"ContainerDied","Data":"549584e85f58c8b015c41c19e74290f5fb63dc7959b4e4c46d0bf1b41eeb72a6"} Jan 27 19:15:50 crc kubenswrapper[4853]: I0127 19:15:50.926983 4853 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-49xvk" Jan 27 19:15:50 crc kubenswrapper[4853]: I0127 19:15:50.926996 4853 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="549584e85f58c8b015c41c19e74290f5fb63dc7959b4e4c46d0bf1b41eeb72a6" Jan 27 19:15:51 crc kubenswrapper[4853]: I0127 19:15:51.052824 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-bkndk"] Jan 27 19:15:51 crc kubenswrapper[4853]: E0127 19:15:51.053536 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c4d7699f-899e-4206-bf04-a8c097c52d9e" containerName="collect-profiles" Jan 27 19:15:51 crc kubenswrapper[4853]: I0127 19:15:51.053572 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="c4d7699f-899e-4206-bf04-a8c097c52d9e" containerName="collect-profiles" Jan 27 19:15:51 crc kubenswrapper[4853]: E0127 19:15:51.053636 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a8e00930-5920-4f3f-9f05-62da3fdcdd88" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Jan 27 19:15:51 crc kubenswrapper[4853]: I0127 19:15:51.053656 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="a8e00930-5920-4f3f-9f05-62da3fdcdd88" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Jan 27 19:15:51 crc kubenswrapper[4853]: I0127 19:15:51.054080 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="a8e00930-5920-4f3f-9f05-62da3fdcdd88" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Jan 27 19:15:51 crc kubenswrapper[4853]: I0127 19:15:51.054153 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="c4d7699f-899e-4206-bf04-a8c097c52d9e" containerName="collect-profiles" Jan 27 19:15:51 crc kubenswrapper[4853]: I0127 19:15:51.055494 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-bkndk" Jan 27 19:15:51 crc kubenswrapper[4853]: I0127 19:15:51.058034 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 27 19:15:51 crc kubenswrapper[4853]: I0127 19:15:51.058447 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-neutron-config" Jan 27 19:15:51 crc kubenswrapper[4853]: I0127 19:15:51.059314 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 27 19:15:51 crc kubenswrapper[4853]: I0127 19:15:51.059836 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-wn48z" Jan 27 19:15:51 crc kubenswrapper[4853]: I0127 19:15:51.060160 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 27 19:15:51 crc kubenswrapper[4853]: I0127 19:15:51.065595 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-bkndk"] Jan 27 19:15:51 crc kubenswrapper[4853]: I0127 19:15:51.262412 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-ovn-metadata-agent-neutron-config" Jan 27 19:15:51 crc kubenswrapper[4853]: I0127 19:15:51.265484 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/149036fd-39f5-4bd0-a585-f495af3a55d1-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-bkndk\" (UID: \"149036fd-39f5-4bd0-a585-f495af3a55d1\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-bkndk" Jan 27 19:15:51 crc kubenswrapper[4853]: I0127 19:15:51.265653 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/149036fd-39f5-4bd0-a585-f495af3a55d1-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-bkndk\" (UID: \"149036fd-39f5-4bd0-a585-f495af3a55d1\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-bkndk" Jan 27 19:15:51 crc kubenswrapper[4853]: I0127 19:15:51.265707 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jqxhh\" (UniqueName: \"kubernetes.io/projected/149036fd-39f5-4bd0-a585-f495af3a55d1-kube-api-access-jqxhh\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-bkndk\" (UID: \"149036fd-39f5-4bd0-a585-f495af3a55d1\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-bkndk" Jan 27 19:15:51 crc kubenswrapper[4853]: I0127 19:15:51.266254 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/149036fd-39f5-4bd0-a585-f495af3a55d1-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-bkndk\" (UID: \"149036fd-39f5-4bd0-a585-f495af3a55d1\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-bkndk" Jan 27 19:15:51 crc kubenswrapper[4853]: I0127 19:15:51.266605 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/149036fd-39f5-4bd0-a585-f495af3a55d1-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-bkndk\" (UID: \"149036fd-39f5-4bd0-a585-f495af3a55d1\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-bkndk" Jan 27 19:15:51 crc kubenswrapper[4853]: I0127 19:15:51.266940 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/149036fd-39f5-4bd0-a585-f495af3a55d1-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-bkndk\" (UID: \"149036fd-39f5-4bd0-a585-f495af3a55d1\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-bkndk" Jan 27 19:15:51 crc kubenswrapper[4853]: I0127 19:15:51.367951 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/149036fd-39f5-4bd0-a585-f495af3a55d1-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-bkndk\" (UID: \"149036fd-39f5-4bd0-a585-f495af3a55d1\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-bkndk" Jan 27 19:15:51 crc kubenswrapper[4853]: I0127 19:15:51.368019 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/149036fd-39f5-4bd0-a585-f495af3a55d1-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-bkndk\" (UID: \"149036fd-39f5-4bd0-a585-f495af3a55d1\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-bkndk" Jan 27 19:15:51 crc kubenswrapper[4853]: I0127 19:15:51.368063 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/149036fd-39f5-4bd0-a585-f495af3a55d1-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-bkndk\" (UID: \"149036fd-39f5-4bd0-a585-f495af3a55d1\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-bkndk" Jan 27 19:15:51 crc kubenswrapper[4853]: I0127 19:15:51.368093 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jqxhh\" (UniqueName: \"kubernetes.io/projected/149036fd-39f5-4bd0-a585-f495af3a55d1-kube-api-access-jqxhh\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-bkndk\" (UID: \"149036fd-39f5-4bd0-a585-f495af3a55d1\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-bkndk" Jan 27 19:15:51 crc kubenswrapper[4853]: I0127 19:15:51.368171 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/149036fd-39f5-4bd0-a585-f495af3a55d1-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-bkndk\" (UID: \"149036fd-39f5-4bd0-a585-f495af3a55d1\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-bkndk" Jan 27 19:15:51 crc kubenswrapper[4853]: I0127 19:15:51.368212 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/149036fd-39f5-4bd0-a585-f495af3a55d1-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-bkndk\" (UID: 
\"149036fd-39f5-4bd0-a585-f495af3a55d1\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-bkndk" Jan 27 19:15:51 crc kubenswrapper[4853]: I0127 19:15:51.373535 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/149036fd-39f5-4bd0-a585-f495af3a55d1-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-bkndk\" (UID: \"149036fd-39f5-4bd0-a585-f495af3a55d1\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-bkndk" Jan 27 19:15:51 crc kubenswrapper[4853]: I0127 19:15:51.374780 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/149036fd-39f5-4bd0-a585-f495af3a55d1-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-bkndk\" (UID: \"149036fd-39f5-4bd0-a585-f495af3a55d1\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-bkndk" Jan 27 19:15:51 crc kubenswrapper[4853]: I0127 19:15:51.374952 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/149036fd-39f5-4bd0-a585-f495af3a55d1-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-bkndk\" (UID: \"149036fd-39f5-4bd0-a585-f495af3a55d1\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-bkndk" Jan 27 19:15:51 crc kubenswrapper[4853]: I0127 19:15:51.375876 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/149036fd-39f5-4bd0-a585-f495af3a55d1-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-bkndk\" (UID: \"149036fd-39f5-4bd0-a585-f495af3a55d1\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-bkndk" Jan 27 19:15:51 crc kubenswrapper[4853]: I0127 19:15:51.376798 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/149036fd-39f5-4bd0-a585-f495af3a55d1-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-bkndk\" (UID: \"149036fd-39f5-4bd0-a585-f495af3a55d1\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-bkndk" Jan 27 19:15:51 crc kubenswrapper[4853]: I0127 19:15:51.388092 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jqxhh\" (UniqueName: \"kubernetes.io/projected/149036fd-39f5-4bd0-a585-f495af3a55d1-kube-api-access-jqxhh\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-bkndk\" (UID: \"149036fd-39f5-4bd0-a585-f495af3a55d1\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-bkndk" Jan 27 19:15:51 crc kubenswrapper[4853]: I0127 19:15:51.578867 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-bkndk" Jan 27 19:15:52 crc kubenswrapper[4853]: I0127 19:15:52.091641 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-bkndk"] Jan 27 19:15:52 crc kubenswrapper[4853]: I0127 19:15:52.098895 4853 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 27 19:15:52 crc kubenswrapper[4853]: I0127 19:15:52.953540 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-bkndk" event={"ID":"149036fd-39f5-4bd0-a585-f495af3a55d1","Type":"ContainerStarted","Data":"88e59605d63ec45f71eecefa41daaa98146d35494019a6f9d6466640fea14f83"} Jan 27 19:15:52 crc kubenswrapper[4853]: I0127 19:15:52.954087 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-bkndk" event={"ID":"149036fd-39f5-4bd0-a585-f495af3a55d1","Type":"ContainerStarted","Data":"3619b4e4a9c4fee4c40fa27577af50d47f6505ae0451f930888d2118bc2aa5cc"} Jan 27 19:15:52 crc kubenswrapper[4853]: I0127 19:15:52.991832 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-bkndk" podStartSLOduration=1.590338335 podStartE2EDuration="1.991801549s" podCreationTimestamp="2026-01-27 19:15:51 +0000 UTC" firstStartedPulling="2026-01-27 19:15:52.098678354 +0000 UTC m=+1994.561221237" lastFinishedPulling="2026-01-27 19:15:52.500141568 +0000 UTC m=+1994.962684451" observedRunningTime="2026-01-27 19:15:52.984224403 +0000 UTC m=+1995.446767286" watchObservedRunningTime="2026-01-27 19:15:52.991801549 +0000 UTC m=+1995.454344432" Jan 27 19:16:05 crc kubenswrapper[4853]: I0127 19:16:05.541755 4853 patch_prober.go:28] interesting pod/machine-config-daemon-6gqj2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 19:16:05 crc kubenswrapper[4853]: I0127 19:16:05.542384 4853 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6gqj2" podUID="b8a89b1e-bef8-4cb7-930c-480d3125778c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 19:16:35 crc kubenswrapper[4853]: I0127 19:16:35.541334 4853 patch_prober.go:28] interesting pod/machine-config-daemon-6gqj2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 19:16:35 crc kubenswrapper[4853]: I0127 19:16:35.541928 4853 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6gqj2" podUID="b8a89b1e-bef8-4cb7-930c-480d3125778c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 19:16:35 crc kubenswrapper[4853]: I0127 19:16:35.541980 4853 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-6gqj2" Jan 27 19:16:35 crc kubenswrapper[4853]: I0127 
Jan 27 19:16:35 crc kubenswrapper[4853]: I0127 19:16:35.542866 4853 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"9860de66bebfe136e22c65a35152f3d07b6273ddbf8d7d2c458f3a87d024b6f1"} pod="openshift-machine-config-operator/machine-config-daemon-6gqj2" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 27 19:16:35 crc kubenswrapper[4853]: I0127 19:16:35.542954 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-6gqj2" podUID="b8a89b1e-bef8-4cb7-930c-480d3125778c" containerName="machine-config-daemon" containerID="cri-o://9860de66bebfe136e22c65a35152f3d07b6273ddbf8d7d2c458f3a87d024b6f1" gracePeriod=600
Jan 27 19:16:36 crc kubenswrapper[4853]: I0127 19:16:36.378374 4853 generic.go:334] "Generic (PLEG): container finished" podID="b8a89b1e-bef8-4cb7-930c-480d3125778c" containerID="9860de66bebfe136e22c65a35152f3d07b6273ddbf8d7d2c458f3a87d024b6f1" exitCode=0
Jan 27 19:16:36 crc kubenswrapper[4853]: I0127 19:16:36.378421 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6gqj2" event={"ID":"b8a89b1e-bef8-4cb7-930c-480d3125778c","Type":"ContainerDied","Data":"9860de66bebfe136e22c65a35152f3d07b6273ddbf8d7d2c458f3a87d024b6f1"}
Jan 27 19:16:36 crc kubenswrapper[4853]: I0127 19:16:36.379252 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6gqj2" event={"ID":"b8a89b1e-bef8-4cb7-930c-480d3125778c","Type":"ContainerStarted","Data":"0cb3bd4a1cb8b018f21dc73938efa92b8095f4623789c8832e2d36d3293fa158"}
Jan 27 19:16:36 crc kubenswrapper[4853]: I0127 19:16:36.379301 4853 scope.go:117] "RemoveContainer" containerID="9543b1260c6f66dc5f3c337d6a1dde47109f864fb66397b4f5b0356952eb44b1"
Jan 27 19:16:45 crc kubenswrapper[4853]: I0127 19:16:45.487782 4853 generic.go:334] "Generic (PLEG): container finished" podID="149036fd-39f5-4bd0-a585-f495af3a55d1" containerID="88e59605d63ec45f71eecefa41daaa98146d35494019a6f9d6466640fea14f83" exitCode=0
Jan 27 19:16:45 crc kubenswrapper[4853]: I0127 19:16:45.488454 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-bkndk" event={"ID":"149036fd-39f5-4bd0-a585-f495af3a55d1","Type":"ContainerDied","Data":"88e59605d63ec45f71eecefa41daaa98146d35494019a6f9d6466640fea14f83"}
Jan 27 19:16:46 crc kubenswrapper[4853]: I0127 19:16:46.948045 4853 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-bkndk" Jan 27 19:16:47 crc kubenswrapper[4853]: I0127 19:16:47.003772 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/149036fd-39f5-4bd0-a585-f495af3a55d1-neutron-ovn-metadata-agent-neutron-config-0\") pod \"149036fd-39f5-4bd0-a585-f495af3a55d1\" (UID: \"149036fd-39f5-4bd0-a585-f495af3a55d1\") " Jan 27 19:16:47 crc kubenswrapper[4853]: I0127 19:16:47.003851 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/149036fd-39f5-4bd0-a585-f495af3a55d1-ssh-key-openstack-edpm-ipam\") pod \"149036fd-39f5-4bd0-a585-f495af3a55d1\" (UID: \"149036fd-39f5-4bd0-a585-f495af3a55d1\") " Jan 27 19:16:47 crc kubenswrapper[4853]: I0127 19:16:47.003903 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/149036fd-39f5-4bd0-a585-f495af3a55d1-neutron-metadata-combined-ca-bundle\") pod \"149036fd-39f5-4bd0-a585-f495af3a55d1\" (UID: \"149036fd-39f5-4bd0-a585-f495af3a55d1\") " Jan 27 19:16:47 crc kubenswrapper[4853]: I0127 19:16:47.004102 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/149036fd-39f5-4bd0-a585-f495af3a55d1-inventory\") pod \"149036fd-39f5-4bd0-a585-f495af3a55d1\" (UID: \"149036fd-39f5-4bd0-a585-f495af3a55d1\") " Jan 27 19:16:47 crc kubenswrapper[4853]: I0127 19:16:47.004190 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jqxhh\" (UniqueName: \"kubernetes.io/projected/149036fd-39f5-4bd0-a585-f495af3a55d1-kube-api-access-jqxhh\") pod \"149036fd-39f5-4bd0-a585-f495af3a55d1\" (UID: \"149036fd-39f5-4bd0-a585-f495af3a55d1\") " Jan 27 19:16:47 crc kubenswrapper[4853]: I0127 19:16:47.004401 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/149036fd-39f5-4bd0-a585-f495af3a55d1-nova-metadata-neutron-config-0\") pod \"149036fd-39f5-4bd0-a585-f495af3a55d1\" (UID: \"149036fd-39f5-4bd0-a585-f495af3a55d1\") " Jan 27 19:16:47 crc kubenswrapper[4853]: I0127 19:16:47.011550 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/149036fd-39f5-4bd0-a585-f495af3a55d1-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "149036fd-39f5-4bd0-a585-f495af3a55d1" (UID: "149036fd-39f5-4bd0-a585-f495af3a55d1"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 19:16:47 crc kubenswrapper[4853]: I0127 19:16:47.012251 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/149036fd-39f5-4bd0-a585-f495af3a55d1-kube-api-access-jqxhh" (OuterVolumeSpecName: "kube-api-access-jqxhh") pod "149036fd-39f5-4bd0-a585-f495af3a55d1" (UID: "149036fd-39f5-4bd0-a585-f495af3a55d1"). InnerVolumeSpecName "kube-api-access-jqxhh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 19:16:47 crc kubenswrapper[4853]: I0127 19:16:47.034391 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/149036fd-39f5-4bd0-a585-f495af3a55d1-inventory" (OuterVolumeSpecName: "inventory") pod "149036fd-39f5-4bd0-a585-f495af3a55d1" (UID: "149036fd-39f5-4bd0-a585-f495af3a55d1"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 19:16:47 crc kubenswrapper[4853]: I0127 19:16:47.038917 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/149036fd-39f5-4bd0-a585-f495af3a55d1-neutron-ovn-metadata-agent-neutron-config-0" (OuterVolumeSpecName: "neutron-ovn-metadata-agent-neutron-config-0") pod "149036fd-39f5-4bd0-a585-f495af3a55d1" (UID: "149036fd-39f5-4bd0-a585-f495af3a55d1"). InnerVolumeSpecName "neutron-ovn-metadata-agent-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 19:16:47 crc kubenswrapper[4853]: I0127 19:16:47.045704 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/149036fd-39f5-4bd0-a585-f495af3a55d1-nova-metadata-neutron-config-0" (OuterVolumeSpecName: "nova-metadata-neutron-config-0") pod "149036fd-39f5-4bd0-a585-f495af3a55d1" (UID: "149036fd-39f5-4bd0-a585-f495af3a55d1"). InnerVolumeSpecName "nova-metadata-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 19:16:47 crc kubenswrapper[4853]: I0127 19:16:47.050531 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/149036fd-39f5-4bd0-a585-f495af3a55d1-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "149036fd-39f5-4bd0-a585-f495af3a55d1" (UID: "149036fd-39f5-4bd0-a585-f495af3a55d1"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 19:16:47 crc kubenswrapper[4853]: I0127 19:16:47.109101 4853 reconciler_common.go:293] "Volume detached for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/149036fd-39f5-4bd0-a585-f495af3a55d1-neutron-ovn-metadata-agent-neutron-config-0\") on node \"crc\" DevicePath \"\"" Jan 27 19:16:47 crc kubenswrapper[4853]: I0127 19:16:47.109155 4853 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/149036fd-39f5-4bd0-a585-f495af3a55d1-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 27 19:16:47 crc kubenswrapper[4853]: I0127 19:16:47.109167 4853 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/149036fd-39f5-4bd0-a585-f495af3a55d1-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 19:16:47 crc kubenswrapper[4853]: I0127 19:16:47.109252 4853 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/149036fd-39f5-4bd0-a585-f495af3a55d1-inventory\") on node \"crc\" DevicePath \"\"" Jan 27 19:16:47 crc kubenswrapper[4853]: I0127 19:16:47.109279 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jqxhh\" (UniqueName: \"kubernetes.io/projected/149036fd-39f5-4bd0-a585-f495af3a55d1-kube-api-access-jqxhh\") on node \"crc\" DevicePath \"\"" Jan 27 19:16:47 crc kubenswrapper[4853]: I0127 19:16:47.109292 4853 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/149036fd-39f5-4bd0-a585-f495af3a55d1-nova-metadata-neutron-config-0\") on node \"crc\" DevicePath \"\"" Jan 27 19:16:47 crc kubenswrapper[4853]: I0127 19:16:47.511653 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-bkndk" event={"ID":"149036fd-39f5-4bd0-a585-f495af3a55d1","Type":"ContainerDied","Data":"3619b4e4a9c4fee4c40fa27577af50d47f6505ae0451f930888d2118bc2aa5cc"} Jan 27 19:16:47 crc kubenswrapper[4853]: I0127 19:16:47.511703 4853 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3619b4e4a9c4fee4c40fa27577af50d47f6505ae0451f930888d2118bc2aa5cc" Jan 27 19:16:47 crc kubenswrapper[4853]: I0127 19:16:47.511776 4853 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-bkndk" Jan 27 19:16:47 crc kubenswrapper[4853]: I0127 19:16:47.652880 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-hk5l9"] Jan 27 19:16:47 crc kubenswrapper[4853]: E0127 19:16:47.653472 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="149036fd-39f5-4bd0-a585-f495af3a55d1" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Jan 27 19:16:47 crc kubenswrapper[4853]: I0127 19:16:47.653493 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="149036fd-39f5-4bd0-a585-f495af3a55d1" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Jan 27 19:16:47 crc kubenswrapper[4853]: I0127 19:16:47.653699 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="149036fd-39f5-4bd0-a585-f495af3a55d1" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Jan 27 19:16:47 crc kubenswrapper[4853]: I0127 19:16:47.654421 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-hk5l9" Jan 27 19:16:47 crc kubenswrapper[4853]: I0127 19:16:47.656892 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"libvirt-secret" Jan 27 19:16:47 crc kubenswrapper[4853]: I0127 19:16:47.656903 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 27 19:16:47 crc kubenswrapper[4853]: I0127 19:16:47.657210 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-wn48z" Jan 27 19:16:47 crc kubenswrapper[4853]: I0127 19:16:47.658197 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 27 19:16:47 crc kubenswrapper[4853]: I0127 19:16:47.658412 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 27 19:16:47 crc kubenswrapper[4853]: I0127 19:16:47.672981 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-hk5l9"] Jan 27 19:16:47 crc kubenswrapper[4853]: I0127 19:16:47.722478 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/91e90160-3a76-416b-a3e6-cf5d105f892d-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-hk5l9\" (UID: \"91e90160-3a76-416b-a3e6-cf5d105f892d\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-hk5l9" Jan 27 19:16:47 crc kubenswrapper[4853]: I0127 19:16:47.722553 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/91e90160-3a76-416b-a3e6-cf5d105f892d-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-hk5l9\" (UID: \"91e90160-3a76-416b-a3e6-cf5d105f892d\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-hk5l9" Jan 27 19:16:47 crc kubenswrapper[4853]: I0127 19:16:47.722892 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jll49\" (UniqueName: \"kubernetes.io/projected/91e90160-3a76-416b-a3e6-cf5d105f892d-kube-api-access-jll49\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-hk5l9\" (UID: 
\"91e90160-3a76-416b-a3e6-cf5d105f892d\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-hk5l9" Jan 27 19:16:47 crc kubenswrapper[4853]: I0127 19:16:47.723080 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/91e90160-3a76-416b-a3e6-cf5d105f892d-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-hk5l9\" (UID: \"91e90160-3a76-416b-a3e6-cf5d105f892d\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-hk5l9" Jan 27 19:16:47 crc kubenswrapper[4853]: I0127 19:16:47.723192 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/91e90160-3a76-416b-a3e6-cf5d105f892d-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-hk5l9\" (UID: \"91e90160-3a76-416b-a3e6-cf5d105f892d\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-hk5l9" Jan 27 19:16:47 crc kubenswrapper[4853]: I0127 19:16:47.825909 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/91e90160-3a76-416b-a3e6-cf5d105f892d-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-hk5l9\" (UID: \"91e90160-3a76-416b-a3e6-cf5d105f892d\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-hk5l9" Jan 27 19:16:47 crc kubenswrapper[4853]: I0127 19:16:47.825989 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/91e90160-3a76-416b-a3e6-cf5d105f892d-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-hk5l9\" (UID: \"91e90160-3a76-416b-a3e6-cf5d105f892d\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-hk5l9" Jan 27 19:16:47 crc kubenswrapper[4853]: I0127 19:16:47.826140 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/91e90160-3a76-416b-a3e6-cf5d105f892d-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-hk5l9\" (UID: \"91e90160-3a76-416b-a3e6-cf5d105f892d\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-hk5l9" Jan 27 19:16:47 crc kubenswrapper[4853]: I0127 19:16:47.826196 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/91e90160-3a76-416b-a3e6-cf5d105f892d-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-hk5l9\" (UID: \"91e90160-3a76-416b-a3e6-cf5d105f892d\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-hk5l9" Jan 27 19:16:47 crc kubenswrapper[4853]: I0127 19:16:47.826267 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jll49\" (UniqueName: \"kubernetes.io/projected/91e90160-3a76-416b-a3e6-cf5d105f892d-kube-api-access-jll49\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-hk5l9\" (UID: \"91e90160-3a76-416b-a3e6-cf5d105f892d\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-hk5l9" Jan 27 19:16:47 crc kubenswrapper[4853]: I0127 19:16:47.832973 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/91e90160-3a76-416b-a3e6-cf5d105f892d-ssh-key-openstack-edpm-ipam\") pod 
\"libvirt-edpm-deployment-openstack-edpm-ipam-hk5l9\" (UID: \"91e90160-3a76-416b-a3e6-cf5d105f892d\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-hk5l9" Jan 27 19:16:47 crc kubenswrapper[4853]: I0127 19:16:47.833114 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/91e90160-3a76-416b-a3e6-cf5d105f892d-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-hk5l9\" (UID: \"91e90160-3a76-416b-a3e6-cf5d105f892d\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-hk5l9" Jan 27 19:16:47 crc kubenswrapper[4853]: I0127 19:16:47.835008 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/91e90160-3a76-416b-a3e6-cf5d105f892d-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-hk5l9\" (UID: \"91e90160-3a76-416b-a3e6-cf5d105f892d\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-hk5l9" Jan 27 19:16:47 crc kubenswrapper[4853]: I0127 19:16:47.835200 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/91e90160-3a76-416b-a3e6-cf5d105f892d-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-hk5l9\" (UID: \"91e90160-3a76-416b-a3e6-cf5d105f892d\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-hk5l9" Jan 27 19:16:47 crc kubenswrapper[4853]: I0127 19:16:47.847094 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jll49\" (UniqueName: \"kubernetes.io/projected/91e90160-3a76-416b-a3e6-cf5d105f892d-kube-api-access-jll49\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-hk5l9\" (UID: \"91e90160-3a76-416b-a3e6-cf5d105f892d\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-hk5l9" Jan 27 19:16:47 crc kubenswrapper[4853]: I0127 19:16:47.993230 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-hk5l9" Jan 27 19:16:48 crc kubenswrapper[4853]: W0127 19:16:48.632503 4853 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod91e90160_3a76_416b_a3e6_cf5d105f892d.slice/crio-ebe39af7ef5e6ee3aaf702488b0bb903a8107ed8fbf1118e97380ad8cc7eaf46 WatchSource:0}: Error finding container ebe39af7ef5e6ee3aaf702488b0bb903a8107ed8fbf1118e97380ad8cc7eaf46: Status 404 returned error can't find the container with id ebe39af7ef5e6ee3aaf702488b0bb903a8107ed8fbf1118e97380ad8cc7eaf46 Jan 27 19:16:48 crc kubenswrapper[4853]: I0127 19:16:48.633971 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-hk5l9"] Jan 27 19:16:49 crc kubenswrapper[4853]: I0127 19:16:49.529938 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-hk5l9" event={"ID":"91e90160-3a76-416b-a3e6-cf5d105f892d","Type":"ContainerStarted","Data":"4f46879679eea66c2c2628a5eb5c6f4af745d588373cae0ad4518dea1d2f3791"} Jan 27 19:16:49 crc kubenswrapper[4853]: I0127 19:16:49.530256 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-hk5l9" event={"ID":"91e90160-3a76-416b-a3e6-cf5d105f892d","Type":"ContainerStarted","Data":"ebe39af7ef5e6ee3aaf702488b0bb903a8107ed8fbf1118e97380ad8cc7eaf46"} Jan 27 19:16:49 crc kubenswrapper[4853]: I0127 19:16:49.562683 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-hk5l9" podStartSLOduration=2.080628464 podStartE2EDuration="2.562662191s" podCreationTimestamp="2026-01-27 19:16:47 +0000 UTC" firstStartedPulling="2026-01-27 19:16:48.634948128 +0000 UTC m=+2051.097491011" lastFinishedPulling="2026-01-27 19:16:49.116981855 +0000 UTC m=+2051.579524738" observedRunningTime="2026-01-27 19:16:49.557090992 +0000 UTC m=+2052.019633875" watchObservedRunningTime="2026-01-27 19:16:49.562662191 +0000 UTC m=+2052.025205074" Jan 27 19:18:35 crc kubenswrapper[4853]: I0127 19:18:35.541618 4853 patch_prober.go:28] interesting pod/machine-config-daemon-6gqj2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 19:18:35 crc kubenswrapper[4853]: I0127 19:18:35.542221 4853 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6gqj2" podUID="b8a89b1e-bef8-4cb7-930c-480d3125778c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 19:19:05 crc kubenswrapper[4853]: I0127 19:19:05.541748 4853 patch_prober.go:28] interesting pod/machine-config-daemon-6gqj2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 19:19:05 crc kubenswrapper[4853]: I0127 19:19:05.542285 4853 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6gqj2" podUID="b8a89b1e-bef8-4cb7-930c-480d3125778c" containerName="machine-config-daemon" probeResult="failure" output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 19:19:35 crc kubenswrapper[4853]: I0127 19:19:35.541260 4853 patch_prober.go:28] interesting pod/machine-config-daemon-6gqj2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 19:19:35 crc kubenswrapper[4853]: I0127 19:19:35.541865 4853 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6gqj2" podUID="b8a89b1e-bef8-4cb7-930c-480d3125778c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 19:19:35 crc kubenswrapper[4853]: I0127 19:19:35.541918 4853 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-6gqj2" Jan 27 19:19:35 crc kubenswrapper[4853]: I0127 19:19:35.542774 4853 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"0cb3bd4a1cb8b018f21dc73938efa92b8095f4623789c8832e2d36d3293fa158"} pod="openshift-machine-config-operator/machine-config-daemon-6gqj2" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 27 19:19:35 crc kubenswrapper[4853]: I0127 19:19:35.542833 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-6gqj2" podUID="b8a89b1e-bef8-4cb7-930c-480d3125778c" containerName="machine-config-daemon" containerID="cri-o://0cb3bd4a1cb8b018f21dc73938efa92b8095f4623789c8832e2d36d3293fa158" gracePeriod=600 Jan 27 19:19:35 crc kubenswrapper[4853]: E0127 19:19:35.675523 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6gqj2_openshift-machine-config-operator(b8a89b1e-bef8-4cb7-930c-480d3125778c)\"" pod="openshift-machine-config-operator/machine-config-daemon-6gqj2" podUID="b8a89b1e-bef8-4cb7-930c-480d3125778c" Jan 27 19:19:36 crc kubenswrapper[4853]: I0127 19:19:36.506020 4853 generic.go:334] "Generic (PLEG): container finished" podID="b8a89b1e-bef8-4cb7-930c-480d3125778c" containerID="0cb3bd4a1cb8b018f21dc73938efa92b8095f4623789c8832e2d36d3293fa158" exitCode=0 Jan 27 19:19:36 crc kubenswrapper[4853]: I0127 19:19:36.506084 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6gqj2" event={"ID":"b8a89b1e-bef8-4cb7-930c-480d3125778c","Type":"ContainerDied","Data":"0cb3bd4a1cb8b018f21dc73938efa92b8095f4623789c8832e2d36d3293fa158"} Jan 27 19:19:36 crc kubenswrapper[4853]: I0127 19:19:36.506156 4853 scope.go:117] "RemoveContainer" containerID="9860de66bebfe136e22c65a35152f3d07b6273ddbf8d7d2c458f3a87d024b6f1" Jan 27 19:19:36 crc kubenswrapper[4853]: I0127 19:19:36.506897 4853 scope.go:117] "RemoveContainer" containerID="0cb3bd4a1cb8b018f21dc73938efa92b8095f4623789c8832e2d36d3293fa158" Jan 27 19:19:36 crc kubenswrapper[4853]: E0127 19:19:36.507214 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
Jan 27 19:19:36 crc kubenswrapper[4853]: I0127 19:19:36.506020 4853 generic.go:334] "Generic (PLEG): container finished" podID="b8a89b1e-bef8-4cb7-930c-480d3125778c" containerID="0cb3bd4a1cb8b018f21dc73938efa92b8095f4623789c8832e2d36d3293fa158" exitCode=0
Jan 27 19:19:36 crc kubenswrapper[4853]: I0127 19:19:36.506084 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6gqj2" event={"ID":"b8a89b1e-bef8-4cb7-930c-480d3125778c","Type":"ContainerDied","Data":"0cb3bd4a1cb8b018f21dc73938efa92b8095f4623789c8832e2d36d3293fa158"}
Jan 27 19:19:36 crc kubenswrapper[4853]: I0127 19:19:36.506156 4853 scope.go:117] "RemoveContainer" containerID="9860de66bebfe136e22c65a35152f3d07b6273ddbf8d7d2c458f3a87d024b6f1"
Jan 27 19:19:36 crc kubenswrapper[4853]: I0127 19:19:36.506897 4853 scope.go:117] "RemoveContainer" containerID="0cb3bd4a1cb8b018f21dc73938efa92b8095f4623789c8832e2d36d3293fa158"
Jan 27 19:19:36 crc kubenswrapper[4853]: E0127 19:19:36.507214 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6gqj2_openshift-machine-config-operator(b8a89b1e-bef8-4cb7-930c-480d3125778c)\"" pod="openshift-machine-config-operator/machine-config-daemon-6gqj2" podUID="b8a89b1e-bef8-4cb7-930c-480d3125778c"
Jan 27 19:19:38 crc kubenswrapper[4853]: I0127 19:19:38.326033 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-6lw4h"]
Jan 27 19:19:38 crc kubenswrapper[4853]: I0127 19:19:38.329397 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6lw4h"
Jan 27 19:19:38 crc kubenswrapper[4853]: I0127 19:19:38.340700 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-6lw4h"]
Jan 27 19:19:38 crc kubenswrapper[4853]: I0127 19:19:38.414432 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h2gqm\" (UniqueName: \"kubernetes.io/projected/e3d0232d-7675-4e7a-a916-e855f575e899-kube-api-access-h2gqm\") pod \"redhat-marketplace-6lw4h\" (UID: \"e3d0232d-7675-4e7a-a916-e855f575e899\") " pod="openshift-marketplace/redhat-marketplace-6lw4h"
Jan 27 19:19:38 crc kubenswrapper[4853]: I0127 19:19:38.414760 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e3d0232d-7675-4e7a-a916-e855f575e899-utilities\") pod \"redhat-marketplace-6lw4h\" (UID: \"e3d0232d-7675-4e7a-a916-e855f575e899\") " pod="openshift-marketplace/redhat-marketplace-6lw4h"
Jan 27 19:19:38 crc kubenswrapper[4853]: I0127 19:19:38.414876 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e3d0232d-7675-4e7a-a916-e855f575e899-catalog-content\") pod \"redhat-marketplace-6lw4h\" (UID: \"e3d0232d-7675-4e7a-a916-e855f575e899\") " pod="openshift-marketplace/redhat-marketplace-6lw4h"
Jan 27 19:19:38 crc kubenswrapper[4853]: I0127 19:19:38.517221 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e3d0232d-7675-4e7a-a916-e855f575e899-utilities\") pod \"redhat-marketplace-6lw4h\" (UID: \"e3d0232d-7675-4e7a-a916-e855f575e899\") " pod="openshift-marketplace/redhat-marketplace-6lw4h"
Jan 27 19:19:38 crc kubenswrapper[4853]: I0127 19:19:38.517314 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e3d0232d-7675-4e7a-a916-e855f575e899-catalog-content\") pod \"redhat-marketplace-6lw4h\" (UID: \"e3d0232d-7675-4e7a-a916-e855f575e899\") " pod="openshift-marketplace/redhat-marketplace-6lw4h"
Jan 27 19:19:38 crc kubenswrapper[4853]: I0127 19:19:38.517873 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e3d0232d-7675-4e7a-a916-e855f575e899-utilities\") pod \"redhat-marketplace-6lw4h\" (UID: \"e3d0232d-7675-4e7a-a916-e855f575e899\") " pod="openshift-marketplace/redhat-marketplace-6lw4h"
Jan 27 19:19:38 crc kubenswrapper[4853]: I0127 19:19:38.518255 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e3d0232d-7675-4e7a-a916-e855f575e899-catalog-content\") pod \"redhat-marketplace-6lw4h\" (UID: \"e3d0232d-7675-4e7a-a916-e855f575e899\") " pod="openshift-marketplace/redhat-marketplace-6lw4h"
pod="openshift-marketplace/redhat-marketplace-6lw4h" Jan 27 19:19:38 crc kubenswrapper[4853]: I0127 19:19:38.518381 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h2gqm\" (UniqueName: \"kubernetes.io/projected/e3d0232d-7675-4e7a-a916-e855f575e899-kube-api-access-h2gqm\") pod \"redhat-marketplace-6lw4h\" (UID: \"e3d0232d-7675-4e7a-a916-e855f575e899\") " pod="openshift-marketplace/redhat-marketplace-6lw4h" Jan 27 19:19:38 crc kubenswrapper[4853]: I0127 19:19:38.539901 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h2gqm\" (UniqueName: \"kubernetes.io/projected/e3d0232d-7675-4e7a-a916-e855f575e899-kube-api-access-h2gqm\") pod \"redhat-marketplace-6lw4h\" (UID: \"e3d0232d-7675-4e7a-a916-e855f575e899\") " pod="openshift-marketplace/redhat-marketplace-6lw4h" Jan 27 19:19:38 crc kubenswrapper[4853]: I0127 19:19:38.702060 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6lw4h" Jan 27 19:19:39 crc kubenswrapper[4853]: I0127 19:19:39.164201 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-6lw4h"] Jan 27 19:19:39 crc kubenswrapper[4853]: I0127 19:19:39.536371 4853 generic.go:334] "Generic (PLEG): container finished" podID="e3d0232d-7675-4e7a-a916-e855f575e899" containerID="32cd134050f04253914b9e8aeb9697bbc80df4311cb03315d75ed05b08d7e955" exitCode=0 Jan 27 19:19:39 crc kubenswrapper[4853]: I0127 19:19:39.536492 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6lw4h" event={"ID":"e3d0232d-7675-4e7a-a916-e855f575e899","Type":"ContainerDied","Data":"32cd134050f04253914b9e8aeb9697bbc80df4311cb03315d75ed05b08d7e955"} Jan 27 19:19:39 crc kubenswrapper[4853]: I0127 19:19:39.536883 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6lw4h" event={"ID":"e3d0232d-7675-4e7a-a916-e855f575e899","Type":"ContainerStarted","Data":"bc5a02fac564c967ab5cc3dad2f3b793de2575859b7bdae0e1d324b14e2b32cc"} Jan 27 19:19:40 crc kubenswrapper[4853]: I0127 19:19:40.547937 4853 generic.go:334] "Generic (PLEG): container finished" podID="e3d0232d-7675-4e7a-a916-e855f575e899" containerID="3214bd490c0980d3377c1ef62d60abc869aa95f0d30902c688960812e437053a" exitCode=0 Jan 27 19:19:40 crc kubenswrapper[4853]: I0127 19:19:40.548095 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6lw4h" event={"ID":"e3d0232d-7675-4e7a-a916-e855f575e899","Type":"ContainerDied","Data":"3214bd490c0980d3377c1ef62d60abc869aa95f0d30902c688960812e437053a"} Jan 27 19:19:41 crc kubenswrapper[4853]: I0127 19:19:41.562527 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6lw4h" event={"ID":"e3d0232d-7675-4e7a-a916-e855f575e899","Type":"ContainerStarted","Data":"ff2cafb6bb84b407802ebc3dc24dcc3f392bc0c54c8d76758754690fe82da8c9"} Jan 27 19:19:41 crc kubenswrapper[4853]: I0127 19:19:41.584426 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-6lw4h" podStartSLOduration=2.106695133 podStartE2EDuration="3.584399265s" podCreationTimestamp="2026-01-27 19:19:38 +0000 UTC" firstStartedPulling="2026-01-27 19:19:39.538202971 +0000 UTC m=+2222.000745854" lastFinishedPulling="2026-01-27 19:19:41.015907103 +0000 UTC m=+2223.478449986" observedRunningTime="2026-01-27 19:19:41.579930787 +0000 UTC 
m=+2224.042473670" watchObservedRunningTime="2026-01-27 19:19:41.584399265 +0000 UTC m=+2224.046942148" Jan 27 19:19:42 crc kubenswrapper[4853]: I0127 19:19:42.525369 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-gr7ww"] Jan 27 19:19:42 crc kubenswrapper[4853]: I0127 19:19:42.527996 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-gr7ww" Jan 27 19:19:42 crc kubenswrapper[4853]: I0127 19:19:42.537460 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-gr7ww"] Jan 27 19:19:42 crc kubenswrapper[4853]: I0127 19:19:42.603374 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5rkcn\" (UniqueName: \"kubernetes.io/projected/d00ebc83-759a-425b-b12a-acc53351bf91-kube-api-access-5rkcn\") pod \"community-operators-gr7ww\" (UID: \"d00ebc83-759a-425b-b12a-acc53351bf91\") " pod="openshift-marketplace/community-operators-gr7ww" Jan 27 19:19:42 crc kubenswrapper[4853]: I0127 19:19:42.604241 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d00ebc83-759a-425b-b12a-acc53351bf91-catalog-content\") pod \"community-operators-gr7ww\" (UID: \"d00ebc83-759a-425b-b12a-acc53351bf91\") " pod="openshift-marketplace/community-operators-gr7ww" Jan 27 19:19:42 crc kubenswrapper[4853]: I0127 19:19:42.604504 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d00ebc83-759a-425b-b12a-acc53351bf91-utilities\") pod \"community-operators-gr7ww\" (UID: \"d00ebc83-759a-425b-b12a-acc53351bf91\") " pod="openshift-marketplace/community-operators-gr7ww" Jan 27 19:19:42 crc kubenswrapper[4853]: I0127 19:19:42.706077 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d00ebc83-759a-425b-b12a-acc53351bf91-utilities\") pod \"community-operators-gr7ww\" (UID: \"d00ebc83-759a-425b-b12a-acc53351bf91\") " pod="openshift-marketplace/community-operators-gr7ww" Jan 27 19:19:42 crc kubenswrapper[4853]: I0127 19:19:42.706316 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5rkcn\" (UniqueName: \"kubernetes.io/projected/d00ebc83-759a-425b-b12a-acc53351bf91-kube-api-access-5rkcn\") pod \"community-operators-gr7ww\" (UID: \"d00ebc83-759a-425b-b12a-acc53351bf91\") " pod="openshift-marketplace/community-operators-gr7ww" Jan 27 19:19:42 crc kubenswrapper[4853]: I0127 19:19:42.706467 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d00ebc83-759a-425b-b12a-acc53351bf91-catalog-content\") pod \"community-operators-gr7ww\" (UID: \"d00ebc83-759a-425b-b12a-acc53351bf91\") " pod="openshift-marketplace/community-operators-gr7ww" Jan 27 19:19:42 crc kubenswrapper[4853]: I0127 19:19:42.706573 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d00ebc83-759a-425b-b12a-acc53351bf91-utilities\") pod \"community-operators-gr7ww\" (UID: \"d00ebc83-759a-425b-b12a-acc53351bf91\") " pod="openshift-marketplace/community-operators-gr7ww" Jan 27 19:19:42 crc kubenswrapper[4853]: I0127 19:19:42.706836 4853 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d00ebc83-759a-425b-b12a-acc53351bf91-catalog-content\") pod \"community-operators-gr7ww\" (UID: \"d00ebc83-759a-425b-b12a-acc53351bf91\") " pod="openshift-marketplace/community-operators-gr7ww" Jan 27 19:19:42 crc kubenswrapper[4853]: I0127 19:19:42.735143 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5rkcn\" (UniqueName: \"kubernetes.io/projected/d00ebc83-759a-425b-b12a-acc53351bf91-kube-api-access-5rkcn\") pod \"community-operators-gr7ww\" (UID: \"d00ebc83-759a-425b-b12a-acc53351bf91\") " pod="openshift-marketplace/community-operators-gr7ww" Jan 27 19:19:42 crc kubenswrapper[4853]: I0127 19:19:42.871664 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-gr7ww" Jan 27 19:19:43 crc kubenswrapper[4853]: W0127 19:19:43.451658 4853 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd00ebc83_759a_425b_b12a_acc53351bf91.slice/crio-419bc830393d0a49ad369bc53a48a872aca33015836b84edd0283bac2d65a509 WatchSource:0}: Error finding container 419bc830393d0a49ad369bc53a48a872aca33015836b84edd0283bac2d65a509: Status 404 returned error can't find the container with id 419bc830393d0a49ad369bc53a48a872aca33015836b84edd0283bac2d65a509 Jan 27 19:19:43 crc kubenswrapper[4853]: I0127 19:19:43.452199 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-gr7ww"] Jan 27 19:19:43 crc kubenswrapper[4853]: I0127 19:19:43.584873 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gr7ww" event={"ID":"d00ebc83-759a-425b-b12a-acc53351bf91","Type":"ContainerStarted","Data":"419bc830393d0a49ad369bc53a48a872aca33015836b84edd0283bac2d65a509"} Jan 27 19:19:44 crc kubenswrapper[4853]: I0127 19:19:44.594217 4853 generic.go:334] "Generic (PLEG): container finished" podID="d00ebc83-759a-425b-b12a-acc53351bf91" containerID="cf6fcd3ffd222e5a82e041b60cfb22dc2b3c322ad50c013b29c69a66868d3804" exitCode=0 Jan 27 19:19:44 crc kubenswrapper[4853]: I0127 19:19:44.594267 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gr7ww" event={"ID":"d00ebc83-759a-425b-b12a-acc53351bf91","Type":"ContainerDied","Data":"cf6fcd3ffd222e5a82e041b60cfb22dc2b3c322ad50c013b29c69a66868d3804"} Jan 27 19:19:45 crc kubenswrapper[4853]: I0127 19:19:45.609314 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gr7ww" event={"ID":"d00ebc83-759a-425b-b12a-acc53351bf91","Type":"ContainerStarted","Data":"e3712b50428dc3d43feb197eb9333f11571a2d116fef16b33ce7aee713543088"} Jan 27 19:19:46 crc kubenswrapper[4853]: I0127 19:19:46.643602 4853 generic.go:334] "Generic (PLEG): container finished" podID="d00ebc83-759a-425b-b12a-acc53351bf91" containerID="e3712b50428dc3d43feb197eb9333f11571a2d116fef16b33ce7aee713543088" exitCode=0 Jan 27 19:19:46 crc kubenswrapper[4853]: I0127 19:19:46.643709 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gr7ww" event={"ID":"d00ebc83-759a-425b-b12a-acc53351bf91","Type":"ContainerDied","Data":"e3712b50428dc3d43feb197eb9333f11571a2d116fef16b33ce7aee713543088"} Jan 27 19:19:47 crc kubenswrapper[4853]: I0127 19:19:47.674992 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/community-operators-gr7ww" event={"ID":"d00ebc83-759a-425b-b12a-acc53351bf91","Type":"ContainerStarted","Data":"becf3ea9bbf714a0d93146a2a9d750464cf9b33c72347182549acede2b140cb9"} Jan 27 19:19:47 crc kubenswrapper[4853]: I0127 19:19:47.709720 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-gr7ww" podStartSLOduration=3.037031505 podStartE2EDuration="5.709693269s" podCreationTimestamp="2026-01-27 19:19:42 +0000 UTC" firstStartedPulling="2026-01-27 19:19:44.595906411 +0000 UTC m=+2227.058449294" lastFinishedPulling="2026-01-27 19:19:47.268568135 +0000 UTC m=+2229.731111058" observedRunningTime="2026-01-27 19:19:47.698591452 +0000 UTC m=+2230.161134345" watchObservedRunningTime="2026-01-27 19:19:47.709693269 +0000 UTC m=+2230.172236152" Jan 27 19:19:48 crc kubenswrapper[4853]: I0127 19:19:48.522713 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-bd2rg"] Jan 27 19:19:48 crc kubenswrapper[4853]: I0127 19:19:48.530760 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-bd2rg" Jan 27 19:19:48 crc kubenswrapper[4853]: I0127 19:19:48.542614 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-bd2rg"] Jan 27 19:19:48 crc kubenswrapper[4853]: I0127 19:19:48.645825 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f8573bba-7981-4728-8763-283eb98e9332-utilities\") pod \"certified-operators-bd2rg\" (UID: \"f8573bba-7981-4728-8763-283eb98e9332\") " pod="openshift-marketplace/certified-operators-bd2rg" Jan 27 19:19:48 crc kubenswrapper[4853]: I0127 19:19:48.645950 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wzd42\" (UniqueName: \"kubernetes.io/projected/f8573bba-7981-4728-8763-283eb98e9332-kube-api-access-wzd42\") pod \"certified-operators-bd2rg\" (UID: \"f8573bba-7981-4728-8763-283eb98e9332\") " pod="openshift-marketplace/certified-operators-bd2rg" Jan 27 19:19:48 crc kubenswrapper[4853]: I0127 19:19:48.646011 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f8573bba-7981-4728-8763-283eb98e9332-catalog-content\") pod \"certified-operators-bd2rg\" (UID: \"f8573bba-7981-4728-8763-283eb98e9332\") " pod="openshift-marketplace/certified-operators-bd2rg" Jan 27 19:19:48 crc kubenswrapper[4853]: I0127 19:19:48.703234 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-6lw4h" Jan 27 19:19:48 crc kubenswrapper[4853]: I0127 19:19:48.703645 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-6lw4h" Jan 27 19:19:48 crc kubenswrapper[4853]: I0127 19:19:48.748149 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f8573bba-7981-4728-8763-283eb98e9332-utilities\") pod \"certified-operators-bd2rg\" (UID: \"f8573bba-7981-4728-8763-283eb98e9332\") " pod="openshift-marketplace/certified-operators-bd2rg" Jan 27 19:19:48 crc kubenswrapper[4853]: I0127 19:19:48.748269 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-wzd42\" (UniqueName: \"kubernetes.io/projected/f8573bba-7981-4728-8763-283eb98e9332-kube-api-access-wzd42\") pod \"certified-operators-bd2rg\" (UID: \"f8573bba-7981-4728-8763-283eb98e9332\") " pod="openshift-marketplace/certified-operators-bd2rg" Jan 27 19:19:48 crc kubenswrapper[4853]: I0127 19:19:48.748334 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f8573bba-7981-4728-8763-283eb98e9332-catalog-content\") pod \"certified-operators-bd2rg\" (UID: \"f8573bba-7981-4728-8763-283eb98e9332\") " pod="openshift-marketplace/certified-operators-bd2rg" Jan 27 19:19:48 crc kubenswrapper[4853]: I0127 19:19:48.748901 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f8573bba-7981-4728-8763-283eb98e9332-utilities\") pod \"certified-operators-bd2rg\" (UID: \"f8573bba-7981-4728-8763-283eb98e9332\") " pod="openshift-marketplace/certified-operators-bd2rg" Jan 27 19:19:48 crc kubenswrapper[4853]: I0127 19:19:48.748977 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f8573bba-7981-4728-8763-283eb98e9332-catalog-content\") pod \"certified-operators-bd2rg\" (UID: \"f8573bba-7981-4728-8763-283eb98e9332\") " pod="openshift-marketplace/certified-operators-bd2rg" Jan 27 19:19:48 crc kubenswrapper[4853]: I0127 19:19:48.757166 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-6lw4h" Jan 27 19:19:48 crc kubenswrapper[4853]: I0127 19:19:48.777523 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wzd42\" (UniqueName: \"kubernetes.io/projected/f8573bba-7981-4728-8763-283eb98e9332-kube-api-access-wzd42\") pod \"certified-operators-bd2rg\" (UID: \"f8573bba-7981-4728-8763-283eb98e9332\") " pod="openshift-marketplace/certified-operators-bd2rg" Jan 27 19:19:48 crc kubenswrapper[4853]: I0127 19:19:48.866146 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-bd2rg" Jan 27 19:19:49 crc kubenswrapper[4853]: I0127 19:19:49.485618 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-bd2rg"] Jan 27 19:19:49 crc kubenswrapper[4853]: W0127 19:19:49.503681 4853 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf8573bba_7981_4728_8763_283eb98e9332.slice/crio-77af47fcda8d69e72ecf4569c1ed5ac9640334861765a32f63c9d9f129cd0337 WatchSource:0}: Error finding container 77af47fcda8d69e72ecf4569c1ed5ac9640334861765a32f63c9d9f129cd0337: Status 404 returned error can't find the container with id 77af47fcda8d69e72ecf4569c1ed5ac9640334861765a32f63c9d9f129cd0337 Jan 27 19:19:49 crc kubenswrapper[4853]: I0127 19:19:49.696192 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bd2rg" event={"ID":"f8573bba-7981-4728-8763-283eb98e9332","Type":"ContainerStarted","Data":"77af47fcda8d69e72ecf4569c1ed5ac9640334861765a32f63c9d9f129cd0337"} Jan 27 19:19:49 crc kubenswrapper[4853]: I0127 19:19:49.748661 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-6lw4h" Jan 27 19:19:50 crc kubenswrapper[4853]: I0127 19:19:50.707540 4853 generic.go:334] "Generic (PLEG): container finished" podID="f8573bba-7981-4728-8763-283eb98e9332" containerID="5f5df0fc99d4255ccbdfc1a5b1b0e5331528958abbc7c4792a1bb7ba01d641ea" exitCode=0 Jan 27 19:19:50 crc kubenswrapper[4853]: I0127 19:19:50.707945 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bd2rg" event={"ID":"f8573bba-7981-4728-8763-283eb98e9332","Type":"ContainerDied","Data":"5f5df0fc99d4255ccbdfc1a5b1b0e5331528958abbc7c4792a1bb7ba01d641ea"} Jan 27 19:19:51 crc kubenswrapper[4853]: I0127 19:19:51.113004 4853 scope.go:117] "RemoveContainer" containerID="0cb3bd4a1cb8b018f21dc73938efa92b8095f4623789c8832e2d36d3293fa158" Jan 27 19:19:51 crc kubenswrapper[4853]: E0127 19:19:51.113415 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6gqj2_openshift-machine-config-operator(b8a89b1e-bef8-4cb7-930c-480d3125778c)\"" pod="openshift-machine-config-operator/machine-config-daemon-6gqj2" podUID="b8a89b1e-bef8-4cb7-930c-480d3125778c" Jan 27 19:19:51 crc kubenswrapper[4853]: I0127 19:19:51.505497 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-6lw4h"] Jan 27 19:19:51 crc kubenswrapper[4853]: I0127 19:19:51.725979 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bd2rg" event={"ID":"f8573bba-7981-4728-8763-283eb98e9332","Type":"ContainerStarted","Data":"bf3e11549e8c87dcfeda1ebc7f54047cf8590a0be5b33a841e60f0fcda8f80e9"} Jan 27 19:19:52 crc kubenswrapper[4853]: I0127 19:19:52.736789 4853 generic.go:334] "Generic (PLEG): container finished" podID="f8573bba-7981-4728-8763-283eb98e9332" containerID="bf3e11549e8c87dcfeda1ebc7f54047cf8590a0be5b33a841e60f0fcda8f80e9" exitCode=0 Jan 27 19:19:52 crc kubenswrapper[4853]: I0127 19:19:52.736969 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bd2rg" 
event={"ID":"f8573bba-7981-4728-8763-283eb98e9332","Type":"ContainerDied","Data":"bf3e11549e8c87dcfeda1ebc7f54047cf8590a0be5b33a841e60f0fcda8f80e9"} Jan 27 19:19:52 crc kubenswrapper[4853]: I0127 19:19:52.740036 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-6lw4h" podUID="e3d0232d-7675-4e7a-a916-e855f575e899" containerName="registry-server" containerID="cri-o://ff2cafb6bb84b407802ebc3dc24dcc3f392bc0c54c8d76758754690fe82da8c9" gracePeriod=2 Jan 27 19:19:52 crc kubenswrapper[4853]: I0127 19:19:52.871860 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-gr7ww" Jan 27 19:19:52 crc kubenswrapper[4853]: I0127 19:19:52.871931 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-gr7ww" Jan 27 19:19:52 crc kubenswrapper[4853]: I0127 19:19:52.930949 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-gr7ww" Jan 27 19:19:53 crc kubenswrapper[4853]: I0127 19:19:53.255260 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6lw4h" Jan 27 19:19:53 crc kubenswrapper[4853]: I0127 19:19:53.369212 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h2gqm\" (UniqueName: \"kubernetes.io/projected/e3d0232d-7675-4e7a-a916-e855f575e899-kube-api-access-h2gqm\") pod \"e3d0232d-7675-4e7a-a916-e855f575e899\" (UID: \"e3d0232d-7675-4e7a-a916-e855f575e899\") " Jan 27 19:19:53 crc kubenswrapper[4853]: I0127 19:19:53.369335 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e3d0232d-7675-4e7a-a916-e855f575e899-catalog-content\") pod \"e3d0232d-7675-4e7a-a916-e855f575e899\" (UID: \"e3d0232d-7675-4e7a-a916-e855f575e899\") " Jan 27 19:19:53 crc kubenswrapper[4853]: I0127 19:19:53.369468 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e3d0232d-7675-4e7a-a916-e855f575e899-utilities\") pod \"e3d0232d-7675-4e7a-a916-e855f575e899\" (UID: \"e3d0232d-7675-4e7a-a916-e855f575e899\") " Jan 27 19:19:53 crc kubenswrapper[4853]: I0127 19:19:53.370577 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e3d0232d-7675-4e7a-a916-e855f575e899-utilities" (OuterVolumeSpecName: "utilities") pod "e3d0232d-7675-4e7a-a916-e855f575e899" (UID: "e3d0232d-7675-4e7a-a916-e855f575e899"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 19:19:53 crc kubenswrapper[4853]: I0127 19:19:53.378845 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e3d0232d-7675-4e7a-a916-e855f575e899-kube-api-access-h2gqm" (OuterVolumeSpecName: "kube-api-access-h2gqm") pod "e3d0232d-7675-4e7a-a916-e855f575e899" (UID: "e3d0232d-7675-4e7a-a916-e855f575e899"). InnerVolumeSpecName "kube-api-access-h2gqm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 19:19:53 crc kubenswrapper[4853]: I0127 19:19:53.392467 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e3d0232d-7675-4e7a-a916-e855f575e899-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e3d0232d-7675-4e7a-a916-e855f575e899" (UID: "e3d0232d-7675-4e7a-a916-e855f575e899"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 19:19:53 crc kubenswrapper[4853]: I0127 19:19:53.472545 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h2gqm\" (UniqueName: \"kubernetes.io/projected/e3d0232d-7675-4e7a-a916-e855f575e899-kube-api-access-h2gqm\") on node \"crc\" DevicePath \"\"" Jan 27 19:19:53 crc kubenswrapper[4853]: I0127 19:19:53.472585 4853 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e3d0232d-7675-4e7a-a916-e855f575e899-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 19:19:53 crc kubenswrapper[4853]: I0127 19:19:53.472594 4853 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e3d0232d-7675-4e7a-a916-e855f575e899-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 19:19:53 crc kubenswrapper[4853]: I0127 19:19:53.751244 4853 generic.go:334] "Generic (PLEG): container finished" podID="e3d0232d-7675-4e7a-a916-e855f575e899" containerID="ff2cafb6bb84b407802ebc3dc24dcc3f392bc0c54c8d76758754690fe82da8c9" exitCode=0 Jan 27 19:19:53 crc kubenswrapper[4853]: I0127 19:19:53.751317 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6lw4h" Jan 27 19:19:53 crc kubenswrapper[4853]: I0127 19:19:53.751351 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6lw4h" event={"ID":"e3d0232d-7675-4e7a-a916-e855f575e899","Type":"ContainerDied","Data":"ff2cafb6bb84b407802ebc3dc24dcc3f392bc0c54c8d76758754690fe82da8c9"} Jan 27 19:19:53 crc kubenswrapper[4853]: I0127 19:19:53.751852 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6lw4h" event={"ID":"e3d0232d-7675-4e7a-a916-e855f575e899","Type":"ContainerDied","Data":"bc5a02fac564c967ab5cc3dad2f3b793de2575859b7bdae0e1d324b14e2b32cc"} Jan 27 19:19:53 crc kubenswrapper[4853]: I0127 19:19:53.751889 4853 scope.go:117] "RemoveContainer" containerID="ff2cafb6bb84b407802ebc3dc24dcc3f392bc0c54c8d76758754690fe82da8c9" Jan 27 19:19:53 crc kubenswrapper[4853]: I0127 19:19:53.755157 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bd2rg" event={"ID":"f8573bba-7981-4728-8763-283eb98e9332","Type":"ContainerStarted","Data":"40264d29b99f14af252faf111486458f34750b3c376b0cd951cc0b5823547f0a"} Jan 27 19:19:53 crc kubenswrapper[4853]: I0127 19:19:53.773892 4853 scope.go:117] "RemoveContainer" containerID="3214bd490c0980d3377c1ef62d60abc869aa95f0d30902c688960812e437053a" Jan 27 19:19:53 crc kubenswrapper[4853]: I0127 19:19:53.789045 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-bd2rg" podStartSLOduration=3.381926662 podStartE2EDuration="5.789026658s" podCreationTimestamp="2026-01-27 19:19:48 +0000 UTC" firstStartedPulling="2026-01-27 19:19:50.710347264 +0000 UTC m=+2233.172890147" lastFinishedPulling="2026-01-27 19:19:53.11744726 +0000 UTC 
m=+2235.579990143" observedRunningTime="2026-01-27 19:19:53.78349075 +0000 UTC m=+2236.246033643" watchObservedRunningTime="2026-01-27 19:19:53.789026658 +0000 UTC m=+2236.251569541" Jan 27 19:19:53 crc kubenswrapper[4853]: I0127 19:19:53.810805 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-6lw4h"] Jan 27 19:19:53 crc kubenswrapper[4853]: I0127 19:19:53.814745 4853 scope.go:117] "RemoveContainer" containerID="32cd134050f04253914b9e8aeb9697bbc80df4311cb03315d75ed05b08d7e955" Jan 27 19:19:53 crc kubenswrapper[4853]: I0127 19:19:53.820109 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-6lw4h"] Jan 27 19:19:53 crc kubenswrapper[4853]: I0127 19:19:53.823560 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-gr7ww" Jan 27 19:19:53 crc kubenswrapper[4853]: I0127 19:19:53.842773 4853 scope.go:117] "RemoveContainer" containerID="ff2cafb6bb84b407802ebc3dc24dcc3f392bc0c54c8d76758754690fe82da8c9" Jan 27 19:19:53 crc kubenswrapper[4853]: E0127 19:19:53.843459 4853 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ff2cafb6bb84b407802ebc3dc24dcc3f392bc0c54c8d76758754690fe82da8c9\": container with ID starting with ff2cafb6bb84b407802ebc3dc24dcc3f392bc0c54c8d76758754690fe82da8c9 not found: ID does not exist" containerID="ff2cafb6bb84b407802ebc3dc24dcc3f392bc0c54c8d76758754690fe82da8c9" Jan 27 19:19:53 crc kubenswrapper[4853]: I0127 19:19:53.843496 4853 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ff2cafb6bb84b407802ebc3dc24dcc3f392bc0c54c8d76758754690fe82da8c9"} err="failed to get container status \"ff2cafb6bb84b407802ebc3dc24dcc3f392bc0c54c8d76758754690fe82da8c9\": rpc error: code = NotFound desc = could not find container \"ff2cafb6bb84b407802ebc3dc24dcc3f392bc0c54c8d76758754690fe82da8c9\": container with ID starting with ff2cafb6bb84b407802ebc3dc24dcc3f392bc0c54c8d76758754690fe82da8c9 not found: ID does not exist" Jan 27 19:19:53 crc kubenswrapper[4853]: I0127 19:19:53.843540 4853 scope.go:117] "RemoveContainer" containerID="3214bd490c0980d3377c1ef62d60abc869aa95f0d30902c688960812e437053a" Jan 27 19:19:53 crc kubenswrapper[4853]: E0127 19:19:53.851513 4853 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3214bd490c0980d3377c1ef62d60abc869aa95f0d30902c688960812e437053a\": container with ID starting with 3214bd490c0980d3377c1ef62d60abc869aa95f0d30902c688960812e437053a not found: ID does not exist" containerID="3214bd490c0980d3377c1ef62d60abc869aa95f0d30902c688960812e437053a" Jan 27 19:19:53 crc kubenswrapper[4853]: I0127 19:19:53.851599 4853 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3214bd490c0980d3377c1ef62d60abc869aa95f0d30902c688960812e437053a"} err="failed to get container status \"3214bd490c0980d3377c1ef62d60abc869aa95f0d30902c688960812e437053a\": rpc error: code = NotFound desc = could not find container \"3214bd490c0980d3377c1ef62d60abc869aa95f0d30902c688960812e437053a\": container with ID starting with 3214bd490c0980d3377c1ef62d60abc869aa95f0d30902c688960812e437053a not found: ID does not exist" Jan 27 19:19:53 crc kubenswrapper[4853]: I0127 19:19:53.851643 4853 scope.go:117] "RemoveContainer" containerID="32cd134050f04253914b9e8aeb9697bbc80df4311cb03315d75ed05b08d7e955" Jan 
27 19:19:53 crc kubenswrapper[4853]: E0127 19:19:53.852368 4853 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"32cd134050f04253914b9e8aeb9697bbc80df4311cb03315d75ed05b08d7e955\": container with ID starting with 32cd134050f04253914b9e8aeb9697bbc80df4311cb03315d75ed05b08d7e955 not found: ID does not exist" containerID="32cd134050f04253914b9e8aeb9697bbc80df4311cb03315d75ed05b08d7e955" Jan 27 19:19:53 crc kubenswrapper[4853]: I0127 19:19:53.852415 4853 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"32cd134050f04253914b9e8aeb9697bbc80df4311cb03315d75ed05b08d7e955"} err="failed to get container status \"32cd134050f04253914b9e8aeb9697bbc80df4311cb03315d75ed05b08d7e955\": rpc error: code = NotFound desc = could not find container \"32cd134050f04253914b9e8aeb9697bbc80df4311cb03315d75ed05b08d7e955\": container with ID starting with 32cd134050f04253914b9e8aeb9697bbc80df4311cb03315d75ed05b08d7e955 not found: ID does not exist" Jan 27 19:19:54 crc kubenswrapper[4853]: I0127 19:19:54.127088 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e3d0232d-7675-4e7a-a916-e855f575e899" path="/var/lib/kubelet/pods/e3d0232d-7675-4e7a-a916-e855f575e899/volumes" Jan 27 19:19:55 crc kubenswrapper[4853]: I0127 19:19:55.908555 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-gr7ww"] Jan 27 19:19:55 crc kubenswrapper[4853]: I0127 19:19:55.908944 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-gr7ww" podUID="d00ebc83-759a-425b-b12a-acc53351bf91" containerName="registry-server" containerID="cri-o://becf3ea9bbf714a0d93146a2a9d750464cf9b33c72347182549acede2b140cb9" gracePeriod=2 Jan 27 19:19:56 crc kubenswrapper[4853]: I0127 19:19:56.803427 4853 generic.go:334] "Generic (PLEG): container finished" podID="d00ebc83-759a-425b-b12a-acc53351bf91" containerID="becf3ea9bbf714a0d93146a2a9d750464cf9b33c72347182549acede2b140cb9" exitCode=0 Jan 27 19:19:56 crc kubenswrapper[4853]: I0127 19:19:56.803530 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gr7ww" event={"ID":"d00ebc83-759a-425b-b12a-acc53351bf91","Type":"ContainerDied","Data":"becf3ea9bbf714a0d93146a2a9d750464cf9b33c72347182549acede2b140cb9"} Jan 27 19:19:56 crc kubenswrapper[4853]: I0127 19:19:56.898834 4853 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-gr7ww" Jan 27 19:19:57 crc kubenswrapper[4853]: I0127 19:19:57.054882 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d00ebc83-759a-425b-b12a-acc53351bf91-utilities\") pod \"d00ebc83-759a-425b-b12a-acc53351bf91\" (UID: \"d00ebc83-759a-425b-b12a-acc53351bf91\") " Jan 27 19:19:57 crc kubenswrapper[4853]: I0127 19:19:57.054961 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5rkcn\" (UniqueName: \"kubernetes.io/projected/d00ebc83-759a-425b-b12a-acc53351bf91-kube-api-access-5rkcn\") pod \"d00ebc83-759a-425b-b12a-acc53351bf91\" (UID: \"d00ebc83-759a-425b-b12a-acc53351bf91\") " Jan 27 19:19:57 crc kubenswrapper[4853]: I0127 19:19:57.055262 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d00ebc83-759a-425b-b12a-acc53351bf91-catalog-content\") pod \"d00ebc83-759a-425b-b12a-acc53351bf91\" (UID: \"d00ebc83-759a-425b-b12a-acc53351bf91\") " Jan 27 19:19:57 crc kubenswrapper[4853]: I0127 19:19:57.055792 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d00ebc83-759a-425b-b12a-acc53351bf91-utilities" (OuterVolumeSpecName: "utilities") pod "d00ebc83-759a-425b-b12a-acc53351bf91" (UID: "d00ebc83-759a-425b-b12a-acc53351bf91"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 19:19:57 crc kubenswrapper[4853]: I0127 19:19:57.056018 4853 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d00ebc83-759a-425b-b12a-acc53351bf91-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 19:19:57 crc kubenswrapper[4853]: I0127 19:19:57.073009 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d00ebc83-759a-425b-b12a-acc53351bf91-kube-api-access-5rkcn" (OuterVolumeSpecName: "kube-api-access-5rkcn") pod "d00ebc83-759a-425b-b12a-acc53351bf91" (UID: "d00ebc83-759a-425b-b12a-acc53351bf91"). InnerVolumeSpecName "kube-api-access-5rkcn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 19:19:57 crc kubenswrapper[4853]: I0127 19:19:57.105750 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d00ebc83-759a-425b-b12a-acc53351bf91-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d00ebc83-759a-425b-b12a-acc53351bf91" (UID: "d00ebc83-759a-425b-b12a-acc53351bf91"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 19:19:57 crc kubenswrapper[4853]: I0127 19:19:57.158515 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5rkcn\" (UniqueName: \"kubernetes.io/projected/d00ebc83-759a-425b-b12a-acc53351bf91-kube-api-access-5rkcn\") on node \"crc\" DevicePath \"\"" Jan 27 19:19:57 crc kubenswrapper[4853]: I0127 19:19:57.158565 4853 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d00ebc83-759a-425b-b12a-acc53351bf91-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 19:19:57 crc kubenswrapper[4853]: I0127 19:19:57.819733 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gr7ww" event={"ID":"d00ebc83-759a-425b-b12a-acc53351bf91","Type":"ContainerDied","Data":"419bc830393d0a49ad369bc53a48a872aca33015836b84edd0283bac2d65a509"} Jan 27 19:19:57 crc kubenswrapper[4853]: I0127 19:19:57.820303 4853 scope.go:117] "RemoveContainer" containerID="becf3ea9bbf714a0d93146a2a9d750464cf9b33c72347182549acede2b140cb9" Jan 27 19:19:57 crc kubenswrapper[4853]: I0127 19:19:57.819820 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-gr7ww" Jan 27 19:19:57 crc kubenswrapper[4853]: I0127 19:19:57.842166 4853 scope.go:117] "RemoveContainer" containerID="e3712b50428dc3d43feb197eb9333f11571a2d116fef16b33ce7aee713543088" Jan 27 19:19:57 crc kubenswrapper[4853]: I0127 19:19:57.864677 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-gr7ww"] Jan 27 19:19:57 crc kubenswrapper[4853]: I0127 19:19:57.875143 4853 scope.go:117] "RemoveContainer" containerID="cf6fcd3ffd222e5a82e041b60cfb22dc2b3c322ad50c013b29c69a66868d3804" Jan 27 19:19:57 crc kubenswrapper[4853]: I0127 19:19:57.875681 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-gr7ww"] Jan 27 19:19:58 crc kubenswrapper[4853]: I0127 19:19:58.127048 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d00ebc83-759a-425b-b12a-acc53351bf91" path="/var/lib/kubelet/pods/d00ebc83-759a-425b-b12a-acc53351bf91/volumes" Jan 27 19:19:58 crc kubenswrapper[4853]: I0127 19:19:58.867837 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-bd2rg" Jan 27 19:19:58 crc kubenswrapper[4853]: I0127 19:19:58.867898 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-bd2rg" Jan 27 19:19:58 crc kubenswrapper[4853]: I0127 19:19:58.922991 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-bd2rg" Jan 27 19:19:59 crc kubenswrapper[4853]: I0127 19:19:59.894310 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-bd2rg" Jan 27 19:20:00 crc kubenswrapper[4853]: I0127 19:20:00.504560 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-bd2rg"] Jan 27 19:20:01 crc kubenswrapper[4853]: I0127 19:20:01.858682 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-bd2rg" podUID="f8573bba-7981-4728-8763-283eb98e9332" containerName="registry-server" containerID="cri-o://40264d29b99f14af252faf111486458f34750b3c376b0cd951cc0b5823547f0a" 
gracePeriod=2 Jan 27 19:20:02 crc kubenswrapper[4853]: I0127 19:20:02.477762 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-bd2rg" Jan 27 19:20:02 crc kubenswrapper[4853]: I0127 19:20:02.588981 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wzd42\" (UniqueName: \"kubernetes.io/projected/f8573bba-7981-4728-8763-283eb98e9332-kube-api-access-wzd42\") pod \"f8573bba-7981-4728-8763-283eb98e9332\" (UID: \"f8573bba-7981-4728-8763-283eb98e9332\") " Jan 27 19:20:02 crc kubenswrapper[4853]: I0127 19:20:02.589036 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f8573bba-7981-4728-8763-283eb98e9332-catalog-content\") pod \"f8573bba-7981-4728-8763-283eb98e9332\" (UID: \"f8573bba-7981-4728-8763-283eb98e9332\") " Jan 27 19:20:02 crc kubenswrapper[4853]: I0127 19:20:02.589182 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f8573bba-7981-4728-8763-283eb98e9332-utilities\") pod \"f8573bba-7981-4728-8763-283eb98e9332\" (UID: \"f8573bba-7981-4728-8763-283eb98e9332\") " Jan 27 19:20:02 crc kubenswrapper[4853]: I0127 19:20:02.590324 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f8573bba-7981-4728-8763-283eb98e9332-utilities" (OuterVolumeSpecName: "utilities") pod "f8573bba-7981-4728-8763-283eb98e9332" (UID: "f8573bba-7981-4728-8763-283eb98e9332"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 19:20:02 crc kubenswrapper[4853]: I0127 19:20:02.609708 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f8573bba-7981-4728-8763-283eb98e9332-kube-api-access-wzd42" (OuterVolumeSpecName: "kube-api-access-wzd42") pod "f8573bba-7981-4728-8763-283eb98e9332" (UID: "f8573bba-7981-4728-8763-283eb98e9332"). InnerVolumeSpecName "kube-api-access-wzd42". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 19:20:02 crc kubenswrapper[4853]: I0127 19:20:02.645669 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f8573bba-7981-4728-8763-283eb98e9332-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f8573bba-7981-4728-8763-283eb98e9332" (UID: "f8573bba-7981-4728-8763-283eb98e9332"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 19:20:02 crc kubenswrapper[4853]: I0127 19:20:02.691477 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wzd42\" (UniqueName: \"kubernetes.io/projected/f8573bba-7981-4728-8763-283eb98e9332-kube-api-access-wzd42\") on node \"crc\" DevicePath \"\"" Jan 27 19:20:02 crc kubenswrapper[4853]: I0127 19:20:02.691511 4853 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f8573bba-7981-4728-8763-283eb98e9332-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 19:20:02 crc kubenswrapper[4853]: I0127 19:20:02.691523 4853 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f8573bba-7981-4728-8763-283eb98e9332-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 19:20:02 crc kubenswrapper[4853]: I0127 19:20:02.871928 4853 generic.go:334] "Generic (PLEG): container finished" podID="f8573bba-7981-4728-8763-283eb98e9332" containerID="40264d29b99f14af252faf111486458f34750b3c376b0cd951cc0b5823547f0a" exitCode=0 Jan 27 19:20:02 crc kubenswrapper[4853]: I0127 19:20:02.871965 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bd2rg" event={"ID":"f8573bba-7981-4728-8763-283eb98e9332","Type":"ContainerDied","Data":"40264d29b99f14af252faf111486458f34750b3c376b0cd951cc0b5823547f0a"} Jan 27 19:20:02 crc kubenswrapper[4853]: I0127 19:20:02.872043 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bd2rg" event={"ID":"f8573bba-7981-4728-8763-283eb98e9332","Type":"ContainerDied","Data":"77af47fcda8d69e72ecf4569c1ed5ac9640334861765a32f63c9d9f129cd0337"} Jan 27 19:20:02 crc kubenswrapper[4853]: I0127 19:20:02.872069 4853 scope.go:117] "RemoveContainer" containerID="40264d29b99f14af252faf111486458f34750b3c376b0cd951cc0b5823547f0a" Jan 27 19:20:02 crc kubenswrapper[4853]: I0127 19:20:02.872003 4853 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-bd2rg" Jan 27 19:20:02 crc kubenswrapper[4853]: I0127 19:20:02.920729 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-bd2rg"] Jan 27 19:20:02 crc kubenswrapper[4853]: I0127 19:20:02.924212 4853 scope.go:117] "RemoveContainer" containerID="bf3e11549e8c87dcfeda1ebc7f54047cf8590a0be5b33a841e60f0fcda8f80e9" Jan 27 19:20:02 crc kubenswrapper[4853]: I0127 19:20:02.931504 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-bd2rg"] Jan 27 19:20:02 crc kubenswrapper[4853]: I0127 19:20:02.968529 4853 scope.go:117] "RemoveContainer" containerID="5f5df0fc99d4255ccbdfc1a5b1b0e5331528958abbc7c4792a1bb7ba01d641ea" Jan 27 19:20:03 crc kubenswrapper[4853]: I0127 19:20:03.032215 4853 scope.go:117] "RemoveContainer" containerID="40264d29b99f14af252faf111486458f34750b3c376b0cd951cc0b5823547f0a" Jan 27 19:20:03 crc kubenswrapper[4853]: E0127 19:20:03.032745 4853 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"40264d29b99f14af252faf111486458f34750b3c376b0cd951cc0b5823547f0a\": container with ID starting with 40264d29b99f14af252faf111486458f34750b3c376b0cd951cc0b5823547f0a not found: ID does not exist" containerID="40264d29b99f14af252faf111486458f34750b3c376b0cd951cc0b5823547f0a" Jan 27 19:20:03 crc kubenswrapper[4853]: I0127 19:20:03.032790 4853 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"40264d29b99f14af252faf111486458f34750b3c376b0cd951cc0b5823547f0a"} err="failed to get container status \"40264d29b99f14af252faf111486458f34750b3c376b0cd951cc0b5823547f0a\": rpc error: code = NotFound desc = could not find container \"40264d29b99f14af252faf111486458f34750b3c376b0cd951cc0b5823547f0a\": container with ID starting with 40264d29b99f14af252faf111486458f34750b3c376b0cd951cc0b5823547f0a not found: ID does not exist" Jan 27 19:20:03 crc kubenswrapper[4853]: I0127 19:20:03.032831 4853 scope.go:117] "RemoveContainer" containerID="bf3e11549e8c87dcfeda1ebc7f54047cf8590a0be5b33a841e60f0fcda8f80e9" Jan 27 19:20:03 crc kubenswrapper[4853]: E0127 19:20:03.033163 4853 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bf3e11549e8c87dcfeda1ebc7f54047cf8590a0be5b33a841e60f0fcda8f80e9\": container with ID starting with bf3e11549e8c87dcfeda1ebc7f54047cf8590a0be5b33a841e60f0fcda8f80e9 not found: ID does not exist" containerID="bf3e11549e8c87dcfeda1ebc7f54047cf8590a0be5b33a841e60f0fcda8f80e9" Jan 27 19:20:03 crc kubenswrapper[4853]: I0127 19:20:03.033194 4853 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bf3e11549e8c87dcfeda1ebc7f54047cf8590a0be5b33a841e60f0fcda8f80e9"} err="failed to get container status \"bf3e11549e8c87dcfeda1ebc7f54047cf8590a0be5b33a841e60f0fcda8f80e9\": rpc error: code = NotFound desc = could not find container \"bf3e11549e8c87dcfeda1ebc7f54047cf8590a0be5b33a841e60f0fcda8f80e9\": container with ID starting with bf3e11549e8c87dcfeda1ebc7f54047cf8590a0be5b33a841e60f0fcda8f80e9 not found: ID does not exist" Jan 27 19:20:03 crc kubenswrapper[4853]: I0127 19:20:03.033209 4853 scope.go:117] "RemoveContainer" containerID="5f5df0fc99d4255ccbdfc1a5b1b0e5331528958abbc7c4792a1bb7ba01d641ea" Jan 27 19:20:03 crc kubenswrapper[4853]: E0127 19:20:03.033451 4853 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"5f5df0fc99d4255ccbdfc1a5b1b0e5331528958abbc7c4792a1bb7ba01d641ea\": container with ID starting with 5f5df0fc99d4255ccbdfc1a5b1b0e5331528958abbc7c4792a1bb7ba01d641ea not found: ID does not exist" containerID="5f5df0fc99d4255ccbdfc1a5b1b0e5331528958abbc7c4792a1bb7ba01d641ea" Jan 27 19:20:03 crc kubenswrapper[4853]: I0127 19:20:03.033475 4853 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5f5df0fc99d4255ccbdfc1a5b1b0e5331528958abbc7c4792a1bb7ba01d641ea"} err="failed to get container status \"5f5df0fc99d4255ccbdfc1a5b1b0e5331528958abbc7c4792a1bb7ba01d641ea\": rpc error: code = NotFound desc = could not find container \"5f5df0fc99d4255ccbdfc1a5b1b0e5331528958abbc7c4792a1bb7ba01d641ea\": container with ID starting with 5f5df0fc99d4255ccbdfc1a5b1b0e5331528958abbc7c4792a1bb7ba01d641ea not found: ID does not exist" Jan 27 19:20:04 crc kubenswrapper[4853]: I0127 19:20:04.131768 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f8573bba-7981-4728-8763-283eb98e9332" path="/var/lib/kubelet/pods/f8573bba-7981-4728-8763-283eb98e9332/volumes" Jan 27 19:20:05 crc kubenswrapper[4853]: I0127 19:20:05.113298 4853 scope.go:117] "RemoveContainer" containerID="0cb3bd4a1cb8b018f21dc73938efa92b8095f4623789c8832e2d36d3293fa158" Jan 27 19:20:05 crc kubenswrapper[4853]: E0127 19:20:05.113891 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6gqj2_openshift-machine-config-operator(b8a89b1e-bef8-4cb7-930c-480d3125778c)\"" pod="openshift-machine-config-operator/machine-config-daemon-6gqj2" podUID="b8a89b1e-bef8-4cb7-930c-480d3125778c" Jan 27 19:20:18 crc kubenswrapper[4853]: I0127 19:20:18.120844 4853 scope.go:117] "RemoveContainer" containerID="0cb3bd4a1cb8b018f21dc73938efa92b8095f4623789c8832e2d36d3293fa158" Jan 27 19:20:18 crc kubenswrapper[4853]: E0127 19:20:18.121952 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6gqj2_openshift-machine-config-operator(b8a89b1e-bef8-4cb7-930c-480d3125778c)\"" pod="openshift-machine-config-operator/machine-config-daemon-6gqj2" podUID="b8a89b1e-bef8-4cb7-930c-480d3125778c" Jan 27 19:20:30 crc kubenswrapper[4853]: I0127 19:20:30.113236 4853 scope.go:117] "RemoveContainer" containerID="0cb3bd4a1cb8b018f21dc73938efa92b8095f4623789c8832e2d36d3293fa158" Jan 27 19:20:30 crc kubenswrapper[4853]: E0127 19:20:30.114274 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6gqj2_openshift-machine-config-operator(b8a89b1e-bef8-4cb7-930c-480d3125778c)\"" pod="openshift-machine-config-operator/machine-config-daemon-6gqj2" podUID="b8a89b1e-bef8-4cb7-930c-480d3125778c" Jan 27 19:20:44 crc kubenswrapper[4853]: I0127 19:20:44.113900 4853 scope.go:117] "RemoveContainer" containerID="0cb3bd4a1cb8b018f21dc73938efa92b8095f4623789c8832e2d36d3293fa158" Jan 27 19:20:44 crc kubenswrapper[4853]: E0127 19:20:44.114965 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6gqj2_openshift-machine-config-operator(b8a89b1e-bef8-4cb7-930c-480d3125778c)\"" pod="openshift-machine-config-operator/machine-config-daemon-6gqj2" podUID="b8a89b1e-bef8-4cb7-930c-480d3125778c" Jan 27 19:20:59 crc kubenswrapper[4853]: I0127 19:20:59.113808 4853 scope.go:117] "RemoveContainer" containerID="0cb3bd4a1cb8b018f21dc73938efa92b8095f4623789c8832e2d36d3293fa158" Jan 27 19:20:59 crc kubenswrapper[4853]: E0127 19:20:59.115766 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6gqj2_openshift-machine-config-operator(b8a89b1e-bef8-4cb7-930c-480d3125778c)\"" pod="openshift-machine-config-operator/machine-config-daemon-6gqj2" podUID="b8a89b1e-bef8-4cb7-930c-480d3125778c" Jan 27 19:21:13 crc kubenswrapper[4853]: I0127 19:21:13.113339 4853 scope.go:117] "RemoveContainer" containerID="0cb3bd4a1cb8b018f21dc73938efa92b8095f4623789c8832e2d36d3293fa158" Jan 27 19:21:13 crc kubenswrapper[4853]: E0127 19:21:13.114759 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6gqj2_openshift-machine-config-operator(b8a89b1e-bef8-4cb7-930c-480d3125778c)\"" pod="openshift-machine-config-operator/machine-config-daemon-6gqj2" podUID="b8a89b1e-bef8-4cb7-930c-480d3125778c" Jan 27 19:21:19 crc kubenswrapper[4853]: I0127 19:21:19.616797 4853 generic.go:334] "Generic (PLEG): container finished" podID="91e90160-3a76-416b-a3e6-cf5d105f892d" containerID="4f46879679eea66c2c2628a5eb5c6f4af745d588373cae0ad4518dea1d2f3791" exitCode=0 Jan 27 19:21:19 crc kubenswrapper[4853]: I0127 19:21:19.616926 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-hk5l9" event={"ID":"91e90160-3a76-416b-a3e6-cf5d105f892d","Type":"ContainerDied","Data":"4f46879679eea66c2c2628a5eb5c6f4af745d588373cae0ad4518dea1d2f3791"} Jan 27 19:21:21 crc kubenswrapper[4853]: I0127 19:21:21.095887 4853 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-hk5l9" Jan 27 19:21:21 crc kubenswrapper[4853]: I0127 19:21:21.211043 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jll49\" (UniqueName: \"kubernetes.io/projected/91e90160-3a76-416b-a3e6-cf5d105f892d-kube-api-access-jll49\") pod \"91e90160-3a76-416b-a3e6-cf5d105f892d\" (UID: \"91e90160-3a76-416b-a3e6-cf5d105f892d\") " Jan 27 19:21:21 crc kubenswrapper[4853]: I0127 19:21:21.211360 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/91e90160-3a76-416b-a3e6-cf5d105f892d-ssh-key-openstack-edpm-ipam\") pod \"91e90160-3a76-416b-a3e6-cf5d105f892d\" (UID: \"91e90160-3a76-416b-a3e6-cf5d105f892d\") " Jan 27 19:21:21 crc kubenswrapper[4853]: I0127 19:21:21.211438 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/91e90160-3a76-416b-a3e6-cf5d105f892d-inventory\") pod \"91e90160-3a76-416b-a3e6-cf5d105f892d\" (UID: \"91e90160-3a76-416b-a3e6-cf5d105f892d\") " Jan 27 19:21:21 crc kubenswrapper[4853]: I0127 19:21:21.211503 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/91e90160-3a76-416b-a3e6-cf5d105f892d-libvirt-secret-0\") pod \"91e90160-3a76-416b-a3e6-cf5d105f892d\" (UID: \"91e90160-3a76-416b-a3e6-cf5d105f892d\") " Jan 27 19:21:21 crc kubenswrapper[4853]: I0127 19:21:21.211541 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/91e90160-3a76-416b-a3e6-cf5d105f892d-libvirt-combined-ca-bundle\") pod \"91e90160-3a76-416b-a3e6-cf5d105f892d\" (UID: \"91e90160-3a76-416b-a3e6-cf5d105f892d\") " Jan 27 19:21:21 crc kubenswrapper[4853]: I0127 19:21:21.218954 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/91e90160-3a76-416b-a3e6-cf5d105f892d-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "91e90160-3a76-416b-a3e6-cf5d105f892d" (UID: "91e90160-3a76-416b-a3e6-cf5d105f892d"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 19:21:21 crc kubenswrapper[4853]: I0127 19:21:21.219908 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/91e90160-3a76-416b-a3e6-cf5d105f892d-kube-api-access-jll49" (OuterVolumeSpecName: "kube-api-access-jll49") pod "91e90160-3a76-416b-a3e6-cf5d105f892d" (UID: "91e90160-3a76-416b-a3e6-cf5d105f892d"). InnerVolumeSpecName "kube-api-access-jll49". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 19:21:21 crc kubenswrapper[4853]: I0127 19:21:21.242726 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/91e90160-3a76-416b-a3e6-cf5d105f892d-inventory" (OuterVolumeSpecName: "inventory") pod "91e90160-3a76-416b-a3e6-cf5d105f892d" (UID: "91e90160-3a76-416b-a3e6-cf5d105f892d"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 19:21:21 crc kubenswrapper[4853]: I0127 19:21:21.248013 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/91e90160-3a76-416b-a3e6-cf5d105f892d-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "91e90160-3a76-416b-a3e6-cf5d105f892d" (UID: "91e90160-3a76-416b-a3e6-cf5d105f892d"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 19:21:21 crc kubenswrapper[4853]: I0127 19:21:21.261650 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/91e90160-3a76-416b-a3e6-cf5d105f892d-libvirt-secret-0" (OuterVolumeSpecName: "libvirt-secret-0") pod "91e90160-3a76-416b-a3e6-cf5d105f892d" (UID: "91e90160-3a76-416b-a3e6-cf5d105f892d"). InnerVolumeSpecName "libvirt-secret-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 19:21:21 crc kubenswrapper[4853]: I0127 19:21:21.314881 4853 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/91e90160-3a76-416b-a3e6-cf5d105f892d-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 27 19:21:21 crc kubenswrapper[4853]: I0127 19:21:21.314926 4853 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/91e90160-3a76-416b-a3e6-cf5d105f892d-inventory\") on node \"crc\" DevicePath \"\"" Jan 27 19:21:21 crc kubenswrapper[4853]: I0127 19:21:21.314939 4853 reconciler_common.go:293] "Volume detached for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/91e90160-3a76-416b-a3e6-cf5d105f892d-libvirt-secret-0\") on node \"crc\" DevicePath \"\"" Jan 27 19:21:21 crc kubenswrapper[4853]: I0127 19:21:21.314948 4853 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/91e90160-3a76-416b-a3e6-cf5d105f892d-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 19:21:21 crc kubenswrapper[4853]: I0127 19:21:21.314957 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jll49\" (UniqueName: \"kubernetes.io/projected/91e90160-3a76-416b-a3e6-cf5d105f892d-kube-api-access-jll49\") on node \"crc\" DevicePath \"\"" Jan 27 19:21:21 crc kubenswrapper[4853]: I0127 19:21:21.639868 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-hk5l9" event={"ID":"91e90160-3a76-416b-a3e6-cf5d105f892d","Type":"ContainerDied","Data":"ebe39af7ef5e6ee3aaf702488b0bb903a8107ed8fbf1118e97380ad8cc7eaf46"} Jan 27 19:21:21 crc kubenswrapper[4853]: I0127 19:21:21.639928 4853 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ebe39af7ef5e6ee3aaf702488b0bb903a8107ed8fbf1118e97380ad8cc7eaf46" Jan 27 19:21:21 crc kubenswrapper[4853]: I0127 19:21:21.639947 4853 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-hk5l9" Jan 27 19:21:21 crc kubenswrapper[4853]: I0127 19:21:21.747450 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-l8ptd"] Jan 27 19:21:21 crc kubenswrapper[4853]: E0127 19:21:21.748049 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d00ebc83-759a-425b-b12a-acc53351bf91" containerName="extract-utilities" Jan 27 19:21:21 crc kubenswrapper[4853]: I0127 19:21:21.748078 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="d00ebc83-759a-425b-b12a-acc53351bf91" containerName="extract-utilities" Jan 27 19:21:21 crc kubenswrapper[4853]: E0127 19:21:21.748098 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f8573bba-7981-4728-8763-283eb98e9332" containerName="registry-server" Jan 27 19:21:21 crc kubenswrapper[4853]: I0127 19:21:21.748107 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="f8573bba-7981-4728-8763-283eb98e9332" containerName="registry-server" Jan 27 19:21:21 crc kubenswrapper[4853]: E0127 19:21:21.748141 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d00ebc83-759a-425b-b12a-acc53351bf91" containerName="registry-server" Jan 27 19:21:21 crc kubenswrapper[4853]: I0127 19:21:21.748151 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="d00ebc83-759a-425b-b12a-acc53351bf91" containerName="registry-server" Jan 27 19:21:21 crc kubenswrapper[4853]: E0127 19:21:21.748174 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e3d0232d-7675-4e7a-a916-e855f575e899" containerName="extract-content" Jan 27 19:21:21 crc kubenswrapper[4853]: I0127 19:21:21.748183 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="e3d0232d-7675-4e7a-a916-e855f575e899" containerName="extract-content" Jan 27 19:21:21 crc kubenswrapper[4853]: E0127 19:21:21.748198 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e3d0232d-7675-4e7a-a916-e855f575e899" containerName="registry-server" Jan 27 19:21:21 crc kubenswrapper[4853]: I0127 19:21:21.748206 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="e3d0232d-7675-4e7a-a916-e855f575e899" containerName="registry-server" Jan 27 19:21:21 crc kubenswrapper[4853]: E0127 19:21:21.748223 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f8573bba-7981-4728-8763-283eb98e9332" containerName="extract-utilities" Jan 27 19:21:21 crc kubenswrapper[4853]: I0127 19:21:21.748232 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="f8573bba-7981-4728-8763-283eb98e9332" containerName="extract-utilities" Jan 27 19:21:21 crc kubenswrapper[4853]: E0127 19:21:21.748249 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f8573bba-7981-4728-8763-283eb98e9332" containerName="extract-content" Jan 27 19:21:21 crc kubenswrapper[4853]: I0127 19:21:21.748257 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="f8573bba-7981-4728-8763-283eb98e9332" containerName="extract-content" Jan 27 19:21:21 crc kubenswrapper[4853]: E0127 19:21:21.748269 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="91e90160-3a76-416b-a3e6-cf5d105f892d" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Jan 27 19:21:21 crc kubenswrapper[4853]: I0127 19:21:21.748277 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="91e90160-3a76-416b-a3e6-cf5d105f892d" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Jan 27 19:21:21 crc kubenswrapper[4853]: 
E0127 19:21:21.748292 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d00ebc83-759a-425b-b12a-acc53351bf91" containerName="extract-content" Jan 27 19:21:21 crc kubenswrapper[4853]: I0127 19:21:21.748300 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="d00ebc83-759a-425b-b12a-acc53351bf91" containerName="extract-content" Jan 27 19:21:21 crc kubenswrapper[4853]: E0127 19:21:21.748317 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e3d0232d-7675-4e7a-a916-e855f575e899" containerName="extract-utilities" Jan 27 19:21:21 crc kubenswrapper[4853]: I0127 19:21:21.748326 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="e3d0232d-7675-4e7a-a916-e855f575e899" containerName="extract-utilities" Jan 27 19:21:21 crc kubenswrapper[4853]: I0127 19:21:21.748539 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="f8573bba-7981-4728-8763-283eb98e9332" containerName="registry-server" Jan 27 19:21:21 crc kubenswrapper[4853]: I0127 19:21:21.748558 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="d00ebc83-759a-425b-b12a-acc53351bf91" containerName="registry-server" Jan 27 19:21:21 crc kubenswrapper[4853]: I0127 19:21:21.748579 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="91e90160-3a76-416b-a3e6-cf5d105f892d" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Jan 27 19:21:21 crc kubenswrapper[4853]: I0127 19:21:21.748594 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="e3d0232d-7675-4e7a-a916-e855f575e899" containerName="registry-server" Jan 27 19:21:21 crc kubenswrapper[4853]: I0127 19:21:21.749498 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-l8ptd" Jan 27 19:21:21 crc kubenswrapper[4853]: I0127 19:21:21.755387 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 27 19:21:21 crc kubenswrapper[4853]: I0127 19:21:21.757588 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-migration-ssh-key" Jan 27 19:21:21 crc kubenswrapper[4853]: I0127 19:21:21.757588 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 27 19:21:21 crc kubenswrapper[4853]: I0127 19:21:21.757588 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-compute-config" Jan 27 19:21:21 crc kubenswrapper[4853]: I0127 19:21:21.757693 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"nova-extra-config" Jan 27 19:21:21 crc kubenswrapper[4853]: I0127 19:21:21.758316 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 27 19:21:21 crc kubenswrapper[4853]: I0127 19:21:21.759512 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-wn48z" Jan 27 19:21:21 crc kubenswrapper[4853]: I0127 19:21:21.763532 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-l8ptd"] Jan 27 19:21:21 crc kubenswrapper[4853]: I0127 19:21:21.826385 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8b54da38-cda9-486f-bb52-e18ebfa81cc8-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-l8ptd\" (UID: 
\"8b54da38-cda9-486f-bb52-e18ebfa81cc8\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-l8ptd" Jan 27 19:21:21 crc kubenswrapper[4853]: I0127 19:21:21.826508 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/8b54da38-cda9-486f-bb52-e18ebfa81cc8-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-l8ptd\" (UID: \"8b54da38-cda9-486f-bb52-e18ebfa81cc8\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-l8ptd" Jan 27 19:21:21 crc kubenswrapper[4853]: I0127 19:21:21.826580 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/8b54da38-cda9-486f-bb52-e18ebfa81cc8-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-l8ptd\" (UID: \"8b54da38-cda9-486f-bb52-e18ebfa81cc8\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-l8ptd" Jan 27 19:21:21 crc kubenswrapper[4853]: I0127 19:21:21.826635 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kf4d5\" (UniqueName: \"kubernetes.io/projected/8b54da38-cda9-486f-bb52-e18ebfa81cc8-kube-api-access-kf4d5\") pod \"nova-edpm-deployment-openstack-edpm-ipam-l8ptd\" (UID: \"8b54da38-cda9-486f-bb52-e18ebfa81cc8\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-l8ptd" Jan 27 19:21:21 crc kubenswrapper[4853]: I0127 19:21:21.826677 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/8b54da38-cda9-486f-bb52-e18ebfa81cc8-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-l8ptd\" (UID: \"8b54da38-cda9-486f-bb52-e18ebfa81cc8\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-l8ptd" Jan 27 19:21:21 crc kubenswrapper[4853]: I0127 19:21:21.826713 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8b54da38-cda9-486f-bb52-e18ebfa81cc8-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-l8ptd\" (UID: \"8b54da38-cda9-486f-bb52-e18ebfa81cc8\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-l8ptd" Jan 27 19:21:21 crc kubenswrapper[4853]: I0127 19:21:21.826943 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/8b54da38-cda9-486f-bb52-e18ebfa81cc8-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-l8ptd\" (UID: \"8b54da38-cda9-486f-bb52-e18ebfa81cc8\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-l8ptd" Jan 27 19:21:21 crc kubenswrapper[4853]: I0127 19:21:21.827237 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8b54da38-cda9-486f-bb52-e18ebfa81cc8-ssh-key-openstack-edpm-ipam\") pod \"nova-edpm-deployment-openstack-edpm-ipam-l8ptd\" (UID: \"8b54da38-cda9-486f-bb52-e18ebfa81cc8\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-l8ptd" Jan 27 19:21:21 crc kubenswrapper[4853]: I0127 19:21:21.827323 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: 
\"kubernetes.io/secret/8b54da38-cda9-486f-bb52-e18ebfa81cc8-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-l8ptd\" (UID: \"8b54da38-cda9-486f-bb52-e18ebfa81cc8\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-l8ptd" Jan 27 19:21:21 crc kubenswrapper[4853]: I0127 19:21:21.929086 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/8b54da38-cda9-486f-bb52-e18ebfa81cc8-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-l8ptd\" (UID: \"8b54da38-cda9-486f-bb52-e18ebfa81cc8\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-l8ptd" Jan 27 19:21:21 crc kubenswrapper[4853]: I0127 19:21:21.929224 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8b54da38-cda9-486f-bb52-e18ebfa81cc8-ssh-key-openstack-edpm-ipam\") pod \"nova-edpm-deployment-openstack-edpm-ipam-l8ptd\" (UID: \"8b54da38-cda9-486f-bb52-e18ebfa81cc8\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-l8ptd" Jan 27 19:21:21 crc kubenswrapper[4853]: I0127 19:21:21.929275 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/8b54da38-cda9-486f-bb52-e18ebfa81cc8-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-l8ptd\" (UID: \"8b54da38-cda9-486f-bb52-e18ebfa81cc8\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-l8ptd" Jan 27 19:21:21 crc kubenswrapper[4853]: I0127 19:21:21.929336 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8b54da38-cda9-486f-bb52-e18ebfa81cc8-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-l8ptd\" (UID: \"8b54da38-cda9-486f-bb52-e18ebfa81cc8\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-l8ptd" Jan 27 19:21:21 crc kubenswrapper[4853]: I0127 19:21:21.929406 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/8b54da38-cda9-486f-bb52-e18ebfa81cc8-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-l8ptd\" (UID: \"8b54da38-cda9-486f-bb52-e18ebfa81cc8\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-l8ptd" Jan 27 19:21:21 crc kubenswrapper[4853]: I0127 19:21:21.929440 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/8b54da38-cda9-486f-bb52-e18ebfa81cc8-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-l8ptd\" (UID: \"8b54da38-cda9-486f-bb52-e18ebfa81cc8\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-l8ptd" Jan 27 19:21:21 crc kubenswrapper[4853]: I0127 19:21:21.929485 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kf4d5\" (UniqueName: \"kubernetes.io/projected/8b54da38-cda9-486f-bb52-e18ebfa81cc8-kube-api-access-kf4d5\") pod \"nova-edpm-deployment-openstack-edpm-ipam-l8ptd\" (UID: \"8b54da38-cda9-486f-bb52-e18ebfa81cc8\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-l8ptd" Jan 27 19:21:21 crc kubenswrapper[4853]: I0127 19:21:21.929528 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-extra-config-0\" (UniqueName: 
\"kubernetes.io/configmap/8b54da38-cda9-486f-bb52-e18ebfa81cc8-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-l8ptd\" (UID: \"8b54da38-cda9-486f-bb52-e18ebfa81cc8\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-l8ptd" Jan 27 19:21:21 crc kubenswrapper[4853]: I0127 19:21:21.929565 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8b54da38-cda9-486f-bb52-e18ebfa81cc8-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-l8ptd\" (UID: \"8b54da38-cda9-486f-bb52-e18ebfa81cc8\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-l8ptd" Jan 27 19:21:21 crc kubenswrapper[4853]: I0127 19:21:21.930956 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/8b54da38-cda9-486f-bb52-e18ebfa81cc8-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-l8ptd\" (UID: \"8b54da38-cda9-486f-bb52-e18ebfa81cc8\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-l8ptd" Jan 27 19:21:21 crc kubenswrapper[4853]: I0127 19:21:21.934723 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8b54da38-cda9-486f-bb52-e18ebfa81cc8-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-l8ptd\" (UID: \"8b54da38-cda9-486f-bb52-e18ebfa81cc8\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-l8ptd" Jan 27 19:21:21 crc kubenswrapper[4853]: I0127 19:21:21.935232 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/8b54da38-cda9-486f-bb52-e18ebfa81cc8-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-l8ptd\" (UID: \"8b54da38-cda9-486f-bb52-e18ebfa81cc8\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-l8ptd" Jan 27 19:21:21 crc kubenswrapper[4853]: I0127 19:21:21.935257 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8b54da38-cda9-486f-bb52-e18ebfa81cc8-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-l8ptd\" (UID: \"8b54da38-cda9-486f-bb52-e18ebfa81cc8\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-l8ptd" Jan 27 19:21:21 crc kubenswrapper[4853]: I0127 19:21:21.936628 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/8b54da38-cda9-486f-bb52-e18ebfa81cc8-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-l8ptd\" (UID: \"8b54da38-cda9-486f-bb52-e18ebfa81cc8\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-l8ptd" Jan 27 19:21:21 crc kubenswrapper[4853]: I0127 19:21:21.937023 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/8b54da38-cda9-486f-bb52-e18ebfa81cc8-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-l8ptd\" (UID: \"8b54da38-cda9-486f-bb52-e18ebfa81cc8\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-l8ptd" Jan 27 19:21:21 crc kubenswrapper[4853]: I0127 19:21:21.940652 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/8b54da38-cda9-486f-bb52-e18ebfa81cc8-nova-cell1-compute-config-1\") pod 
\"nova-edpm-deployment-openstack-edpm-ipam-l8ptd\" (UID: \"8b54da38-cda9-486f-bb52-e18ebfa81cc8\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-l8ptd" Jan 27 19:21:21 crc kubenswrapper[4853]: I0127 19:21:21.942919 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8b54da38-cda9-486f-bb52-e18ebfa81cc8-ssh-key-openstack-edpm-ipam\") pod \"nova-edpm-deployment-openstack-edpm-ipam-l8ptd\" (UID: \"8b54da38-cda9-486f-bb52-e18ebfa81cc8\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-l8ptd" Jan 27 19:21:21 crc kubenswrapper[4853]: I0127 19:21:21.952284 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kf4d5\" (UniqueName: \"kubernetes.io/projected/8b54da38-cda9-486f-bb52-e18ebfa81cc8-kube-api-access-kf4d5\") pod \"nova-edpm-deployment-openstack-edpm-ipam-l8ptd\" (UID: \"8b54da38-cda9-486f-bb52-e18ebfa81cc8\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-l8ptd" Jan 27 19:21:22 crc kubenswrapper[4853]: I0127 19:21:22.070991 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-l8ptd" Jan 27 19:21:22 crc kubenswrapper[4853]: I0127 19:21:22.599325 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-l8ptd"] Jan 27 19:21:22 crc kubenswrapper[4853]: I0127 19:21:22.607899 4853 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 27 19:21:22 crc kubenswrapper[4853]: I0127 19:21:22.652138 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-l8ptd" event={"ID":"8b54da38-cda9-486f-bb52-e18ebfa81cc8","Type":"ContainerStarted","Data":"0eabed703727af808d53e7aeda7b4d10716c668ef7ec0db6658a0d087c308597"} Jan 27 19:21:23 crc kubenswrapper[4853]: I0127 19:21:23.667817 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-l8ptd" event={"ID":"8b54da38-cda9-486f-bb52-e18ebfa81cc8","Type":"ContainerStarted","Data":"30a9a1ea608f91aaa746c190cfb49a7043c85141a2e97af8c2cb33a6eca6d2f0"} Jan 27 19:21:23 crc kubenswrapper[4853]: I0127 19:21:23.695746 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-l8ptd" podStartSLOduration=2.245634528 podStartE2EDuration="2.695723028s" podCreationTimestamp="2026-01-27 19:21:21 +0000 UTC" firstStartedPulling="2026-01-27 19:21:22.607542946 +0000 UTC m=+2325.070085839" lastFinishedPulling="2026-01-27 19:21:23.057631456 +0000 UTC m=+2325.520174339" observedRunningTime="2026-01-27 19:21:23.694726059 +0000 UTC m=+2326.157268982" watchObservedRunningTime="2026-01-27 19:21:23.695723028 +0000 UTC m=+2326.158265911" Jan 27 19:21:24 crc kubenswrapper[4853]: I0127 19:21:24.113722 4853 scope.go:117] "RemoveContainer" containerID="0cb3bd4a1cb8b018f21dc73938efa92b8095f4623789c8832e2d36d3293fa158" Jan 27 19:21:24 crc kubenswrapper[4853]: E0127 19:21:24.114030 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6gqj2_openshift-machine-config-operator(b8a89b1e-bef8-4cb7-930c-480d3125778c)\"" pod="openshift-machine-config-operator/machine-config-daemon-6gqj2" 
podUID="b8a89b1e-bef8-4cb7-930c-480d3125778c" Jan 27 19:21:38 crc kubenswrapper[4853]: I0127 19:21:38.120904 4853 scope.go:117] "RemoveContainer" containerID="0cb3bd4a1cb8b018f21dc73938efa92b8095f4623789c8832e2d36d3293fa158" Jan 27 19:21:38 crc kubenswrapper[4853]: E0127 19:21:38.121969 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6gqj2_openshift-machine-config-operator(b8a89b1e-bef8-4cb7-930c-480d3125778c)\"" pod="openshift-machine-config-operator/machine-config-daemon-6gqj2" podUID="b8a89b1e-bef8-4cb7-930c-480d3125778c" Jan 27 19:21:49 crc kubenswrapper[4853]: I0127 19:21:49.113261 4853 scope.go:117] "RemoveContainer" containerID="0cb3bd4a1cb8b018f21dc73938efa92b8095f4623789c8832e2d36d3293fa158" Jan 27 19:21:49 crc kubenswrapper[4853]: E0127 19:21:49.114266 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6gqj2_openshift-machine-config-operator(b8a89b1e-bef8-4cb7-930c-480d3125778c)\"" pod="openshift-machine-config-operator/machine-config-daemon-6gqj2" podUID="b8a89b1e-bef8-4cb7-930c-480d3125778c" Jan 27 19:22:03 crc kubenswrapper[4853]: I0127 19:22:03.113835 4853 scope.go:117] "RemoveContainer" containerID="0cb3bd4a1cb8b018f21dc73938efa92b8095f4623789c8832e2d36d3293fa158" Jan 27 19:22:03 crc kubenswrapper[4853]: E0127 19:22:03.115426 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6gqj2_openshift-machine-config-operator(b8a89b1e-bef8-4cb7-930c-480d3125778c)\"" pod="openshift-machine-config-operator/machine-config-daemon-6gqj2" podUID="b8a89b1e-bef8-4cb7-930c-480d3125778c" Jan 27 19:22:14 crc kubenswrapper[4853]: I0127 19:22:14.113782 4853 scope.go:117] "RemoveContainer" containerID="0cb3bd4a1cb8b018f21dc73938efa92b8095f4623789c8832e2d36d3293fa158" Jan 27 19:22:14 crc kubenswrapper[4853]: E0127 19:22:14.115323 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6gqj2_openshift-machine-config-operator(b8a89b1e-bef8-4cb7-930c-480d3125778c)\"" pod="openshift-machine-config-operator/machine-config-daemon-6gqj2" podUID="b8a89b1e-bef8-4cb7-930c-480d3125778c" Jan 27 19:22:25 crc kubenswrapper[4853]: I0127 19:22:25.112964 4853 scope.go:117] "RemoveContainer" containerID="0cb3bd4a1cb8b018f21dc73938efa92b8095f4623789c8832e2d36d3293fa158" Jan 27 19:22:25 crc kubenswrapper[4853]: E0127 19:22:25.114322 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6gqj2_openshift-machine-config-operator(b8a89b1e-bef8-4cb7-930c-480d3125778c)\"" pod="openshift-machine-config-operator/machine-config-daemon-6gqj2" podUID="b8a89b1e-bef8-4cb7-930c-480d3125778c" Jan 27 19:22:36 crc kubenswrapper[4853]: I0127 19:22:36.112868 4853 scope.go:117] "RemoveContainer" 
containerID="0cb3bd4a1cb8b018f21dc73938efa92b8095f4623789c8832e2d36d3293fa158" Jan 27 19:22:36 crc kubenswrapper[4853]: E0127 19:22:36.117739 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6gqj2_openshift-machine-config-operator(b8a89b1e-bef8-4cb7-930c-480d3125778c)\"" pod="openshift-machine-config-operator/machine-config-daemon-6gqj2" podUID="b8a89b1e-bef8-4cb7-930c-480d3125778c" Jan 27 19:22:50 crc kubenswrapper[4853]: I0127 19:22:50.112998 4853 scope.go:117] "RemoveContainer" containerID="0cb3bd4a1cb8b018f21dc73938efa92b8095f4623789c8832e2d36d3293fa158" Jan 27 19:22:50 crc kubenswrapper[4853]: E0127 19:22:50.114071 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6gqj2_openshift-machine-config-operator(b8a89b1e-bef8-4cb7-930c-480d3125778c)\"" pod="openshift-machine-config-operator/machine-config-daemon-6gqj2" podUID="b8a89b1e-bef8-4cb7-930c-480d3125778c" Jan 27 19:23:05 crc kubenswrapper[4853]: I0127 19:23:05.113915 4853 scope.go:117] "RemoveContainer" containerID="0cb3bd4a1cb8b018f21dc73938efa92b8095f4623789c8832e2d36d3293fa158" Jan 27 19:23:05 crc kubenswrapper[4853]: E0127 19:23:05.114892 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6gqj2_openshift-machine-config-operator(b8a89b1e-bef8-4cb7-930c-480d3125778c)\"" pod="openshift-machine-config-operator/machine-config-daemon-6gqj2" podUID="b8a89b1e-bef8-4cb7-930c-480d3125778c" Jan 27 19:23:16 crc kubenswrapper[4853]: I0127 19:23:16.113570 4853 scope.go:117] "RemoveContainer" containerID="0cb3bd4a1cb8b018f21dc73938efa92b8095f4623789c8832e2d36d3293fa158" Jan 27 19:23:16 crc kubenswrapper[4853]: E0127 19:23:16.115374 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6gqj2_openshift-machine-config-operator(b8a89b1e-bef8-4cb7-930c-480d3125778c)\"" pod="openshift-machine-config-operator/machine-config-daemon-6gqj2" podUID="b8a89b1e-bef8-4cb7-930c-480d3125778c" Jan 27 19:23:28 crc kubenswrapper[4853]: I0127 19:23:28.119262 4853 scope.go:117] "RemoveContainer" containerID="0cb3bd4a1cb8b018f21dc73938efa92b8095f4623789c8832e2d36d3293fa158" Jan 27 19:23:28 crc kubenswrapper[4853]: E0127 19:23:28.120486 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6gqj2_openshift-machine-config-operator(b8a89b1e-bef8-4cb7-930c-480d3125778c)\"" pod="openshift-machine-config-operator/machine-config-daemon-6gqj2" podUID="b8a89b1e-bef8-4cb7-930c-480d3125778c" Jan 27 19:23:40 crc kubenswrapper[4853]: I0127 19:23:40.112768 4853 scope.go:117] "RemoveContainer" containerID="0cb3bd4a1cb8b018f21dc73938efa92b8095f4623789c8832e2d36d3293fa158" Jan 27 19:23:40 crc kubenswrapper[4853]: E0127 19:23:40.113775 4853 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6gqj2_openshift-machine-config-operator(b8a89b1e-bef8-4cb7-930c-480d3125778c)\"" pod="openshift-machine-config-operator/machine-config-daemon-6gqj2" podUID="b8a89b1e-bef8-4cb7-930c-480d3125778c" Jan 27 19:23:51 crc kubenswrapper[4853]: I0127 19:23:51.518837 4853 generic.go:334] "Generic (PLEG): container finished" podID="8b54da38-cda9-486f-bb52-e18ebfa81cc8" containerID="30a9a1ea608f91aaa746c190cfb49a7043c85141a2e97af8c2cb33a6eca6d2f0" exitCode=0 Jan 27 19:23:51 crc kubenswrapper[4853]: I0127 19:23:51.518936 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-l8ptd" event={"ID":"8b54da38-cda9-486f-bb52-e18ebfa81cc8","Type":"ContainerDied","Data":"30a9a1ea608f91aaa746c190cfb49a7043c85141a2e97af8c2cb33a6eca6d2f0"} Jan 27 19:23:52 crc kubenswrapper[4853]: I0127 19:23:52.944799 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-l8ptd" Jan 27 19:23:52 crc kubenswrapper[4853]: I0127 19:23:52.980317 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/8b54da38-cda9-486f-bb52-e18ebfa81cc8-nova-migration-ssh-key-0\") pod \"8b54da38-cda9-486f-bb52-e18ebfa81cc8\" (UID: \"8b54da38-cda9-486f-bb52-e18ebfa81cc8\") " Jan 27 19:23:52 crc kubenswrapper[4853]: I0127 19:23:52.980462 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8b54da38-cda9-486f-bb52-e18ebfa81cc8-inventory\") pod \"8b54da38-cda9-486f-bb52-e18ebfa81cc8\" (UID: \"8b54da38-cda9-486f-bb52-e18ebfa81cc8\") " Jan 27 19:23:52 crc kubenswrapper[4853]: I0127 19:23:52.980591 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/8b54da38-cda9-486f-bb52-e18ebfa81cc8-nova-extra-config-0\") pod \"8b54da38-cda9-486f-bb52-e18ebfa81cc8\" (UID: \"8b54da38-cda9-486f-bb52-e18ebfa81cc8\") " Jan 27 19:23:52 crc kubenswrapper[4853]: I0127 19:23:52.980616 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kf4d5\" (UniqueName: \"kubernetes.io/projected/8b54da38-cda9-486f-bb52-e18ebfa81cc8-kube-api-access-kf4d5\") pod \"8b54da38-cda9-486f-bb52-e18ebfa81cc8\" (UID: \"8b54da38-cda9-486f-bb52-e18ebfa81cc8\") " Jan 27 19:23:52 crc kubenswrapper[4853]: I0127 19:23:52.980652 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8b54da38-cda9-486f-bb52-e18ebfa81cc8-nova-combined-ca-bundle\") pod \"8b54da38-cda9-486f-bb52-e18ebfa81cc8\" (UID: \"8b54da38-cda9-486f-bb52-e18ebfa81cc8\") " Jan 27 19:23:52 crc kubenswrapper[4853]: I0127 19:23:52.980790 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8b54da38-cda9-486f-bb52-e18ebfa81cc8-ssh-key-openstack-edpm-ipam\") pod \"8b54da38-cda9-486f-bb52-e18ebfa81cc8\" (UID: \"8b54da38-cda9-486f-bb52-e18ebfa81cc8\") " Jan 27 19:23:52 crc kubenswrapper[4853]: I0127 19:23:52.980832 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-1\" 
(UniqueName: \"kubernetes.io/secret/8b54da38-cda9-486f-bb52-e18ebfa81cc8-nova-migration-ssh-key-1\") pod \"8b54da38-cda9-486f-bb52-e18ebfa81cc8\" (UID: \"8b54da38-cda9-486f-bb52-e18ebfa81cc8\") " Jan 27 19:23:52 crc kubenswrapper[4853]: I0127 19:23:52.980854 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/8b54da38-cda9-486f-bb52-e18ebfa81cc8-nova-cell1-compute-config-0\") pod \"8b54da38-cda9-486f-bb52-e18ebfa81cc8\" (UID: \"8b54da38-cda9-486f-bb52-e18ebfa81cc8\") " Jan 27 19:23:52 crc kubenswrapper[4853]: I0127 19:23:52.980907 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/8b54da38-cda9-486f-bb52-e18ebfa81cc8-nova-cell1-compute-config-1\") pod \"8b54da38-cda9-486f-bb52-e18ebfa81cc8\" (UID: \"8b54da38-cda9-486f-bb52-e18ebfa81cc8\") " Jan 27 19:23:52 crc kubenswrapper[4853]: I0127 19:23:52.990211 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8b54da38-cda9-486f-bb52-e18ebfa81cc8-kube-api-access-kf4d5" (OuterVolumeSpecName: "kube-api-access-kf4d5") pod "8b54da38-cda9-486f-bb52-e18ebfa81cc8" (UID: "8b54da38-cda9-486f-bb52-e18ebfa81cc8"). InnerVolumeSpecName "kube-api-access-kf4d5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 19:23:52 crc kubenswrapper[4853]: I0127 19:23:52.999424 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8b54da38-cda9-486f-bb52-e18ebfa81cc8-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "8b54da38-cda9-486f-bb52-e18ebfa81cc8" (UID: "8b54da38-cda9-486f-bb52-e18ebfa81cc8"). InnerVolumeSpecName "nova-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 19:23:53 crc kubenswrapper[4853]: I0127 19:23:53.015424 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8b54da38-cda9-486f-bb52-e18ebfa81cc8-nova-extra-config-0" (OuterVolumeSpecName: "nova-extra-config-0") pod "8b54da38-cda9-486f-bb52-e18ebfa81cc8" (UID: "8b54da38-cda9-486f-bb52-e18ebfa81cc8"). InnerVolumeSpecName "nova-extra-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 19:23:53 crc kubenswrapper[4853]: I0127 19:23:53.017974 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8b54da38-cda9-486f-bb52-e18ebfa81cc8-inventory" (OuterVolumeSpecName: "inventory") pod "8b54da38-cda9-486f-bb52-e18ebfa81cc8" (UID: "8b54da38-cda9-486f-bb52-e18ebfa81cc8"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 19:23:53 crc kubenswrapper[4853]: I0127 19:23:53.025142 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8b54da38-cda9-486f-bb52-e18ebfa81cc8-nova-cell1-compute-config-1" (OuterVolumeSpecName: "nova-cell1-compute-config-1") pod "8b54da38-cda9-486f-bb52-e18ebfa81cc8" (UID: "8b54da38-cda9-486f-bb52-e18ebfa81cc8"). InnerVolumeSpecName "nova-cell1-compute-config-1". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 19:23:53 crc kubenswrapper[4853]: I0127 19:23:53.030376 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8b54da38-cda9-486f-bb52-e18ebfa81cc8-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "8b54da38-cda9-486f-bb52-e18ebfa81cc8" (UID: "8b54da38-cda9-486f-bb52-e18ebfa81cc8"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 19:23:53 crc kubenswrapper[4853]: I0127 19:23:53.031100 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8b54da38-cda9-486f-bb52-e18ebfa81cc8-nova-cell1-compute-config-0" (OuterVolumeSpecName: "nova-cell1-compute-config-0") pod "8b54da38-cda9-486f-bb52-e18ebfa81cc8" (UID: "8b54da38-cda9-486f-bb52-e18ebfa81cc8"). InnerVolumeSpecName "nova-cell1-compute-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 19:23:53 crc kubenswrapper[4853]: I0127 19:23:53.036086 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8b54da38-cda9-486f-bb52-e18ebfa81cc8-nova-migration-ssh-key-0" (OuterVolumeSpecName: "nova-migration-ssh-key-0") pod "8b54da38-cda9-486f-bb52-e18ebfa81cc8" (UID: "8b54da38-cda9-486f-bb52-e18ebfa81cc8"). InnerVolumeSpecName "nova-migration-ssh-key-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 19:23:53 crc kubenswrapper[4853]: I0127 19:23:53.037342 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8b54da38-cda9-486f-bb52-e18ebfa81cc8-nova-migration-ssh-key-1" (OuterVolumeSpecName: "nova-migration-ssh-key-1") pod "8b54da38-cda9-486f-bb52-e18ebfa81cc8" (UID: "8b54da38-cda9-486f-bb52-e18ebfa81cc8"). InnerVolumeSpecName "nova-migration-ssh-key-1". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 19:23:53 crc kubenswrapper[4853]: I0127 19:23:53.084183 4853 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8b54da38-cda9-486f-bb52-e18ebfa81cc8-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 19:23:53 crc kubenswrapper[4853]: I0127 19:23:53.084237 4853 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8b54da38-cda9-486f-bb52-e18ebfa81cc8-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 27 19:23:53 crc kubenswrapper[4853]: I0127 19:23:53.084248 4853 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/8b54da38-cda9-486f-bb52-e18ebfa81cc8-nova-migration-ssh-key-1\") on node \"crc\" DevicePath \"\"" Jan 27 19:23:53 crc kubenswrapper[4853]: I0127 19:23:53.084258 4853 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/8b54da38-cda9-486f-bb52-e18ebfa81cc8-nova-cell1-compute-config-0\") on node \"crc\" DevicePath \"\"" Jan 27 19:23:53 crc kubenswrapper[4853]: I0127 19:23:53.084268 4853 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/8b54da38-cda9-486f-bb52-e18ebfa81cc8-nova-cell1-compute-config-1\") on node \"crc\" DevicePath \"\"" Jan 27 19:23:53 crc kubenswrapper[4853]: I0127 19:23:53.084277 4853 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/8b54da38-cda9-486f-bb52-e18ebfa81cc8-nova-migration-ssh-key-0\") on node \"crc\" DevicePath \"\"" Jan 27 19:23:53 crc kubenswrapper[4853]: I0127 19:23:53.084288 4853 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8b54da38-cda9-486f-bb52-e18ebfa81cc8-inventory\") on node \"crc\" DevicePath \"\"" Jan 27 19:23:53 crc kubenswrapper[4853]: I0127 19:23:53.084303 4853 reconciler_common.go:293] "Volume detached for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/8b54da38-cda9-486f-bb52-e18ebfa81cc8-nova-extra-config-0\") on node \"crc\" DevicePath \"\"" Jan 27 19:23:53 crc kubenswrapper[4853]: I0127 19:23:53.084311 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kf4d5\" (UniqueName: \"kubernetes.io/projected/8b54da38-cda9-486f-bb52-e18ebfa81cc8-kube-api-access-kf4d5\") on node \"crc\" DevicePath \"\"" Jan 27 19:23:53 crc kubenswrapper[4853]: I0127 19:23:53.539711 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-l8ptd" event={"ID":"8b54da38-cda9-486f-bb52-e18ebfa81cc8","Type":"ContainerDied","Data":"0eabed703727af808d53e7aeda7b4d10716c668ef7ec0db6658a0d087c308597"} Jan 27 19:23:53 crc kubenswrapper[4853]: I0127 19:23:53.540214 4853 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0eabed703727af808d53e7aeda7b4d10716c668ef7ec0db6658a0d087c308597" Jan 27 19:23:53 crc kubenswrapper[4853]: I0127 19:23:53.539787 4853 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-l8ptd" Jan 27 19:23:53 crc kubenswrapper[4853]: I0127 19:23:53.649380 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-2v797"] Jan 27 19:23:53 crc kubenswrapper[4853]: E0127 19:23:53.649857 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8b54da38-cda9-486f-bb52-e18ebfa81cc8" containerName="nova-edpm-deployment-openstack-edpm-ipam" Jan 27 19:23:53 crc kubenswrapper[4853]: I0127 19:23:53.649879 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="8b54da38-cda9-486f-bb52-e18ebfa81cc8" containerName="nova-edpm-deployment-openstack-edpm-ipam" Jan 27 19:23:53 crc kubenswrapper[4853]: I0127 19:23:53.650062 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="8b54da38-cda9-486f-bb52-e18ebfa81cc8" containerName="nova-edpm-deployment-openstack-edpm-ipam" Jan 27 19:23:53 crc kubenswrapper[4853]: I0127 19:23:53.650731 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-2v797" Jan 27 19:23:53 crc kubenswrapper[4853]: I0127 19:23:53.653740 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 27 19:23:53 crc kubenswrapper[4853]: I0127 19:23:53.653791 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-compute-config-data" Jan 27 19:23:53 crc kubenswrapper[4853]: I0127 19:23:53.655881 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 27 19:23:53 crc kubenswrapper[4853]: I0127 19:23:53.659658 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-wn48z" Jan 27 19:23:53 crc kubenswrapper[4853]: I0127 19:23:53.659768 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 27 19:23:53 crc kubenswrapper[4853]: I0127 19:23:53.663936 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-2v797"] Jan 27 19:23:53 crc kubenswrapper[4853]: I0127 19:23:53.699251 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7f436e8d-9923-47a6-ab8c-ee0c8e3bde82-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-2v797\" (UID: \"7f436e8d-9923-47a6-ab8c-ee0c8e3bde82\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-2v797" Jan 27 19:23:53 crc kubenswrapper[4853]: I0127 19:23:53.699438 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/7f436e8d-9923-47a6-ab8c-ee0c8e3bde82-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-2v797\" (UID: \"7f436e8d-9923-47a6-ab8c-ee0c8e3bde82\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-2v797" Jan 27 19:23:53 crc kubenswrapper[4853]: I0127 19:23:53.699501 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/7f436e8d-9923-47a6-ab8c-ee0c8e3bde82-ceilometer-compute-config-data-1\") pod 
\"telemetry-edpm-deployment-openstack-edpm-ipam-2v797\" (UID: \"7f436e8d-9923-47a6-ab8c-ee0c8e3bde82\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-2v797" Jan 27 19:23:53 crc kubenswrapper[4853]: I0127 19:23:53.699692 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rhklh\" (UniqueName: \"kubernetes.io/projected/7f436e8d-9923-47a6-ab8c-ee0c8e3bde82-kube-api-access-rhklh\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-2v797\" (UID: \"7f436e8d-9923-47a6-ab8c-ee0c8e3bde82\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-2v797" Jan 27 19:23:53 crc kubenswrapper[4853]: I0127 19:23:53.699775 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/7f436e8d-9923-47a6-ab8c-ee0c8e3bde82-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-2v797\" (UID: \"7f436e8d-9923-47a6-ab8c-ee0c8e3bde82\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-2v797" Jan 27 19:23:53 crc kubenswrapper[4853]: I0127 19:23:53.699893 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7f436e8d-9923-47a6-ab8c-ee0c8e3bde82-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-2v797\" (UID: \"7f436e8d-9923-47a6-ab8c-ee0c8e3bde82\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-2v797" Jan 27 19:23:53 crc kubenswrapper[4853]: I0127 19:23:53.700032 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7f436e8d-9923-47a6-ab8c-ee0c8e3bde82-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-2v797\" (UID: \"7f436e8d-9923-47a6-ab8c-ee0c8e3bde82\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-2v797" Jan 27 19:23:53 crc kubenswrapper[4853]: I0127 19:23:53.802089 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/7f436e8d-9923-47a6-ab8c-ee0c8e3bde82-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-2v797\" (UID: \"7f436e8d-9923-47a6-ab8c-ee0c8e3bde82\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-2v797" Jan 27 19:23:53 crc kubenswrapper[4853]: I0127 19:23:53.802251 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rhklh\" (UniqueName: \"kubernetes.io/projected/7f436e8d-9923-47a6-ab8c-ee0c8e3bde82-kube-api-access-rhklh\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-2v797\" (UID: \"7f436e8d-9923-47a6-ab8c-ee0c8e3bde82\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-2v797" Jan 27 19:23:53 crc kubenswrapper[4853]: I0127 19:23:53.802275 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/7f436e8d-9923-47a6-ab8c-ee0c8e3bde82-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-2v797\" (UID: \"7f436e8d-9923-47a6-ab8c-ee0c8e3bde82\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-2v797" Jan 27 19:23:53 crc kubenswrapper[4853]: I0127 19:23:53.802307 4853 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7f436e8d-9923-47a6-ab8c-ee0c8e3bde82-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-2v797\" (UID: \"7f436e8d-9923-47a6-ab8c-ee0c8e3bde82\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-2v797" Jan 27 19:23:53 crc kubenswrapper[4853]: I0127 19:23:53.802335 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7f436e8d-9923-47a6-ab8c-ee0c8e3bde82-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-2v797\" (UID: \"7f436e8d-9923-47a6-ab8c-ee0c8e3bde82\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-2v797" Jan 27 19:23:53 crc kubenswrapper[4853]: I0127 19:23:53.802369 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7f436e8d-9923-47a6-ab8c-ee0c8e3bde82-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-2v797\" (UID: \"7f436e8d-9923-47a6-ab8c-ee0c8e3bde82\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-2v797" Jan 27 19:23:53 crc kubenswrapper[4853]: I0127 19:23:53.802401 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/7f436e8d-9923-47a6-ab8c-ee0c8e3bde82-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-2v797\" (UID: \"7f436e8d-9923-47a6-ab8c-ee0c8e3bde82\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-2v797" Jan 27 19:23:53 crc kubenswrapper[4853]: I0127 19:23:53.807922 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/7f436e8d-9923-47a6-ab8c-ee0c8e3bde82-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-2v797\" (UID: \"7f436e8d-9923-47a6-ab8c-ee0c8e3bde82\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-2v797" Jan 27 19:23:53 crc kubenswrapper[4853]: I0127 19:23:53.808253 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7f436e8d-9923-47a6-ab8c-ee0c8e3bde82-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-2v797\" (UID: \"7f436e8d-9923-47a6-ab8c-ee0c8e3bde82\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-2v797" Jan 27 19:23:53 crc kubenswrapper[4853]: I0127 19:23:53.808333 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7f436e8d-9923-47a6-ab8c-ee0c8e3bde82-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-2v797\" (UID: \"7f436e8d-9923-47a6-ab8c-ee0c8e3bde82\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-2v797" Jan 27 19:23:53 crc kubenswrapper[4853]: I0127 19:23:53.808627 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7f436e8d-9923-47a6-ab8c-ee0c8e3bde82-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-2v797\" (UID: \"7f436e8d-9923-47a6-ab8c-ee0c8e3bde82\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-2v797" Jan 27 19:23:53 crc kubenswrapper[4853]: I0127 19:23:53.812221 
4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/7f436e8d-9923-47a6-ab8c-ee0c8e3bde82-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-2v797\" (UID: \"7f436e8d-9923-47a6-ab8c-ee0c8e3bde82\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-2v797" Jan 27 19:23:53 crc kubenswrapper[4853]: I0127 19:23:53.812833 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/7f436e8d-9923-47a6-ab8c-ee0c8e3bde82-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-2v797\" (UID: \"7f436e8d-9923-47a6-ab8c-ee0c8e3bde82\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-2v797" Jan 27 19:23:53 crc kubenswrapper[4853]: I0127 19:23:53.819289 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rhklh\" (UniqueName: \"kubernetes.io/projected/7f436e8d-9923-47a6-ab8c-ee0c8e3bde82-kube-api-access-rhklh\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-2v797\" (UID: \"7f436e8d-9923-47a6-ab8c-ee0c8e3bde82\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-2v797" Jan 27 19:23:53 crc kubenswrapper[4853]: I0127 19:23:53.967762 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-2v797" Jan 27 19:23:54 crc kubenswrapper[4853]: I0127 19:23:54.541176 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-2v797"] Jan 27 19:23:55 crc kubenswrapper[4853]: I0127 19:23:55.112688 4853 scope.go:117] "RemoveContainer" containerID="0cb3bd4a1cb8b018f21dc73938efa92b8095f4623789c8832e2d36d3293fa158" Jan 27 19:23:55 crc kubenswrapper[4853]: E0127 19:23:55.113062 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6gqj2_openshift-machine-config-operator(b8a89b1e-bef8-4cb7-930c-480d3125778c)\"" pod="openshift-machine-config-operator/machine-config-daemon-6gqj2" podUID="b8a89b1e-bef8-4cb7-930c-480d3125778c" Jan 27 19:23:55 crc kubenswrapper[4853]: I0127 19:23:55.561792 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-2v797" event={"ID":"7f436e8d-9923-47a6-ab8c-ee0c8e3bde82","Type":"ContainerStarted","Data":"3e98d320cd99a712d143f888ccfff91650e020c8d67b86a978531504118b4fb3"} Jan 27 19:23:56 crc kubenswrapper[4853]: I0127 19:23:56.578969 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-2v797" event={"ID":"7f436e8d-9923-47a6-ab8c-ee0c8e3bde82","Type":"ContainerStarted","Data":"06a0bad020daf7bbb655a2b6f46165a66b2cc42f10cc29e91ef7ab98b45e1ddc"} Jan 27 19:23:56 crc kubenswrapper[4853]: I0127 19:23:56.612028 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-2v797" podStartSLOduration=2.895369489 podStartE2EDuration="3.611999369s" podCreationTimestamp="2026-01-27 19:23:53 +0000 UTC" firstStartedPulling="2026-01-27 19:23:54.560917945 +0000 UTC m=+2477.023460828" lastFinishedPulling="2026-01-27 19:23:55.277547825 +0000 UTC m=+2477.740090708" 
observedRunningTime="2026-01-27 19:23:56.601666625 +0000 UTC m=+2479.064209508" watchObservedRunningTime="2026-01-27 19:23:56.611999369 +0000 UTC m=+2479.074542252" Jan 27 19:24:07 crc kubenswrapper[4853]: I0127 19:24:07.112876 4853 scope.go:117] "RemoveContainer" containerID="0cb3bd4a1cb8b018f21dc73938efa92b8095f4623789c8832e2d36d3293fa158" Jan 27 19:24:07 crc kubenswrapper[4853]: E0127 19:24:07.113859 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6gqj2_openshift-machine-config-operator(b8a89b1e-bef8-4cb7-930c-480d3125778c)\"" pod="openshift-machine-config-operator/machine-config-daemon-6gqj2" podUID="b8a89b1e-bef8-4cb7-930c-480d3125778c" Jan 27 19:24:19 crc kubenswrapper[4853]: I0127 19:24:19.113366 4853 scope.go:117] "RemoveContainer" containerID="0cb3bd4a1cb8b018f21dc73938efa92b8095f4623789c8832e2d36d3293fa158" Jan 27 19:24:19 crc kubenswrapper[4853]: E0127 19:24:19.114676 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6gqj2_openshift-machine-config-operator(b8a89b1e-bef8-4cb7-930c-480d3125778c)\"" pod="openshift-machine-config-operator/machine-config-daemon-6gqj2" podUID="b8a89b1e-bef8-4cb7-930c-480d3125778c" Jan 27 19:24:31 crc kubenswrapper[4853]: I0127 19:24:31.837529 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-cqzmr"] Jan 27 19:24:31 crc kubenswrapper[4853]: I0127 19:24:31.842649 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-cqzmr" Jan 27 19:24:31 crc kubenswrapper[4853]: I0127 19:24:31.855637 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-cqzmr"] Jan 27 19:24:31 crc kubenswrapper[4853]: I0127 19:24:31.954029 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fc0b4299-a5f0-4173-9865-211e29350aed-utilities\") pod \"redhat-operators-cqzmr\" (UID: \"fc0b4299-a5f0-4173-9865-211e29350aed\") " pod="openshift-marketplace/redhat-operators-cqzmr" Jan 27 19:24:31 crc kubenswrapper[4853]: I0127 19:24:31.954075 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qddxx\" (UniqueName: \"kubernetes.io/projected/fc0b4299-a5f0-4173-9865-211e29350aed-kube-api-access-qddxx\") pod \"redhat-operators-cqzmr\" (UID: \"fc0b4299-a5f0-4173-9865-211e29350aed\") " pod="openshift-marketplace/redhat-operators-cqzmr" Jan 27 19:24:31 crc kubenswrapper[4853]: I0127 19:24:31.954431 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fc0b4299-a5f0-4173-9865-211e29350aed-catalog-content\") pod \"redhat-operators-cqzmr\" (UID: \"fc0b4299-a5f0-4173-9865-211e29350aed\") " pod="openshift-marketplace/redhat-operators-cqzmr" Jan 27 19:24:32 crc kubenswrapper[4853]: I0127 19:24:32.057375 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fc0b4299-a5f0-4173-9865-211e29350aed-catalog-content\") pod \"redhat-operators-cqzmr\" (UID: \"fc0b4299-a5f0-4173-9865-211e29350aed\") " pod="openshift-marketplace/redhat-operators-cqzmr" Jan 27 19:24:32 crc kubenswrapper[4853]: I0127 19:24:32.057587 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fc0b4299-a5f0-4173-9865-211e29350aed-utilities\") pod \"redhat-operators-cqzmr\" (UID: \"fc0b4299-a5f0-4173-9865-211e29350aed\") " pod="openshift-marketplace/redhat-operators-cqzmr" Jan 27 19:24:32 crc kubenswrapper[4853]: I0127 19:24:32.057620 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qddxx\" (UniqueName: \"kubernetes.io/projected/fc0b4299-a5f0-4173-9865-211e29350aed-kube-api-access-qddxx\") pod \"redhat-operators-cqzmr\" (UID: \"fc0b4299-a5f0-4173-9865-211e29350aed\") " pod="openshift-marketplace/redhat-operators-cqzmr" Jan 27 19:24:32 crc kubenswrapper[4853]: I0127 19:24:32.057875 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fc0b4299-a5f0-4173-9865-211e29350aed-catalog-content\") pod \"redhat-operators-cqzmr\" (UID: \"fc0b4299-a5f0-4173-9865-211e29350aed\") " pod="openshift-marketplace/redhat-operators-cqzmr" Jan 27 19:24:32 crc kubenswrapper[4853]: I0127 19:24:32.058162 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fc0b4299-a5f0-4173-9865-211e29350aed-utilities\") pod \"redhat-operators-cqzmr\" (UID: \"fc0b4299-a5f0-4173-9865-211e29350aed\") " pod="openshift-marketplace/redhat-operators-cqzmr" Jan 27 19:24:32 crc kubenswrapper[4853]: I0127 19:24:32.085150 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-qddxx\" (UniqueName: \"kubernetes.io/projected/fc0b4299-a5f0-4173-9865-211e29350aed-kube-api-access-qddxx\") pod \"redhat-operators-cqzmr\" (UID: \"fc0b4299-a5f0-4173-9865-211e29350aed\") " pod="openshift-marketplace/redhat-operators-cqzmr" Jan 27 19:24:32 crc kubenswrapper[4853]: I0127 19:24:32.112831 4853 scope.go:117] "RemoveContainer" containerID="0cb3bd4a1cb8b018f21dc73938efa92b8095f4623789c8832e2d36d3293fa158" Jan 27 19:24:32 crc kubenswrapper[4853]: E0127 19:24:32.113207 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6gqj2_openshift-machine-config-operator(b8a89b1e-bef8-4cb7-930c-480d3125778c)\"" pod="openshift-machine-config-operator/machine-config-daemon-6gqj2" podUID="b8a89b1e-bef8-4cb7-930c-480d3125778c" Jan 27 19:24:32 crc kubenswrapper[4853]: I0127 19:24:32.160263 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-cqzmr" Jan 27 19:24:32 crc kubenswrapper[4853]: I0127 19:24:32.653271 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-cqzmr"] Jan 27 19:24:32 crc kubenswrapper[4853]: I0127 19:24:32.909066 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cqzmr" event={"ID":"fc0b4299-a5f0-4173-9865-211e29350aed","Type":"ContainerStarted","Data":"f8c68bd925e1822f74844116323d00a70c3252735af179ad3aa13bb3351aeb5a"} Jan 27 19:24:33 crc kubenswrapper[4853]: I0127 19:24:33.919181 4853 generic.go:334] "Generic (PLEG): container finished" podID="fc0b4299-a5f0-4173-9865-211e29350aed" containerID="62b76c4a70f5b79e233ba01dd68860d099b10321a90691559a7aca023bb818b6" exitCode=0 Jan 27 19:24:33 crc kubenswrapper[4853]: I0127 19:24:33.919299 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cqzmr" event={"ID":"fc0b4299-a5f0-4173-9865-211e29350aed","Type":"ContainerDied","Data":"62b76c4a70f5b79e233ba01dd68860d099b10321a90691559a7aca023bb818b6"} Jan 27 19:24:35 crc kubenswrapper[4853]: E0127 19:24:35.689400 4853 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfc0b4299_a5f0_4173_9865_211e29350aed.slice/crio-conmon-8251f5cf3f895b0dc3da7df2e08df7d4d487d6d418956c4c714bf2754adf9cf9.scope\": RecentStats: unable to find data in memory cache]" Jan 27 19:24:35 crc kubenswrapper[4853]: I0127 19:24:35.958912 4853 generic.go:334] "Generic (PLEG): container finished" podID="fc0b4299-a5f0-4173-9865-211e29350aed" containerID="8251f5cf3f895b0dc3da7df2e08df7d4d487d6d418956c4c714bf2754adf9cf9" exitCode=0 Jan 27 19:24:35 crc kubenswrapper[4853]: I0127 19:24:35.959079 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cqzmr" event={"ID":"fc0b4299-a5f0-4173-9865-211e29350aed","Type":"ContainerDied","Data":"8251f5cf3f895b0dc3da7df2e08df7d4d487d6d418956c4c714bf2754adf9cf9"} Jan 27 19:24:36 crc kubenswrapper[4853]: I0127 19:24:36.975806 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cqzmr" event={"ID":"fc0b4299-a5f0-4173-9865-211e29350aed","Type":"ContainerStarted","Data":"62a1e74a10251f234fabd18ce131fe7994bdbe3816e8ff665ee5839240eb6d72"} Jan 27 19:24:36 crc kubenswrapper[4853]: I0127 
19:24:36.999466 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-cqzmr" podStartSLOduration=3.401768083 podStartE2EDuration="5.999443774s" podCreationTimestamp="2026-01-27 19:24:31 +0000 UTC" firstStartedPulling="2026-01-27 19:24:33.921724242 +0000 UTC m=+2516.384267125" lastFinishedPulling="2026-01-27 19:24:36.519399943 +0000 UTC m=+2518.981942816" observedRunningTime="2026-01-27 19:24:36.994885325 +0000 UTC m=+2519.457428218" watchObservedRunningTime="2026-01-27 19:24:36.999443774 +0000 UTC m=+2519.461986657" Jan 27 19:24:42 crc kubenswrapper[4853]: I0127 19:24:42.160424 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-cqzmr" Jan 27 19:24:42 crc kubenswrapper[4853]: I0127 19:24:42.161339 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-cqzmr" Jan 27 19:24:43 crc kubenswrapper[4853]: I0127 19:24:43.217098 4853 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-cqzmr" podUID="fc0b4299-a5f0-4173-9865-211e29350aed" containerName="registry-server" probeResult="failure" output=< Jan 27 19:24:43 crc kubenswrapper[4853]: timeout: failed to connect service ":50051" within 1s Jan 27 19:24:43 crc kubenswrapper[4853]: > Jan 27 19:24:45 crc kubenswrapper[4853]: I0127 19:24:45.113464 4853 scope.go:117] "RemoveContainer" containerID="0cb3bd4a1cb8b018f21dc73938efa92b8095f4623789c8832e2d36d3293fa158" Jan 27 19:24:46 crc kubenswrapper[4853]: I0127 19:24:46.057859 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6gqj2" event={"ID":"b8a89b1e-bef8-4cb7-930c-480d3125778c","Type":"ContainerStarted","Data":"551690f0d5262de4770162d1a6ba120739293508bda19b0e91c2ce59875386da"} Jan 27 19:24:52 crc kubenswrapper[4853]: I0127 19:24:52.209421 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-cqzmr" Jan 27 19:24:52 crc kubenswrapper[4853]: I0127 19:24:52.296432 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-cqzmr" Jan 27 19:24:52 crc kubenswrapper[4853]: I0127 19:24:52.452650 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-cqzmr"] Jan 27 19:24:54 crc kubenswrapper[4853]: I0127 19:24:54.134819 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-cqzmr" podUID="fc0b4299-a5f0-4173-9865-211e29350aed" containerName="registry-server" containerID="cri-o://62a1e74a10251f234fabd18ce131fe7994bdbe3816e8ff665ee5839240eb6d72" gracePeriod=2 Jan 27 19:24:55 crc kubenswrapper[4853]: I0127 19:24:55.142729 4853 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-cqzmr" Jan 27 19:24:55 crc kubenswrapper[4853]: I0127 19:24:55.151602 4853 generic.go:334] "Generic (PLEG): container finished" podID="fc0b4299-a5f0-4173-9865-211e29350aed" containerID="62a1e74a10251f234fabd18ce131fe7994bdbe3816e8ff665ee5839240eb6d72" exitCode=0 Jan 27 19:24:55 crc kubenswrapper[4853]: I0127 19:24:55.151651 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cqzmr" event={"ID":"fc0b4299-a5f0-4173-9865-211e29350aed","Type":"ContainerDied","Data":"62a1e74a10251f234fabd18ce131fe7994bdbe3816e8ff665ee5839240eb6d72"} Jan 27 19:24:55 crc kubenswrapper[4853]: I0127 19:24:55.151683 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cqzmr" event={"ID":"fc0b4299-a5f0-4173-9865-211e29350aed","Type":"ContainerDied","Data":"f8c68bd925e1822f74844116323d00a70c3252735af179ad3aa13bb3351aeb5a"} Jan 27 19:24:55 crc kubenswrapper[4853]: I0127 19:24:55.151705 4853 scope.go:117] "RemoveContainer" containerID="62a1e74a10251f234fabd18ce131fe7994bdbe3816e8ff665ee5839240eb6d72" Jan 27 19:24:55 crc kubenswrapper[4853]: I0127 19:24:55.151766 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-cqzmr" Jan 27 19:24:55 crc kubenswrapper[4853]: I0127 19:24:55.192893 4853 scope.go:117] "RemoveContainer" containerID="8251f5cf3f895b0dc3da7df2e08df7d4d487d6d418956c4c714bf2754adf9cf9" Jan 27 19:24:55 crc kubenswrapper[4853]: I0127 19:24:55.219730 4853 scope.go:117] "RemoveContainer" containerID="62b76c4a70f5b79e233ba01dd68860d099b10321a90691559a7aca023bb818b6" Jan 27 19:24:55 crc kubenswrapper[4853]: I0127 19:24:55.225127 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fc0b4299-a5f0-4173-9865-211e29350aed-utilities\") pod \"fc0b4299-a5f0-4173-9865-211e29350aed\" (UID: \"fc0b4299-a5f0-4173-9865-211e29350aed\") " Jan 27 19:24:55 crc kubenswrapper[4853]: I0127 19:24:55.225209 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qddxx\" (UniqueName: \"kubernetes.io/projected/fc0b4299-a5f0-4173-9865-211e29350aed-kube-api-access-qddxx\") pod \"fc0b4299-a5f0-4173-9865-211e29350aed\" (UID: \"fc0b4299-a5f0-4173-9865-211e29350aed\") " Jan 27 19:24:55 crc kubenswrapper[4853]: I0127 19:24:55.225301 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fc0b4299-a5f0-4173-9865-211e29350aed-catalog-content\") pod \"fc0b4299-a5f0-4173-9865-211e29350aed\" (UID: \"fc0b4299-a5f0-4173-9865-211e29350aed\") " Jan 27 19:24:55 crc kubenswrapper[4853]: I0127 19:24:55.226220 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fc0b4299-a5f0-4173-9865-211e29350aed-utilities" (OuterVolumeSpecName: "utilities") pod "fc0b4299-a5f0-4173-9865-211e29350aed" (UID: "fc0b4299-a5f0-4173-9865-211e29350aed"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 19:24:55 crc kubenswrapper[4853]: I0127 19:24:55.233993 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fc0b4299-a5f0-4173-9865-211e29350aed-kube-api-access-qddxx" (OuterVolumeSpecName: "kube-api-access-qddxx") pod "fc0b4299-a5f0-4173-9865-211e29350aed" (UID: "fc0b4299-a5f0-4173-9865-211e29350aed"). InnerVolumeSpecName "kube-api-access-qddxx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 19:24:55 crc kubenswrapper[4853]: I0127 19:24:55.308483 4853 scope.go:117] "RemoveContainer" containerID="62a1e74a10251f234fabd18ce131fe7994bdbe3816e8ff665ee5839240eb6d72" Jan 27 19:24:55 crc kubenswrapper[4853]: E0127 19:24:55.309242 4853 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"62a1e74a10251f234fabd18ce131fe7994bdbe3816e8ff665ee5839240eb6d72\": container with ID starting with 62a1e74a10251f234fabd18ce131fe7994bdbe3816e8ff665ee5839240eb6d72 not found: ID does not exist" containerID="62a1e74a10251f234fabd18ce131fe7994bdbe3816e8ff665ee5839240eb6d72" Jan 27 19:24:55 crc kubenswrapper[4853]: I0127 19:24:55.309312 4853 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"62a1e74a10251f234fabd18ce131fe7994bdbe3816e8ff665ee5839240eb6d72"} err="failed to get container status \"62a1e74a10251f234fabd18ce131fe7994bdbe3816e8ff665ee5839240eb6d72\": rpc error: code = NotFound desc = could not find container \"62a1e74a10251f234fabd18ce131fe7994bdbe3816e8ff665ee5839240eb6d72\": container with ID starting with 62a1e74a10251f234fabd18ce131fe7994bdbe3816e8ff665ee5839240eb6d72 not found: ID does not exist" Jan 27 19:24:55 crc kubenswrapper[4853]: I0127 19:24:55.309365 4853 scope.go:117] "RemoveContainer" containerID="8251f5cf3f895b0dc3da7df2e08df7d4d487d6d418956c4c714bf2754adf9cf9" Jan 27 19:24:55 crc kubenswrapper[4853]: E0127 19:24:55.309928 4853 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8251f5cf3f895b0dc3da7df2e08df7d4d487d6d418956c4c714bf2754adf9cf9\": container with ID starting with 8251f5cf3f895b0dc3da7df2e08df7d4d487d6d418956c4c714bf2754adf9cf9 not found: ID does not exist" containerID="8251f5cf3f895b0dc3da7df2e08df7d4d487d6d418956c4c714bf2754adf9cf9" Jan 27 19:24:55 crc kubenswrapper[4853]: I0127 19:24:55.310047 4853 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8251f5cf3f895b0dc3da7df2e08df7d4d487d6d418956c4c714bf2754adf9cf9"} err="failed to get container status \"8251f5cf3f895b0dc3da7df2e08df7d4d487d6d418956c4c714bf2754adf9cf9\": rpc error: code = NotFound desc = could not find container \"8251f5cf3f895b0dc3da7df2e08df7d4d487d6d418956c4c714bf2754adf9cf9\": container with ID starting with 8251f5cf3f895b0dc3da7df2e08df7d4d487d6d418956c4c714bf2754adf9cf9 not found: ID does not exist" Jan 27 19:24:55 crc kubenswrapper[4853]: I0127 19:24:55.310151 4853 scope.go:117] "RemoveContainer" containerID="62b76c4a70f5b79e233ba01dd68860d099b10321a90691559a7aca023bb818b6" Jan 27 19:24:55 crc kubenswrapper[4853]: E0127 19:24:55.310531 4853 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"62b76c4a70f5b79e233ba01dd68860d099b10321a90691559a7aca023bb818b6\": container with ID starting with 62b76c4a70f5b79e233ba01dd68860d099b10321a90691559a7aca023bb818b6 not found: ID does not 
exist" containerID="62b76c4a70f5b79e233ba01dd68860d099b10321a90691559a7aca023bb818b6" Jan 27 19:24:55 crc kubenswrapper[4853]: I0127 19:24:55.310613 4853 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"62b76c4a70f5b79e233ba01dd68860d099b10321a90691559a7aca023bb818b6"} err="failed to get container status \"62b76c4a70f5b79e233ba01dd68860d099b10321a90691559a7aca023bb818b6\": rpc error: code = NotFound desc = could not find container \"62b76c4a70f5b79e233ba01dd68860d099b10321a90691559a7aca023bb818b6\": container with ID starting with 62b76c4a70f5b79e233ba01dd68860d099b10321a90691559a7aca023bb818b6 not found: ID does not exist" Jan 27 19:24:55 crc kubenswrapper[4853]: I0127 19:24:55.328498 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qddxx\" (UniqueName: \"kubernetes.io/projected/fc0b4299-a5f0-4173-9865-211e29350aed-kube-api-access-qddxx\") on node \"crc\" DevicePath \"\"" Jan 27 19:24:55 crc kubenswrapper[4853]: I0127 19:24:55.328823 4853 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fc0b4299-a5f0-4173-9865-211e29350aed-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 19:24:55 crc kubenswrapper[4853]: I0127 19:24:55.354196 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fc0b4299-a5f0-4173-9865-211e29350aed-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "fc0b4299-a5f0-4173-9865-211e29350aed" (UID: "fc0b4299-a5f0-4173-9865-211e29350aed"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 19:24:55 crc kubenswrapper[4853]: I0127 19:24:55.430948 4853 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fc0b4299-a5f0-4173-9865-211e29350aed-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 19:24:55 crc kubenswrapper[4853]: I0127 19:24:55.491931 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-cqzmr"] Jan 27 19:24:55 crc kubenswrapper[4853]: I0127 19:24:55.499821 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-cqzmr"] Jan 27 19:24:56 crc kubenswrapper[4853]: I0127 19:24:56.124358 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fc0b4299-a5f0-4173-9865-211e29350aed" path="/var/lib/kubelet/pods/fc0b4299-a5f0-4173-9865-211e29350aed/volumes" Jan 27 19:26:42 crc kubenswrapper[4853]: I0127 19:26:42.171191 4853 generic.go:334] "Generic (PLEG): container finished" podID="7f436e8d-9923-47a6-ab8c-ee0c8e3bde82" containerID="06a0bad020daf7bbb655a2b6f46165a66b2cc42f10cc29e91ef7ab98b45e1ddc" exitCode=0 Jan 27 19:26:42 crc kubenswrapper[4853]: I0127 19:26:42.171269 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-2v797" event={"ID":"7f436e8d-9923-47a6-ab8c-ee0c8e3bde82","Type":"ContainerDied","Data":"06a0bad020daf7bbb655a2b6f46165a66b2cc42f10cc29e91ef7ab98b45e1ddc"} Jan 27 19:26:43 crc kubenswrapper[4853]: I0127 19:26:43.588155 4853 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-2v797" Jan 27 19:26:43 crc kubenswrapper[4853]: I0127 19:26:43.683406 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7f436e8d-9923-47a6-ab8c-ee0c8e3bde82-inventory\") pod \"7f436e8d-9923-47a6-ab8c-ee0c8e3bde82\" (UID: \"7f436e8d-9923-47a6-ab8c-ee0c8e3bde82\") " Jan 27 19:26:43 crc kubenswrapper[4853]: I0127 19:26:43.683466 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/7f436e8d-9923-47a6-ab8c-ee0c8e3bde82-ceilometer-compute-config-data-0\") pod \"7f436e8d-9923-47a6-ab8c-ee0c8e3bde82\" (UID: \"7f436e8d-9923-47a6-ab8c-ee0c8e3bde82\") " Jan 27 19:26:43 crc kubenswrapper[4853]: I0127 19:26:43.683520 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7f436e8d-9923-47a6-ab8c-ee0c8e3bde82-ssh-key-openstack-edpm-ipam\") pod \"7f436e8d-9923-47a6-ab8c-ee0c8e3bde82\" (UID: \"7f436e8d-9923-47a6-ab8c-ee0c8e3bde82\") " Jan 27 19:26:43 crc kubenswrapper[4853]: I0127 19:26:43.683600 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rhklh\" (UniqueName: \"kubernetes.io/projected/7f436e8d-9923-47a6-ab8c-ee0c8e3bde82-kube-api-access-rhklh\") pod \"7f436e8d-9923-47a6-ab8c-ee0c8e3bde82\" (UID: \"7f436e8d-9923-47a6-ab8c-ee0c8e3bde82\") " Jan 27 19:26:43 crc kubenswrapper[4853]: I0127 19:26:43.683710 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7f436e8d-9923-47a6-ab8c-ee0c8e3bde82-telemetry-combined-ca-bundle\") pod \"7f436e8d-9923-47a6-ab8c-ee0c8e3bde82\" (UID: \"7f436e8d-9923-47a6-ab8c-ee0c8e3bde82\") " Jan 27 19:26:43 crc kubenswrapper[4853]: I0127 19:26:43.683759 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/7f436e8d-9923-47a6-ab8c-ee0c8e3bde82-ceilometer-compute-config-data-1\") pod \"7f436e8d-9923-47a6-ab8c-ee0c8e3bde82\" (UID: \"7f436e8d-9923-47a6-ab8c-ee0c8e3bde82\") " Jan 27 19:26:43 crc kubenswrapper[4853]: I0127 19:26:43.683830 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/7f436e8d-9923-47a6-ab8c-ee0c8e3bde82-ceilometer-compute-config-data-2\") pod \"7f436e8d-9923-47a6-ab8c-ee0c8e3bde82\" (UID: \"7f436e8d-9923-47a6-ab8c-ee0c8e3bde82\") " Jan 27 19:26:43 crc kubenswrapper[4853]: I0127 19:26:43.690535 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7f436e8d-9923-47a6-ab8c-ee0c8e3bde82-kube-api-access-rhklh" (OuterVolumeSpecName: "kube-api-access-rhklh") pod "7f436e8d-9923-47a6-ab8c-ee0c8e3bde82" (UID: "7f436e8d-9923-47a6-ab8c-ee0c8e3bde82"). InnerVolumeSpecName "kube-api-access-rhklh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 19:26:43 crc kubenswrapper[4853]: I0127 19:26:43.695272 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7f436e8d-9923-47a6-ab8c-ee0c8e3bde82-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "7f436e8d-9923-47a6-ab8c-ee0c8e3bde82" (UID: "7f436e8d-9923-47a6-ab8c-ee0c8e3bde82"). 
InnerVolumeSpecName "telemetry-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 19:26:43 crc kubenswrapper[4853]: I0127 19:26:43.713511 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7f436e8d-9923-47a6-ab8c-ee0c8e3bde82-ceilometer-compute-config-data-1" (OuterVolumeSpecName: "ceilometer-compute-config-data-1") pod "7f436e8d-9923-47a6-ab8c-ee0c8e3bde82" (UID: "7f436e8d-9923-47a6-ab8c-ee0c8e3bde82"). InnerVolumeSpecName "ceilometer-compute-config-data-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 19:26:43 crc kubenswrapper[4853]: I0127 19:26:43.713586 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7f436e8d-9923-47a6-ab8c-ee0c8e3bde82-inventory" (OuterVolumeSpecName: "inventory") pod "7f436e8d-9923-47a6-ab8c-ee0c8e3bde82" (UID: "7f436e8d-9923-47a6-ab8c-ee0c8e3bde82"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 19:26:43 crc kubenswrapper[4853]: I0127 19:26:43.715803 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7f436e8d-9923-47a6-ab8c-ee0c8e3bde82-ceilometer-compute-config-data-0" (OuterVolumeSpecName: "ceilometer-compute-config-data-0") pod "7f436e8d-9923-47a6-ab8c-ee0c8e3bde82" (UID: "7f436e8d-9923-47a6-ab8c-ee0c8e3bde82"). InnerVolumeSpecName "ceilometer-compute-config-data-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 19:26:43 crc kubenswrapper[4853]: I0127 19:26:43.715569 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7f436e8d-9923-47a6-ab8c-ee0c8e3bde82-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "7f436e8d-9923-47a6-ab8c-ee0c8e3bde82" (UID: "7f436e8d-9923-47a6-ab8c-ee0c8e3bde82"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 19:26:43 crc kubenswrapper[4853]: I0127 19:26:43.722070 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7f436e8d-9923-47a6-ab8c-ee0c8e3bde82-ceilometer-compute-config-data-2" (OuterVolumeSpecName: "ceilometer-compute-config-data-2") pod "7f436e8d-9923-47a6-ab8c-ee0c8e3bde82" (UID: "7f436e8d-9923-47a6-ab8c-ee0c8e3bde82"). InnerVolumeSpecName "ceilometer-compute-config-data-2". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 19:26:43 crc kubenswrapper[4853]: I0127 19:26:43.787679 4853 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7f436e8d-9923-47a6-ab8c-ee0c8e3bde82-inventory\") on node \"crc\" DevicePath \"\"" Jan 27 19:26:43 crc kubenswrapper[4853]: I0127 19:26:43.788146 4853 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/7f436e8d-9923-47a6-ab8c-ee0c8e3bde82-ceilometer-compute-config-data-0\") on node \"crc\" DevicePath \"\"" Jan 27 19:26:43 crc kubenswrapper[4853]: I0127 19:26:43.788275 4853 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7f436e8d-9923-47a6-ab8c-ee0c8e3bde82-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 27 19:26:43 crc kubenswrapper[4853]: I0127 19:26:43.789047 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rhklh\" (UniqueName: \"kubernetes.io/projected/7f436e8d-9923-47a6-ab8c-ee0c8e3bde82-kube-api-access-rhklh\") on node \"crc\" DevicePath \"\"" Jan 27 19:26:43 crc kubenswrapper[4853]: I0127 19:26:43.789162 4853 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7f436e8d-9923-47a6-ab8c-ee0c8e3bde82-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 19:26:43 crc kubenswrapper[4853]: I0127 19:26:43.789250 4853 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/7f436e8d-9923-47a6-ab8c-ee0c8e3bde82-ceilometer-compute-config-data-1\") on node \"crc\" DevicePath \"\"" Jan 27 19:26:43 crc kubenswrapper[4853]: I0127 19:26:43.789370 4853 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/7f436e8d-9923-47a6-ab8c-ee0c8e3bde82-ceilometer-compute-config-data-2\") on node \"crc\" DevicePath \"\"" Jan 27 19:26:44 crc kubenswrapper[4853]: I0127 19:26:44.190639 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-2v797" event={"ID":"7f436e8d-9923-47a6-ab8c-ee0c8e3bde82","Type":"ContainerDied","Data":"3e98d320cd99a712d143f888ccfff91650e020c8d67b86a978531504118b4fb3"} Jan 27 19:26:44 crc kubenswrapper[4853]: I0127 19:26:44.191292 4853 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3e98d320cd99a712d143f888ccfff91650e020c8d67b86a978531504118b4fb3" Jan 27 19:26:44 crc kubenswrapper[4853]: I0127 19:26:44.190723 4853 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-2v797" Jan 27 19:27:05 crc kubenswrapper[4853]: I0127 19:27:05.540945 4853 patch_prober.go:28] interesting pod/machine-config-daemon-6gqj2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 19:27:05 crc kubenswrapper[4853]: I0127 19:27:05.541715 4853 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6gqj2" podUID="b8a89b1e-bef8-4cb7-930c-480d3125778c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 19:27:35 crc kubenswrapper[4853]: I0127 19:27:35.542233 4853 patch_prober.go:28] interesting pod/machine-config-daemon-6gqj2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 19:27:35 crc kubenswrapper[4853]: I0127 19:27:35.543008 4853 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6gqj2" podUID="b8a89b1e-bef8-4cb7-930c-480d3125778c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 19:27:49 crc kubenswrapper[4853]: I0127 19:27:49.940882 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/tempest-tests-tempest"] Jan 27 19:27:49 crc kubenswrapper[4853]: E0127 19:27:49.942600 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fc0b4299-a5f0-4173-9865-211e29350aed" containerName="registry-server" Jan 27 19:27:49 crc kubenswrapper[4853]: I0127 19:27:49.942623 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="fc0b4299-a5f0-4173-9865-211e29350aed" containerName="registry-server" Jan 27 19:27:49 crc kubenswrapper[4853]: E0127 19:27:49.942644 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7f436e8d-9923-47a6-ab8c-ee0c8e3bde82" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Jan 27 19:27:49 crc kubenswrapper[4853]: I0127 19:27:49.942657 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="7f436e8d-9923-47a6-ab8c-ee0c8e3bde82" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Jan 27 19:27:49 crc kubenswrapper[4853]: E0127 19:27:49.942670 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fc0b4299-a5f0-4173-9865-211e29350aed" containerName="extract-utilities" Jan 27 19:27:49 crc kubenswrapper[4853]: I0127 19:27:49.942679 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="fc0b4299-a5f0-4173-9865-211e29350aed" containerName="extract-utilities" Jan 27 19:27:49 crc kubenswrapper[4853]: E0127 19:27:49.942690 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fc0b4299-a5f0-4173-9865-211e29350aed" containerName="extract-content" Jan 27 19:27:49 crc kubenswrapper[4853]: I0127 19:27:49.942698 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="fc0b4299-a5f0-4173-9865-211e29350aed" containerName="extract-content" Jan 27 19:27:49 crc kubenswrapper[4853]: I0127 19:27:49.942964 4853 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="fc0b4299-a5f0-4173-9865-211e29350aed" containerName="registry-server" Jan 27 19:27:49 crc kubenswrapper[4853]: I0127 19:27:49.942987 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="7f436e8d-9923-47a6-ab8c-ee0c8e3bde82" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Jan 27 19:27:49 crc kubenswrapper[4853]: I0127 19:27:49.944001 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest" Jan 27 19:27:49 crc kubenswrapper[4853]: I0127 19:27:49.946299 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"test-operator-controller-priv-key" Jan 27 19:27:49 crc kubenswrapper[4853]: I0127 19:27:49.946544 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-custom-data-s0" Jan 27 19:27:49 crc kubenswrapper[4853]: I0127 19:27:49.947854 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-h7m2w" Jan 27 19:27:49 crc kubenswrapper[4853]: I0127 19:27:49.947889 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0" Jan 27 19:27:49 crc kubenswrapper[4853]: I0127 19:27:49.952140 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"] Jan 27 19:27:50 crc kubenswrapper[4853]: I0127 19:27:50.038750 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ztd7q\" (UniqueName: \"kubernetes.io/projected/6275c0bd-3255-4c3d-88bc-30f5d1ee27ca-kube-api-access-ztd7q\") pod \"tempest-tests-tempest\" (UID: \"6275c0bd-3255-4c3d-88bc-30f5d1ee27ca\") " pod="openstack/tempest-tests-tempest" Jan 27 19:27:50 crc kubenswrapper[4853]: I0127 19:27:50.038848 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/6275c0bd-3255-4c3d-88bc-30f5d1ee27ca-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"6275c0bd-3255-4c3d-88bc-30f5d1ee27ca\") " pod="openstack/tempest-tests-tempest" Jan 27 19:27:50 crc kubenswrapper[4853]: I0127 19:27:50.038896 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/6275c0bd-3255-4c3d-88bc-30f5d1ee27ca-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"6275c0bd-3255-4c3d-88bc-30f5d1ee27ca\") " pod="openstack/tempest-tests-tempest" Jan 27 19:27:50 crc kubenswrapper[4853]: I0127 19:27:50.038938 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/6275c0bd-3255-4c3d-88bc-30f5d1ee27ca-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"6275c0bd-3255-4c3d-88bc-30f5d1ee27ca\") " pod="openstack/tempest-tests-tempest" Jan 27 19:27:50 crc kubenswrapper[4853]: I0127 19:27:50.039048 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/6275c0bd-3255-4c3d-88bc-30f5d1ee27ca-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"6275c0bd-3255-4c3d-88bc-30f5d1ee27ca\") " pod="openstack/tempest-tests-tempest" Jan 27 19:27:50 crc kubenswrapper[4853]: I0127 19:27:50.039083 4853 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"tempest-tests-tempest\" (UID: \"6275c0bd-3255-4c3d-88bc-30f5d1ee27ca\") " pod="openstack/tempest-tests-tempest" Jan 27 19:27:50 crc kubenswrapper[4853]: I0127 19:27:50.039214 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6275c0bd-3255-4c3d-88bc-30f5d1ee27ca-config-data\") pod \"tempest-tests-tempest\" (UID: \"6275c0bd-3255-4c3d-88bc-30f5d1ee27ca\") " pod="openstack/tempest-tests-tempest" Jan 27 19:27:50 crc kubenswrapper[4853]: I0127 19:27:50.039255 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/6275c0bd-3255-4c3d-88bc-30f5d1ee27ca-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"6275c0bd-3255-4c3d-88bc-30f5d1ee27ca\") " pod="openstack/tempest-tests-tempest" Jan 27 19:27:50 crc kubenswrapper[4853]: I0127 19:27:50.039294 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/6275c0bd-3255-4c3d-88bc-30f5d1ee27ca-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"6275c0bd-3255-4c3d-88bc-30f5d1ee27ca\") " pod="openstack/tempest-tests-tempest" Jan 27 19:27:50 crc kubenswrapper[4853]: I0127 19:27:50.140855 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/6275c0bd-3255-4c3d-88bc-30f5d1ee27ca-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"6275c0bd-3255-4c3d-88bc-30f5d1ee27ca\") " pod="openstack/tempest-tests-tempest" Jan 27 19:27:50 crc kubenswrapper[4853]: I0127 19:27:50.140902 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"tempest-tests-tempest\" (UID: \"6275c0bd-3255-4c3d-88bc-30f5d1ee27ca\") " pod="openstack/tempest-tests-tempest" Jan 27 19:27:50 crc kubenswrapper[4853]: I0127 19:27:50.140966 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6275c0bd-3255-4c3d-88bc-30f5d1ee27ca-config-data\") pod \"tempest-tests-tempest\" (UID: \"6275c0bd-3255-4c3d-88bc-30f5d1ee27ca\") " pod="openstack/tempest-tests-tempest" Jan 27 19:27:50 crc kubenswrapper[4853]: I0127 19:27:50.141014 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/6275c0bd-3255-4c3d-88bc-30f5d1ee27ca-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"6275c0bd-3255-4c3d-88bc-30f5d1ee27ca\") " pod="openstack/tempest-tests-tempest" Jan 27 19:27:50 crc kubenswrapper[4853]: I0127 19:27:50.141061 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/6275c0bd-3255-4c3d-88bc-30f5d1ee27ca-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"6275c0bd-3255-4c3d-88bc-30f5d1ee27ca\") " pod="openstack/tempest-tests-tempest" Jan 27 19:27:50 crc kubenswrapper[4853]: I0127 19:27:50.141096 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ztd7q\" (UniqueName: \"kubernetes.io/projected/6275c0bd-3255-4c3d-88bc-30f5d1ee27ca-kube-api-access-ztd7q\") pod 
\"tempest-tests-tempest\" (UID: \"6275c0bd-3255-4c3d-88bc-30f5d1ee27ca\") " pod="openstack/tempest-tests-tempest" Jan 27 19:27:50 crc kubenswrapper[4853]: I0127 19:27:50.141144 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/6275c0bd-3255-4c3d-88bc-30f5d1ee27ca-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"6275c0bd-3255-4c3d-88bc-30f5d1ee27ca\") " pod="openstack/tempest-tests-tempest" Jan 27 19:27:50 crc kubenswrapper[4853]: I0127 19:27:50.141165 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/6275c0bd-3255-4c3d-88bc-30f5d1ee27ca-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"6275c0bd-3255-4c3d-88bc-30f5d1ee27ca\") " pod="openstack/tempest-tests-tempest" Jan 27 19:27:50 crc kubenswrapper[4853]: I0127 19:27:50.141187 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/6275c0bd-3255-4c3d-88bc-30f5d1ee27ca-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"6275c0bd-3255-4c3d-88bc-30f5d1ee27ca\") " pod="openstack/tempest-tests-tempest" Jan 27 19:27:50 crc kubenswrapper[4853]: I0127 19:27:50.141444 4853 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"tempest-tests-tempest\" (UID: \"6275c0bd-3255-4c3d-88bc-30f5d1ee27ca\") device mount path \"/mnt/openstack/pv04\"" pod="openstack/tempest-tests-tempest" Jan 27 19:27:50 crc kubenswrapper[4853]: I0127 19:27:50.141611 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/6275c0bd-3255-4c3d-88bc-30f5d1ee27ca-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"6275c0bd-3255-4c3d-88bc-30f5d1ee27ca\") " pod="openstack/tempest-tests-tempest" Jan 27 19:27:50 crc kubenswrapper[4853]: I0127 19:27:50.141657 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/6275c0bd-3255-4c3d-88bc-30f5d1ee27ca-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"6275c0bd-3255-4c3d-88bc-30f5d1ee27ca\") " pod="openstack/tempest-tests-tempest" Jan 27 19:27:50 crc kubenswrapper[4853]: I0127 19:27:50.142532 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/6275c0bd-3255-4c3d-88bc-30f5d1ee27ca-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"6275c0bd-3255-4c3d-88bc-30f5d1ee27ca\") " pod="openstack/tempest-tests-tempest" Jan 27 19:27:50 crc kubenswrapper[4853]: I0127 19:27:50.142787 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6275c0bd-3255-4c3d-88bc-30f5d1ee27ca-config-data\") pod \"tempest-tests-tempest\" (UID: \"6275c0bd-3255-4c3d-88bc-30f5d1ee27ca\") " pod="openstack/tempest-tests-tempest" Jan 27 19:27:50 crc kubenswrapper[4853]: I0127 19:27:50.149859 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/6275c0bd-3255-4c3d-88bc-30f5d1ee27ca-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: 
\"6275c0bd-3255-4c3d-88bc-30f5d1ee27ca\") " pod="openstack/tempest-tests-tempest" Jan 27 19:27:50 crc kubenswrapper[4853]: I0127 19:27:50.149916 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/6275c0bd-3255-4c3d-88bc-30f5d1ee27ca-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"6275c0bd-3255-4c3d-88bc-30f5d1ee27ca\") " pod="openstack/tempest-tests-tempest" Jan 27 19:27:50 crc kubenswrapper[4853]: I0127 19:27:50.150761 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/6275c0bd-3255-4c3d-88bc-30f5d1ee27ca-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"6275c0bd-3255-4c3d-88bc-30f5d1ee27ca\") " pod="openstack/tempest-tests-tempest" Jan 27 19:27:50 crc kubenswrapper[4853]: I0127 19:27:50.161974 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ztd7q\" (UniqueName: \"kubernetes.io/projected/6275c0bd-3255-4c3d-88bc-30f5d1ee27ca-kube-api-access-ztd7q\") pod \"tempest-tests-tempest\" (UID: \"6275c0bd-3255-4c3d-88bc-30f5d1ee27ca\") " pod="openstack/tempest-tests-tempest" Jan 27 19:27:50 crc kubenswrapper[4853]: I0127 19:27:50.172427 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"tempest-tests-tempest\" (UID: \"6275c0bd-3255-4c3d-88bc-30f5d1ee27ca\") " pod="openstack/tempest-tests-tempest" Jan 27 19:27:50 crc kubenswrapper[4853]: I0127 19:27:50.281056 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest" Jan 27 19:27:50 crc kubenswrapper[4853]: I0127 19:27:50.721029 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"] Jan 27 19:27:50 crc kubenswrapper[4853]: I0127 19:27:50.738156 4853 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 27 19:27:50 crc kubenswrapper[4853]: I0127 19:27:50.848840 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"6275c0bd-3255-4c3d-88bc-30f5d1ee27ca","Type":"ContainerStarted","Data":"e5730f9c2f89bc08094692e0132ae5233228cbd99aabbdd9be0a06fef21d5807"} Jan 27 19:28:05 crc kubenswrapper[4853]: I0127 19:28:05.541035 4853 patch_prober.go:28] interesting pod/machine-config-daemon-6gqj2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 19:28:05 crc kubenswrapper[4853]: I0127 19:28:05.541752 4853 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6gqj2" podUID="b8a89b1e-bef8-4cb7-930c-480d3125778c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 19:28:05 crc kubenswrapper[4853]: I0127 19:28:05.541805 4853 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-6gqj2" Jan 27 19:28:05 crc kubenswrapper[4853]: I0127 19:28:05.542685 4853 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"551690f0d5262de4770162d1a6ba120739293508bda19b0e91c2ce59875386da"} 
pod="openshift-machine-config-operator/machine-config-daemon-6gqj2" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 27 19:28:05 crc kubenswrapper[4853]: I0127 19:28:05.542752 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-6gqj2" podUID="b8a89b1e-bef8-4cb7-930c-480d3125778c" containerName="machine-config-daemon" containerID="cri-o://551690f0d5262de4770162d1a6ba120739293508bda19b0e91c2ce59875386da" gracePeriod=600 Jan 27 19:28:07 crc kubenswrapper[4853]: I0127 19:28:07.044200 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6gqj2" event={"ID":"b8a89b1e-bef8-4cb7-930c-480d3125778c","Type":"ContainerDied","Data":"551690f0d5262de4770162d1a6ba120739293508bda19b0e91c2ce59875386da"} Jan 27 19:28:07 crc kubenswrapper[4853]: I0127 19:28:07.044145 4853 generic.go:334] "Generic (PLEG): container finished" podID="b8a89b1e-bef8-4cb7-930c-480d3125778c" containerID="551690f0d5262de4770162d1a6ba120739293508bda19b0e91c2ce59875386da" exitCode=0 Jan 27 19:28:07 crc kubenswrapper[4853]: I0127 19:28:07.044854 4853 scope.go:117] "RemoveContainer" containerID="0cb3bd4a1cb8b018f21dc73938efa92b8095f4623789c8832e2d36d3293fa158" Jan 27 19:28:35 crc kubenswrapper[4853]: E0127 19:28:35.985214 4853 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified" Jan 27 19:28:35 crc kubenswrapper[4853]: E0127 19:28:35.986053 4853 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:tempest-tests-tempest-tests-runner,Image:quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/test_operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-ephemeral-workdir,ReadOnly:false,MountPath:/var/lib/tempest,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-ephemeral-temporary,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-logs,ReadOnly:false,MountPath:/var/lib/tempest/external_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/etc/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/var/lib/tempest/.config/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config-secret,ReadOnly:false,MountPath:/etc/openstack/secure.yaml,SubPath:secure.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ca-certs,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ssh-key,ReadOnly:false,MountPath:/var/lib/tempest/id_ecdsa,SubPath:ssh_key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ztd7q,Read
Only:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42480,RunAsNonRoot:*false,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:*true,RunAsGroup:*42480,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:tempest-tests-tempest-custom-data-s0,},Optional:nil,},SecretRef:nil,},EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:tempest-tests-tempest-env-vars-s0,},Optional:nil,},SecretRef:nil,},},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod tempest-tests-tempest_openstack(6275c0bd-3255-4c3d-88bc-30f5d1ee27ca): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 27 19:28:35 crc kubenswrapper[4853]: E0127 19:28:35.987256 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tempest-tests-tempest-tests-runner\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/tempest-tests-tempest" podUID="6275c0bd-3255-4c3d-88bc-30f5d1ee27ca" Jan 27 19:28:36 crc kubenswrapper[4853]: I0127 19:28:36.360588 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6gqj2" event={"ID":"b8a89b1e-bef8-4cb7-930c-480d3125778c","Type":"ContainerStarted","Data":"378216b9346c59ec4367bd411767436a12ea78535e5eec607c87075262405938"} Jan 27 19:28:36 crc kubenswrapper[4853]: E0127 19:28:36.363303 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tempest-tests-tempest-tests-runner\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified\\\"\"" pod="openstack/tempest-tests-tempest" podUID="6275c0bd-3255-4c3d-88bc-30f5d1ee27ca" Jan 27 19:28:49 crc kubenswrapper[4853]: I0127 19:28:49.653593 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0" Jan 27 19:28:51 crc kubenswrapper[4853]: I0127 19:28:51.518393 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"6275c0bd-3255-4c3d-88bc-30f5d1ee27ca","Type":"ContainerStarted","Data":"ace899c42d69736fd5bb5e1192e36a46aa89508f24b10b5a764b05658cc74463"} Jan 27 19:28:51 crc kubenswrapper[4853]: I0127 19:28:51.544406 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/tempest-tests-tempest" podStartSLOduration=4.632364964 podStartE2EDuration="1m3.544383655s" podCreationTimestamp="2026-01-27 19:27:48 +0000 UTC" firstStartedPulling="2026-01-27 19:27:50.737871765 +0000 UTC m=+2713.200414648" lastFinishedPulling="2026-01-27 19:28:49.649890456 +0000 UTC m=+2772.112433339" observedRunningTime="2026-01-27 19:28:51.540247167 +0000 UTC m=+2774.002790050" watchObservedRunningTime="2026-01-27 
19:28:51.544383655 +0000 UTC m=+2774.006926538" Jan 27 19:30:00 crc kubenswrapper[4853]: I0127 19:30:00.158764 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492370-zzz48"] Jan 27 19:30:00 crc kubenswrapper[4853]: I0127 19:30:00.161111 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492370-zzz48" Jan 27 19:30:00 crc kubenswrapper[4853]: I0127 19:30:00.163573 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 27 19:30:00 crc kubenswrapper[4853]: I0127 19:30:00.163838 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 27 19:30:00 crc kubenswrapper[4853]: I0127 19:30:00.171874 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492370-zzz48"] Jan 27 19:30:00 crc kubenswrapper[4853]: I0127 19:30:00.326829 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nq82t\" (UniqueName: \"kubernetes.io/projected/d429d022-5145-439e-9cf6-f56be91d06ca-kube-api-access-nq82t\") pod \"collect-profiles-29492370-zzz48\" (UID: \"d429d022-5145-439e-9cf6-f56be91d06ca\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492370-zzz48" Jan 27 19:30:00 crc kubenswrapper[4853]: I0127 19:30:00.326914 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d429d022-5145-439e-9cf6-f56be91d06ca-config-volume\") pod \"collect-profiles-29492370-zzz48\" (UID: \"d429d022-5145-439e-9cf6-f56be91d06ca\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492370-zzz48" Jan 27 19:30:00 crc kubenswrapper[4853]: I0127 19:30:00.326957 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d429d022-5145-439e-9cf6-f56be91d06ca-secret-volume\") pod \"collect-profiles-29492370-zzz48\" (UID: \"d429d022-5145-439e-9cf6-f56be91d06ca\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492370-zzz48" Jan 27 19:30:00 crc kubenswrapper[4853]: I0127 19:30:00.428926 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nq82t\" (UniqueName: \"kubernetes.io/projected/d429d022-5145-439e-9cf6-f56be91d06ca-kube-api-access-nq82t\") pod \"collect-profiles-29492370-zzz48\" (UID: \"d429d022-5145-439e-9cf6-f56be91d06ca\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492370-zzz48" Jan 27 19:30:00 crc kubenswrapper[4853]: I0127 19:30:00.429013 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d429d022-5145-439e-9cf6-f56be91d06ca-config-volume\") pod \"collect-profiles-29492370-zzz48\" (UID: \"d429d022-5145-439e-9cf6-f56be91d06ca\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492370-zzz48" Jan 27 19:30:00 crc kubenswrapper[4853]: I0127 19:30:00.429039 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d429d022-5145-439e-9cf6-f56be91d06ca-secret-volume\") pod \"collect-profiles-29492370-zzz48\" (UID: 
\"d429d022-5145-439e-9cf6-f56be91d06ca\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492370-zzz48" Jan 27 19:30:00 crc kubenswrapper[4853]: I0127 19:30:00.430324 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d429d022-5145-439e-9cf6-f56be91d06ca-config-volume\") pod \"collect-profiles-29492370-zzz48\" (UID: \"d429d022-5145-439e-9cf6-f56be91d06ca\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492370-zzz48" Jan 27 19:30:00 crc kubenswrapper[4853]: I0127 19:30:00.440150 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d429d022-5145-439e-9cf6-f56be91d06ca-secret-volume\") pod \"collect-profiles-29492370-zzz48\" (UID: \"d429d022-5145-439e-9cf6-f56be91d06ca\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492370-zzz48" Jan 27 19:30:00 crc kubenswrapper[4853]: I0127 19:30:00.446539 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nq82t\" (UniqueName: \"kubernetes.io/projected/d429d022-5145-439e-9cf6-f56be91d06ca-kube-api-access-nq82t\") pod \"collect-profiles-29492370-zzz48\" (UID: \"d429d022-5145-439e-9cf6-f56be91d06ca\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492370-zzz48" Jan 27 19:30:00 crc kubenswrapper[4853]: I0127 19:30:00.505753 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492370-zzz48" Jan 27 19:30:00 crc kubenswrapper[4853]: I0127 19:30:00.986465 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492370-zzz48"] Jan 27 19:30:01 crc kubenswrapper[4853]: I0127 19:30:01.169434 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29492370-zzz48" event={"ID":"d429d022-5145-439e-9cf6-f56be91d06ca","Type":"ContainerStarted","Data":"a1229aeb627680c802269fcb416c8d34f863d1f73fe5b911ba65d159722647d4"} Jan 27 19:30:02 crc kubenswrapper[4853]: I0127 19:30:02.204411 4853 generic.go:334] "Generic (PLEG): container finished" podID="d429d022-5145-439e-9cf6-f56be91d06ca" containerID="d69995c95e671f868803b5e1984c58695764bcdedad76745059832a2009cd7c8" exitCode=0 Jan 27 19:30:02 crc kubenswrapper[4853]: I0127 19:30:02.204732 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29492370-zzz48" event={"ID":"d429d022-5145-439e-9cf6-f56be91d06ca","Type":"ContainerDied","Data":"d69995c95e671f868803b5e1984c58695764bcdedad76745059832a2009cd7c8"} Jan 27 19:30:03 crc kubenswrapper[4853]: I0127 19:30:03.607430 4853 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492370-zzz48" Jan 27 19:30:03 crc kubenswrapper[4853]: I0127 19:30:03.704961 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nq82t\" (UniqueName: \"kubernetes.io/projected/d429d022-5145-439e-9cf6-f56be91d06ca-kube-api-access-nq82t\") pod \"d429d022-5145-439e-9cf6-f56be91d06ca\" (UID: \"d429d022-5145-439e-9cf6-f56be91d06ca\") " Jan 27 19:30:03 crc kubenswrapper[4853]: I0127 19:30:03.705243 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d429d022-5145-439e-9cf6-f56be91d06ca-secret-volume\") pod \"d429d022-5145-439e-9cf6-f56be91d06ca\" (UID: \"d429d022-5145-439e-9cf6-f56be91d06ca\") " Jan 27 19:30:03 crc kubenswrapper[4853]: I0127 19:30:03.705395 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d429d022-5145-439e-9cf6-f56be91d06ca-config-volume\") pod \"d429d022-5145-439e-9cf6-f56be91d06ca\" (UID: \"d429d022-5145-439e-9cf6-f56be91d06ca\") " Jan 27 19:30:03 crc kubenswrapper[4853]: I0127 19:30:03.706387 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d429d022-5145-439e-9cf6-f56be91d06ca-config-volume" (OuterVolumeSpecName: "config-volume") pod "d429d022-5145-439e-9cf6-f56be91d06ca" (UID: "d429d022-5145-439e-9cf6-f56be91d06ca"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 19:30:03 crc kubenswrapper[4853]: I0127 19:30:03.711939 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d429d022-5145-439e-9cf6-f56be91d06ca-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "d429d022-5145-439e-9cf6-f56be91d06ca" (UID: "d429d022-5145-439e-9cf6-f56be91d06ca"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 19:30:03 crc kubenswrapper[4853]: I0127 19:30:03.712257 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d429d022-5145-439e-9cf6-f56be91d06ca-kube-api-access-nq82t" (OuterVolumeSpecName: "kube-api-access-nq82t") pod "d429d022-5145-439e-9cf6-f56be91d06ca" (UID: "d429d022-5145-439e-9cf6-f56be91d06ca"). InnerVolumeSpecName "kube-api-access-nq82t". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 19:30:03 crc kubenswrapper[4853]: I0127 19:30:03.807699 4853 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d429d022-5145-439e-9cf6-f56be91d06ca-config-volume\") on node \"crc\" DevicePath \"\"" Jan 27 19:30:03 crc kubenswrapper[4853]: I0127 19:30:03.808167 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nq82t\" (UniqueName: \"kubernetes.io/projected/d429d022-5145-439e-9cf6-f56be91d06ca-kube-api-access-nq82t\") on node \"crc\" DevicePath \"\"" Jan 27 19:30:03 crc kubenswrapper[4853]: I0127 19:30:03.808184 4853 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d429d022-5145-439e-9cf6-f56be91d06ca-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 27 19:30:04 crc kubenswrapper[4853]: I0127 19:30:04.222863 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29492370-zzz48" event={"ID":"d429d022-5145-439e-9cf6-f56be91d06ca","Type":"ContainerDied","Data":"a1229aeb627680c802269fcb416c8d34f863d1f73fe5b911ba65d159722647d4"} Jan 27 19:30:04 crc kubenswrapper[4853]: I0127 19:30:04.222921 4853 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a1229aeb627680c802269fcb416c8d34f863d1f73fe5b911ba65d159722647d4" Jan 27 19:30:04 crc kubenswrapper[4853]: I0127 19:30:04.222942 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492370-zzz48" Jan 27 19:30:04 crc kubenswrapper[4853]: I0127 19:30:04.686580 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492325-nbfpk"] Jan 27 19:30:04 crc kubenswrapper[4853]: I0127 19:30:04.694170 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492325-nbfpk"] Jan 27 19:30:05 crc kubenswrapper[4853]: I0127 19:30:05.053907 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-g6ssq"] Jan 27 19:30:05 crc kubenswrapper[4853]: E0127 19:30:05.055036 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d429d022-5145-439e-9cf6-f56be91d06ca" containerName="collect-profiles" Jan 27 19:30:05 crc kubenswrapper[4853]: I0127 19:30:05.055268 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="d429d022-5145-439e-9cf6-f56be91d06ca" containerName="collect-profiles" Jan 27 19:30:05 crc kubenswrapper[4853]: I0127 19:30:05.055491 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="d429d022-5145-439e-9cf6-f56be91d06ca" containerName="collect-profiles" Jan 27 19:30:05 crc kubenswrapper[4853]: I0127 19:30:05.057577 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-g6ssq" Jan 27 19:30:05 crc kubenswrapper[4853]: I0127 19:30:05.063805 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-g6ssq"] Jan 27 19:30:05 crc kubenswrapper[4853]: I0127 19:30:05.136904 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b1947960-9106-4a47-aae4-f9d334cbca18-catalog-content\") pod \"redhat-marketplace-g6ssq\" (UID: \"b1947960-9106-4a47-aae4-f9d334cbca18\") " pod="openshift-marketplace/redhat-marketplace-g6ssq" Jan 27 19:30:05 crc kubenswrapper[4853]: I0127 19:30:05.136977 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vgvcx\" (UniqueName: \"kubernetes.io/projected/b1947960-9106-4a47-aae4-f9d334cbca18-kube-api-access-vgvcx\") pod \"redhat-marketplace-g6ssq\" (UID: \"b1947960-9106-4a47-aae4-f9d334cbca18\") " pod="openshift-marketplace/redhat-marketplace-g6ssq" Jan 27 19:30:05 crc kubenswrapper[4853]: I0127 19:30:05.137021 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b1947960-9106-4a47-aae4-f9d334cbca18-utilities\") pod \"redhat-marketplace-g6ssq\" (UID: \"b1947960-9106-4a47-aae4-f9d334cbca18\") " pod="openshift-marketplace/redhat-marketplace-g6ssq" Jan 27 19:30:05 crc kubenswrapper[4853]: I0127 19:30:05.239692 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b1947960-9106-4a47-aae4-f9d334cbca18-catalog-content\") pod \"redhat-marketplace-g6ssq\" (UID: \"b1947960-9106-4a47-aae4-f9d334cbca18\") " pod="openshift-marketplace/redhat-marketplace-g6ssq" Jan 27 19:30:05 crc kubenswrapper[4853]: I0127 19:30:05.240176 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vgvcx\" (UniqueName: \"kubernetes.io/projected/b1947960-9106-4a47-aae4-f9d334cbca18-kube-api-access-vgvcx\") pod \"redhat-marketplace-g6ssq\" (UID: \"b1947960-9106-4a47-aae4-f9d334cbca18\") " pod="openshift-marketplace/redhat-marketplace-g6ssq" Jan 27 19:30:05 crc kubenswrapper[4853]: I0127 19:30:05.240362 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b1947960-9106-4a47-aae4-f9d334cbca18-catalog-content\") pod \"redhat-marketplace-g6ssq\" (UID: \"b1947960-9106-4a47-aae4-f9d334cbca18\") " pod="openshift-marketplace/redhat-marketplace-g6ssq" Jan 27 19:30:05 crc kubenswrapper[4853]: I0127 19:30:05.240344 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b1947960-9106-4a47-aae4-f9d334cbca18-utilities\") pod \"redhat-marketplace-g6ssq\" (UID: \"b1947960-9106-4a47-aae4-f9d334cbca18\") " pod="openshift-marketplace/redhat-marketplace-g6ssq" Jan 27 19:30:05 crc kubenswrapper[4853]: I0127 19:30:05.240647 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b1947960-9106-4a47-aae4-f9d334cbca18-utilities\") pod \"redhat-marketplace-g6ssq\" (UID: \"b1947960-9106-4a47-aae4-f9d334cbca18\") " pod="openshift-marketplace/redhat-marketplace-g6ssq" Jan 27 19:30:05 crc kubenswrapper[4853]: I0127 19:30:05.258430 4853 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-vgvcx\" (UniqueName: \"kubernetes.io/projected/b1947960-9106-4a47-aae4-f9d334cbca18-kube-api-access-vgvcx\") pod \"redhat-marketplace-g6ssq\" (UID: \"b1947960-9106-4a47-aae4-f9d334cbca18\") " pod="openshift-marketplace/redhat-marketplace-g6ssq" Jan 27 19:30:05 crc kubenswrapper[4853]: I0127 19:30:05.374736 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-g6ssq" Jan 27 19:30:05 crc kubenswrapper[4853]: I0127 19:30:05.871288 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-g6ssq"] Jan 27 19:30:05 crc kubenswrapper[4853]: W0127 19:30:05.871659 4853 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb1947960_9106_4a47_aae4_f9d334cbca18.slice/crio-06177d20e721002187a84d557712b8795996ff0884973c3ba1a6bc6c121656fd WatchSource:0}: Error finding container 06177d20e721002187a84d557712b8795996ff0884973c3ba1a6bc6c121656fd: Status 404 returned error can't find the container with id 06177d20e721002187a84d557712b8795996ff0884973c3ba1a6bc6c121656fd Jan 27 19:30:06 crc kubenswrapper[4853]: I0127 19:30:06.130986 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="41434dfd-3fc3-4184-a911-506620889ebe" path="/var/lib/kubelet/pods/41434dfd-3fc3-4184-a911-506620889ebe/volumes" Jan 27 19:30:06 crc kubenswrapper[4853]: I0127 19:30:06.243567 4853 generic.go:334] "Generic (PLEG): container finished" podID="b1947960-9106-4a47-aae4-f9d334cbca18" containerID="bf8dea672b15a7d32228d36a80d42cfee483bee3a7272837e68383c048560ad9" exitCode=0 Jan 27 19:30:06 crc kubenswrapper[4853]: I0127 19:30:06.243643 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-g6ssq" event={"ID":"b1947960-9106-4a47-aae4-f9d334cbca18","Type":"ContainerDied","Data":"bf8dea672b15a7d32228d36a80d42cfee483bee3a7272837e68383c048560ad9"} Jan 27 19:30:06 crc kubenswrapper[4853]: I0127 19:30:06.243696 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-g6ssq" event={"ID":"b1947960-9106-4a47-aae4-f9d334cbca18","Type":"ContainerStarted","Data":"06177d20e721002187a84d557712b8795996ff0884973c3ba1a6bc6c121656fd"} Jan 27 19:30:08 crc kubenswrapper[4853]: I0127 19:30:08.276858 4853 generic.go:334] "Generic (PLEG): container finished" podID="b1947960-9106-4a47-aae4-f9d334cbca18" containerID="1f677fd276f3de506e80a4799663749816f15339d3a9370ec88bcc1e21a7b351" exitCode=0 Jan 27 19:30:08 crc kubenswrapper[4853]: I0127 19:30:08.277284 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-g6ssq" event={"ID":"b1947960-9106-4a47-aae4-f9d334cbca18","Type":"ContainerDied","Data":"1f677fd276f3de506e80a4799663749816f15339d3a9370ec88bcc1e21a7b351"} Jan 27 19:30:10 crc kubenswrapper[4853]: I0127 19:30:10.297226 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-g6ssq" event={"ID":"b1947960-9106-4a47-aae4-f9d334cbca18","Type":"ContainerStarted","Data":"83d7a9f1ee680888baa46509e313776adc8a1f1db226ca308fd42f7f97b8a839"} Jan 27 19:30:10 crc kubenswrapper[4853]: I0127 19:30:10.329276 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-g6ssq" podStartSLOduration=1.931319579 podStartE2EDuration="5.329243059s" podCreationTimestamp="2026-01-27 19:30:05 +0000 UTC" 
firstStartedPulling="2026-01-27 19:30:06.245505018 +0000 UTC m=+2848.708047901" lastFinishedPulling="2026-01-27 19:30:09.643428498 +0000 UTC m=+2852.105971381" observedRunningTime="2026-01-27 19:30:10.321920571 +0000 UTC m=+2852.784463474" watchObservedRunningTime="2026-01-27 19:30:10.329243059 +0000 UTC m=+2852.791785952" Jan 27 19:30:15 crc kubenswrapper[4853]: I0127 19:30:15.375244 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-g6ssq" Jan 27 19:30:15 crc kubenswrapper[4853]: I0127 19:30:15.375993 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-g6ssq" Jan 27 19:30:15 crc kubenswrapper[4853]: I0127 19:30:15.435556 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-g6ssq" Jan 27 19:30:16 crc kubenswrapper[4853]: I0127 19:30:16.409179 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-g6ssq" Jan 27 19:30:16 crc kubenswrapper[4853]: I0127 19:30:16.478922 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-g6ssq"] Jan 27 19:30:18 crc kubenswrapper[4853]: I0127 19:30:18.377429 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-g6ssq" podUID="b1947960-9106-4a47-aae4-f9d334cbca18" containerName="registry-server" containerID="cri-o://83d7a9f1ee680888baa46509e313776adc8a1f1db226ca308fd42f7f97b8a839" gracePeriod=2 Jan 27 19:30:18 crc kubenswrapper[4853]: I0127 19:30:18.841864 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-g6ssq" Jan 27 19:30:19 crc kubenswrapper[4853]: I0127 19:30:19.011523 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b1947960-9106-4a47-aae4-f9d334cbca18-catalog-content\") pod \"b1947960-9106-4a47-aae4-f9d334cbca18\" (UID: \"b1947960-9106-4a47-aae4-f9d334cbca18\") " Jan 27 19:30:19 crc kubenswrapper[4853]: I0127 19:30:19.011595 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vgvcx\" (UniqueName: \"kubernetes.io/projected/b1947960-9106-4a47-aae4-f9d334cbca18-kube-api-access-vgvcx\") pod \"b1947960-9106-4a47-aae4-f9d334cbca18\" (UID: \"b1947960-9106-4a47-aae4-f9d334cbca18\") " Jan 27 19:30:19 crc kubenswrapper[4853]: I0127 19:30:19.011676 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b1947960-9106-4a47-aae4-f9d334cbca18-utilities\") pod \"b1947960-9106-4a47-aae4-f9d334cbca18\" (UID: \"b1947960-9106-4a47-aae4-f9d334cbca18\") " Jan 27 19:30:19 crc kubenswrapper[4853]: I0127 19:30:19.012560 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b1947960-9106-4a47-aae4-f9d334cbca18-utilities" (OuterVolumeSpecName: "utilities") pod "b1947960-9106-4a47-aae4-f9d334cbca18" (UID: "b1947960-9106-4a47-aae4-f9d334cbca18"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 19:30:19 crc kubenswrapper[4853]: I0127 19:30:19.017529 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b1947960-9106-4a47-aae4-f9d334cbca18-kube-api-access-vgvcx" (OuterVolumeSpecName: "kube-api-access-vgvcx") pod "b1947960-9106-4a47-aae4-f9d334cbca18" (UID: "b1947960-9106-4a47-aae4-f9d334cbca18"). InnerVolumeSpecName "kube-api-access-vgvcx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 19:30:19 crc kubenswrapper[4853]: I0127 19:30:19.038322 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b1947960-9106-4a47-aae4-f9d334cbca18-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b1947960-9106-4a47-aae4-f9d334cbca18" (UID: "b1947960-9106-4a47-aae4-f9d334cbca18"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 19:30:19 crc kubenswrapper[4853]: I0127 19:30:19.113914 4853 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b1947960-9106-4a47-aae4-f9d334cbca18-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 19:30:19 crc kubenswrapper[4853]: I0127 19:30:19.113955 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vgvcx\" (UniqueName: \"kubernetes.io/projected/b1947960-9106-4a47-aae4-f9d334cbca18-kube-api-access-vgvcx\") on node \"crc\" DevicePath \"\"" Jan 27 19:30:19 crc kubenswrapper[4853]: I0127 19:30:19.113967 4853 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b1947960-9106-4a47-aae4-f9d334cbca18-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 19:30:19 crc kubenswrapper[4853]: I0127 19:30:19.390026 4853 generic.go:334] "Generic (PLEG): container finished" podID="b1947960-9106-4a47-aae4-f9d334cbca18" containerID="83d7a9f1ee680888baa46509e313776adc8a1f1db226ca308fd42f7f97b8a839" exitCode=0 Jan 27 19:30:19 crc kubenswrapper[4853]: I0127 19:30:19.390085 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-g6ssq" event={"ID":"b1947960-9106-4a47-aae4-f9d334cbca18","Type":"ContainerDied","Data":"83d7a9f1ee680888baa46509e313776adc8a1f1db226ca308fd42f7f97b8a839"} Jan 27 19:30:19 crc kubenswrapper[4853]: I0127 19:30:19.390098 4853 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-g6ssq" Jan 27 19:30:19 crc kubenswrapper[4853]: I0127 19:30:19.390139 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-g6ssq" event={"ID":"b1947960-9106-4a47-aae4-f9d334cbca18","Type":"ContainerDied","Data":"06177d20e721002187a84d557712b8795996ff0884973c3ba1a6bc6c121656fd"} Jan 27 19:30:19 crc kubenswrapper[4853]: I0127 19:30:19.390169 4853 scope.go:117] "RemoveContainer" containerID="83d7a9f1ee680888baa46509e313776adc8a1f1db226ca308fd42f7f97b8a839" Jan 27 19:30:19 crc kubenswrapper[4853]: I0127 19:30:19.411854 4853 scope.go:117] "RemoveContainer" containerID="1f677fd276f3de506e80a4799663749816f15339d3a9370ec88bcc1e21a7b351" Jan 27 19:30:19 crc kubenswrapper[4853]: I0127 19:30:19.422903 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-g6ssq"] Jan 27 19:30:19 crc kubenswrapper[4853]: I0127 19:30:19.433681 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-g6ssq"] Jan 27 19:30:19 crc kubenswrapper[4853]: I0127 19:30:19.440138 4853 scope.go:117] "RemoveContainer" containerID="bf8dea672b15a7d32228d36a80d42cfee483bee3a7272837e68383c048560ad9" Jan 27 19:30:19 crc kubenswrapper[4853]: I0127 19:30:19.480276 4853 scope.go:117] "RemoveContainer" containerID="83d7a9f1ee680888baa46509e313776adc8a1f1db226ca308fd42f7f97b8a839" Jan 27 19:30:19 crc kubenswrapper[4853]: E0127 19:30:19.480902 4853 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"83d7a9f1ee680888baa46509e313776adc8a1f1db226ca308fd42f7f97b8a839\": container with ID starting with 83d7a9f1ee680888baa46509e313776adc8a1f1db226ca308fd42f7f97b8a839 not found: ID does not exist" containerID="83d7a9f1ee680888baa46509e313776adc8a1f1db226ca308fd42f7f97b8a839" Jan 27 19:30:19 crc kubenswrapper[4853]: I0127 19:30:19.480966 4853 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"83d7a9f1ee680888baa46509e313776adc8a1f1db226ca308fd42f7f97b8a839"} err="failed to get container status \"83d7a9f1ee680888baa46509e313776adc8a1f1db226ca308fd42f7f97b8a839\": rpc error: code = NotFound desc = could not find container \"83d7a9f1ee680888baa46509e313776adc8a1f1db226ca308fd42f7f97b8a839\": container with ID starting with 83d7a9f1ee680888baa46509e313776adc8a1f1db226ca308fd42f7f97b8a839 not found: ID does not exist" Jan 27 19:30:19 crc kubenswrapper[4853]: I0127 19:30:19.481006 4853 scope.go:117] "RemoveContainer" containerID="1f677fd276f3de506e80a4799663749816f15339d3a9370ec88bcc1e21a7b351" Jan 27 19:30:19 crc kubenswrapper[4853]: E0127 19:30:19.481642 4853 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1f677fd276f3de506e80a4799663749816f15339d3a9370ec88bcc1e21a7b351\": container with ID starting with 1f677fd276f3de506e80a4799663749816f15339d3a9370ec88bcc1e21a7b351 not found: ID does not exist" containerID="1f677fd276f3de506e80a4799663749816f15339d3a9370ec88bcc1e21a7b351" Jan 27 19:30:19 crc kubenswrapper[4853]: I0127 19:30:19.481681 4853 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1f677fd276f3de506e80a4799663749816f15339d3a9370ec88bcc1e21a7b351"} err="failed to get container status \"1f677fd276f3de506e80a4799663749816f15339d3a9370ec88bcc1e21a7b351\": rpc error: code = NotFound desc = could not find 
container \"1f677fd276f3de506e80a4799663749816f15339d3a9370ec88bcc1e21a7b351\": container with ID starting with 1f677fd276f3de506e80a4799663749816f15339d3a9370ec88bcc1e21a7b351 not found: ID does not exist" Jan 27 19:30:19 crc kubenswrapper[4853]: I0127 19:30:19.481721 4853 scope.go:117] "RemoveContainer" containerID="bf8dea672b15a7d32228d36a80d42cfee483bee3a7272837e68383c048560ad9" Jan 27 19:30:19 crc kubenswrapper[4853]: E0127 19:30:19.481961 4853 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bf8dea672b15a7d32228d36a80d42cfee483bee3a7272837e68383c048560ad9\": container with ID starting with bf8dea672b15a7d32228d36a80d42cfee483bee3a7272837e68383c048560ad9 not found: ID does not exist" containerID="bf8dea672b15a7d32228d36a80d42cfee483bee3a7272837e68383c048560ad9" Jan 27 19:30:19 crc kubenswrapper[4853]: I0127 19:30:19.482020 4853 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bf8dea672b15a7d32228d36a80d42cfee483bee3a7272837e68383c048560ad9"} err="failed to get container status \"bf8dea672b15a7d32228d36a80d42cfee483bee3a7272837e68383c048560ad9\": rpc error: code = NotFound desc = could not find container \"bf8dea672b15a7d32228d36a80d42cfee483bee3a7272837e68383c048560ad9\": container with ID starting with bf8dea672b15a7d32228d36a80d42cfee483bee3a7272837e68383c048560ad9 not found: ID does not exist" Jan 27 19:30:20 crc kubenswrapper[4853]: I0127 19:30:20.123593 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b1947960-9106-4a47-aae4-f9d334cbca18" path="/var/lib/kubelet/pods/b1947960-9106-4a47-aae4-f9d334cbca18/volumes" Jan 27 19:30:23 crc kubenswrapper[4853]: I0127 19:30:23.358143 4853 scope.go:117] "RemoveContainer" containerID="08c14654c73a75d12afb5bd475fed280c17c7f14c57b1dc11239ecb9bd29aad6" Jan 27 19:30:35 crc kubenswrapper[4853]: I0127 19:30:35.541547 4853 patch_prober.go:28] interesting pod/machine-config-daemon-6gqj2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 19:30:35 crc kubenswrapper[4853]: I0127 19:30:35.542549 4853 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6gqj2" podUID="b8a89b1e-bef8-4cb7-930c-480d3125778c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 19:30:35 crc kubenswrapper[4853]: I0127 19:30:35.870981 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-cxd2w"] Jan 27 19:30:35 crc kubenswrapper[4853]: E0127 19:30:35.872563 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b1947960-9106-4a47-aae4-f9d334cbca18" containerName="extract-utilities" Jan 27 19:30:35 crc kubenswrapper[4853]: I0127 19:30:35.872681 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="b1947960-9106-4a47-aae4-f9d334cbca18" containerName="extract-utilities" Jan 27 19:30:35 crc kubenswrapper[4853]: E0127 19:30:35.872759 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b1947960-9106-4a47-aae4-f9d334cbca18" containerName="registry-server" Jan 27 19:30:35 crc kubenswrapper[4853]: I0127 19:30:35.872831 4853 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="b1947960-9106-4a47-aae4-f9d334cbca18" containerName="registry-server" Jan 27 19:30:35 crc kubenswrapper[4853]: E0127 19:30:35.872989 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b1947960-9106-4a47-aae4-f9d334cbca18" containerName="extract-content" Jan 27 19:30:35 crc kubenswrapper[4853]: I0127 19:30:35.873073 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="b1947960-9106-4a47-aae4-f9d334cbca18" containerName="extract-content" Jan 27 19:30:35 crc kubenswrapper[4853]: I0127 19:30:35.873451 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="b1947960-9106-4a47-aae4-f9d334cbca18" containerName="registry-server" Jan 27 19:30:35 crc kubenswrapper[4853]: I0127 19:30:35.875595 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-cxd2w" Jan 27 19:30:35 crc kubenswrapper[4853]: I0127 19:30:35.894967 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-cxd2w"] Jan 27 19:30:35 crc kubenswrapper[4853]: I0127 19:30:35.930970 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5f8cc821-2d69-4d29-8295-c35c2ca431e7-utilities\") pod \"community-operators-cxd2w\" (UID: \"5f8cc821-2d69-4d29-8295-c35c2ca431e7\") " pod="openshift-marketplace/community-operators-cxd2w" Jan 27 19:30:35 crc kubenswrapper[4853]: I0127 19:30:35.931038 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5f8cc821-2d69-4d29-8295-c35c2ca431e7-catalog-content\") pod \"community-operators-cxd2w\" (UID: \"5f8cc821-2d69-4d29-8295-c35c2ca431e7\") " pod="openshift-marketplace/community-operators-cxd2w" Jan 27 19:30:35 crc kubenswrapper[4853]: I0127 19:30:35.931059 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rfg7h\" (UniqueName: \"kubernetes.io/projected/5f8cc821-2d69-4d29-8295-c35c2ca431e7-kube-api-access-rfg7h\") pod \"community-operators-cxd2w\" (UID: \"5f8cc821-2d69-4d29-8295-c35c2ca431e7\") " pod="openshift-marketplace/community-operators-cxd2w" Jan 27 19:30:36 crc kubenswrapper[4853]: I0127 19:30:36.032481 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5f8cc821-2d69-4d29-8295-c35c2ca431e7-utilities\") pod \"community-operators-cxd2w\" (UID: \"5f8cc821-2d69-4d29-8295-c35c2ca431e7\") " pod="openshift-marketplace/community-operators-cxd2w" Jan 27 19:30:36 crc kubenswrapper[4853]: I0127 19:30:36.032543 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5f8cc821-2d69-4d29-8295-c35c2ca431e7-catalog-content\") pod \"community-operators-cxd2w\" (UID: \"5f8cc821-2d69-4d29-8295-c35c2ca431e7\") " pod="openshift-marketplace/community-operators-cxd2w" Jan 27 19:30:36 crc kubenswrapper[4853]: I0127 19:30:36.032566 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rfg7h\" (UniqueName: \"kubernetes.io/projected/5f8cc821-2d69-4d29-8295-c35c2ca431e7-kube-api-access-rfg7h\") pod \"community-operators-cxd2w\" (UID: \"5f8cc821-2d69-4d29-8295-c35c2ca431e7\") " pod="openshift-marketplace/community-operators-cxd2w" Jan 27 19:30:36 crc kubenswrapper[4853]: I0127 19:30:36.033048 4853 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5f8cc821-2d69-4d29-8295-c35c2ca431e7-utilities\") pod \"community-operators-cxd2w\" (UID: \"5f8cc821-2d69-4d29-8295-c35c2ca431e7\") " pod="openshift-marketplace/community-operators-cxd2w" Jan 27 19:30:36 crc kubenswrapper[4853]: I0127 19:30:36.033400 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5f8cc821-2d69-4d29-8295-c35c2ca431e7-catalog-content\") pod \"community-operators-cxd2w\" (UID: \"5f8cc821-2d69-4d29-8295-c35c2ca431e7\") " pod="openshift-marketplace/community-operators-cxd2w" Jan 27 19:30:36 crc kubenswrapper[4853]: I0127 19:30:36.054001 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rfg7h\" (UniqueName: \"kubernetes.io/projected/5f8cc821-2d69-4d29-8295-c35c2ca431e7-kube-api-access-rfg7h\") pod \"community-operators-cxd2w\" (UID: \"5f8cc821-2d69-4d29-8295-c35c2ca431e7\") " pod="openshift-marketplace/community-operators-cxd2w" Jan 27 19:30:36 crc kubenswrapper[4853]: I0127 19:30:36.261309 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-cxd2w" Jan 27 19:30:36 crc kubenswrapper[4853]: I0127 19:30:36.793330 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-cxd2w"] Jan 27 19:30:37 crc kubenswrapper[4853]: I0127 19:30:37.586473 4853 generic.go:334] "Generic (PLEG): container finished" podID="5f8cc821-2d69-4d29-8295-c35c2ca431e7" containerID="0dce11d507e74cec852924f056691a6fe2b7a86f76dc9eadcdc4fae4ec4fb4fc" exitCode=0 Jan 27 19:30:37 crc kubenswrapper[4853]: I0127 19:30:37.586546 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cxd2w" event={"ID":"5f8cc821-2d69-4d29-8295-c35c2ca431e7","Type":"ContainerDied","Data":"0dce11d507e74cec852924f056691a6fe2b7a86f76dc9eadcdc4fae4ec4fb4fc"} Jan 27 19:30:37 crc kubenswrapper[4853]: I0127 19:30:37.587113 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cxd2w" event={"ID":"5f8cc821-2d69-4d29-8295-c35c2ca431e7","Type":"ContainerStarted","Data":"a6f582f9bfc255483e2b0fafc3df3b3b148b33a4a72b82afd977bfdce6f8f78f"} Jan 27 19:30:38 crc kubenswrapper[4853]: I0127 19:30:38.600057 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cxd2w" event={"ID":"5f8cc821-2d69-4d29-8295-c35c2ca431e7","Type":"ContainerStarted","Data":"d23f72959fe8c9a543d94069633aed499f70fd7d780a35eb742ad01d51ecbb0c"} Jan 27 19:30:39 crc kubenswrapper[4853]: I0127 19:30:39.613792 4853 generic.go:334] "Generic (PLEG): container finished" podID="5f8cc821-2d69-4d29-8295-c35c2ca431e7" containerID="d23f72959fe8c9a543d94069633aed499f70fd7d780a35eb742ad01d51ecbb0c" exitCode=0 Jan 27 19:30:39 crc kubenswrapper[4853]: I0127 19:30:39.613910 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cxd2w" event={"ID":"5f8cc821-2d69-4d29-8295-c35c2ca431e7","Type":"ContainerDied","Data":"d23f72959fe8c9a543d94069633aed499f70fd7d780a35eb742ad01d51ecbb0c"} Jan 27 19:30:40 crc kubenswrapper[4853]: I0127 19:30:40.627606 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cxd2w" 
event={"ID":"5f8cc821-2d69-4d29-8295-c35c2ca431e7","Type":"ContainerStarted","Data":"34562a63197e729e92d4bc16d55d3bbeec83ed3b3b8ece34ecc5f440c17ecc6b"} Jan 27 19:30:40 crc kubenswrapper[4853]: I0127 19:30:40.649843 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-cxd2w" podStartSLOduration=2.989275278 podStartE2EDuration="5.649820026s" podCreationTimestamp="2026-01-27 19:30:35 +0000 UTC" firstStartedPulling="2026-01-27 19:30:37.589514522 +0000 UTC m=+2880.052057395" lastFinishedPulling="2026-01-27 19:30:40.25005926 +0000 UTC m=+2882.712602143" observedRunningTime="2026-01-27 19:30:40.645579376 +0000 UTC m=+2883.108122279" watchObservedRunningTime="2026-01-27 19:30:40.649820026 +0000 UTC m=+2883.112362909" Jan 27 19:30:46 crc kubenswrapper[4853]: I0127 19:30:46.261876 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-cxd2w" Jan 27 19:30:46 crc kubenswrapper[4853]: I0127 19:30:46.263641 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-cxd2w" Jan 27 19:30:46 crc kubenswrapper[4853]: I0127 19:30:46.321462 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-cxd2w" Jan 27 19:30:46 crc kubenswrapper[4853]: I0127 19:30:46.735198 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-cxd2w" Jan 27 19:30:46 crc kubenswrapper[4853]: I0127 19:30:46.789270 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-cxd2w"] Jan 27 19:30:48 crc kubenswrapper[4853]: I0127 19:30:48.699979 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-cxd2w" podUID="5f8cc821-2d69-4d29-8295-c35c2ca431e7" containerName="registry-server" containerID="cri-o://34562a63197e729e92d4bc16d55d3bbeec83ed3b3b8ece34ecc5f440c17ecc6b" gracePeriod=2 Jan 27 19:30:49 crc kubenswrapper[4853]: I0127 19:30:49.294789 4853 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-cxd2w" Jan 27 19:30:49 crc kubenswrapper[4853]: I0127 19:30:49.437319 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rfg7h\" (UniqueName: \"kubernetes.io/projected/5f8cc821-2d69-4d29-8295-c35c2ca431e7-kube-api-access-rfg7h\") pod \"5f8cc821-2d69-4d29-8295-c35c2ca431e7\" (UID: \"5f8cc821-2d69-4d29-8295-c35c2ca431e7\") " Jan 27 19:30:49 crc kubenswrapper[4853]: I0127 19:30:49.437495 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5f8cc821-2d69-4d29-8295-c35c2ca431e7-utilities\") pod \"5f8cc821-2d69-4d29-8295-c35c2ca431e7\" (UID: \"5f8cc821-2d69-4d29-8295-c35c2ca431e7\") " Jan 27 19:30:49 crc kubenswrapper[4853]: I0127 19:30:49.437557 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5f8cc821-2d69-4d29-8295-c35c2ca431e7-catalog-content\") pod \"5f8cc821-2d69-4d29-8295-c35c2ca431e7\" (UID: \"5f8cc821-2d69-4d29-8295-c35c2ca431e7\") " Jan 27 19:30:49 crc kubenswrapper[4853]: I0127 19:30:49.449360 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5f8cc821-2d69-4d29-8295-c35c2ca431e7-utilities" (OuterVolumeSpecName: "utilities") pod "5f8cc821-2d69-4d29-8295-c35c2ca431e7" (UID: "5f8cc821-2d69-4d29-8295-c35c2ca431e7"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 19:30:49 crc kubenswrapper[4853]: I0127 19:30:49.479582 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5f8cc821-2d69-4d29-8295-c35c2ca431e7-kube-api-access-rfg7h" (OuterVolumeSpecName: "kube-api-access-rfg7h") pod "5f8cc821-2d69-4d29-8295-c35c2ca431e7" (UID: "5f8cc821-2d69-4d29-8295-c35c2ca431e7"). InnerVolumeSpecName "kube-api-access-rfg7h". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 19:30:49 crc kubenswrapper[4853]: I0127 19:30:49.509112 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5f8cc821-2d69-4d29-8295-c35c2ca431e7-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5f8cc821-2d69-4d29-8295-c35c2ca431e7" (UID: "5f8cc821-2d69-4d29-8295-c35c2ca431e7"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 19:30:49 crc kubenswrapper[4853]: I0127 19:30:49.540357 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rfg7h\" (UniqueName: \"kubernetes.io/projected/5f8cc821-2d69-4d29-8295-c35c2ca431e7-kube-api-access-rfg7h\") on node \"crc\" DevicePath \"\"" Jan 27 19:30:49 crc kubenswrapper[4853]: I0127 19:30:49.540411 4853 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5f8cc821-2d69-4d29-8295-c35c2ca431e7-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 19:30:49 crc kubenswrapper[4853]: I0127 19:30:49.540425 4853 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5f8cc821-2d69-4d29-8295-c35c2ca431e7-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 19:30:49 crc kubenswrapper[4853]: I0127 19:30:49.718280 4853 generic.go:334] "Generic (PLEG): container finished" podID="5f8cc821-2d69-4d29-8295-c35c2ca431e7" containerID="34562a63197e729e92d4bc16d55d3bbeec83ed3b3b8ece34ecc5f440c17ecc6b" exitCode=0 Jan 27 19:30:49 crc kubenswrapper[4853]: I0127 19:30:49.718338 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cxd2w" event={"ID":"5f8cc821-2d69-4d29-8295-c35c2ca431e7","Type":"ContainerDied","Data":"34562a63197e729e92d4bc16d55d3bbeec83ed3b3b8ece34ecc5f440c17ecc6b"} Jan 27 19:30:49 crc kubenswrapper[4853]: I0127 19:30:49.718375 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cxd2w" event={"ID":"5f8cc821-2d69-4d29-8295-c35c2ca431e7","Type":"ContainerDied","Data":"a6f582f9bfc255483e2b0fafc3df3b3b148b33a4a72b82afd977bfdce6f8f78f"} Jan 27 19:30:49 crc kubenswrapper[4853]: I0127 19:30:49.718398 4853 scope.go:117] "RemoveContainer" containerID="34562a63197e729e92d4bc16d55d3bbeec83ed3b3b8ece34ecc5f440c17ecc6b" Jan 27 19:30:49 crc kubenswrapper[4853]: I0127 19:30:49.718571 4853 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-cxd2w" Jan 27 19:30:49 crc kubenswrapper[4853]: I0127 19:30:49.770514 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-cxd2w"] Jan 27 19:30:49 crc kubenswrapper[4853]: I0127 19:30:49.773305 4853 scope.go:117] "RemoveContainer" containerID="d23f72959fe8c9a543d94069633aed499f70fd7d780a35eb742ad01d51ecbb0c" Jan 27 19:30:49 crc kubenswrapper[4853]: I0127 19:30:49.782921 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-cxd2w"] Jan 27 19:30:49 crc kubenswrapper[4853]: I0127 19:30:49.806433 4853 scope.go:117] "RemoveContainer" containerID="0dce11d507e74cec852924f056691a6fe2b7a86f76dc9eadcdc4fae4ec4fb4fc" Jan 27 19:30:49 crc kubenswrapper[4853]: I0127 19:30:49.858980 4853 scope.go:117] "RemoveContainer" containerID="34562a63197e729e92d4bc16d55d3bbeec83ed3b3b8ece34ecc5f440c17ecc6b" Jan 27 19:30:49 crc kubenswrapper[4853]: E0127 19:30:49.859345 4853 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"34562a63197e729e92d4bc16d55d3bbeec83ed3b3b8ece34ecc5f440c17ecc6b\": container with ID starting with 34562a63197e729e92d4bc16d55d3bbeec83ed3b3b8ece34ecc5f440c17ecc6b not found: ID does not exist" containerID="34562a63197e729e92d4bc16d55d3bbeec83ed3b3b8ece34ecc5f440c17ecc6b" Jan 27 19:30:49 crc kubenswrapper[4853]: I0127 19:30:49.859380 4853 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"34562a63197e729e92d4bc16d55d3bbeec83ed3b3b8ece34ecc5f440c17ecc6b"} err="failed to get container status \"34562a63197e729e92d4bc16d55d3bbeec83ed3b3b8ece34ecc5f440c17ecc6b\": rpc error: code = NotFound desc = could not find container \"34562a63197e729e92d4bc16d55d3bbeec83ed3b3b8ece34ecc5f440c17ecc6b\": container with ID starting with 34562a63197e729e92d4bc16d55d3bbeec83ed3b3b8ece34ecc5f440c17ecc6b not found: ID does not exist" Jan 27 19:30:49 crc kubenswrapper[4853]: I0127 19:30:49.859409 4853 scope.go:117] "RemoveContainer" containerID="d23f72959fe8c9a543d94069633aed499f70fd7d780a35eb742ad01d51ecbb0c" Jan 27 19:30:49 crc kubenswrapper[4853]: E0127 19:30:49.860277 4853 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d23f72959fe8c9a543d94069633aed499f70fd7d780a35eb742ad01d51ecbb0c\": container with ID starting with d23f72959fe8c9a543d94069633aed499f70fd7d780a35eb742ad01d51ecbb0c not found: ID does not exist" containerID="d23f72959fe8c9a543d94069633aed499f70fd7d780a35eb742ad01d51ecbb0c" Jan 27 19:30:49 crc kubenswrapper[4853]: I0127 19:30:49.860354 4853 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d23f72959fe8c9a543d94069633aed499f70fd7d780a35eb742ad01d51ecbb0c"} err="failed to get container status \"d23f72959fe8c9a543d94069633aed499f70fd7d780a35eb742ad01d51ecbb0c\": rpc error: code = NotFound desc = could not find container \"d23f72959fe8c9a543d94069633aed499f70fd7d780a35eb742ad01d51ecbb0c\": container with ID starting with d23f72959fe8c9a543d94069633aed499f70fd7d780a35eb742ad01d51ecbb0c not found: ID does not exist" Jan 27 19:30:49 crc kubenswrapper[4853]: I0127 19:30:49.860433 4853 scope.go:117] "RemoveContainer" containerID="0dce11d507e74cec852924f056691a6fe2b7a86f76dc9eadcdc4fae4ec4fb4fc" Jan 27 19:30:49 crc kubenswrapper[4853]: E0127 19:30:49.861021 4853 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"0dce11d507e74cec852924f056691a6fe2b7a86f76dc9eadcdc4fae4ec4fb4fc\": container with ID starting with 0dce11d507e74cec852924f056691a6fe2b7a86f76dc9eadcdc4fae4ec4fb4fc not found: ID does not exist" containerID="0dce11d507e74cec852924f056691a6fe2b7a86f76dc9eadcdc4fae4ec4fb4fc" Jan 27 19:30:49 crc kubenswrapper[4853]: I0127 19:30:49.861093 4853 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0dce11d507e74cec852924f056691a6fe2b7a86f76dc9eadcdc4fae4ec4fb4fc"} err="failed to get container status \"0dce11d507e74cec852924f056691a6fe2b7a86f76dc9eadcdc4fae4ec4fb4fc\": rpc error: code = NotFound desc = could not find container \"0dce11d507e74cec852924f056691a6fe2b7a86f76dc9eadcdc4fae4ec4fb4fc\": container with ID starting with 0dce11d507e74cec852924f056691a6fe2b7a86f76dc9eadcdc4fae4ec4fb4fc not found: ID does not exist" Jan 27 19:30:50 crc kubenswrapper[4853]: I0127 19:30:50.127580 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5f8cc821-2d69-4d29-8295-c35c2ca431e7" path="/var/lib/kubelet/pods/5f8cc821-2d69-4d29-8295-c35c2ca431e7/volumes" Jan 27 19:31:05 crc kubenswrapper[4853]: I0127 19:31:05.541262 4853 patch_prober.go:28] interesting pod/machine-config-daemon-6gqj2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 19:31:05 crc kubenswrapper[4853]: I0127 19:31:05.542209 4853 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6gqj2" podUID="b8a89b1e-bef8-4cb7-930c-480d3125778c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 19:31:35 crc kubenswrapper[4853]: I0127 19:31:35.542056 4853 patch_prober.go:28] interesting pod/machine-config-daemon-6gqj2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 19:31:35 crc kubenswrapper[4853]: I0127 19:31:35.543046 4853 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6gqj2" podUID="b8a89b1e-bef8-4cb7-930c-480d3125778c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 19:31:35 crc kubenswrapper[4853]: I0127 19:31:35.543152 4853 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-6gqj2" Jan 27 19:31:35 crc kubenswrapper[4853]: I0127 19:31:35.543867 4853 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"378216b9346c59ec4367bd411767436a12ea78535e5eec607c87075262405938"} pod="openshift-machine-config-operator/machine-config-daemon-6gqj2" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 27 19:31:35 crc kubenswrapper[4853]: I0127 19:31:35.543924 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-6gqj2" 
podUID="b8a89b1e-bef8-4cb7-930c-480d3125778c" containerName="machine-config-daemon" containerID="cri-o://378216b9346c59ec4367bd411767436a12ea78535e5eec607c87075262405938" gracePeriod=600 Jan 27 19:31:36 crc kubenswrapper[4853]: I0127 19:31:36.178562 4853 generic.go:334] "Generic (PLEG): container finished" podID="b8a89b1e-bef8-4cb7-930c-480d3125778c" containerID="378216b9346c59ec4367bd411767436a12ea78535e5eec607c87075262405938" exitCode=0 Jan 27 19:31:36 crc kubenswrapper[4853]: I0127 19:31:36.178630 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6gqj2" event={"ID":"b8a89b1e-bef8-4cb7-930c-480d3125778c","Type":"ContainerDied","Data":"378216b9346c59ec4367bd411767436a12ea78535e5eec607c87075262405938"} Jan 27 19:31:36 crc kubenswrapper[4853]: I0127 19:31:36.178673 4853 scope.go:117] "RemoveContainer" containerID="551690f0d5262de4770162d1a6ba120739293508bda19b0e91c2ce59875386da" Jan 27 19:31:36 crc kubenswrapper[4853]: E0127 19:31:36.180455 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6gqj2_openshift-machine-config-operator(b8a89b1e-bef8-4cb7-930c-480d3125778c)\"" pod="openshift-machine-config-operator/machine-config-daemon-6gqj2" podUID="b8a89b1e-bef8-4cb7-930c-480d3125778c" Jan 27 19:31:37 crc kubenswrapper[4853]: I0127 19:31:37.189617 4853 scope.go:117] "RemoveContainer" containerID="378216b9346c59ec4367bd411767436a12ea78535e5eec607c87075262405938" Jan 27 19:31:37 crc kubenswrapper[4853]: E0127 19:31:37.190379 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6gqj2_openshift-machine-config-operator(b8a89b1e-bef8-4cb7-930c-480d3125778c)\"" pod="openshift-machine-config-operator/machine-config-daemon-6gqj2" podUID="b8a89b1e-bef8-4cb7-930c-480d3125778c" Jan 27 19:31:51 crc kubenswrapper[4853]: I0127 19:31:51.113757 4853 scope.go:117] "RemoveContainer" containerID="378216b9346c59ec4367bd411767436a12ea78535e5eec607c87075262405938" Jan 27 19:31:51 crc kubenswrapper[4853]: E0127 19:31:51.114809 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6gqj2_openshift-machine-config-operator(b8a89b1e-bef8-4cb7-930c-480d3125778c)\"" pod="openshift-machine-config-operator/machine-config-daemon-6gqj2" podUID="b8a89b1e-bef8-4cb7-930c-480d3125778c" Jan 27 19:32:05 crc kubenswrapper[4853]: I0127 19:32:05.113636 4853 scope.go:117] "RemoveContainer" containerID="378216b9346c59ec4367bd411767436a12ea78535e5eec607c87075262405938" Jan 27 19:32:05 crc kubenswrapper[4853]: E0127 19:32:05.114628 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6gqj2_openshift-machine-config-operator(b8a89b1e-bef8-4cb7-930c-480d3125778c)\"" pod="openshift-machine-config-operator/machine-config-daemon-6gqj2" podUID="b8a89b1e-bef8-4cb7-930c-480d3125778c" Jan 27 19:32:17 crc kubenswrapper[4853]: I0127 19:32:17.111862 4853 scope.go:117] 
"RemoveContainer" containerID="378216b9346c59ec4367bd411767436a12ea78535e5eec607c87075262405938" Jan 27 19:32:17 crc kubenswrapper[4853]: E0127 19:32:17.113488 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6gqj2_openshift-machine-config-operator(b8a89b1e-bef8-4cb7-930c-480d3125778c)\"" pod="openshift-machine-config-operator/machine-config-daemon-6gqj2" podUID="b8a89b1e-bef8-4cb7-930c-480d3125778c" Jan 27 19:32:30 crc kubenswrapper[4853]: I0127 19:32:30.112372 4853 scope.go:117] "RemoveContainer" containerID="378216b9346c59ec4367bd411767436a12ea78535e5eec607c87075262405938" Jan 27 19:32:30 crc kubenswrapper[4853]: E0127 19:32:30.113392 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6gqj2_openshift-machine-config-operator(b8a89b1e-bef8-4cb7-930c-480d3125778c)\"" pod="openshift-machine-config-operator/machine-config-daemon-6gqj2" podUID="b8a89b1e-bef8-4cb7-930c-480d3125778c" Jan 27 19:32:43 crc kubenswrapper[4853]: I0127 19:32:43.113515 4853 scope.go:117] "RemoveContainer" containerID="378216b9346c59ec4367bd411767436a12ea78535e5eec607c87075262405938" Jan 27 19:32:43 crc kubenswrapper[4853]: E0127 19:32:43.114497 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6gqj2_openshift-machine-config-operator(b8a89b1e-bef8-4cb7-930c-480d3125778c)\"" pod="openshift-machine-config-operator/machine-config-daemon-6gqj2" podUID="b8a89b1e-bef8-4cb7-930c-480d3125778c" Jan 27 19:32:57 crc kubenswrapper[4853]: I0127 19:32:57.115909 4853 scope.go:117] "RemoveContainer" containerID="378216b9346c59ec4367bd411767436a12ea78535e5eec607c87075262405938" Jan 27 19:32:57 crc kubenswrapper[4853]: E0127 19:32:57.117470 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6gqj2_openshift-machine-config-operator(b8a89b1e-bef8-4cb7-930c-480d3125778c)\"" pod="openshift-machine-config-operator/machine-config-daemon-6gqj2" podUID="b8a89b1e-bef8-4cb7-930c-480d3125778c" Jan 27 19:33:08 crc kubenswrapper[4853]: I0127 19:33:08.119601 4853 scope.go:117] "RemoveContainer" containerID="378216b9346c59ec4367bd411767436a12ea78535e5eec607c87075262405938" Jan 27 19:33:08 crc kubenswrapper[4853]: E0127 19:33:08.121642 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6gqj2_openshift-machine-config-operator(b8a89b1e-bef8-4cb7-930c-480d3125778c)\"" pod="openshift-machine-config-operator/machine-config-daemon-6gqj2" podUID="b8a89b1e-bef8-4cb7-930c-480d3125778c" Jan 27 19:33:21 crc kubenswrapper[4853]: I0127 19:33:21.113448 4853 scope.go:117] "RemoveContainer" containerID="378216b9346c59ec4367bd411767436a12ea78535e5eec607c87075262405938" Jan 27 19:33:21 crc kubenswrapper[4853]: E0127 19:33:21.114419 4853 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6gqj2_openshift-machine-config-operator(b8a89b1e-bef8-4cb7-930c-480d3125778c)\"" pod="openshift-machine-config-operator/machine-config-daemon-6gqj2" podUID="b8a89b1e-bef8-4cb7-930c-480d3125778c" Jan 27 19:33:36 crc kubenswrapper[4853]: I0127 19:33:36.112722 4853 scope.go:117] "RemoveContainer" containerID="378216b9346c59ec4367bd411767436a12ea78535e5eec607c87075262405938" Jan 27 19:33:36 crc kubenswrapper[4853]: E0127 19:33:36.113830 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6gqj2_openshift-machine-config-operator(b8a89b1e-bef8-4cb7-930c-480d3125778c)\"" pod="openshift-machine-config-operator/machine-config-daemon-6gqj2" podUID="b8a89b1e-bef8-4cb7-930c-480d3125778c" Jan 27 19:33:51 crc kubenswrapper[4853]: I0127 19:33:51.112563 4853 scope.go:117] "RemoveContainer" containerID="378216b9346c59ec4367bd411767436a12ea78535e5eec607c87075262405938" Jan 27 19:33:51 crc kubenswrapper[4853]: E0127 19:33:51.114978 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6gqj2_openshift-machine-config-operator(b8a89b1e-bef8-4cb7-930c-480d3125778c)\"" pod="openshift-machine-config-operator/machine-config-daemon-6gqj2" podUID="b8a89b1e-bef8-4cb7-930c-480d3125778c" Jan 27 19:34:05 crc kubenswrapper[4853]: I0127 19:34:05.113115 4853 scope.go:117] "RemoveContainer" containerID="378216b9346c59ec4367bd411767436a12ea78535e5eec607c87075262405938" Jan 27 19:34:05 crc kubenswrapper[4853]: E0127 19:34:05.114198 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6gqj2_openshift-machine-config-operator(b8a89b1e-bef8-4cb7-930c-480d3125778c)\"" pod="openshift-machine-config-operator/machine-config-daemon-6gqj2" podUID="b8a89b1e-bef8-4cb7-930c-480d3125778c" Jan 27 19:34:17 crc kubenswrapper[4853]: I0127 19:34:17.113492 4853 scope.go:117] "RemoveContainer" containerID="378216b9346c59ec4367bd411767436a12ea78535e5eec607c87075262405938" Jan 27 19:34:17 crc kubenswrapper[4853]: E0127 19:34:17.114641 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6gqj2_openshift-machine-config-operator(b8a89b1e-bef8-4cb7-930c-480d3125778c)\"" pod="openshift-machine-config-operator/machine-config-daemon-6gqj2" podUID="b8a89b1e-bef8-4cb7-930c-480d3125778c" Jan 27 19:34:32 crc kubenswrapper[4853]: I0127 19:34:32.112965 4853 scope.go:117] "RemoveContainer" containerID="378216b9346c59ec4367bd411767436a12ea78535e5eec607c87075262405938" Jan 27 19:34:32 crc kubenswrapper[4853]: E0127 19:34:32.113936 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-6gqj2_openshift-machine-config-operator(b8a89b1e-bef8-4cb7-930c-480d3125778c)\"" pod="openshift-machine-config-operator/machine-config-daemon-6gqj2" podUID="b8a89b1e-bef8-4cb7-930c-480d3125778c" Jan 27 19:34:47 crc kubenswrapper[4853]: I0127 19:34:47.113502 4853 scope.go:117] "RemoveContainer" containerID="378216b9346c59ec4367bd411767436a12ea78535e5eec607c87075262405938" Jan 27 19:34:47 crc kubenswrapper[4853]: E0127 19:34:47.114718 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6gqj2_openshift-machine-config-operator(b8a89b1e-bef8-4cb7-930c-480d3125778c)\"" pod="openshift-machine-config-operator/machine-config-daemon-6gqj2" podUID="b8a89b1e-bef8-4cb7-930c-480d3125778c" Jan 27 19:34:59 crc kubenswrapper[4853]: I0127 19:34:59.112402 4853 scope.go:117] "RemoveContainer" containerID="378216b9346c59ec4367bd411767436a12ea78535e5eec607c87075262405938" Jan 27 19:34:59 crc kubenswrapper[4853]: E0127 19:34:59.113468 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6gqj2_openshift-machine-config-operator(b8a89b1e-bef8-4cb7-930c-480d3125778c)\"" pod="openshift-machine-config-operator/machine-config-daemon-6gqj2" podUID="b8a89b1e-bef8-4cb7-930c-480d3125778c" Jan 27 19:35:04 crc kubenswrapper[4853]: I0127 19:35:04.439534 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-xkdx5"] Jan 27 19:35:04 crc kubenswrapper[4853]: E0127 19:35:04.441962 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5f8cc821-2d69-4d29-8295-c35c2ca431e7" containerName="extract-content" Jan 27 19:35:04 crc kubenswrapper[4853]: I0127 19:35:04.441991 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="5f8cc821-2d69-4d29-8295-c35c2ca431e7" containerName="extract-content" Jan 27 19:35:04 crc kubenswrapper[4853]: E0127 19:35:04.442056 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5f8cc821-2d69-4d29-8295-c35c2ca431e7" containerName="registry-server" Jan 27 19:35:04 crc kubenswrapper[4853]: I0127 19:35:04.442067 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="5f8cc821-2d69-4d29-8295-c35c2ca431e7" containerName="registry-server" Jan 27 19:35:04 crc kubenswrapper[4853]: E0127 19:35:04.442082 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5f8cc821-2d69-4d29-8295-c35c2ca431e7" containerName="extract-utilities" Jan 27 19:35:04 crc kubenswrapper[4853]: I0127 19:35:04.442089 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="5f8cc821-2d69-4d29-8295-c35c2ca431e7" containerName="extract-utilities" Jan 27 19:35:04 crc kubenswrapper[4853]: I0127 19:35:04.442344 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="5f8cc821-2d69-4d29-8295-c35c2ca431e7" containerName="registry-server" Jan 27 19:35:04 crc kubenswrapper[4853]: I0127 19:35:04.445595 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-xkdx5" Jan 27 19:35:04 crc kubenswrapper[4853]: I0127 19:35:04.458981 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-xkdx5"] Jan 27 19:35:04 crc kubenswrapper[4853]: I0127 19:35:04.615132 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6080aded-9751-4000-a742-bdd84bbcb9b5-catalog-content\") pod \"redhat-operators-xkdx5\" (UID: \"6080aded-9751-4000-a742-bdd84bbcb9b5\") " pod="openshift-marketplace/redhat-operators-xkdx5" Jan 27 19:35:04 crc kubenswrapper[4853]: I0127 19:35:04.615189 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-thjxd\" (UniqueName: \"kubernetes.io/projected/6080aded-9751-4000-a742-bdd84bbcb9b5-kube-api-access-thjxd\") pod \"redhat-operators-xkdx5\" (UID: \"6080aded-9751-4000-a742-bdd84bbcb9b5\") " pod="openshift-marketplace/redhat-operators-xkdx5" Jan 27 19:35:04 crc kubenswrapper[4853]: I0127 19:35:04.615243 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6080aded-9751-4000-a742-bdd84bbcb9b5-utilities\") pod \"redhat-operators-xkdx5\" (UID: \"6080aded-9751-4000-a742-bdd84bbcb9b5\") " pod="openshift-marketplace/redhat-operators-xkdx5" Jan 27 19:35:04 crc kubenswrapper[4853]: I0127 19:35:04.717197 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6080aded-9751-4000-a742-bdd84bbcb9b5-catalog-content\") pod \"redhat-operators-xkdx5\" (UID: \"6080aded-9751-4000-a742-bdd84bbcb9b5\") " pod="openshift-marketplace/redhat-operators-xkdx5" Jan 27 19:35:04 crc kubenswrapper[4853]: I0127 19:35:04.717273 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-thjxd\" (UniqueName: \"kubernetes.io/projected/6080aded-9751-4000-a742-bdd84bbcb9b5-kube-api-access-thjxd\") pod \"redhat-operators-xkdx5\" (UID: \"6080aded-9751-4000-a742-bdd84bbcb9b5\") " pod="openshift-marketplace/redhat-operators-xkdx5" Jan 27 19:35:04 crc kubenswrapper[4853]: I0127 19:35:04.717328 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6080aded-9751-4000-a742-bdd84bbcb9b5-utilities\") pod \"redhat-operators-xkdx5\" (UID: \"6080aded-9751-4000-a742-bdd84bbcb9b5\") " pod="openshift-marketplace/redhat-operators-xkdx5" Jan 27 19:35:04 crc kubenswrapper[4853]: I0127 19:35:04.718048 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6080aded-9751-4000-a742-bdd84bbcb9b5-utilities\") pod \"redhat-operators-xkdx5\" (UID: \"6080aded-9751-4000-a742-bdd84bbcb9b5\") " pod="openshift-marketplace/redhat-operators-xkdx5" Jan 27 19:35:04 crc kubenswrapper[4853]: I0127 19:35:04.718319 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6080aded-9751-4000-a742-bdd84bbcb9b5-catalog-content\") pod \"redhat-operators-xkdx5\" (UID: \"6080aded-9751-4000-a742-bdd84bbcb9b5\") " pod="openshift-marketplace/redhat-operators-xkdx5" Jan 27 19:35:04 crc kubenswrapper[4853]: I0127 19:35:04.739656 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-thjxd\" (UniqueName: \"kubernetes.io/projected/6080aded-9751-4000-a742-bdd84bbcb9b5-kube-api-access-thjxd\") pod \"redhat-operators-xkdx5\" (UID: \"6080aded-9751-4000-a742-bdd84bbcb9b5\") " pod="openshift-marketplace/redhat-operators-xkdx5" Jan 27 19:35:04 crc kubenswrapper[4853]: I0127 19:35:04.784193 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-xkdx5" Jan 27 19:35:05 crc kubenswrapper[4853]: I0127 19:35:05.295006 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-xkdx5"] Jan 27 19:35:05 crc kubenswrapper[4853]: I0127 19:35:05.537855 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xkdx5" event={"ID":"6080aded-9751-4000-a742-bdd84bbcb9b5","Type":"ContainerStarted","Data":"b16e0d057b1254c4399376561c61abcec10e4cecb6bcc6eea6265f3becbfea6f"} Jan 27 19:35:06 crc kubenswrapper[4853]: I0127 19:35:06.549620 4853 generic.go:334] "Generic (PLEG): container finished" podID="6080aded-9751-4000-a742-bdd84bbcb9b5" containerID="99bda5ce33c326283b8e30b093cfb5af36e78d2f41ab1f9663123b968b7b45b8" exitCode=0 Jan 27 19:35:06 crc kubenswrapper[4853]: I0127 19:35:06.549729 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xkdx5" event={"ID":"6080aded-9751-4000-a742-bdd84bbcb9b5","Type":"ContainerDied","Data":"99bda5ce33c326283b8e30b093cfb5af36e78d2f41ab1f9663123b968b7b45b8"} Jan 27 19:35:06 crc kubenswrapper[4853]: I0127 19:35:06.551775 4853 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 27 19:35:08 crc kubenswrapper[4853]: I0127 19:35:08.586963 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xkdx5" event={"ID":"6080aded-9751-4000-a742-bdd84bbcb9b5","Type":"ContainerStarted","Data":"22c645b3076b2175b926f05c9f600c7d32ffc856573275115d13541ecbb0fe0f"} Jan 27 19:35:10 crc kubenswrapper[4853]: I0127 19:35:10.605940 4853 generic.go:334] "Generic (PLEG): container finished" podID="6080aded-9751-4000-a742-bdd84bbcb9b5" containerID="22c645b3076b2175b926f05c9f600c7d32ffc856573275115d13541ecbb0fe0f" exitCode=0 Jan 27 19:35:10 crc kubenswrapper[4853]: I0127 19:35:10.605980 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xkdx5" event={"ID":"6080aded-9751-4000-a742-bdd84bbcb9b5","Type":"ContainerDied","Data":"22c645b3076b2175b926f05c9f600c7d32ffc856573275115d13541ecbb0fe0f"} Jan 27 19:35:12 crc kubenswrapper[4853]: I0127 19:35:12.113635 4853 scope.go:117] "RemoveContainer" containerID="378216b9346c59ec4367bd411767436a12ea78535e5eec607c87075262405938" Jan 27 19:35:12 crc kubenswrapper[4853]: E0127 19:35:12.114724 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6gqj2_openshift-machine-config-operator(b8a89b1e-bef8-4cb7-930c-480d3125778c)\"" pod="openshift-machine-config-operator/machine-config-daemon-6gqj2" podUID="b8a89b1e-bef8-4cb7-930c-480d3125778c" Jan 27 19:35:12 crc kubenswrapper[4853]: I0127 19:35:12.626791 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xkdx5" 
event={"ID":"6080aded-9751-4000-a742-bdd84bbcb9b5","Type":"ContainerStarted","Data":"f14f03005a1bf3a1752925132f371b0d3593c0d4edc8698181d2f3a70b4c4456"} Jan 27 19:35:12 crc kubenswrapper[4853]: I0127 19:35:12.646806 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-xkdx5" podStartSLOduration=3.336006794 podStartE2EDuration="8.646786716s" podCreationTimestamp="2026-01-27 19:35:04 +0000 UTC" firstStartedPulling="2026-01-27 19:35:06.551556957 +0000 UTC m=+3149.014099840" lastFinishedPulling="2026-01-27 19:35:11.862336859 +0000 UTC m=+3154.324879762" observedRunningTime="2026-01-27 19:35:12.64370782 +0000 UTC m=+3155.106250703" watchObservedRunningTime="2026-01-27 19:35:12.646786716 +0000 UTC m=+3155.109329599" Jan 27 19:35:14 crc kubenswrapper[4853]: I0127 19:35:14.784611 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-xkdx5" Jan 27 19:35:14 crc kubenswrapper[4853]: I0127 19:35:14.785024 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-xkdx5" Jan 27 19:35:15 crc kubenswrapper[4853]: I0127 19:35:15.836191 4853 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-xkdx5" podUID="6080aded-9751-4000-a742-bdd84bbcb9b5" containerName="registry-server" probeResult="failure" output=< Jan 27 19:35:15 crc kubenswrapper[4853]: timeout: failed to connect service ":50051" within 1s Jan 27 19:35:15 crc kubenswrapper[4853]: > Jan 27 19:35:24 crc kubenswrapper[4853]: I0127 19:35:24.837666 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-xkdx5" Jan 27 19:35:24 crc kubenswrapper[4853]: I0127 19:35:24.898445 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-xkdx5" Jan 27 19:35:25 crc kubenswrapper[4853]: I0127 19:35:25.081323 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-xkdx5"] Jan 27 19:35:25 crc kubenswrapper[4853]: I0127 19:35:25.112975 4853 scope.go:117] "RemoveContainer" containerID="378216b9346c59ec4367bd411767436a12ea78535e5eec607c87075262405938" Jan 27 19:35:25 crc kubenswrapper[4853]: E0127 19:35:25.113427 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6gqj2_openshift-machine-config-operator(b8a89b1e-bef8-4cb7-930c-480d3125778c)\"" pod="openshift-machine-config-operator/machine-config-daemon-6gqj2" podUID="b8a89b1e-bef8-4cb7-930c-480d3125778c" Jan 27 19:35:26 crc kubenswrapper[4853]: I0127 19:35:26.746751 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-xkdx5" podUID="6080aded-9751-4000-a742-bdd84bbcb9b5" containerName="registry-server" containerID="cri-o://f14f03005a1bf3a1752925132f371b0d3593c0d4edc8698181d2f3a70b4c4456" gracePeriod=2 Jan 27 19:35:27 crc kubenswrapper[4853]: I0127 19:35:27.226663 4853 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-xkdx5" Jan 27 19:35:27 crc kubenswrapper[4853]: I0127 19:35:27.391197 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6080aded-9751-4000-a742-bdd84bbcb9b5-catalog-content\") pod \"6080aded-9751-4000-a742-bdd84bbcb9b5\" (UID: \"6080aded-9751-4000-a742-bdd84bbcb9b5\") " Jan 27 19:35:27 crc kubenswrapper[4853]: I0127 19:35:27.391818 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6080aded-9751-4000-a742-bdd84bbcb9b5-utilities\") pod \"6080aded-9751-4000-a742-bdd84bbcb9b5\" (UID: \"6080aded-9751-4000-a742-bdd84bbcb9b5\") " Jan 27 19:35:27 crc kubenswrapper[4853]: I0127 19:35:27.391865 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-thjxd\" (UniqueName: \"kubernetes.io/projected/6080aded-9751-4000-a742-bdd84bbcb9b5-kube-api-access-thjxd\") pod \"6080aded-9751-4000-a742-bdd84bbcb9b5\" (UID: \"6080aded-9751-4000-a742-bdd84bbcb9b5\") " Jan 27 19:35:27 crc kubenswrapper[4853]: I0127 19:35:27.392682 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6080aded-9751-4000-a742-bdd84bbcb9b5-utilities" (OuterVolumeSpecName: "utilities") pod "6080aded-9751-4000-a742-bdd84bbcb9b5" (UID: "6080aded-9751-4000-a742-bdd84bbcb9b5"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 19:35:27 crc kubenswrapper[4853]: I0127 19:35:27.406687 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6080aded-9751-4000-a742-bdd84bbcb9b5-kube-api-access-thjxd" (OuterVolumeSpecName: "kube-api-access-thjxd") pod "6080aded-9751-4000-a742-bdd84bbcb9b5" (UID: "6080aded-9751-4000-a742-bdd84bbcb9b5"). InnerVolumeSpecName "kube-api-access-thjxd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 19:35:27 crc kubenswrapper[4853]: I0127 19:35:27.494986 4853 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6080aded-9751-4000-a742-bdd84bbcb9b5-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 19:35:27 crc kubenswrapper[4853]: I0127 19:35:27.495032 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-thjxd\" (UniqueName: \"kubernetes.io/projected/6080aded-9751-4000-a742-bdd84bbcb9b5-kube-api-access-thjxd\") on node \"crc\" DevicePath \"\"" Jan 27 19:35:27 crc kubenswrapper[4853]: I0127 19:35:27.510099 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6080aded-9751-4000-a742-bdd84bbcb9b5-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6080aded-9751-4000-a742-bdd84bbcb9b5" (UID: "6080aded-9751-4000-a742-bdd84bbcb9b5"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 19:35:27 crc kubenswrapper[4853]: I0127 19:35:27.596807 4853 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6080aded-9751-4000-a742-bdd84bbcb9b5-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 19:35:27 crc kubenswrapper[4853]: I0127 19:35:27.758060 4853 generic.go:334] "Generic (PLEG): container finished" podID="6080aded-9751-4000-a742-bdd84bbcb9b5" containerID="f14f03005a1bf3a1752925132f371b0d3593c0d4edc8698181d2f3a70b4c4456" exitCode=0 Jan 27 19:35:27 crc kubenswrapper[4853]: I0127 19:35:27.758168 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-xkdx5" Jan 27 19:35:27 crc kubenswrapper[4853]: I0127 19:35:27.758163 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xkdx5" event={"ID":"6080aded-9751-4000-a742-bdd84bbcb9b5","Type":"ContainerDied","Data":"f14f03005a1bf3a1752925132f371b0d3593c0d4edc8698181d2f3a70b4c4456"} Jan 27 19:35:27 crc kubenswrapper[4853]: I0127 19:35:27.759485 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xkdx5" event={"ID":"6080aded-9751-4000-a742-bdd84bbcb9b5","Type":"ContainerDied","Data":"b16e0d057b1254c4399376561c61abcec10e4cecb6bcc6eea6265f3becbfea6f"} Jan 27 19:35:27 crc kubenswrapper[4853]: I0127 19:35:27.759527 4853 scope.go:117] "RemoveContainer" containerID="f14f03005a1bf3a1752925132f371b0d3593c0d4edc8698181d2f3a70b4c4456" Jan 27 19:35:27 crc kubenswrapper[4853]: I0127 19:35:27.781498 4853 scope.go:117] "RemoveContainer" containerID="22c645b3076b2175b926f05c9f600c7d32ffc856573275115d13541ecbb0fe0f" Jan 27 19:35:27 crc kubenswrapper[4853]: I0127 19:35:27.798944 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-xkdx5"] Jan 27 19:35:27 crc kubenswrapper[4853]: I0127 19:35:27.808768 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-xkdx5"] Jan 27 19:35:27 crc kubenswrapper[4853]: I0127 19:35:27.824595 4853 scope.go:117] "RemoveContainer" containerID="99bda5ce33c326283b8e30b093cfb5af36e78d2f41ab1f9663123b968b7b45b8" Jan 27 19:35:27 crc kubenswrapper[4853]: I0127 19:35:27.876701 4853 scope.go:117] "RemoveContainer" containerID="f14f03005a1bf3a1752925132f371b0d3593c0d4edc8698181d2f3a70b4c4456" Jan 27 19:35:27 crc kubenswrapper[4853]: E0127 19:35:27.877308 4853 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f14f03005a1bf3a1752925132f371b0d3593c0d4edc8698181d2f3a70b4c4456\": container with ID starting with f14f03005a1bf3a1752925132f371b0d3593c0d4edc8698181d2f3a70b4c4456 not found: ID does not exist" containerID="f14f03005a1bf3a1752925132f371b0d3593c0d4edc8698181d2f3a70b4c4456" Jan 27 19:35:27 crc kubenswrapper[4853]: I0127 19:35:27.877345 4853 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f14f03005a1bf3a1752925132f371b0d3593c0d4edc8698181d2f3a70b4c4456"} err="failed to get container status \"f14f03005a1bf3a1752925132f371b0d3593c0d4edc8698181d2f3a70b4c4456\": rpc error: code = NotFound desc = could not find container \"f14f03005a1bf3a1752925132f371b0d3593c0d4edc8698181d2f3a70b4c4456\": container with ID starting with f14f03005a1bf3a1752925132f371b0d3593c0d4edc8698181d2f3a70b4c4456 not found: ID does not exist" Jan 27 19:35:27 crc 
kubenswrapper[4853]: I0127 19:35:27.877372 4853 scope.go:117] "RemoveContainer" containerID="22c645b3076b2175b926f05c9f600c7d32ffc856573275115d13541ecbb0fe0f" Jan 27 19:35:27 crc kubenswrapper[4853]: E0127 19:35:27.877854 4853 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"22c645b3076b2175b926f05c9f600c7d32ffc856573275115d13541ecbb0fe0f\": container with ID starting with 22c645b3076b2175b926f05c9f600c7d32ffc856573275115d13541ecbb0fe0f not found: ID does not exist" containerID="22c645b3076b2175b926f05c9f600c7d32ffc856573275115d13541ecbb0fe0f" Jan 27 19:35:27 crc kubenswrapper[4853]: I0127 19:35:27.877877 4853 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"22c645b3076b2175b926f05c9f600c7d32ffc856573275115d13541ecbb0fe0f"} err="failed to get container status \"22c645b3076b2175b926f05c9f600c7d32ffc856573275115d13541ecbb0fe0f\": rpc error: code = NotFound desc = could not find container \"22c645b3076b2175b926f05c9f600c7d32ffc856573275115d13541ecbb0fe0f\": container with ID starting with 22c645b3076b2175b926f05c9f600c7d32ffc856573275115d13541ecbb0fe0f not found: ID does not exist" Jan 27 19:35:27 crc kubenswrapper[4853]: I0127 19:35:27.877891 4853 scope.go:117] "RemoveContainer" containerID="99bda5ce33c326283b8e30b093cfb5af36e78d2f41ab1f9663123b968b7b45b8" Jan 27 19:35:27 crc kubenswrapper[4853]: E0127 19:35:27.878199 4853 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"99bda5ce33c326283b8e30b093cfb5af36e78d2f41ab1f9663123b968b7b45b8\": container with ID starting with 99bda5ce33c326283b8e30b093cfb5af36e78d2f41ab1f9663123b968b7b45b8 not found: ID does not exist" containerID="99bda5ce33c326283b8e30b093cfb5af36e78d2f41ab1f9663123b968b7b45b8" Jan 27 19:35:27 crc kubenswrapper[4853]: I0127 19:35:27.878229 4853 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"99bda5ce33c326283b8e30b093cfb5af36e78d2f41ab1f9663123b968b7b45b8"} err="failed to get container status \"99bda5ce33c326283b8e30b093cfb5af36e78d2f41ab1f9663123b968b7b45b8\": rpc error: code = NotFound desc = could not find container \"99bda5ce33c326283b8e30b093cfb5af36e78d2f41ab1f9663123b968b7b45b8\": container with ID starting with 99bda5ce33c326283b8e30b093cfb5af36e78d2f41ab1f9663123b968b7b45b8 not found: ID does not exist" Jan 27 19:35:28 crc kubenswrapper[4853]: I0127 19:35:28.123166 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6080aded-9751-4000-a742-bdd84bbcb9b5" path="/var/lib/kubelet/pods/6080aded-9751-4000-a742-bdd84bbcb9b5/volumes" Jan 27 19:35:37 crc kubenswrapper[4853]: I0127 19:35:37.112412 4853 scope.go:117] "RemoveContainer" containerID="378216b9346c59ec4367bd411767436a12ea78535e5eec607c87075262405938" Jan 27 19:35:37 crc kubenswrapper[4853]: E0127 19:35:37.113623 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6gqj2_openshift-machine-config-operator(b8a89b1e-bef8-4cb7-930c-480d3125778c)\"" pod="openshift-machine-config-operator/machine-config-daemon-6gqj2" podUID="b8a89b1e-bef8-4cb7-930c-480d3125778c" Jan 27 19:35:51 crc kubenswrapper[4853]: I0127 19:35:51.113967 4853 scope.go:117] "RemoveContainer" containerID="378216b9346c59ec4367bd411767436a12ea78535e5eec607c87075262405938" 
Jan 27 19:35:51 crc kubenswrapper[4853]: E0127 19:35:51.115334 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6gqj2_openshift-machine-config-operator(b8a89b1e-bef8-4cb7-930c-480d3125778c)\"" pod="openshift-machine-config-operator/machine-config-daemon-6gqj2" podUID="b8a89b1e-bef8-4cb7-930c-480d3125778c" Jan 27 19:36:05 crc kubenswrapper[4853]: I0127 19:36:05.112942 4853 scope.go:117] "RemoveContainer" containerID="378216b9346c59ec4367bd411767436a12ea78535e5eec607c87075262405938" Jan 27 19:36:05 crc kubenswrapper[4853]: E0127 19:36:05.113948 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6gqj2_openshift-machine-config-operator(b8a89b1e-bef8-4cb7-930c-480d3125778c)\"" pod="openshift-machine-config-operator/machine-config-daemon-6gqj2" podUID="b8a89b1e-bef8-4cb7-930c-480d3125778c" Jan 27 19:36:17 crc kubenswrapper[4853]: I0127 19:36:17.112703 4853 scope.go:117] "RemoveContainer" containerID="378216b9346c59ec4367bd411767436a12ea78535e5eec607c87075262405938" Jan 27 19:36:17 crc kubenswrapper[4853]: E0127 19:36:17.113516 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6gqj2_openshift-machine-config-operator(b8a89b1e-bef8-4cb7-930c-480d3125778c)\"" pod="openshift-machine-config-operator/machine-config-daemon-6gqj2" podUID="b8a89b1e-bef8-4cb7-930c-480d3125778c" Jan 27 19:36:30 crc kubenswrapper[4853]: I0127 19:36:30.112781 4853 scope.go:117] "RemoveContainer" containerID="378216b9346c59ec4367bd411767436a12ea78535e5eec607c87075262405938" Jan 27 19:36:30 crc kubenswrapper[4853]: E0127 19:36:30.115450 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6gqj2_openshift-machine-config-operator(b8a89b1e-bef8-4cb7-930c-480d3125778c)\"" pod="openshift-machine-config-operator/machine-config-daemon-6gqj2" podUID="b8a89b1e-bef8-4cb7-930c-480d3125778c" Jan 27 19:36:39 crc kubenswrapper[4853]: I0127 19:36:39.720601 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-k9gvq"] Jan 27 19:36:39 crc kubenswrapper[4853]: E0127 19:36:39.722207 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6080aded-9751-4000-a742-bdd84bbcb9b5" containerName="registry-server" Jan 27 19:36:39 crc kubenswrapper[4853]: I0127 19:36:39.722233 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="6080aded-9751-4000-a742-bdd84bbcb9b5" containerName="registry-server" Jan 27 19:36:39 crc kubenswrapper[4853]: E0127 19:36:39.722257 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6080aded-9751-4000-a742-bdd84bbcb9b5" containerName="extract-utilities" Jan 27 19:36:39 crc kubenswrapper[4853]: I0127 19:36:39.722269 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="6080aded-9751-4000-a742-bdd84bbcb9b5" containerName="extract-utilities" Jan 27 19:36:39 crc kubenswrapper[4853]: E0127 19:36:39.722327 4853 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6080aded-9751-4000-a742-bdd84bbcb9b5" containerName="extract-content" Jan 27 19:36:39 crc kubenswrapper[4853]: I0127 19:36:39.722337 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="6080aded-9751-4000-a742-bdd84bbcb9b5" containerName="extract-content" Jan 27 19:36:39 crc kubenswrapper[4853]: I0127 19:36:39.722619 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="6080aded-9751-4000-a742-bdd84bbcb9b5" containerName="registry-server" Jan 27 19:36:39 crc kubenswrapper[4853]: I0127 19:36:39.724932 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-k9gvq" Jan 27 19:36:39 crc kubenswrapper[4853]: I0127 19:36:39.738004 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-k9gvq"] Jan 27 19:36:39 crc kubenswrapper[4853]: I0127 19:36:39.857887 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c5d86277-bbcc-48bc-9e8d-12e18678b847-catalog-content\") pod \"certified-operators-k9gvq\" (UID: \"c5d86277-bbcc-48bc-9e8d-12e18678b847\") " pod="openshift-marketplace/certified-operators-k9gvq" Jan 27 19:36:39 crc kubenswrapper[4853]: I0127 19:36:39.858353 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c5d86277-bbcc-48bc-9e8d-12e18678b847-utilities\") pod \"certified-operators-k9gvq\" (UID: \"c5d86277-bbcc-48bc-9e8d-12e18678b847\") " pod="openshift-marketplace/certified-operators-k9gvq" Jan 27 19:36:39 crc kubenswrapper[4853]: I0127 19:36:39.858429 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9x84d\" (UniqueName: \"kubernetes.io/projected/c5d86277-bbcc-48bc-9e8d-12e18678b847-kube-api-access-9x84d\") pod \"certified-operators-k9gvq\" (UID: \"c5d86277-bbcc-48bc-9e8d-12e18678b847\") " pod="openshift-marketplace/certified-operators-k9gvq" Jan 27 19:36:39 crc kubenswrapper[4853]: I0127 19:36:39.960537 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c5d86277-bbcc-48bc-9e8d-12e18678b847-utilities\") pod \"certified-operators-k9gvq\" (UID: \"c5d86277-bbcc-48bc-9e8d-12e18678b847\") " pod="openshift-marketplace/certified-operators-k9gvq" Jan 27 19:36:39 crc kubenswrapper[4853]: I0127 19:36:39.960607 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9x84d\" (UniqueName: \"kubernetes.io/projected/c5d86277-bbcc-48bc-9e8d-12e18678b847-kube-api-access-9x84d\") pod \"certified-operators-k9gvq\" (UID: \"c5d86277-bbcc-48bc-9e8d-12e18678b847\") " pod="openshift-marketplace/certified-operators-k9gvq" Jan 27 19:36:39 crc kubenswrapper[4853]: I0127 19:36:39.960755 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c5d86277-bbcc-48bc-9e8d-12e18678b847-catalog-content\") pod \"certified-operators-k9gvq\" (UID: \"c5d86277-bbcc-48bc-9e8d-12e18678b847\") " pod="openshift-marketplace/certified-operators-k9gvq" Jan 27 19:36:39 crc kubenswrapper[4853]: I0127 19:36:39.961182 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/c5d86277-bbcc-48bc-9e8d-12e18678b847-utilities\") pod \"certified-operators-k9gvq\" (UID: \"c5d86277-bbcc-48bc-9e8d-12e18678b847\") " pod="openshift-marketplace/certified-operators-k9gvq" Jan 27 19:36:39 crc kubenswrapper[4853]: I0127 19:36:39.961323 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c5d86277-bbcc-48bc-9e8d-12e18678b847-catalog-content\") pod \"certified-operators-k9gvq\" (UID: \"c5d86277-bbcc-48bc-9e8d-12e18678b847\") " pod="openshift-marketplace/certified-operators-k9gvq" Jan 27 19:36:39 crc kubenswrapper[4853]: I0127 19:36:39.987957 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9x84d\" (UniqueName: \"kubernetes.io/projected/c5d86277-bbcc-48bc-9e8d-12e18678b847-kube-api-access-9x84d\") pod \"certified-operators-k9gvq\" (UID: \"c5d86277-bbcc-48bc-9e8d-12e18678b847\") " pod="openshift-marketplace/certified-operators-k9gvq" Jan 27 19:36:40 crc kubenswrapper[4853]: I0127 19:36:40.055240 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-k9gvq" Jan 27 19:36:40 crc kubenswrapper[4853]: I0127 19:36:40.618875 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-k9gvq"] Jan 27 19:36:41 crc kubenswrapper[4853]: I0127 19:36:41.515046 4853 generic.go:334] "Generic (PLEG): container finished" podID="c5d86277-bbcc-48bc-9e8d-12e18678b847" containerID="5c7e6be48ba3eeb289ccf3980985d336687518587e770af4da3768aeb075efd5" exitCode=0 Jan 27 19:36:41 crc kubenswrapper[4853]: I0127 19:36:41.515178 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-k9gvq" event={"ID":"c5d86277-bbcc-48bc-9e8d-12e18678b847","Type":"ContainerDied","Data":"5c7e6be48ba3eeb289ccf3980985d336687518587e770af4da3768aeb075efd5"} Jan 27 19:36:41 crc kubenswrapper[4853]: I0127 19:36:41.515485 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-k9gvq" event={"ID":"c5d86277-bbcc-48bc-9e8d-12e18678b847","Type":"ContainerStarted","Data":"d12445c635223bbcf3be41e900bfd2cb816b3ca837f81b781b08bb177e0a2c3a"} Jan 27 19:36:43 crc kubenswrapper[4853]: I0127 19:36:43.112230 4853 scope.go:117] "RemoveContainer" containerID="378216b9346c59ec4367bd411767436a12ea78535e5eec607c87075262405938" Jan 27 19:36:43 crc kubenswrapper[4853]: I0127 19:36:43.538836 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-k9gvq" event={"ID":"c5d86277-bbcc-48bc-9e8d-12e18678b847","Type":"ContainerStarted","Data":"7b87daa85de1071dceb13d8f5d4544273f5e4011b8216644ba561c9e55756daf"} Jan 27 19:36:43 crc kubenswrapper[4853]: I0127 19:36:43.542824 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6gqj2" event={"ID":"b8a89b1e-bef8-4cb7-930c-480d3125778c","Type":"ContainerStarted","Data":"c96d903e95498b0e52c00f6ad32b0294747b280d823ea9daa5dd97e20f096d69"} Jan 27 19:36:44 crc kubenswrapper[4853]: I0127 19:36:44.558870 4853 generic.go:334] "Generic (PLEG): container finished" podID="c5d86277-bbcc-48bc-9e8d-12e18678b847" containerID="7b87daa85de1071dceb13d8f5d4544273f5e4011b8216644ba561c9e55756daf" exitCode=0 Jan 27 19:36:44 crc kubenswrapper[4853]: I0127 19:36:44.558975 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-k9gvq" 
event={"ID":"c5d86277-bbcc-48bc-9e8d-12e18678b847","Type":"ContainerDied","Data":"7b87daa85de1071dceb13d8f5d4544273f5e4011b8216644ba561c9e55756daf"} Jan 27 19:36:45 crc kubenswrapper[4853]: I0127 19:36:45.575269 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-k9gvq" event={"ID":"c5d86277-bbcc-48bc-9e8d-12e18678b847","Type":"ContainerStarted","Data":"184ed42361a369f3012521cff6ac80e63bb5808bfb843a4a83031a1209093423"} Jan 27 19:36:45 crc kubenswrapper[4853]: I0127 19:36:45.609854 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-k9gvq" podStartSLOduration=3.168379146 podStartE2EDuration="6.609823473s" podCreationTimestamp="2026-01-27 19:36:39 +0000 UTC" firstStartedPulling="2026-01-27 19:36:41.51780255 +0000 UTC m=+3243.980345433" lastFinishedPulling="2026-01-27 19:36:44.959246877 +0000 UTC m=+3247.421789760" observedRunningTime="2026-01-27 19:36:45.601021296 +0000 UTC m=+3248.063564199" watchObservedRunningTime="2026-01-27 19:36:45.609823473 +0000 UTC m=+3248.072366356" Jan 27 19:36:50 crc kubenswrapper[4853]: I0127 19:36:50.058299 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-k9gvq" Jan 27 19:36:50 crc kubenswrapper[4853]: I0127 19:36:50.059149 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-k9gvq" Jan 27 19:36:50 crc kubenswrapper[4853]: I0127 19:36:50.122931 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-k9gvq" Jan 27 19:36:50 crc kubenswrapper[4853]: I0127 19:36:50.679308 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-k9gvq" Jan 27 19:36:50 crc kubenswrapper[4853]: I0127 19:36:50.736968 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-k9gvq"] Jan 27 19:36:52 crc kubenswrapper[4853]: I0127 19:36:52.641729 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-k9gvq" podUID="c5d86277-bbcc-48bc-9e8d-12e18678b847" containerName="registry-server" containerID="cri-o://184ed42361a369f3012521cff6ac80e63bb5808bfb843a4a83031a1209093423" gracePeriod=2 Jan 27 19:36:53 crc kubenswrapper[4853]: I0127 19:36:53.139148 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-k9gvq" Jan 27 19:36:53 crc kubenswrapper[4853]: I0127 19:36:53.387609 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9x84d\" (UniqueName: \"kubernetes.io/projected/c5d86277-bbcc-48bc-9e8d-12e18678b847-kube-api-access-9x84d\") pod \"c5d86277-bbcc-48bc-9e8d-12e18678b847\" (UID: \"c5d86277-bbcc-48bc-9e8d-12e18678b847\") " Jan 27 19:36:53 crc kubenswrapper[4853]: I0127 19:36:53.396593 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c5d86277-bbcc-48bc-9e8d-12e18678b847-kube-api-access-9x84d" (OuterVolumeSpecName: "kube-api-access-9x84d") pod "c5d86277-bbcc-48bc-9e8d-12e18678b847" (UID: "c5d86277-bbcc-48bc-9e8d-12e18678b847"). InnerVolumeSpecName "kube-api-access-9x84d". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 19:36:53 crc kubenswrapper[4853]: I0127 19:36:53.492785 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c5d86277-bbcc-48bc-9e8d-12e18678b847-catalog-content\") pod \"c5d86277-bbcc-48bc-9e8d-12e18678b847\" (UID: \"c5d86277-bbcc-48bc-9e8d-12e18678b847\") " Jan 27 19:36:53 crc kubenswrapper[4853]: I0127 19:36:53.492974 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c5d86277-bbcc-48bc-9e8d-12e18678b847-utilities\") pod \"c5d86277-bbcc-48bc-9e8d-12e18678b847\" (UID: \"c5d86277-bbcc-48bc-9e8d-12e18678b847\") " Jan 27 19:36:53 crc kubenswrapper[4853]: I0127 19:36:53.493487 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9x84d\" (UniqueName: \"kubernetes.io/projected/c5d86277-bbcc-48bc-9e8d-12e18678b847-kube-api-access-9x84d\") on node \"crc\" DevicePath \"\"" Jan 27 19:36:53 crc kubenswrapper[4853]: I0127 19:36:53.494091 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c5d86277-bbcc-48bc-9e8d-12e18678b847-utilities" (OuterVolumeSpecName: "utilities") pod "c5d86277-bbcc-48bc-9e8d-12e18678b847" (UID: "c5d86277-bbcc-48bc-9e8d-12e18678b847"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 19:36:53 crc kubenswrapper[4853]: I0127 19:36:53.543884 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c5d86277-bbcc-48bc-9e8d-12e18678b847-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c5d86277-bbcc-48bc-9e8d-12e18678b847" (UID: "c5d86277-bbcc-48bc-9e8d-12e18678b847"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 19:36:53 crc kubenswrapper[4853]: I0127 19:36:53.595759 4853 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c5d86277-bbcc-48bc-9e8d-12e18678b847-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 19:36:53 crc kubenswrapper[4853]: I0127 19:36:53.595794 4853 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c5d86277-bbcc-48bc-9e8d-12e18678b847-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 19:36:53 crc kubenswrapper[4853]: I0127 19:36:53.652904 4853 generic.go:334] "Generic (PLEG): container finished" podID="c5d86277-bbcc-48bc-9e8d-12e18678b847" containerID="184ed42361a369f3012521cff6ac80e63bb5808bfb843a4a83031a1209093423" exitCode=0 Jan 27 19:36:53 crc kubenswrapper[4853]: I0127 19:36:53.652982 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-k9gvq" event={"ID":"c5d86277-bbcc-48bc-9e8d-12e18678b847","Type":"ContainerDied","Data":"184ed42361a369f3012521cff6ac80e63bb5808bfb843a4a83031a1209093423"} Jan 27 19:36:53 crc kubenswrapper[4853]: I0127 19:36:53.652987 4853 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-k9gvq" Jan 27 19:36:53 crc kubenswrapper[4853]: I0127 19:36:53.653040 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-k9gvq" event={"ID":"c5d86277-bbcc-48bc-9e8d-12e18678b847","Type":"ContainerDied","Data":"d12445c635223bbcf3be41e900bfd2cb816b3ca837f81b781b08bb177e0a2c3a"} Jan 27 19:36:53 crc kubenswrapper[4853]: I0127 19:36:53.653065 4853 scope.go:117] "RemoveContainer" containerID="184ed42361a369f3012521cff6ac80e63bb5808bfb843a4a83031a1209093423" Jan 27 19:36:53 crc kubenswrapper[4853]: I0127 19:36:53.681242 4853 scope.go:117] "RemoveContainer" containerID="7b87daa85de1071dceb13d8f5d4544273f5e4011b8216644ba561c9e55756daf" Jan 27 19:36:53 crc kubenswrapper[4853]: I0127 19:36:53.691883 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-k9gvq"] Jan 27 19:36:53 crc kubenswrapper[4853]: I0127 19:36:53.702210 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-k9gvq"] Jan 27 19:36:53 crc kubenswrapper[4853]: I0127 19:36:53.717954 4853 scope.go:117] "RemoveContainer" containerID="5c7e6be48ba3eeb289ccf3980985d336687518587e770af4da3768aeb075efd5" Jan 27 19:36:53 crc kubenswrapper[4853]: I0127 19:36:53.759604 4853 scope.go:117] "RemoveContainer" containerID="184ed42361a369f3012521cff6ac80e63bb5808bfb843a4a83031a1209093423" Jan 27 19:36:53 crc kubenswrapper[4853]: E0127 19:36:53.760230 4853 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"184ed42361a369f3012521cff6ac80e63bb5808bfb843a4a83031a1209093423\": container with ID starting with 184ed42361a369f3012521cff6ac80e63bb5808bfb843a4a83031a1209093423 not found: ID does not exist" containerID="184ed42361a369f3012521cff6ac80e63bb5808bfb843a4a83031a1209093423" Jan 27 19:36:53 crc kubenswrapper[4853]: I0127 19:36:53.760269 4853 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"184ed42361a369f3012521cff6ac80e63bb5808bfb843a4a83031a1209093423"} err="failed to get container status \"184ed42361a369f3012521cff6ac80e63bb5808bfb843a4a83031a1209093423\": rpc error: code = NotFound desc = could not find container \"184ed42361a369f3012521cff6ac80e63bb5808bfb843a4a83031a1209093423\": container with ID starting with 184ed42361a369f3012521cff6ac80e63bb5808bfb843a4a83031a1209093423 not found: ID does not exist" Jan 27 19:36:53 crc kubenswrapper[4853]: I0127 19:36:53.760293 4853 scope.go:117] "RemoveContainer" containerID="7b87daa85de1071dceb13d8f5d4544273f5e4011b8216644ba561c9e55756daf" Jan 27 19:36:53 crc kubenswrapper[4853]: E0127 19:36:53.760552 4853 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7b87daa85de1071dceb13d8f5d4544273f5e4011b8216644ba561c9e55756daf\": container with ID starting with 7b87daa85de1071dceb13d8f5d4544273f5e4011b8216644ba561c9e55756daf not found: ID does not exist" containerID="7b87daa85de1071dceb13d8f5d4544273f5e4011b8216644ba561c9e55756daf" Jan 27 19:36:53 crc kubenswrapper[4853]: I0127 19:36:53.760590 4853 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7b87daa85de1071dceb13d8f5d4544273f5e4011b8216644ba561c9e55756daf"} err="failed to get container status \"7b87daa85de1071dceb13d8f5d4544273f5e4011b8216644ba561c9e55756daf\": rpc error: code = NotFound desc = could not find 
container \"7b87daa85de1071dceb13d8f5d4544273f5e4011b8216644ba561c9e55756daf\": container with ID starting with 7b87daa85de1071dceb13d8f5d4544273f5e4011b8216644ba561c9e55756daf not found: ID does not exist" Jan 27 19:36:53 crc kubenswrapper[4853]: I0127 19:36:53.760608 4853 scope.go:117] "RemoveContainer" containerID="5c7e6be48ba3eeb289ccf3980985d336687518587e770af4da3768aeb075efd5" Jan 27 19:36:53 crc kubenswrapper[4853]: E0127 19:36:53.761005 4853 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5c7e6be48ba3eeb289ccf3980985d336687518587e770af4da3768aeb075efd5\": container with ID starting with 5c7e6be48ba3eeb289ccf3980985d336687518587e770af4da3768aeb075efd5 not found: ID does not exist" containerID="5c7e6be48ba3eeb289ccf3980985d336687518587e770af4da3768aeb075efd5" Jan 27 19:36:53 crc kubenswrapper[4853]: I0127 19:36:53.761035 4853 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5c7e6be48ba3eeb289ccf3980985d336687518587e770af4da3768aeb075efd5"} err="failed to get container status \"5c7e6be48ba3eeb289ccf3980985d336687518587e770af4da3768aeb075efd5\": rpc error: code = NotFound desc = could not find container \"5c7e6be48ba3eeb289ccf3980985d336687518587e770af4da3768aeb075efd5\": container with ID starting with 5c7e6be48ba3eeb289ccf3980985d336687518587e770af4da3768aeb075efd5 not found: ID does not exist" Jan 27 19:36:54 crc kubenswrapper[4853]: I0127 19:36:54.131322 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c5d86277-bbcc-48bc-9e8d-12e18678b847" path="/var/lib/kubelet/pods/c5d86277-bbcc-48bc-9e8d-12e18678b847/volumes" Jan 27 19:39:05 crc kubenswrapper[4853]: I0127 19:39:05.541110 4853 patch_prober.go:28] interesting pod/machine-config-daemon-6gqj2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 19:39:05 crc kubenswrapper[4853]: I0127 19:39:05.542494 4853 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6gqj2" podUID="b8a89b1e-bef8-4cb7-930c-480d3125778c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 19:39:35 crc kubenswrapper[4853]: I0127 19:39:35.541739 4853 patch_prober.go:28] interesting pod/machine-config-daemon-6gqj2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 19:39:35 crc kubenswrapper[4853]: I0127 19:39:35.542759 4853 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6gqj2" podUID="b8a89b1e-bef8-4cb7-930c-480d3125778c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 19:40:05 crc kubenswrapper[4853]: I0127 19:40:05.541238 4853 patch_prober.go:28] interesting pod/machine-config-daemon-6gqj2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 
19:40:05 crc kubenswrapper[4853]: I0127 19:40:05.541909 4853 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6gqj2" podUID="b8a89b1e-bef8-4cb7-930c-480d3125778c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 19:40:05 crc kubenswrapper[4853]: I0127 19:40:05.541969 4853 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-6gqj2" Jan 27 19:40:05 crc kubenswrapper[4853]: I0127 19:40:05.542852 4853 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"c96d903e95498b0e52c00f6ad32b0294747b280d823ea9daa5dd97e20f096d69"} pod="openshift-machine-config-operator/machine-config-daemon-6gqj2" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 27 19:40:05 crc kubenswrapper[4853]: I0127 19:40:05.542905 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-6gqj2" podUID="b8a89b1e-bef8-4cb7-930c-480d3125778c" containerName="machine-config-daemon" containerID="cri-o://c96d903e95498b0e52c00f6ad32b0294747b280d823ea9daa5dd97e20f096d69" gracePeriod=600 Jan 27 19:40:06 crc kubenswrapper[4853]: I0127 19:40:06.516566 4853 generic.go:334] "Generic (PLEG): container finished" podID="b8a89b1e-bef8-4cb7-930c-480d3125778c" containerID="c96d903e95498b0e52c00f6ad32b0294747b280d823ea9daa5dd97e20f096d69" exitCode=0 Jan 27 19:40:06 crc kubenswrapper[4853]: I0127 19:40:06.516653 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6gqj2" event={"ID":"b8a89b1e-bef8-4cb7-930c-480d3125778c","Type":"ContainerDied","Data":"c96d903e95498b0e52c00f6ad32b0294747b280d823ea9daa5dd97e20f096d69"} Jan 27 19:40:06 crc kubenswrapper[4853]: I0127 19:40:06.517325 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6gqj2" event={"ID":"b8a89b1e-bef8-4cb7-930c-480d3125778c","Type":"ContainerStarted","Data":"326a72bf85c07d6d0eec5a967b1feeaa73cf47af49f41769bb0b175d310c1432"} Jan 27 19:40:06 crc kubenswrapper[4853]: I0127 19:40:06.517362 4853 scope.go:117] "RemoveContainer" containerID="378216b9346c59ec4367bd411767436a12ea78535e5eec607c87075262405938" Jan 27 19:40:58 crc kubenswrapper[4853]: I0127 19:40:58.233156 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-pfmh5"] Jan 27 19:40:58 crc kubenswrapper[4853]: E0127 19:40:58.235108 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c5d86277-bbcc-48bc-9e8d-12e18678b847" containerName="registry-server" Jan 27 19:40:58 crc kubenswrapper[4853]: I0127 19:40:58.235151 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="c5d86277-bbcc-48bc-9e8d-12e18678b847" containerName="registry-server" Jan 27 19:40:58 crc kubenswrapper[4853]: E0127 19:40:58.235175 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c5d86277-bbcc-48bc-9e8d-12e18678b847" containerName="extract-utilities" Jan 27 19:40:58 crc kubenswrapper[4853]: I0127 19:40:58.235190 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="c5d86277-bbcc-48bc-9e8d-12e18678b847" containerName="extract-utilities" Jan 27 19:40:58 crc kubenswrapper[4853]: E0127 
19:40:58.235224 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c5d86277-bbcc-48bc-9e8d-12e18678b847" containerName="extract-content" Jan 27 19:40:58 crc kubenswrapper[4853]: I0127 19:40:58.235233 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="c5d86277-bbcc-48bc-9e8d-12e18678b847" containerName="extract-content" Jan 27 19:40:58 crc kubenswrapper[4853]: I0127 19:40:58.235528 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="c5d86277-bbcc-48bc-9e8d-12e18678b847" containerName="registry-server" Jan 27 19:40:58 crc kubenswrapper[4853]: I0127 19:40:58.237743 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-pfmh5" Jan 27 19:40:58 crc kubenswrapper[4853]: I0127 19:40:58.248496 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-pfmh5"] Jan 27 19:40:58 crc kubenswrapper[4853]: I0127 19:40:58.360865 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5b4bd0a7-c18f-42c7-8601-b3acd745980c-catalog-content\") pod \"redhat-marketplace-pfmh5\" (UID: \"5b4bd0a7-c18f-42c7-8601-b3acd745980c\") " pod="openshift-marketplace/redhat-marketplace-pfmh5" Jan 27 19:40:58 crc kubenswrapper[4853]: I0127 19:40:58.360942 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-742p2\" (UniqueName: \"kubernetes.io/projected/5b4bd0a7-c18f-42c7-8601-b3acd745980c-kube-api-access-742p2\") pod \"redhat-marketplace-pfmh5\" (UID: \"5b4bd0a7-c18f-42c7-8601-b3acd745980c\") " pod="openshift-marketplace/redhat-marketplace-pfmh5" Jan 27 19:40:58 crc kubenswrapper[4853]: I0127 19:40:58.361075 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5b4bd0a7-c18f-42c7-8601-b3acd745980c-utilities\") pod \"redhat-marketplace-pfmh5\" (UID: \"5b4bd0a7-c18f-42c7-8601-b3acd745980c\") " pod="openshift-marketplace/redhat-marketplace-pfmh5" Jan 27 19:40:58 crc kubenswrapper[4853]: I0127 19:40:58.463002 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5b4bd0a7-c18f-42c7-8601-b3acd745980c-catalog-content\") pod \"redhat-marketplace-pfmh5\" (UID: \"5b4bd0a7-c18f-42c7-8601-b3acd745980c\") " pod="openshift-marketplace/redhat-marketplace-pfmh5" Jan 27 19:40:58 crc kubenswrapper[4853]: I0127 19:40:58.463066 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-742p2\" (UniqueName: \"kubernetes.io/projected/5b4bd0a7-c18f-42c7-8601-b3acd745980c-kube-api-access-742p2\") pod \"redhat-marketplace-pfmh5\" (UID: \"5b4bd0a7-c18f-42c7-8601-b3acd745980c\") " pod="openshift-marketplace/redhat-marketplace-pfmh5" Jan 27 19:40:58 crc kubenswrapper[4853]: I0127 19:40:58.463621 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5b4bd0a7-c18f-42c7-8601-b3acd745980c-utilities\") pod \"redhat-marketplace-pfmh5\" (UID: \"5b4bd0a7-c18f-42c7-8601-b3acd745980c\") " pod="openshift-marketplace/redhat-marketplace-pfmh5" Jan 27 19:40:58 crc kubenswrapper[4853]: I0127 19:40:58.464274 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/5b4bd0a7-c18f-42c7-8601-b3acd745980c-catalog-content\") pod \"redhat-marketplace-pfmh5\" (UID: \"5b4bd0a7-c18f-42c7-8601-b3acd745980c\") " pod="openshift-marketplace/redhat-marketplace-pfmh5" Jan 27 19:40:58 crc kubenswrapper[4853]: I0127 19:40:58.464451 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5b4bd0a7-c18f-42c7-8601-b3acd745980c-utilities\") pod \"redhat-marketplace-pfmh5\" (UID: \"5b4bd0a7-c18f-42c7-8601-b3acd745980c\") " pod="openshift-marketplace/redhat-marketplace-pfmh5" Jan 27 19:40:58 crc kubenswrapper[4853]: I0127 19:40:58.487066 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-742p2\" (UniqueName: \"kubernetes.io/projected/5b4bd0a7-c18f-42c7-8601-b3acd745980c-kube-api-access-742p2\") pod \"redhat-marketplace-pfmh5\" (UID: \"5b4bd0a7-c18f-42c7-8601-b3acd745980c\") " pod="openshift-marketplace/redhat-marketplace-pfmh5" Jan 27 19:40:58 crc kubenswrapper[4853]: I0127 19:40:58.576228 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-pfmh5" Jan 27 19:40:59 crc kubenswrapper[4853]: I0127 19:40:59.135085 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-pfmh5"] Jan 27 19:40:59 crc kubenswrapper[4853]: W0127 19:40:59.145536 4853 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5b4bd0a7_c18f_42c7_8601_b3acd745980c.slice/crio-7f0e8b25eeea49e246ed38d68f5f2b7e8290b39592335de967a58e83956652db WatchSource:0}: Error finding container 7f0e8b25eeea49e246ed38d68f5f2b7e8290b39592335de967a58e83956652db: Status 404 returned error can't find the container with id 7f0e8b25eeea49e246ed38d68f5f2b7e8290b39592335de967a58e83956652db Jan 27 19:41:00 crc kubenswrapper[4853]: I0127 19:41:00.060484 4853 generic.go:334] "Generic (PLEG): container finished" podID="5b4bd0a7-c18f-42c7-8601-b3acd745980c" containerID="8172846db9f7ff573cd3477b856ed4a94a9272956f6e4a7e8a34691971ee6ad0" exitCode=0 Jan 27 19:41:00 crc kubenswrapper[4853]: I0127 19:41:00.060636 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pfmh5" event={"ID":"5b4bd0a7-c18f-42c7-8601-b3acd745980c","Type":"ContainerDied","Data":"8172846db9f7ff573cd3477b856ed4a94a9272956f6e4a7e8a34691971ee6ad0"} Jan 27 19:41:00 crc kubenswrapper[4853]: I0127 19:41:00.061099 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pfmh5" event={"ID":"5b4bd0a7-c18f-42c7-8601-b3acd745980c","Type":"ContainerStarted","Data":"7f0e8b25eeea49e246ed38d68f5f2b7e8290b39592335de967a58e83956652db"} Jan 27 19:41:00 crc kubenswrapper[4853]: I0127 19:41:00.063063 4853 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 27 19:41:02 crc kubenswrapper[4853]: I0127 19:41:02.093072 4853 generic.go:334] "Generic (PLEG): container finished" podID="5b4bd0a7-c18f-42c7-8601-b3acd745980c" containerID="4c9225908159112392212b26c08a84ed2ea1d0f27763a9fe8134c9e33371ffdc" exitCode=0 Jan 27 19:41:02 crc kubenswrapper[4853]: I0127 19:41:02.093301 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pfmh5" event={"ID":"5b4bd0a7-c18f-42c7-8601-b3acd745980c","Type":"ContainerDied","Data":"4c9225908159112392212b26c08a84ed2ea1d0f27763a9fe8134c9e33371ffdc"} Jan 27 19:41:02 crc 
kubenswrapper[4853]: I0127 19:41:02.816465 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-5ppsb"] Jan 27 19:41:02 crc kubenswrapper[4853]: I0127 19:41:02.825700 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-5ppsb" Jan 27 19:41:02 crc kubenswrapper[4853]: I0127 19:41:02.840890 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-5ppsb"] Jan 27 19:41:02 crc kubenswrapper[4853]: I0127 19:41:02.984549 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a91a8685-4537-45c7-bb32-30b4885322b6-catalog-content\") pod \"community-operators-5ppsb\" (UID: \"a91a8685-4537-45c7-bb32-30b4885322b6\") " pod="openshift-marketplace/community-operators-5ppsb" Jan 27 19:41:02 crc kubenswrapper[4853]: I0127 19:41:02.984732 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9mmzx\" (UniqueName: \"kubernetes.io/projected/a91a8685-4537-45c7-bb32-30b4885322b6-kube-api-access-9mmzx\") pod \"community-operators-5ppsb\" (UID: \"a91a8685-4537-45c7-bb32-30b4885322b6\") " pod="openshift-marketplace/community-operators-5ppsb" Jan 27 19:41:02 crc kubenswrapper[4853]: I0127 19:41:02.984753 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a91a8685-4537-45c7-bb32-30b4885322b6-utilities\") pod \"community-operators-5ppsb\" (UID: \"a91a8685-4537-45c7-bb32-30b4885322b6\") " pod="openshift-marketplace/community-operators-5ppsb" Jan 27 19:41:03 crc kubenswrapper[4853]: I0127 19:41:03.087057 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9mmzx\" (UniqueName: \"kubernetes.io/projected/a91a8685-4537-45c7-bb32-30b4885322b6-kube-api-access-9mmzx\") pod \"community-operators-5ppsb\" (UID: \"a91a8685-4537-45c7-bb32-30b4885322b6\") " pod="openshift-marketplace/community-operators-5ppsb" Jan 27 19:41:03 crc kubenswrapper[4853]: I0127 19:41:03.087177 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a91a8685-4537-45c7-bb32-30b4885322b6-utilities\") pod \"community-operators-5ppsb\" (UID: \"a91a8685-4537-45c7-bb32-30b4885322b6\") " pod="openshift-marketplace/community-operators-5ppsb" Jan 27 19:41:03 crc kubenswrapper[4853]: I0127 19:41:03.087333 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a91a8685-4537-45c7-bb32-30b4885322b6-catalog-content\") pod \"community-operators-5ppsb\" (UID: \"a91a8685-4537-45c7-bb32-30b4885322b6\") " pod="openshift-marketplace/community-operators-5ppsb" Jan 27 19:41:03 crc kubenswrapper[4853]: I0127 19:41:03.087793 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a91a8685-4537-45c7-bb32-30b4885322b6-catalog-content\") pod \"community-operators-5ppsb\" (UID: \"a91a8685-4537-45c7-bb32-30b4885322b6\") " pod="openshift-marketplace/community-operators-5ppsb" Jan 27 19:41:03 crc kubenswrapper[4853]: I0127 19:41:03.088225 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/a91a8685-4537-45c7-bb32-30b4885322b6-utilities\") pod \"community-operators-5ppsb\" (UID: \"a91a8685-4537-45c7-bb32-30b4885322b6\") " pod="openshift-marketplace/community-operators-5ppsb" Jan 27 19:41:03 crc kubenswrapper[4853]: I0127 19:41:03.106734 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pfmh5" event={"ID":"5b4bd0a7-c18f-42c7-8601-b3acd745980c","Type":"ContainerStarted","Data":"47030d20f73d887bfa132f73c548037c7413a969a0f91459dd21c6eecd784fcb"} Jan 27 19:41:03 crc kubenswrapper[4853]: I0127 19:41:03.118452 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9mmzx\" (UniqueName: \"kubernetes.io/projected/a91a8685-4537-45c7-bb32-30b4885322b6-kube-api-access-9mmzx\") pod \"community-operators-5ppsb\" (UID: \"a91a8685-4537-45c7-bb32-30b4885322b6\") " pod="openshift-marketplace/community-operators-5ppsb" Jan 27 19:41:03 crc kubenswrapper[4853]: I0127 19:41:03.135376 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-pfmh5" podStartSLOduration=2.60606952 podStartE2EDuration="5.135353915s" podCreationTimestamp="2026-01-27 19:40:58 +0000 UTC" firstStartedPulling="2026-01-27 19:41:00.062755971 +0000 UTC m=+3502.525298854" lastFinishedPulling="2026-01-27 19:41:02.592040366 +0000 UTC m=+3505.054583249" observedRunningTime="2026-01-27 19:41:03.12851919 +0000 UTC m=+3505.591062073" watchObservedRunningTime="2026-01-27 19:41:03.135353915 +0000 UTC m=+3505.597896798" Jan 27 19:41:03 crc kubenswrapper[4853]: I0127 19:41:03.188662 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-5ppsb" Jan 27 19:41:03 crc kubenswrapper[4853]: I0127 19:41:03.781892 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-5ppsb"] Jan 27 19:41:04 crc kubenswrapper[4853]: I0127 19:41:04.122224 4853 generic.go:334] "Generic (PLEG): container finished" podID="a91a8685-4537-45c7-bb32-30b4885322b6" containerID="ed393899ce70f937b876d88331fb4bab87ee947179e92a4ce204b97213b5d778" exitCode=0 Jan 27 19:41:04 crc kubenswrapper[4853]: I0127 19:41:04.125456 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5ppsb" event={"ID":"a91a8685-4537-45c7-bb32-30b4885322b6","Type":"ContainerDied","Data":"ed393899ce70f937b876d88331fb4bab87ee947179e92a4ce204b97213b5d778"} Jan 27 19:41:04 crc kubenswrapper[4853]: I0127 19:41:04.125523 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5ppsb" event={"ID":"a91a8685-4537-45c7-bb32-30b4885322b6","Type":"ContainerStarted","Data":"f3db727633c77884653df0bef211d80ecce224f03bb7979cc824fcb508b43f33"} Jan 27 19:41:08 crc kubenswrapper[4853]: I0127 19:41:08.164730 4853 generic.go:334] "Generic (PLEG): container finished" podID="6275c0bd-3255-4c3d-88bc-30f5d1ee27ca" containerID="ace899c42d69736fd5bb5e1192e36a46aa89508f24b10b5a764b05658cc74463" exitCode=0 Jan 27 19:41:08 crc kubenswrapper[4853]: I0127 19:41:08.164824 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"6275c0bd-3255-4c3d-88bc-30f5d1ee27ca","Type":"ContainerDied","Data":"ace899c42d69736fd5bb5e1192e36a46aa89508f24b10b5a764b05658cc74463"} Jan 27 19:41:08 crc kubenswrapper[4853]: I0127 19:41:08.577528 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openshift-marketplace/redhat-marketplace-pfmh5" Jan 27 19:41:08 crc kubenswrapper[4853]: I0127 19:41:08.577593 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-pfmh5" Jan 27 19:41:08 crc kubenswrapper[4853]: I0127 19:41:08.628109 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-pfmh5" Jan 27 19:41:09 crc kubenswrapper[4853]: I0127 19:41:09.177473 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5ppsb" event={"ID":"a91a8685-4537-45c7-bb32-30b4885322b6","Type":"ContainerStarted","Data":"92e307807f6c09239bfb9465a52321981db4c185d49cdc86d993e9949e821c46"} Jan 27 19:41:09 crc kubenswrapper[4853]: I0127 19:41:09.239992 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-pfmh5" Jan 27 19:41:09 crc kubenswrapper[4853]: I0127 19:41:09.569145 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest" Jan 27 19:41:09 crc kubenswrapper[4853]: I0127 19:41:09.638015 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/6275c0bd-3255-4c3d-88bc-30f5d1ee27ca-openstack-config-secret\") pod \"6275c0bd-3255-4c3d-88bc-30f5d1ee27ca\" (UID: \"6275c0bd-3255-4c3d-88bc-30f5d1ee27ca\") " Jan 27 19:41:09 crc kubenswrapper[4853]: I0127 19:41:09.638076 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-logs\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"6275c0bd-3255-4c3d-88bc-30f5d1ee27ca\" (UID: \"6275c0bd-3255-4c3d-88bc-30f5d1ee27ca\") " Jan 27 19:41:09 crc kubenswrapper[4853]: I0127 19:41:09.638101 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/6275c0bd-3255-4c3d-88bc-30f5d1ee27ca-ca-certs\") pod \"6275c0bd-3255-4c3d-88bc-30f5d1ee27ca\" (UID: \"6275c0bd-3255-4c3d-88bc-30f5d1ee27ca\") " Jan 27 19:41:09 crc kubenswrapper[4853]: I0127 19:41:09.638136 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/6275c0bd-3255-4c3d-88bc-30f5d1ee27ca-ssh-key\") pod \"6275c0bd-3255-4c3d-88bc-30f5d1ee27ca\" (UID: \"6275c0bd-3255-4c3d-88bc-30f5d1ee27ca\") " Jan 27 19:41:09 crc kubenswrapper[4853]: I0127 19:41:09.638179 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/6275c0bd-3255-4c3d-88bc-30f5d1ee27ca-openstack-config\") pod \"6275c0bd-3255-4c3d-88bc-30f5d1ee27ca\" (UID: \"6275c0bd-3255-4c3d-88bc-30f5d1ee27ca\") " Jan 27 19:41:09 crc kubenswrapper[4853]: I0127 19:41:09.638201 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6275c0bd-3255-4c3d-88bc-30f5d1ee27ca-config-data\") pod \"6275c0bd-3255-4c3d-88bc-30f5d1ee27ca\" (UID: \"6275c0bd-3255-4c3d-88bc-30f5d1ee27ca\") " Jan 27 19:41:09 crc kubenswrapper[4853]: I0127 19:41:09.638228 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ztd7q\" (UniqueName: \"kubernetes.io/projected/6275c0bd-3255-4c3d-88bc-30f5d1ee27ca-kube-api-access-ztd7q\") pod \"6275c0bd-3255-4c3d-88bc-30f5d1ee27ca\" (UID: 
\"6275c0bd-3255-4c3d-88bc-30f5d1ee27ca\") " Jan 27 19:41:09 crc kubenswrapper[4853]: I0127 19:41:09.638338 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/6275c0bd-3255-4c3d-88bc-30f5d1ee27ca-test-operator-ephemeral-temporary\") pod \"6275c0bd-3255-4c3d-88bc-30f5d1ee27ca\" (UID: \"6275c0bd-3255-4c3d-88bc-30f5d1ee27ca\") " Jan 27 19:41:09 crc kubenswrapper[4853]: I0127 19:41:09.638405 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/6275c0bd-3255-4c3d-88bc-30f5d1ee27ca-test-operator-ephemeral-workdir\") pod \"6275c0bd-3255-4c3d-88bc-30f5d1ee27ca\" (UID: \"6275c0bd-3255-4c3d-88bc-30f5d1ee27ca\") " Jan 27 19:41:09 crc kubenswrapper[4853]: I0127 19:41:09.639075 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6275c0bd-3255-4c3d-88bc-30f5d1ee27ca-config-data" (OuterVolumeSpecName: "config-data") pod "6275c0bd-3255-4c3d-88bc-30f5d1ee27ca" (UID: "6275c0bd-3255-4c3d-88bc-30f5d1ee27ca"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 19:41:09 crc kubenswrapper[4853]: I0127 19:41:09.639281 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6275c0bd-3255-4c3d-88bc-30f5d1ee27ca-test-operator-ephemeral-temporary" (OuterVolumeSpecName: "test-operator-ephemeral-temporary") pod "6275c0bd-3255-4c3d-88bc-30f5d1ee27ca" (UID: "6275c0bd-3255-4c3d-88bc-30f5d1ee27ca"). InnerVolumeSpecName "test-operator-ephemeral-temporary". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 19:41:09 crc kubenswrapper[4853]: I0127 19:41:09.642570 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6275c0bd-3255-4c3d-88bc-30f5d1ee27ca-test-operator-ephemeral-workdir" (OuterVolumeSpecName: "test-operator-ephemeral-workdir") pod "6275c0bd-3255-4c3d-88bc-30f5d1ee27ca" (UID: "6275c0bd-3255-4c3d-88bc-30f5d1ee27ca"). InnerVolumeSpecName "test-operator-ephemeral-workdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 19:41:09 crc kubenswrapper[4853]: I0127 19:41:09.655005 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6275c0bd-3255-4c3d-88bc-30f5d1ee27ca-kube-api-access-ztd7q" (OuterVolumeSpecName: "kube-api-access-ztd7q") pod "6275c0bd-3255-4c3d-88bc-30f5d1ee27ca" (UID: "6275c0bd-3255-4c3d-88bc-30f5d1ee27ca"). InnerVolumeSpecName "kube-api-access-ztd7q". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 19:41:09 crc kubenswrapper[4853]: I0127 19:41:09.655299 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage04-crc" (OuterVolumeSpecName: "test-operator-logs") pod "6275c0bd-3255-4c3d-88bc-30f5d1ee27ca" (UID: "6275c0bd-3255-4c3d-88bc-30f5d1ee27ca"). InnerVolumeSpecName "local-storage04-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 27 19:41:09 crc kubenswrapper[4853]: I0127 19:41:09.669580 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6275c0bd-3255-4c3d-88bc-30f5d1ee27ca-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "6275c0bd-3255-4c3d-88bc-30f5d1ee27ca" (UID: "6275c0bd-3255-4c3d-88bc-30f5d1ee27ca"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 19:41:09 crc kubenswrapper[4853]: I0127 19:41:09.671020 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6275c0bd-3255-4c3d-88bc-30f5d1ee27ca-ca-certs" (OuterVolumeSpecName: "ca-certs") pod "6275c0bd-3255-4c3d-88bc-30f5d1ee27ca" (UID: "6275c0bd-3255-4c3d-88bc-30f5d1ee27ca"). InnerVolumeSpecName "ca-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 19:41:09 crc kubenswrapper[4853]: I0127 19:41:09.671495 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6275c0bd-3255-4c3d-88bc-30f5d1ee27ca-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "6275c0bd-3255-4c3d-88bc-30f5d1ee27ca" (UID: "6275c0bd-3255-4c3d-88bc-30f5d1ee27ca"). InnerVolumeSpecName "openstack-config-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 19:41:09 crc kubenswrapper[4853]: I0127 19:41:09.689706 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6275c0bd-3255-4c3d-88bc-30f5d1ee27ca-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "6275c0bd-3255-4c3d-88bc-30f5d1ee27ca" (UID: "6275c0bd-3255-4c3d-88bc-30f5d1ee27ca"). InnerVolumeSpecName "openstack-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 19:41:09 crc kubenswrapper[4853]: I0127 19:41:09.739795 4853 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/6275c0bd-3255-4c3d-88bc-30f5d1ee27ca-test-operator-ephemeral-temporary\") on node \"crc\" DevicePath \"\"" Jan 27 19:41:09 crc kubenswrapper[4853]: I0127 19:41:09.739833 4853 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/6275c0bd-3255-4c3d-88bc-30f5d1ee27ca-test-operator-ephemeral-workdir\") on node \"crc\" DevicePath \"\"" Jan 27 19:41:09 crc kubenswrapper[4853]: I0127 19:41:09.739848 4853 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/6275c0bd-3255-4c3d-88bc-30f5d1ee27ca-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Jan 27 19:41:09 crc kubenswrapper[4853]: I0127 19:41:09.739947 4853 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" " Jan 27 19:41:09 crc kubenswrapper[4853]: I0127 19:41:09.739961 4853 reconciler_common.go:293] "Volume detached for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/6275c0bd-3255-4c3d-88bc-30f5d1ee27ca-ca-certs\") on node \"crc\" DevicePath \"\"" Jan 27 19:41:09 crc kubenswrapper[4853]: I0127 19:41:09.739972 4853 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/6275c0bd-3255-4c3d-88bc-30f5d1ee27ca-ssh-key\") on node \"crc\" DevicePath \"\"" Jan 27 19:41:09 crc kubenswrapper[4853]: I0127 19:41:09.739981 4853 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/6275c0bd-3255-4c3d-88bc-30f5d1ee27ca-openstack-config\") on node \"crc\" DevicePath \"\"" Jan 27 19:41:09 crc kubenswrapper[4853]: I0127 19:41:09.739989 4853 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6275c0bd-3255-4c3d-88bc-30f5d1ee27ca-config-data\") on node \"crc\" 
DevicePath \"\"" Jan 27 19:41:09 crc kubenswrapper[4853]: I0127 19:41:09.739997 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ztd7q\" (UniqueName: \"kubernetes.io/projected/6275c0bd-3255-4c3d-88bc-30f5d1ee27ca-kube-api-access-ztd7q\") on node \"crc\" DevicePath \"\"" Jan 27 19:41:09 crc kubenswrapper[4853]: I0127 19:41:09.770305 4853 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage04-crc" (UniqueName: "kubernetes.io/local-volume/local-storage04-crc") on node "crc" Jan 27 19:41:09 crc kubenswrapper[4853]: I0127 19:41:09.841100 4853 reconciler_common.go:293] "Volume detached for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" DevicePath \"\"" Jan 27 19:41:09 crc kubenswrapper[4853]: I0127 19:41:09.882907 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-pfmh5"] Jan 27 19:41:10 crc kubenswrapper[4853]: I0127 19:41:10.186914 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest" Jan 27 19:41:10 crc kubenswrapper[4853]: I0127 19:41:10.186912 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"6275c0bd-3255-4c3d-88bc-30f5d1ee27ca","Type":"ContainerDied","Data":"e5730f9c2f89bc08094692e0132ae5233228cbd99aabbdd9be0a06fef21d5807"} Jan 27 19:41:10 crc kubenswrapper[4853]: I0127 19:41:10.186973 4853 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e5730f9c2f89bc08094692e0132ae5233228cbd99aabbdd9be0a06fef21d5807" Jan 27 19:41:10 crc kubenswrapper[4853]: I0127 19:41:10.191461 4853 generic.go:334] "Generic (PLEG): container finished" podID="a91a8685-4537-45c7-bb32-30b4885322b6" containerID="92e307807f6c09239bfb9465a52321981db4c185d49cdc86d993e9949e821c46" exitCode=0 Jan 27 19:41:10 crc kubenswrapper[4853]: I0127 19:41:10.191553 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5ppsb" event={"ID":"a91a8685-4537-45c7-bb32-30b4885322b6","Type":"ContainerDied","Data":"92e307807f6c09239bfb9465a52321981db4c185d49cdc86d993e9949e821c46"} Jan 27 19:41:11 crc kubenswrapper[4853]: I0127 19:41:11.201319 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-pfmh5" podUID="5b4bd0a7-c18f-42c7-8601-b3acd745980c" containerName="registry-server" containerID="cri-o://47030d20f73d887bfa132f73c548037c7413a969a0f91459dd21c6eecd784fcb" gracePeriod=2 Jan 27 19:41:12 crc kubenswrapper[4853]: I0127 19:41:12.212320 4853 generic.go:334] "Generic (PLEG): container finished" podID="5b4bd0a7-c18f-42c7-8601-b3acd745980c" containerID="47030d20f73d887bfa132f73c548037c7413a969a0f91459dd21c6eecd784fcb" exitCode=0 Jan 27 19:41:12 crc kubenswrapper[4853]: I0127 19:41:12.212387 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pfmh5" event={"ID":"5b4bd0a7-c18f-42c7-8601-b3acd745980c","Type":"ContainerDied","Data":"47030d20f73d887bfa132f73c548037c7413a969a0f91459dd21c6eecd784fcb"} Jan 27 19:41:12 crc kubenswrapper[4853]: I0127 19:41:12.925166 4853 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-pfmh5" Jan 27 19:41:13 crc kubenswrapper[4853]: I0127 19:41:13.126507 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5b4bd0a7-c18f-42c7-8601-b3acd745980c-utilities\") pod \"5b4bd0a7-c18f-42c7-8601-b3acd745980c\" (UID: \"5b4bd0a7-c18f-42c7-8601-b3acd745980c\") " Jan 27 19:41:13 crc kubenswrapper[4853]: I0127 19:41:13.126588 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-742p2\" (UniqueName: \"kubernetes.io/projected/5b4bd0a7-c18f-42c7-8601-b3acd745980c-kube-api-access-742p2\") pod \"5b4bd0a7-c18f-42c7-8601-b3acd745980c\" (UID: \"5b4bd0a7-c18f-42c7-8601-b3acd745980c\") " Jan 27 19:41:13 crc kubenswrapper[4853]: I0127 19:41:13.126680 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5b4bd0a7-c18f-42c7-8601-b3acd745980c-catalog-content\") pod \"5b4bd0a7-c18f-42c7-8601-b3acd745980c\" (UID: \"5b4bd0a7-c18f-42c7-8601-b3acd745980c\") " Jan 27 19:41:13 crc kubenswrapper[4853]: I0127 19:41:13.128151 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5b4bd0a7-c18f-42c7-8601-b3acd745980c-utilities" (OuterVolumeSpecName: "utilities") pod "5b4bd0a7-c18f-42c7-8601-b3acd745980c" (UID: "5b4bd0a7-c18f-42c7-8601-b3acd745980c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 19:41:13 crc kubenswrapper[4853]: I0127 19:41:13.137465 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b4bd0a7-c18f-42c7-8601-b3acd745980c-kube-api-access-742p2" (OuterVolumeSpecName: "kube-api-access-742p2") pod "5b4bd0a7-c18f-42c7-8601-b3acd745980c" (UID: "5b4bd0a7-c18f-42c7-8601-b3acd745980c"). InnerVolumeSpecName "kube-api-access-742p2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 19:41:13 crc kubenswrapper[4853]: I0127 19:41:13.148439 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5b4bd0a7-c18f-42c7-8601-b3acd745980c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5b4bd0a7-c18f-42c7-8601-b3acd745980c" (UID: "5b4bd0a7-c18f-42c7-8601-b3acd745980c"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 19:41:13 crc kubenswrapper[4853]: I0127 19:41:13.224130 4853 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-pfmh5" Jan 27 19:41:13 crc kubenswrapper[4853]: I0127 19:41:13.224131 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pfmh5" event={"ID":"5b4bd0a7-c18f-42c7-8601-b3acd745980c","Type":"ContainerDied","Data":"7f0e8b25eeea49e246ed38d68f5f2b7e8290b39592335de967a58e83956652db"} Jan 27 19:41:13 crc kubenswrapper[4853]: I0127 19:41:13.224275 4853 scope.go:117] "RemoveContainer" containerID="47030d20f73d887bfa132f73c548037c7413a969a0f91459dd21c6eecd784fcb" Jan 27 19:41:13 crc kubenswrapper[4853]: I0127 19:41:13.228162 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5ppsb" event={"ID":"a91a8685-4537-45c7-bb32-30b4885322b6","Type":"ContainerStarted","Data":"d6db05fdb5f87be88245ff968a44ca164ace2f4fca2d7ca058edf7dc089fff99"} Jan 27 19:41:13 crc kubenswrapper[4853]: I0127 19:41:13.229892 4853 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5b4bd0a7-c18f-42c7-8601-b3acd745980c-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 19:41:13 crc kubenswrapper[4853]: I0127 19:41:13.229929 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-742p2\" (UniqueName: \"kubernetes.io/projected/5b4bd0a7-c18f-42c7-8601-b3acd745980c-kube-api-access-742p2\") on node \"crc\" DevicePath \"\"" Jan 27 19:41:13 crc kubenswrapper[4853]: I0127 19:41:13.229944 4853 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5b4bd0a7-c18f-42c7-8601-b3acd745980c-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 19:41:13 crc kubenswrapper[4853]: I0127 19:41:13.252211 4853 scope.go:117] "RemoveContainer" containerID="4c9225908159112392212b26c08a84ed2ea1d0f27763a9fe8134c9e33371ffdc" Jan 27 19:41:13 crc kubenswrapper[4853]: I0127 19:41:13.262564 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-5ppsb" podStartSLOduration=2.91254077 podStartE2EDuration="11.262409967s" podCreationTimestamp="2026-01-27 19:41:02 +0000 UTC" firstStartedPulling="2026-01-27 19:41:04.125865277 +0000 UTC m=+3506.588408160" lastFinishedPulling="2026-01-27 19:41:12.475734484 +0000 UTC m=+3514.938277357" observedRunningTime="2026-01-27 19:41:13.254044709 +0000 UTC m=+3515.716587592" watchObservedRunningTime="2026-01-27 19:41:13.262409967 +0000 UTC m=+3515.724952850" Jan 27 19:41:13 crc kubenswrapper[4853]: I0127 19:41:13.284740 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-pfmh5"] Jan 27 19:41:13 crc kubenswrapper[4853]: I0127 19:41:13.290850 4853 scope.go:117] "RemoveContainer" containerID="8172846db9f7ff573cd3477b856ed4a94a9272956f6e4a7e8a34691971ee6ad0" Jan 27 19:41:13 crc kubenswrapper[4853]: I0127 19:41:13.296104 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-pfmh5"] Jan 27 19:41:14 crc kubenswrapper[4853]: I0127 19:41:14.124542 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b4bd0a7-c18f-42c7-8601-b3acd745980c" path="/var/lib/kubelet/pods/5b4bd0a7-c18f-42c7-8601-b3acd745980c/volumes" Jan 27 19:41:21 crc kubenswrapper[4853]: I0127 19:41:21.225422 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Jan 27 19:41:21 crc kubenswrapper[4853]: E0127 19:41:21.226673 4853 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5b4bd0a7-c18f-42c7-8601-b3acd745980c" containerName="registry-server" Jan 27 19:41:21 crc kubenswrapper[4853]: I0127 19:41:21.226693 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="5b4bd0a7-c18f-42c7-8601-b3acd745980c" containerName="registry-server" Jan 27 19:41:21 crc kubenswrapper[4853]: E0127 19:41:21.226709 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6275c0bd-3255-4c3d-88bc-30f5d1ee27ca" containerName="tempest-tests-tempest-tests-runner" Jan 27 19:41:21 crc kubenswrapper[4853]: I0127 19:41:21.226717 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="6275c0bd-3255-4c3d-88bc-30f5d1ee27ca" containerName="tempest-tests-tempest-tests-runner" Jan 27 19:41:21 crc kubenswrapper[4853]: E0127 19:41:21.226742 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5b4bd0a7-c18f-42c7-8601-b3acd745980c" containerName="extract-content" Jan 27 19:41:21 crc kubenswrapper[4853]: I0127 19:41:21.226749 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="5b4bd0a7-c18f-42c7-8601-b3acd745980c" containerName="extract-content" Jan 27 19:41:21 crc kubenswrapper[4853]: E0127 19:41:21.226782 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5b4bd0a7-c18f-42c7-8601-b3acd745980c" containerName="extract-utilities" Jan 27 19:41:21 crc kubenswrapper[4853]: I0127 19:41:21.226789 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="5b4bd0a7-c18f-42c7-8601-b3acd745980c" containerName="extract-utilities" Jan 27 19:41:21 crc kubenswrapper[4853]: I0127 19:41:21.226990 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="5b4bd0a7-c18f-42c7-8601-b3acd745980c" containerName="registry-server" Jan 27 19:41:21 crc kubenswrapper[4853]: I0127 19:41:21.227005 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="6275c0bd-3255-4c3d-88bc-30f5d1ee27ca" containerName="tempest-tests-tempest-tests-runner" Jan 27 19:41:21 crc kubenswrapper[4853]: I0127 19:41:21.227710 4853 util.go:30] "No sandbox for pod can be found. 
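The paired cpu_manager.go:410 / state_mem.go:107 / memory_manager.go:354 entries fire when a new pod is admitted: the resource managers sweep assignments still recorded for containers of pods the kubelet no longer tracks (here the removed redhat-marketplace-pfmh5 and tempest pods). A hedged sketch of that sweep, with the manager state reduced to a plain map and a made-up cpuset value:

```go
package main

import "fmt"

// Hypothetical reduction of the stale-state sweep logged above:
// assignments maps (podUID, container) to a cpuset string, and any
// entry whose pod is no longer active is deleted.
type key struct{ podUID, container string }

func removeStaleState(assignments map[key]string, active map[string]bool) {
	for k := range assignments {
		if !active[k.podUID] {
			fmt.Printf("RemoveStaleState: removing container %q of pod %s\n",
				k.container, k.podUID)
			delete(assignments, k) // deleting during range is safe in Go
		}
	}
}

func main() {
	assignments := map[key]string{
		{"5b4bd0a7-c18f-42c7-8601-b3acd745980c", "registry-server"}:                     "0-3", // cpuset is illustrative
		{"6275c0bd-3255-4c3d-88bc-30f5d1ee27ca", "tempest-tests-tempest-tests-runner"}: "0-3",
	}
	// Neither pod is active any more, so both assignments are swept.
	removeStaleState(assignments, map[string]bool{})
}
```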
Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 27 19:41:21 crc kubenswrapper[4853]: I0127 19:41:21.231054 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-h7m2w" Jan 27 19:41:21 crc kubenswrapper[4853]: I0127 19:41:21.236585 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Jan 27 19:41:21 crc kubenswrapper[4853]: I0127 19:41:21.426596 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"34624963-57cc-4683-b919-e1b2e1183b0a\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 27 19:41:21 crc kubenswrapper[4853]: I0127 19:41:21.426965 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8bh25\" (UniqueName: \"kubernetes.io/projected/34624963-57cc-4683-b919-e1b2e1183b0a-kube-api-access-8bh25\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"34624963-57cc-4683-b919-e1b2e1183b0a\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 27 19:41:21 crc kubenswrapper[4853]: I0127 19:41:21.529540 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"34624963-57cc-4683-b919-e1b2e1183b0a\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 27 19:41:21 crc kubenswrapper[4853]: I0127 19:41:21.529623 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8bh25\" (UniqueName: \"kubernetes.io/projected/34624963-57cc-4683-b919-e1b2e1183b0a-kube-api-access-8bh25\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"34624963-57cc-4683-b919-e1b2e1183b0a\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 27 19:41:21 crc kubenswrapper[4853]: I0127 19:41:21.530337 4853 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"34624963-57cc-4683-b919-e1b2e1183b0a\") device mount path \"/mnt/openstack/pv04\"" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 27 19:41:21 crc kubenswrapper[4853]: I0127 19:41:21.549670 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8bh25\" (UniqueName: \"kubernetes.io/projected/34624963-57cc-4683-b919-e1b2e1183b0a-kube-api-access-8bh25\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"34624963-57cc-4683-b919-e1b2e1183b0a\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 27 19:41:21 crc kubenswrapper[4853]: I0127 19:41:21.563171 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"34624963-57cc-4683-b919-e1b2e1183b0a\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 27 19:41:21 crc 
kubenswrapper[4853]: I0127 19:41:21.594196 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 27 19:41:22 crc kubenswrapper[4853]: I0127 19:41:22.017707 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Jan 27 19:41:22 crc kubenswrapper[4853]: I0127 19:41:22.344793 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"34624963-57cc-4683-b919-e1b2e1183b0a","Type":"ContainerStarted","Data":"d7e70626f2a73d983aae6b76cd5cebc1958a5a334fe9c41830d631b14570f299"} Jan 27 19:41:23 crc kubenswrapper[4853]: I0127 19:41:23.189091 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-5ppsb" Jan 27 19:41:23 crc kubenswrapper[4853]: I0127 19:41:23.189493 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-5ppsb" Jan 27 19:41:23 crc kubenswrapper[4853]: I0127 19:41:23.254091 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-5ppsb" Jan 27 19:41:23 crc kubenswrapper[4853]: I0127 19:41:23.419545 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-5ppsb" Jan 27 19:41:23 crc kubenswrapper[4853]: I0127 19:41:23.535312 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-5ppsb"] Jan 27 19:41:23 crc kubenswrapper[4853]: I0127 19:41:23.588344 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-rndjv"] Jan 27 19:41:23 crc kubenswrapper[4853]: I0127 19:41:23.588614 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-rndjv" podUID="376dda10-dbbe-4b02-ba77-def58ad1db42" containerName="registry-server" containerID="cri-o://6f5f1de35ba36dcfbc393ad843072d7f10c0b5eed06d43f05d14fb0452085cd8" gracePeriod=2 Jan 27 19:41:24 crc kubenswrapper[4853]: I0127 19:41:24.106091 4853 util.go:48] "No ready sandbox for pod can be found. 
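The "SyncLoop (probe)" transitions just logged (startup unhealthy, then started; readiness empty, then ready) are driven by periodic probes. For HTTP probes like the machine-config-daemon /health endpoint that fails later in this log, a status in [200,400) counts as success and a transport error such as "connection refused" counts as failure. A rough, self-contained model of one probe attempt, not the kubelet's prober itself:

```go
package main

import (
	"fmt"
	"net/http"
	"time"
)

// Rough model of an HTTP probe. Connection errors (e.g. "connect:
// connection refused", as seen for 127.0.0.1:8798 below) are failures;
// otherwise the status code decides.
func probe(url string, timeout time.Duration) (healthy bool, detail string) {
	client := &http.Client{Timeout: timeout}
	resp, err := client.Get(url)
	if err != nil {
		return false, err.Error()
	}
	defer resp.Body.Close()
	return resp.StatusCode >= 200 && resp.StatusCode < 400, resp.Status
}

func main() {
	ok, detail := probe("http://127.0.0.1:8798/health", time.Second)
	fmt.Printf("healthy=%v (%s)\n", ok, detail)
}
```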
Need to start a new one" pod="openshift-marketplace/community-operators-rndjv" Jan 27 19:41:24 crc kubenswrapper[4853]: I0127 19:41:24.192506 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pmq88\" (UniqueName: \"kubernetes.io/projected/376dda10-dbbe-4b02-ba77-def58ad1db42-kube-api-access-pmq88\") pod \"376dda10-dbbe-4b02-ba77-def58ad1db42\" (UID: \"376dda10-dbbe-4b02-ba77-def58ad1db42\") " Jan 27 19:41:24 crc kubenswrapper[4853]: I0127 19:41:24.192779 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/376dda10-dbbe-4b02-ba77-def58ad1db42-catalog-content\") pod \"376dda10-dbbe-4b02-ba77-def58ad1db42\" (UID: \"376dda10-dbbe-4b02-ba77-def58ad1db42\") " Jan 27 19:41:24 crc kubenswrapper[4853]: I0127 19:41:24.192846 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/376dda10-dbbe-4b02-ba77-def58ad1db42-utilities\") pod \"376dda10-dbbe-4b02-ba77-def58ad1db42\" (UID: \"376dda10-dbbe-4b02-ba77-def58ad1db42\") " Jan 27 19:41:24 crc kubenswrapper[4853]: I0127 19:41:24.195090 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/376dda10-dbbe-4b02-ba77-def58ad1db42-utilities" (OuterVolumeSpecName: "utilities") pod "376dda10-dbbe-4b02-ba77-def58ad1db42" (UID: "376dda10-dbbe-4b02-ba77-def58ad1db42"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 19:41:24 crc kubenswrapper[4853]: I0127 19:41:24.199744 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/376dda10-dbbe-4b02-ba77-def58ad1db42-kube-api-access-pmq88" (OuterVolumeSpecName: "kube-api-access-pmq88") pod "376dda10-dbbe-4b02-ba77-def58ad1db42" (UID: "376dda10-dbbe-4b02-ba77-def58ad1db42"). InnerVolumeSpecName "kube-api-access-pmq88". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 19:41:24 crc kubenswrapper[4853]: I0127 19:41:24.274410 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/376dda10-dbbe-4b02-ba77-def58ad1db42-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "376dda10-dbbe-4b02-ba77-def58ad1db42" (UID: "376dda10-dbbe-4b02-ba77-def58ad1db42"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 19:41:24 crc kubenswrapper[4853]: I0127 19:41:24.295707 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pmq88\" (UniqueName: \"kubernetes.io/projected/376dda10-dbbe-4b02-ba77-def58ad1db42-kube-api-access-pmq88\") on node \"crc\" DevicePath \"\"" Jan 27 19:41:24 crc kubenswrapper[4853]: I0127 19:41:24.295743 4853 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/376dda10-dbbe-4b02-ba77-def58ad1db42-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 19:41:24 crc kubenswrapper[4853]: I0127 19:41:24.295818 4853 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/376dda10-dbbe-4b02-ba77-def58ad1db42-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 19:41:24 crc kubenswrapper[4853]: I0127 19:41:24.367849 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"34624963-57cc-4683-b919-e1b2e1183b0a","Type":"ContainerStarted","Data":"2d43830dce5a2b911cf37ab381a3ce06ccdfdf94cd0211b4eee03ebf77c8803d"} Jan 27 19:41:24 crc kubenswrapper[4853]: I0127 19:41:24.370458 4853 generic.go:334] "Generic (PLEG): container finished" podID="376dda10-dbbe-4b02-ba77-def58ad1db42" containerID="6f5f1de35ba36dcfbc393ad843072d7f10c0b5eed06d43f05d14fb0452085cd8" exitCode=0 Jan 27 19:41:24 crc kubenswrapper[4853]: I0127 19:41:24.370542 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-rndjv" Jan 27 19:41:24 crc kubenswrapper[4853]: I0127 19:41:24.370537 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rndjv" event={"ID":"376dda10-dbbe-4b02-ba77-def58ad1db42","Type":"ContainerDied","Data":"6f5f1de35ba36dcfbc393ad843072d7f10c0b5eed06d43f05d14fb0452085cd8"} Jan 27 19:41:24 crc kubenswrapper[4853]: I0127 19:41:24.370765 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rndjv" event={"ID":"376dda10-dbbe-4b02-ba77-def58ad1db42","Type":"ContainerDied","Data":"470a3c49d66792856e2fb6eb169ee169fab197c9915546dd96d960b98792ff97"} Jan 27 19:41:24 crc kubenswrapper[4853]: I0127 19:41:24.370797 4853 scope.go:117] "RemoveContainer" containerID="6f5f1de35ba36dcfbc393ad843072d7f10c0b5eed06d43f05d14fb0452085cd8" Jan 27 19:41:24 crc kubenswrapper[4853]: I0127 19:41:24.382890 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" podStartSLOduration=2.206597766 podStartE2EDuration="3.382867974s" podCreationTimestamp="2026-01-27 19:41:21 +0000 UTC" firstStartedPulling="2026-01-27 19:41:22.027413105 +0000 UTC m=+3524.489955988" lastFinishedPulling="2026-01-27 19:41:23.203683313 +0000 UTC m=+3525.666226196" observedRunningTime="2026-01-27 19:41:24.380816755 +0000 UTC m=+3526.843359638" watchObservedRunningTime="2026-01-27 19:41:24.382867974 +0000 UTC m=+3526.845410857" Jan 27 19:41:24 crc kubenswrapper[4853]: I0127 19:41:24.400638 4853 scope.go:117] "RemoveContainer" containerID="dd71f5fe7d12ebda1034002fcba6860758b1e0f53ce5d352c87dcf7bb8cb2de0" Jan 27 19:41:24 crc kubenswrapper[4853]: I0127 19:41:24.445143 4853 scope.go:117] "RemoveContainer" containerID="33cd5783d8aff82fd2a36162cb3dc6ad1156642d6b95c6d9df755fa1122b92d5" Jan 27 19:41:24 crc kubenswrapper[4853]: I0127 19:41:24.451150 
4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-rndjv"] Jan 27 19:41:24 crc kubenswrapper[4853]: I0127 19:41:24.462024 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-rndjv"] Jan 27 19:41:24 crc kubenswrapper[4853]: I0127 19:41:24.478636 4853 scope.go:117] "RemoveContainer" containerID="6f5f1de35ba36dcfbc393ad843072d7f10c0b5eed06d43f05d14fb0452085cd8" Jan 27 19:41:24 crc kubenswrapper[4853]: E0127 19:41:24.480820 4853 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6f5f1de35ba36dcfbc393ad843072d7f10c0b5eed06d43f05d14fb0452085cd8\": container with ID starting with 6f5f1de35ba36dcfbc393ad843072d7f10c0b5eed06d43f05d14fb0452085cd8 not found: ID does not exist" containerID="6f5f1de35ba36dcfbc393ad843072d7f10c0b5eed06d43f05d14fb0452085cd8" Jan 27 19:41:24 crc kubenswrapper[4853]: I0127 19:41:24.480852 4853 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6f5f1de35ba36dcfbc393ad843072d7f10c0b5eed06d43f05d14fb0452085cd8"} err="failed to get container status \"6f5f1de35ba36dcfbc393ad843072d7f10c0b5eed06d43f05d14fb0452085cd8\": rpc error: code = NotFound desc = could not find container \"6f5f1de35ba36dcfbc393ad843072d7f10c0b5eed06d43f05d14fb0452085cd8\": container with ID starting with 6f5f1de35ba36dcfbc393ad843072d7f10c0b5eed06d43f05d14fb0452085cd8 not found: ID does not exist" Jan 27 19:41:24 crc kubenswrapper[4853]: I0127 19:41:24.480908 4853 scope.go:117] "RemoveContainer" containerID="dd71f5fe7d12ebda1034002fcba6860758b1e0f53ce5d352c87dcf7bb8cb2de0" Jan 27 19:41:24 crc kubenswrapper[4853]: E0127 19:41:24.481188 4853 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dd71f5fe7d12ebda1034002fcba6860758b1e0f53ce5d352c87dcf7bb8cb2de0\": container with ID starting with dd71f5fe7d12ebda1034002fcba6860758b1e0f53ce5d352c87dcf7bb8cb2de0 not found: ID does not exist" containerID="dd71f5fe7d12ebda1034002fcba6860758b1e0f53ce5d352c87dcf7bb8cb2de0" Jan 27 19:41:24 crc kubenswrapper[4853]: I0127 19:41:24.481211 4853 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dd71f5fe7d12ebda1034002fcba6860758b1e0f53ce5d352c87dcf7bb8cb2de0"} err="failed to get container status \"dd71f5fe7d12ebda1034002fcba6860758b1e0f53ce5d352c87dcf7bb8cb2de0\": rpc error: code = NotFound desc = could not find container \"dd71f5fe7d12ebda1034002fcba6860758b1e0f53ce5d352c87dcf7bb8cb2de0\": container with ID starting with dd71f5fe7d12ebda1034002fcba6860758b1e0f53ce5d352c87dcf7bb8cb2de0 not found: ID does not exist" Jan 27 19:41:24 crc kubenswrapper[4853]: I0127 19:41:24.481229 4853 scope.go:117] "RemoveContainer" containerID="33cd5783d8aff82fd2a36162cb3dc6ad1156642d6b95c6d9df755fa1122b92d5" Jan 27 19:41:24 crc kubenswrapper[4853]: E0127 19:41:24.481431 4853 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"33cd5783d8aff82fd2a36162cb3dc6ad1156642d6b95c6d9df755fa1122b92d5\": container with ID starting with 33cd5783d8aff82fd2a36162cb3dc6ad1156642d6b95c6d9df755fa1122b92d5 not found: ID does not exist" containerID="33cd5783d8aff82fd2a36162cb3dc6ad1156642d6b95c6d9df755fa1122b92d5" Jan 27 19:41:24 crc kubenswrapper[4853]: I0127 19:41:24.481451 4853 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"33cd5783d8aff82fd2a36162cb3dc6ad1156642d6b95c6d9df755fa1122b92d5"} err="failed to get container status \"33cd5783d8aff82fd2a36162cb3dc6ad1156642d6b95c6d9df755fa1122b92d5\": rpc error: code = NotFound desc = could not find container \"33cd5783d8aff82fd2a36162cb3dc6ad1156642d6b95c6d9df755fa1122b92d5\": container with ID starting with 33cd5783d8aff82fd2a36162cb3dc6ad1156642d6b95c6d9df755fa1122b92d5 not found: ID does not exist" Jan 27 19:41:26 crc kubenswrapper[4853]: I0127 19:41:26.125350 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="376dda10-dbbe-4b02-ba77-def58ad1db42" path="/var/lib/kubelet/pods/376dda10-dbbe-4b02-ba77-def58ad1db42/volumes" Jan 27 19:41:46 crc kubenswrapper[4853]: I0127 19:41:46.271030 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-8np9p/must-gather-n65kf"] Jan 27 19:41:46 crc kubenswrapper[4853]: E0127 19:41:46.272599 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="376dda10-dbbe-4b02-ba77-def58ad1db42" containerName="registry-server" Jan 27 19:41:46 crc kubenswrapper[4853]: I0127 19:41:46.272622 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="376dda10-dbbe-4b02-ba77-def58ad1db42" containerName="registry-server" Jan 27 19:41:46 crc kubenswrapper[4853]: E0127 19:41:46.272646 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="376dda10-dbbe-4b02-ba77-def58ad1db42" containerName="extract-utilities" Jan 27 19:41:46 crc kubenswrapper[4853]: I0127 19:41:46.272655 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="376dda10-dbbe-4b02-ba77-def58ad1db42" containerName="extract-utilities" Jan 27 19:41:46 crc kubenswrapper[4853]: E0127 19:41:46.272693 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="376dda10-dbbe-4b02-ba77-def58ad1db42" containerName="extract-content" Jan 27 19:41:46 crc kubenswrapper[4853]: I0127 19:41:46.272704 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="376dda10-dbbe-4b02-ba77-def58ad1db42" containerName="extract-content" Jan 27 19:41:46 crc kubenswrapper[4853]: I0127 19:41:46.272950 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="376dda10-dbbe-4b02-ba77-def58ad1db42" containerName="registry-server" Jan 27 19:41:46 crc kubenswrapper[4853]: I0127 19:41:46.274386 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-8np9p/must-gather-n65kf" Jan 27 19:41:46 crc kubenswrapper[4853]: I0127 19:41:46.279179 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-8np9p"/"openshift-service-ca.crt" Jan 27 19:41:46 crc kubenswrapper[4853]: I0127 19:41:46.281282 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-8np9p"/"kube-root-ca.crt" Jan 27 19:41:46 crc kubenswrapper[4853]: I0127 19:41:46.285079 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-8np9p/must-gather-n65kf"] Jan 27 19:41:46 crc kubenswrapper[4853]: I0127 19:41:46.288762 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-8np9p"/"default-dockercfg-567pz" Jan 27 19:41:46 crc kubenswrapper[4853]: I0127 19:41:46.389808 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rd85l\" (UniqueName: \"kubernetes.io/projected/5f5c9e2b-9e00-4a31-8ef7-7f36927b3d65-kube-api-access-rd85l\") pod \"must-gather-n65kf\" (UID: \"5f5c9e2b-9e00-4a31-8ef7-7f36927b3d65\") " pod="openshift-must-gather-8np9p/must-gather-n65kf" Jan 27 19:41:46 crc kubenswrapper[4853]: I0127 19:41:46.389963 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/5f5c9e2b-9e00-4a31-8ef7-7f36927b3d65-must-gather-output\") pod \"must-gather-n65kf\" (UID: \"5f5c9e2b-9e00-4a31-8ef7-7f36927b3d65\") " pod="openshift-must-gather-8np9p/must-gather-n65kf" Jan 27 19:41:46 crc kubenswrapper[4853]: I0127 19:41:46.493729 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rd85l\" (UniqueName: \"kubernetes.io/projected/5f5c9e2b-9e00-4a31-8ef7-7f36927b3d65-kube-api-access-rd85l\") pod \"must-gather-n65kf\" (UID: \"5f5c9e2b-9e00-4a31-8ef7-7f36927b3d65\") " pod="openshift-must-gather-8np9p/must-gather-n65kf" Jan 27 19:41:46 crc kubenswrapper[4853]: I0127 19:41:46.494137 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/5f5c9e2b-9e00-4a31-8ef7-7f36927b3d65-must-gather-output\") pod \"must-gather-n65kf\" (UID: \"5f5c9e2b-9e00-4a31-8ef7-7f36927b3d65\") " pod="openshift-must-gather-8np9p/must-gather-n65kf" Jan 27 19:41:46 crc kubenswrapper[4853]: I0127 19:41:46.494711 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/5f5c9e2b-9e00-4a31-8ef7-7f36927b3d65-must-gather-output\") pod \"must-gather-n65kf\" (UID: \"5f5c9e2b-9e00-4a31-8ef7-7f36927b3d65\") " pod="openshift-must-gather-8np9p/must-gather-n65kf" Jan 27 19:41:46 crc kubenswrapper[4853]: I0127 19:41:46.528739 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rd85l\" (UniqueName: \"kubernetes.io/projected/5f5c9e2b-9e00-4a31-8ef7-7f36927b3d65-kube-api-access-rd85l\") pod \"must-gather-n65kf\" (UID: \"5f5c9e2b-9e00-4a31-8ef7-7f36927b3d65\") " pod="openshift-must-gather-8np9p/must-gather-n65kf" Jan 27 19:41:46 crc kubenswrapper[4853]: I0127 19:41:46.607699 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-8np9p/must-gather-n65kf" Jan 27 19:41:47 crc kubenswrapper[4853]: I0127 19:41:47.139111 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-8np9p/must-gather-n65kf"] Jan 27 19:41:47 crc kubenswrapper[4853]: I0127 19:41:47.610075 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-8np9p/must-gather-n65kf" event={"ID":"5f5c9e2b-9e00-4a31-8ef7-7f36927b3d65","Type":"ContainerStarted","Data":"b96fefd01fb7f3fdccef79af8e67804411378bf0523ba1373605f87797f96909"} Jan 27 19:41:57 crc kubenswrapper[4853]: I0127 19:41:57.720734 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-8np9p/must-gather-n65kf" event={"ID":"5f5c9e2b-9e00-4a31-8ef7-7f36927b3d65","Type":"ContainerStarted","Data":"f99ec3611584c755c43570f3ff564c69095a42dd1fdb5ea40ec28d7589da8368"} Jan 27 19:41:58 crc kubenswrapper[4853]: I0127 19:41:58.737715 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-8np9p/must-gather-n65kf" event={"ID":"5f5c9e2b-9e00-4a31-8ef7-7f36927b3d65","Type":"ContainerStarted","Data":"e1c1808a19a33840b968aa39174c2059a619c10cb8b13cc156e7d71342c9aa92"} Jan 27 19:41:58 crc kubenswrapper[4853]: I0127 19:41:58.770733 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-8np9p/must-gather-n65kf" podStartSLOduration=2.518195539 podStartE2EDuration="12.770706321s" podCreationTimestamp="2026-01-27 19:41:46 +0000 UTC" firstStartedPulling="2026-01-27 19:41:47.134436147 +0000 UTC m=+3549.596979030" lastFinishedPulling="2026-01-27 19:41:57.386946929 +0000 UTC m=+3559.849489812" observedRunningTime="2026-01-27 19:41:58.764776253 +0000 UTC m=+3561.227319156" watchObservedRunningTime="2026-01-27 19:41:58.770706321 +0000 UTC m=+3561.233249204" Jan 27 19:42:01 crc kubenswrapper[4853]: I0127 19:42:01.235548 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-8np9p/crc-debug-bjbqd"] Jan 27 19:42:01 crc kubenswrapper[4853]: I0127 19:42:01.237698 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-8np9p/crc-debug-bjbqd" Jan 27 19:42:01 crc kubenswrapper[4853]: I0127 19:42:01.273207 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-648x4\" (UniqueName: \"kubernetes.io/projected/debb1633-526f-496e-92e1-6ed3b9c7cfa7-kube-api-access-648x4\") pod \"crc-debug-bjbqd\" (UID: \"debb1633-526f-496e-92e1-6ed3b9c7cfa7\") " pod="openshift-must-gather-8np9p/crc-debug-bjbqd" Jan 27 19:42:01 crc kubenswrapper[4853]: I0127 19:42:01.273722 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/debb1633-526f-496e-92e1-6ed3b9c7cfa7-host\") pod \"crc-debug-bjbqd\" (UID: \"debb1633-526f-496e-92e1-6ed3b9c7cfa7\") " pod="openshift-must-gather-8np9p/crc-debug-bjbqd" Jan 27 19:42:01 crc kubenswrapper[4853]: I0127 19:42:01.376290 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/debb1633-526f-496e-92e1-6ed3b9c7cfa7-host\") pod \"crc-debug-bjbqd\" (UID: \"debb1633-526f-496e-92e1-6ed3b9c7cfa7\") " pod="openshift-must-gather-8np9p/crc-debug-bjbqd" Jan 27 19:42:01 crc kubenswrapper[4853]: I0127 19:42:01.376410 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/debb1633-526f-496e-92e1-6ed3b9c7cfa7-host\") pod \"crc-debug-bjbqd\" (UID: \"debb1633-526f-496e-92e1-6ed3b9c7cfa7\") " pod="openshift-must-gather-8np9p/crc-debug-bjbqd" Jan 27 19:42:01 crc kubenswrapper[4853]: I0127 19:42:01.376431 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-648x4\" (UniqueName: \"kubernetes.io/projected/debb1633-526f-496e-92e1-6ed3b9c7cfa7-kube-api-access-648x4\") pod \"crc-debug-bjbqd\" (UID: \"debb1633-526f-496e-92e1-6ed3b9c7cfa7\") " pod="openshift-must-gather-8np9p/crc-debug-bjbqd" Jan 27 19:42:01 crc kubenswrapper[4853]: I0127 19:42:01.407591 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-648x4\" (UniqueName: \"kubernetes.io/projected/debb1633-526f-496e-92e1-6ed3b9c7cfa7-kube-api-access-648x4\") pod \"crc-debug-bjbqd\" (UID: \"debb1633-526f-496e-92e1-6ed3b9c7cfa7\") " pod="openshift-must-gather-8np9p/crc-debug-bjbqd" Jan 27 19:42:01 crc kubenswrapper[4853]: I0127 19:42:01.556287 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-8np9p/crc-debug-bjbqd" Jan 27 19:42:01 crc kubenswrapper[4853]: W0127 19:42:01.595202 4853 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddebb1633_526f_496e_92e1_6ed3b9c7cfa7.slice/crio-e75942bd9e931268239b9ea77057e0bdb1348622e50c650b21a5c8770fe3de1d WatchSource:0}: Error finding container e75942bd9e931268239b9ea77057e0bdb1348622e50c650b21a5c8770fe3de1d: Status 404 returned error can't find the container with id e75942bd9e931268239b9ea77057e0bdb1348622e50c650b21a5c8770fe3de1d Jan 27 19:42:01 crc kubenswrapper[4853]: I0127 19:42:01.765359 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-8np9p/crc-debug-bjbqd" event={"ID":"debb1633-526f-496e-92e1-6ed3b9c7cfa7","Type":"ContainerStarted","Data":"e75942bd9e931268239b9ea77057e0bdb1348622e50c650b21a5c8770fe3de1d"} Jan 27 19:42:05 crc kubenswrapper[4853]: I0127 19:42:05.541282 4853 patch_prober.go:28] interesting pod/machine-config-daemon-6gqj2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 19:42:05 crc kubenswrapper[4853]: I0127 19:42:05.541928 4853 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6gqj2" podUID="b8a89b1e-bef8-4cb7-930c-480d3125778c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 19:42:14 crc kubenswrapper[4853]: I0127 19:42:14.929787 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-8np9p/crc-debug-bjbqd" event={"ID":"debb1633-526f-496e-92e1-6ed3b9c7cfa7","Type":"ContainerStarted","Data":"09ebc4cb775b3ade8677252e3ce83cb5d2669ee20cac01188bfb7c936f6f601b"} Jan 27 19:42:14 crc kubenswrapper[4853]: I0127 19:42:14.950390 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-8np9p/crc-debug-bjbqd" podStartSLOduration=1.361515579 podStartE2EDuration="13.950368547s" podCreationTimestamp="2026-01-27 19:42:01 +0000 UTC" firstStartedPulling="2026-01-27 19:42:01.598046937 +0000 UTC m=+3564.060589820" lastFinishedPulling="2026-01-27 19:42:14.186899905 +0000 UTC m=+3576.649442788" observedRunningTime="2026-01-27 19:42:14.941798643 +0000 UTC m=+3577.404341516" watchObservedRunningTime="2026-01-27 19:42:14.950368547 +0000 UTC m=+3577.412911430" Jan 27 19:42:35 crc kubenswrapper[4853]: I0127 19:42:35.541038 4853 patch_prober.go:28] interesting pod/machine-config-daemon-6gqj2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 19:42:35 crc kubenswrapper[4853]: I0127 19:42:35.541819 4853 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6gqj2" podUID="b8a89b1e-bef8-4cb7-930c-480d3125778c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 19:43:05 crc kubenswrapper[4853]: I0127 19:43:05.541948 4853 patch_prober.go:28] interesting pod/machine-config-daemon-6gqj2 container/machine-config-daemon 
namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 19:43:05 crc kubenswrapper[4853]: I0127 19:43:05.542639 4853 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6gqj2" podUID="b8a89b1e-bef8-4cb7-930c-480d3125778c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 19:43:05 crc kubenswrapper[4853]: I0127 19:43:05.542705 4853 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-6gqj2" Jan 27 19:43:05 crc kubenswrapper[4853]: I0127 19:43:05.543711 4853 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"326a72bf85c07d6d0eec5a967b1feeaa73cf47af49f41769bb0b175d310c1432"} pod="openshift-machine-config-operator/machine-config-daemon-6gqj2" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 27 19:43:05 crc kubenswrapper[4853]: I0127 19:43:05.543775 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-6gqj2" podUID="b8a89b1e-bef8-4cb7-930c-480d3125778c" containerName="machine-config-daemon" containerID="cri-o://326a72bf85c07d6d0eec5a967b1feeaa73cf47af49f41769bb0b175d310c1432" gracePeriod=600 Jan 27 19:43:05 crc kubenswrapper[4853]: E0127 19:43:05.668189 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6gqj2_openshift-machine-config-operator(b8a89b1e-bef8-4cb7-930c-480d3125778c)\"" pod="openshift-machine-config-operator/machine-config-daemon-6gqj2" podUID="b8a89b1e-bef8-4cb7-930c-480d3125778c" Jan 27 19:43:05 crc kubenswrapper[4853]: I0127 19:43:05.723422 4853 generic.go:334] "Generic (PLEG): container finished" podID="debb1633-526f-496e-92e1-6ed3b9c7cfa7" containerID="09ebc4cb775b3ade8677252e3ce83cb5d2669ee20cac01188bfb7c936f6f601b" exitCode=0 Jan 27 19:43:05 crc kubenswrapper[4853]: I0127 19:43:05.723513 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-8np9p/crc-debug-bjbqd" event={"ID":"debb1633-526f-496e-92e1-6ed3b9c7cfa7","Type":"ContainerDied","Data":"09ebc4cb775b3ade8677252e3ce83cb5d2669ee20cac01188bfb7c936f6f601b"} Jan 27 19:43:05 crc kubenswrapper[4853]: I0127 19:43:05.726253 4853 generic.go:334] "Generic (PLEG): container finished" podID="b8a89b1e-bef8-4cb7-930c-480d3125778c" containerID="326a72bf85c07d6d0eec5a967b1feeaa73cf47af49f41769bb0b175d310c1432" exitCode=0 Jan 27 19:43:05 crc kubenswrapper[4853]: I0127 19:43:05.726317 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6gqj2" event={"ID":"b8a89b1e-bef8-4cb7-930c-480d3125778c","Type":"ContainerDied","Data":"326a72bf85c07d6d0eec5a967b1feeaa73cf47af49f41769bb0b175d310c1432"} Jan 27 19:43:05 crc kubenswrapper[4853]: I0127 19:43:05.726398 4853 scope.go:117] "RemoveContainer" containerID="c96d903e95498b0e52c00f6ad32b0294747b280d823ea9daa5dd97e20f096d69" Jan 27 19:43:05 crc kubenswrapper[4853]: I0127 19:43:05.727109 4853 scope.go:117] "RemoveContainer" 
containerID="326a72bf85c07d6d0eec5a967b1feeaa73cf47af49f41769bb0b175d310c1432" Jan 27 19:43:05 crc kubenswrapper[4853]: E0127 19:43:05.727486 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6gqj2_openshift-machine-config-operator(b8a89b1e-bef8-4cb7-930c-480d3125778c)\"" pod="openshift-machine-config-operator/machine-config-daemon-6gqj2" podUID="b8a89b1e-bef8-4cb7-930c-480d3125778c" Jan 27 19:43:06 crc kubenswrapper[4853]: I0127 19:43:06.848840 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-8np9p/crc-debug-bjbqd" Jan 27 19:43:06 crc kubenswrapper[4853]: I0127 19:43:06.891139 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-8np9p/crc-debug-bjbqd"] Jan 27 19:43:06 crc kubenswrapper[4853]: I0127 19:43:06.900498 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-8np9p/crc-debug-bjbqd"] Jan 27 19:43:06 crc kubenswrapper[4853]: I0127 19:43:06.989090 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-648x4\" (UniqueName: \"kubernetes.io/projected/debb1633-526f-496e-92e1-6ed3b9c7cfa7-kube-api-access-648x4\") pod \"debb1633-526f-496e-92e1-6ed3b9c7cfa7\" (UID: \"debb1633-526f-496e-92e1-6ed3b9c7cfa7\") " Jan 27 19:43:06 crc kubenswrapper[4853]: I0127 19:43:06.989212 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/debb1633-526f-496e-92e1-6ed3b9c7cfa7-host\") pod \"debb1633-526f-496e-92e1-6ed3b9c7cfa7\" (UID: \"debb1633-526f-496e-92e1-6ed3b9c7cfa7\") " Jan 27 19:43:06 crc kubenswrapper[4853]: I0127 19:43:06.989349 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/debb1633-526f-496e-92e1-6ed3b9c7cfa7-host" (OuterVolumeSpecName: "host") pod "debb1633-526f-496e-92e1-6ed3b9c7cfa7" (UID: "debb1633-526f-496e-92e1-6ed3b9c7cfa7"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 19:43:06 crc kubenswrapper[4853]: I0127 19:43:06.989925 4853 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/debb1633-526f-496e-92e1-6ed3b9c7cfa7-host\") on node \"crc\" DevicePath \"\"" Jan 27 19:43:06 crc kubenswrapper[4853]: I0127 19:43:06.994907 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/debb1633-526f-496e-92e1-6ed3b9c7cfa7-kube-api-access-648x4" (OuterVolumeSpecName: "kube-api-access-648x4") pod "debb1633-526f-496e-92e1-6ed3b9c7cfa7" (UID: "debb1633-526f-496e-92e1-6ed3b9c7cfa7"). InnerVolumeSpecName "kube-api-access-648x4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 19:43:07 crc kubenswrapper[4853]: I0127 19:43:07.091884 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-648x4\" (UniqueName: \"kubernetes.io/projected/debb1633-526f-496e-92e1-6ed3b9c7cfa7-kube-api-access-648x4\") on node \"crc\" DevicePath \"\"" Jan 27 19:43:07 crc kubenswrapper[4853]: I0127 19:43:07.747104 4853 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e75942bd9e931268239b9ea77057e0bdb1348622e50c650b21a5c8770fe3de1d" Jan 27 19:43:07 crc kubenswrapper[4853]: I0127 19:43:07.747191 4853 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-8np9p/crc-debug-bjbqd" Jan 27 19:43:08 crc kubenswrapper[4853]: I0127 19:43:08.040798 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-8np9p/crc-debug-rd5x9"] Jan 27 19:43:08 crc kubenswrapper[4853]: E0127 19:43:08.041257 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="debb1633-526f-496e-92e1-6ed3b9c7cfa7" containerName="container-00" Jan 27 19:43:08 crc kubenswrapper[4853]: I0127 19:43:08.041270 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="debb1633-526f-496e-92e1-6ed3b9c7cfa7" containerName="container-00" Jan 27 19:43:08 crc kubenswrapper[4853]: I0127 19:43:08.041486 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="debb1633-526f-496e-92e1-6ed3b9c7cfa7" containerName="container-00" Jan 27 19:43:08 crc kubenswrapper[4853]: I0127 19:43:08.042496 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-8np9p/crc-debug-rd5x9" Jan 27 19:43:08 crc kubenswrapper[4853]: I0127 19:43:08.123032 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="debb1633-526f-496e-92e1-6ed3b9c7cfa7" path="/var/lib/kubelet/pods/debb1633-526f-496e-92e1-6ed3b9c7cfa7/volumes" Jan 27 19:43:08 crc kubenswrapper[4853]: I0127 19:43:08.212911 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9crdq\" (UniqueName: \"kubernetes.io/projected/8418e7c8-2889-4e4a-a118-f5dd099d5fab-kube-api-access-9crdq\") pod \"crc-debug-rd5x9\" (UID: \"8418e7c8-2889-4e4a-a118-f5dd099d5fab\") " pod="openshift-must-gather-8np9p/crc-debug-rd5x9" Jan 27 19:43:08 crc kubenswrapper[4853]: I0127 19:43:08.213546 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/8418e7c8-2889-4e4a-a118-f5dd099d5fab-host\") pod \"crc-debug-rd5x9\" (UID: \"8418e7c8-2889-4e4a-a118-f5dd099d5fab\") " pod="openshift-must-gather-8np9p/crc-debug-rd5x9" Jan 27 19:43:08 crc kubenswrapper[4853]: I0127 19:43:08.316193 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/8418e7c8-2889-4e4a-a118-f5dd099d5fab-host\") pod \"crc-debug-rd5x9\" (UID: \"8418e7c8-2889-4e4a-a118-f5dd099d5fab\") " pod="openshift-must-gather-8np9p/crc-debug-rd5x9" Jan 27 19:43:08 crc kubenswrapper[4853]: I0127 19:43:08.316319 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9crdq\" (UniqueName: \"kubernetes.io/projected/8418e7c8-2889-4e4a-a118-f5dd099d5fab-kube-api-access-9crdq\") pod \"crc-debug-rd5x9\" (UID: \"8418e7c8-2889-4e4a-a118-f5dd099d5fab\") " pod="openshift-must-gather-8np9p/crc-debug-rd5x9" Jan 27 19:43:08 crc kubenswrapper[4853]: I0127 19:43:08.316368 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/8418e7c8-2889-4e4a-a118-f5dd099d5fab-host\") pod \"crc-debug-rd5x9\" (UID: \"8418e7c8-2889-4e4a-a118-f5dd099d5fab\") " pod="openshift-must-gather-8np9p/crc-debug-rd5x9" Jan 27 19:43:08 crc kubenswrapper[4853]: I0127 19:43:08.334147 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9crdq\" (UniqueName: \"kubernetes.io/projected/8418e7c8-2889-4e4a-a118-f5dd099d5fab-kube-api-access-9crdq\") pod \"crc-debug-rd5x9\" (UID: \"8418e7c8-2889-4e4a-a118-f5dd099d5fab\") " 
pod="openshift-must-gather-8np9p/crc-debug-rd5x9" Jan 27 19:43:08 crc kubenswrapper[4853]: I0127 19:43:08.375544 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-8np9p/crc-debug-rd5x9" Jan 27 19:43:08 crc kubenswrapper[4853]: I0127 19:43:08.759759 4853 generic.go:334] "Generic (PLEG): container finished" podID="8418e7c8-2889-4e4a-a118-f5dd099d5fab" containerID="bd5222bdb3ecd97bd1a980a4891d966d2e7fb4134200b3b87ba78396f845d309" exitCode=0 Jan 27 19:43:08 crc kubenswrapper[4853]: I0127 19:43:08.759907 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-8np9p/crc-debug-rd5x9" event={"ID":"8418e7c8-2889-4e4a-a118-f5dd099d5fab","Type":"ContainerDied","Data":"bd5222bdb3ecd97bd1a980a4891d966d2e7fb4134200b3b87ba78396f845d309"} Jan 27 19:43:08 crc kubenswrapper[4853]: I0127 19:43:08.760385 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-8np9p/crc-debug-rd5x9" event={"ID":"8418e7c8-2889-4e4a-a118-f5dd099d5fab","Type":"ContainerStarted","Data":"ec2fe503e79374a0057ec71022d89e72f6e2b7b41a51b365afc4069b076790c1"} Jan 27 19:43:09 crc kubenswrapper[4853]: I0127 19:43:09.316746 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-8np9p/crc-debug-rd5x9"] Jan 27 19:43:09 crc kubenswrapper[4853]: I0127 19:43:09.327503 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-8np9p/crc-debug-rd5x9"] Jan 27 19:43:09 crc kubenswrapper[4853]: I0127 19:43:09.900137 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-8np9p/crc-debug-rd5x9" Jan 27 19:43:09 crc kubenswrapper[4853]: I0127 19:43:09.953363 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9crdq\" (UniqueName: \"kubernetes.io/projected/8418e7c8-2889-4e4a-a118-f5dd099d5fab-kube-api-access-9crdq\") pod \"8418e7c8-2889-4e4a-a118-f5dd099d5fab\" (UID: \"8418e7c8-2889-4e4a-a118-f5dd099d5fab\") " Jan 27 19:43:09 crc kubenswrapper[4853]: I0127 19:43:09.953976 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/8418e7c8-2889-4e4a-a118-f5dd099d5fab-host\") pod \"8418e7c8-2889-4e4a-a118-f5dd099d5fab\" (UID: \"8418e7c8-2889-4e4a-a118-f5dd099d5fab\") " Jan 27 19:43:09 crc kubenswrapper[4853]: I0127 19:43:09.954081 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8418e7c8-2889-4e4a-a118-f5dd099d5fab-host" (OuterVolumeSpecName: "host") pod "8418e7c8-2889-4e4a-a118-f5dd099d5fab" (UID: "8418e7c8-2889-4e4a-a118-f5dd099d5fab"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 19:43:09 crc kubenswrapper[4853]: I0127 19:43:09.954814 4853 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/8418e7c8-2889-4e4a-a118-f5dd099d5fab-host\") on node \"crc\" DevicePath \"\"" Jan 27 19:43:09 crc kubenswrapper[4853]: I0127 19:43:09.964543 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8418e7c8-2889-4e4a-a118-f5dd099d5fab-kube-api-access-9crdq" (OuterVolumeSpecName: "kube-api-access-9crdq") pod "8418e7c8-2889-4e4a-a118-f5dd099d5fab" (UID: "8418e7c8-2889-4e4a-a118-f5dd099d5fab"). InnerVolumeSpecName "kube-api-access-9crdq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 19:43:10 crc kubenswrapper[4853]: I0127 19:43:10.057134 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9crdq\" (UniqueName: \"kubernetes.io/projected/8418e7c8-2889-4e4a-a118-f5dd099d5fab-kube-api-access-9crdq\") on node \"crc\" DevicePath \"\"" Jan 27 19:43:10 crc kubenswrapper[4853]: I0127 19:43:10.124578 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8418e7c8-2889-4e4a-a118-f5dd099d5fab" path="/var/lib/kubelet/pods/8418e7c8-2889-4e4a-a118-f5dd099d5fab/volumes" Jan 27 19:43:10 crc kubenswrapper[4853]: I0127 19:43:10.502367 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-8np9p/crc-debug-xnjl9"] Jan 27 19:43:10 crc kubenswrapper[4853]: E0127 19:43:10.502896 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8418e7c8-2889-4e4a-a118-f5dd099d5fab" containerName="container-00" Jan 27 19:43:10 crc kubenswrapper[4853]: I0127 19:43:10.502915 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="8418e7c8-2889-4e4a-a118-f5dd099d5fab" containerName="container-00" Jan 27 19:43:10 crc kubenswrapper[4853]: I0127 19:43:10.503110 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="8418e7c8-2889-4e4a-a118-f5dd099d5fab" containerName="container-00" Jan 27 19:43:10 crc kubenswrapper[4853]: I0127 19:43:10.503872 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-8np9p/crc-debug-xnjl9" Jan 27 19:43:10 crc kubenswrapper[4853]: I0127 19:43:10.566258 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xv6nk\" (UniqueName: \"kubernetes.io/projected/b7d60993-8fac-4fe9-ad2d-e45f0596bdb8-kube-api-access-xv6nk\") pod \"crc-debug-xnjl9\" (UID: \"b7d60993-8fac-4fe9-ad2d-e45f0596bdb8\") " pod="openshift-must-gather-8np9p/crc-debug-xnjl9" Jan 27 19:43:10 crc kubenswrapper[4853]: I0127 19:43:10.566807 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/b7d60993-8fac-4fe9-ad2d-e45f0596bdb8-host\") pod \"crc-debug-xnjl9\" (UID: \"b7d60993-8fac-4fe9-ad2d-e45f0596bdb8\") " pod="openshift-must-gather-8np9p/crc-debug-xnjl9" Jan 27 19:43:10 crc kubenswrapper[4853]: I0127 19:43:10.670272 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xv6nk\" (UniqueName: \"kubernetes.io/projected/b7d60993-8fac-4fe9-ad2d-e45f0596bdb8-kube-api-access-xv6nk\") pod \"crc-debug-xnjl9\" (UID: \"b7d60993-8fac-4fe9-ad2d-e45f0596bdb8\") " pod="openshift-must-gather-8np9p/crc-debug-xnjl9" Jan 27 19:43:10 crc kubenswrapper[4853]: I0127 19:43:10.670872 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/b7d60993-8fac-4fe9-ad2d-e45f0596bdb8-host\") pod \"crc-debug-xnjl9\" (UID: \"b7d60993-8fac-4fe9-ad2d-e45f0596bdb8\") " pod="openshift-must-gather-8np9p/crc-debug-xnjl9" Jan 27 19:43:10 crc kubenswrapper[4853]: I0127 19:43:10.671030 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/b7d60993-8fac-4fe9-ad2d-e45f0596bdb8-host\") pod \"crc-debug-xnjl9\" (UID: \"b7d60993-8fac-4fe9-ad2d-e45f0596bdb8\") " pod="openshift-must-gather-8np9p/crc-debug-xnjl9" Jan 27 19:43:10 crc kubenswrapper[4853]: I0127 19:43:10.692963 4853 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-xv6nk\" (UniqueName: \"kubernetes.io/projected/b7d60993-8fac-4fe9-ad2d-e45f0596bdb8-kube-api-access-xv6nk\") pod \"crc-debug-xnjl9\" (UID: \"b7d60993-8fac-4fe9-ad2d-e45f0596bdb8\") " pod="openshift-must-gather-8np9p/crc-debug-xnjl9" Jan 27 19:43:10 crc kubenswrapper[4853]: I0127 19:43:10.804093 4853 scope.go:117] "RemoveContainer" containerID="bd5222bdb3ecd97bd1a980a4891d966d2e7fb4134200b3b87ba78396f845d309" Jan 27 19:43:10 crc kubenswrapper[4853]: I0127 19:43:10.804267 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-8np9p/crc-debug-rd5x9" Jan 27 19:43:10 crc kubenswrapper[4853]: I0127 19:43:10.825152 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-8np9p/crc-debug-xnjl9" Jan 27 19:43:10 crc kubenswrapper[4853]: W0127 19:43:10.866216 4853 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb7d60993_8fac_4fe9_ad2d_e45f0596bdb8.slice/crio-a86984d305d5a981700256561f06fa5f01cfc687f2770aebaf2cb1b099674510 WatchSource:0}: Error finding container a86984d305d5a981700256561f06fa5f01cfc687f2770aebaf2cb1b099674510: Status 404 returned error can't find the container with id a86984d305d5a981700256561f06fa5f01cfc687f2770aebaf2cb1b099674510 Jan 27 19:43:11 crc kubenswrapper[4853]: I0127 19:43:11.817747 4853 generic.go:334] "Generic (PLEG): container finished" podID="b7d60993-8fac-4fe9-ad2d-e45f0596bdb8" containerID="08c2694c6cc681e2b5027bb44f429d8dca3b7112cf29c76f49bd90c80cabff71" exitCode=0 Jan 27 19:43:11 crc kubenswrapper[4853]: I0127 19:43:11.817828 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-8np9p/crc-debug-xnjl9" event={"ID":"b7d60993-8fac-4fe9-ad2d-e45f0596bdb8","Type":"ContainerDied","Data":"08c2694c6cc681e2b5027bb44f429d8dca3b7112cf29c76f49bd90c80cabff71"} Jan 27 19:43:11 crc kubenswrapper[4853]: I0127 19:43:11.818181 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-8np9p/crc-debug-xnjl9" event={"ID":"b7d60993-8fac-4fe9-ad2d-e45f0596bdb8","Type":"ContainerStarted","Data":"a86984d305d5a981700256561f06fa5f01cfc687f2770aebaf2cb1b099674510"} Jan 27 19:43:11 crc kubenswrapper[4853]: I0127 19:43:11.862003 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-8np9p/crc-debug-xnjl9"] Jan 27 19:43:11 crc kubenswrapper[4853]: I0127 19:43:11.872955 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-8np9p/crc-debug-xnjl9"] Jan 27 19:43:12 crc kubenswrapper[4853]: I0127 19:43:12.951909 4853 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-8np9p/crc-debug-xnjl9" Jan 27 19:43:13 crc kubenswrapper[4853]: I0127 19:43:13.022759 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/b7d60993-8fac-4fe9-ad2d-e45f0596bdb8-host\") pod \"b7d60993-8fac-4fe9-ad2d-e45f0596bdb8\" (UID: \"b7d60993-8fac-4fe9-ad2d-e45f0596bdb8\") " Jan 27 19:43:13 crc kubenswrapper[4853]: I0127 19:43:13.023246 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xv6nk\" (UniqueName: \"kubernetes.io/projected/b7d60993-8fac-4fe9-ad2d-e45f0596bdb8-kube-api-access-xv6nk\") pod \"b7d60993-8fac-4fe9-ad2d-e45f0596bdb8\" (UID: \"b7d60993-8fac-4fe9-ad2d-e45f0596bdb8\") " Jan 27 19:43:13 crc kubenswrapper[4853]: I0127 19:43:13.024233 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b7d60993-8fac-4fe9-ad2d-e45f0596bdb8-host" (OuterVolumeSpecName: "host") pod "b7d60993-8fac-4fe9-ad2d-e45f0596bdb8" (UID: "b7d60993-8fac-4fe9-ad2d-e45f0596bdb8"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 19:43:13 crc kubenswrapper[4853]: I0127 19:43:13.024511 4853 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/b7d60993-8fac-4fe9-ad2d-e45f0596bdb8-host\") on node \"crc\" DevicePath \"\"" Jan 27 19:43:13 crc kubenswrapper[4853]: I0127 19:43:13.043993 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b7d60993-8fac-4fe9-ad2d-e45f0596bdb8-kube-api-access-xv6nk" (OuterVolumeSpecName: "kube-api-access-xv6nk") pod "b7d60993-8fac-4fe9-ad2d-e45f0596bdb8" (UID: "b7d60993-8fac-4fe9-ad2d-e45f0596bdb8"). InnerVolumeSpecName "kube-api-access-xv6nk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 19:43:13 crc kubenswrapper[4853]: I0127 19:43:13.126717 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xv6nk\" (UniqueName: \"kubernetes.io/projected/b7d60993-8fac-4fe9-ad2d-e45f0596bdb8-kube-api-access-xv6nk\") on node \"crc\" DevicePath \"\"" Jan 27 19:43:13 crc kubenswrapper[4853]: I0127 19:43:13.856116 4853 scope.go:117] "RemoveContainer" containerID="08c2694c6cc681e2b5027bb44f429d8dca3b7112cf29c76f49bd90c80cabff71" Jan 27 19:43:13 crc kubenswrapper[4853]: I0127 19:43:13.856665 4853 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-8np9p/crc-debug-xnjl9" Jan 27 19:43:14 crc kubenswrapper[4853]: I0127 19:43:14.129896 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b7d60993-8fac-4fe9-ad2d-e45f0596bdb8" path="/var/lib/kubelet/pods/b7d60993-8fac-4fe9-ad2d-e45f0596bdb8/volumes" Jan 27 19:43:19 crc kubenswrapper[4853]: I0127 19:43:19.113256 4853 scope.go:117] "RemoveContainer" containerID="326a72bf85c07d6d0eec5a967b1feeaa73cf47af49f41769bb0b175d310c1432" Jan 27 19:43:19 crc kubenswrapper[4853]: E0127 19:43:19.115628 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6gqj2_openshift-machine-config-operator(b8a89b1e-bef8-4cb7-930c-480d3125778c)\"" pod="openshift-machine-config-operator/machine-config-daemon-6gqj2" podUID="b8a89b1e-bef8-4cb7-930c-480d3125778c" Jan 27 19:43:28 crc kubenswrapper[4853]: I0127 19:43:28.449446 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-57cbf989c8-gmwvx_10b90707-26fd-41f4-b020-0458facda8ba/barbican-api/0.log" Jan 27 19:43:28 crc kubenswrapper[4853]: I0127 19:43:28.621967 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-57cbf989c8-gmwvx_10b90707-26fd-41f4-b020-0458facda8ba/barbican-api-log/0.log" Jan 27 19:43:28 crc kubenswrapper[4853]: I0127 19:43:28.678535 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-757c6cc6c8-b7v22_d2f2b676-e83e-4107-9cce-525426cd6cbc/barbican-keystone-listener/0.log" Jan 27 19:43:28 crc kubenswrapper[4853]: I0127 19:43:28.716658 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-757c6cc6c8-b7v22_d2f2b676-e83e-4107-9cce-525426cd6cbc/barbican-keystone-listener-log/0.log" Jan 27 19:43:28 crc kubenswrapper[4853]: I0127 19:43:28.871376 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-7d5dd7f58c-gdxtv_d47c94df-0d90-409c-8bd4-2a237d641021/barbican-worker/0.log" Jan 27 19:43:28 crc kubenswrapper[4853]: I0127 19:43:28.912798 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-7d5dd7f58c-gdxtv_d47c94df-0d90-409c-8bd4-2a237d641021/barbican-worker-log/0.log" Jan 27 19:43:29 crc kubenswrapper[4853]: I0127 19:43:29.114317 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_bootstrap-edpm-deployment-openstack-edpm-ipam-28nw4_7f4e6043-7a79-455d-97be-20aff374a38d/bootstrap-edpm-deployment-openstack-edpm-ipam/0.log" Jan 27 19:43:29 crc kubenswrapper[4853]: I0127 19:43:29.143533 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_03c6fb37-6ad9-412a-b0fc-851c7b5e4a89/ceilometer-central-agent/0.log" Jan 27 19:43:29 crc kubenswrapper[4853]: I0127 19:43:29.395848 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_03c6fb37-6ad9-412a-b0fc-851c7b5e4a89/ceilometer-notification-agent/0.log" Jan 27 19:43:29 crc kubenswrapper[4853]: I0127 19:43:29.465533 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_03c6fb37-6ad9-412a-b0fc-851c7b5e4a89/sg-core/0.log" Jan 27 19:43:29 crc kubenswrapper[4853]: I0127 19:43:29.554848 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_03c6fb37-6ad9-412a-b0fc-851c7b5e4a89/proxy-httpd/0.log" Jan 
27 19:43:29 crc kubenswrapper[4853]: I0127 19:43:29.676385 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_bdc98336-c980-4a4c-b453-fb72f6d34185/cinder-api/0.log" Jan 27 19:43:29 crc kubenswrapper[4853]: I0127 19:43:29.714557 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_bdc98336-c980-4a4c-b453-fb72f6d34185/cinder-api-log/0.log" Jan 27 19:43:29 crc kubenswrapper[4853]: I0127 19:43:29.832476 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_13916d35-368a-417b-bfea-4f82d71797c3/cinder-scheduler/0.log" Jan 27 19:43:29 crc kubenswrapper[4853]: I0127 19:43:29.925220 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_13916d35-368a-417b-bfea-4f82d71797c3/probe/0.log" Jan 27 19:43:30 crc kubenswrapper[4853]: I0127 19:43:30.054406 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-network-edpm-deployment-openstack-edpm-ipam-t84cd_b48e7be3-8341-4d63-bb9e-3b665b27591b/configure-network-edpm-deployment-openstack-edpm-ipam/0.log" Jan 27 19:43:30 crc kubenswrapper[4853]: I0127 19:43:30.167747 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-os-edpm-deployment-openstack-edpm-ipam-7n2cz_d3496093-310a-422a-a09c-d796470ad2c0/configure-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 27 19:43:30 crc kubenswrapper[4853]: I0127 19:43:30.324438 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-55478c4467-hjhl4_7225b878-e91a-4d57-8f13-19de93bd506d/init/0.log" Jan 27 19:43:30 crc kubenswrapper[4853]: I0127 19:43:30.500959 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-55478c4467-hjhl4_7225b878-e91a-4d57-8f13-19de93bd506d/init/0.log" Jan 27 19:43:30 crc kubenswrapper[4853]: I0127 19:43:30.538085 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-55478c4467-hjhl4_7225b878-e91a-4d57-8f13-19de93bd506d/dnsmasq-dns/0.log" Jan 27 19:43:30 crc kubenswrapper[4853]: I0127 19:43:30.590050 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-szg9m_e4809563-3f03-4361-9794-87f5705115b8/download-cache-edpm-deployment-openstack-edpm-ipam/0.log" Jan 27 19:43:30 crc kubenswrapper[4853]: I0127 19:43:30.808663 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_1ea8e822-c78e-4fc2-8afe-09c0ef609d47/glance-httpd/0.log" Jan 27 19:43:30 crc kubenswrapper[4853]: I0127 19:43:30.810689 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_1ea8e822-c78e-4fc2-8afe-09c0ef609d47/glance-log/0.log" Jan 27 19:43:31 crc kubenswrapper[4853]: I0127 19:43:31.017364 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_40f9ab82-cf2e-4b60-bcfc-a41137752ef7/glance-log/0.log" Jan 27 19:43:31 crc kubenswrapper[4853]: I0127 19:43:31.024787 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_40f9ab82-cf2e-4b60-bcfc-a41137752ef7/glance-httpd/0.log" Jan 27 19:43:31 crc kubenswrapper[4853]: I0127 19:43:31.267282 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-69967664fb-pbqhr_66d621f7-387b-470d-8e42-bebbfada3bbc/horizon/1.log" Jan 27 19:43:31 crc kubenswrapper[4853]: I0127 19:43:31.410382 4853 log.go:25] "Finished 
parsing log file" path="/var/log/pods/openstack_horizon-69967664fb-pbqhr_66d621f7-387b-470d-8e42-bebbfada3bbc/horizon/0.log" Jan 27 19:43:31 crc kubenswrapper[4853]: I0127 19:43:31.498934 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-certs-edpm-deployment-openstack-edpm-ipam-s4g5b_bbb5fe03-6098-4e03-ab85-5a28e090f13c/install-certs-edpm-deployment-openstack-edpm-ipam/0.log" Jan 27 19:43:31 crc kubenswrapper[4853]: I0127 19:43:31.633727 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-69967664fb-pbqhr_66d621f7-387b-470d-8e42-bebbfada3bbc/horizon-log/0.log" Jan 27 19:43:31 crc kubenswrapper[4853]: I0127 19:43:31.706055 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-os-edpm-deployment-openstack-edpm-ipam-dcnnf_f23ab0fa-bd1a-4494-a7fe-428f0b8ea536/install-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 27 19:43:31 crc kubenswrapper[4853]: I0127 19:43:31.965537 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29492341-snj9s_7d4283a6-8ac9-4d5d-9a33-c753064f6930/keystone-cron/0.log" Jan 27 19:43:31 crc kubenswrapper[4853]: I0127 19:43:31.999630 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-54f5975d7b-jvtmz_b033907b-77e1-47e8-8921-6cb6e40f5f06/keystone-api/0.log" Jan 27 19:43:32 crc kubenswrapper[4853]: I0127 19:43:32.114431 4853 scope.go:117] "RemoveContainer" containerID="326a72bf85c07d6d0eec5a967b1feeaa73cf47af49f41769bb0b175d310c1432" Jan 27 19:43:32 crc kubenswrapper[4853]: E0127 19:43:32.114727 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6gqj2_openshift-machine-config-operator(b8a89b1e-bef8-4cb7-930c-480d3125778c)\"" pod="openshift-machine-config-operator/machine-config-daemon-6gqj2" podUID="b8a89b1e-bef8-4cb7-930c-480d3125778c" Jan 27 19:43:32 crc kubenswrapper[4853]: I0127 19:43:32.147508 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_c6327a68-b665-423b-85ed-3b1a4d3ffaa2/kube-state-metrics/0.log" Jan 27 19:43:32 crc kubenswrapper[4853]: I0127 19:43:32.329656 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_libvirt-edpm-deployment-openstack-edpm-ipam-hk5l9_91e90160-3a76-416b-a3e6-cf5d105f892d/libvirt-edpm-deployment-openstack-edpm-ipam/0.log" Jan 27 19:43:32 crc kubenswrapper[4853]: I0127 19:43:32.770098 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-64c8bd57d9-g88k8_911dc005-42f8-4086-9ee9-04490f7120f4/neutron-httpd/0.log" Jan 27 19:43:32 crc kubenswrapper[4853]: I0127 19:43:32.794681 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-64c8bd57d9-g88k8_911dc005-42f8-4086-9ee9-04490f7120f4/neutron-api/0.log" Jan 27 19:43:33 crc kubenswrapper[4853]: I0127 19:43:33.183016 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-metadata-edpm-deployment-openstack-edpm-ipam-bkndk_149036fd-39f5-4bd0-a585-f495af3a55d1/neutron-metadata-edpm-deployment-openstack-edpm-ipam/0.log" Jan 27 19:43:33 crc kubenswrapper[4853]: I0127 19:43:33.736575 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_a649598f-69be-4de2-9a79-b5581f1fc8f9/nova-api-log/0.log" Jan 27 19:43:33 crc kubenswrapper[4853]: I0127 19:43:33.743472 4853 log.go:25] "Finished 
parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_d238c8e7-40ad-4834-8af2-0d942d49852a/nova-cell0-conductor-conductor/0.log" Jan 27 19:43:33 crc kubenswrapper[4853]: I0127 19:43:33.958394 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_a649598f-69be-4de2-9a79-b5581f1fc8f9/nova-api-api/0.log" Jan 27 19:43:34 crc kubenswrapper[4853]: I0127 19:43:34.153133 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_eb83c723-2f1b-419a-bd58-51e56534cb23/nova-cell1-novncproxy-novncproxy/0.log" Jan 27 19:43:34 crc kubenswrapper[4853]: I0127 19:43:34.156262 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_0f9c3933-7f75-4c32-95e2-bac827abcb76/nova-cell1-conductor-conductor/0.log" Jan 27 19:43:34 crc kubenswrapper[4853]: I0127 19:43:34.346491 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-edpm-deployment-openstack-edpm-ipam-l8ptd_8b54da38-cda9-486f-bb52-e18ebfa81cc8/nova-edpm-deployment-openstack-edpm-ipam/0.log" Jan 27 19:43:34 crc kubenswrapper[4853]: I0127 19:43:34.484024 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_1fb249c2-c72b-4f50-bee6-8d461fc5b613/nova-metadata-log/0.log" Jan 27 19:43:34 crc kubenswrapper[4853]: I0127 19:43:34.857764 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_28787444-e1bd-43c7-a22c-f3ce3678986d/nova-scheduler-scheduler/0.log" Jan 27 19:43:34 crc kubenswrapper[4853]: I0127 19:43:34.871925 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_dbf533bd-2499-4724-b558-cf94c7017f3d/mysql-bootstrap/0.log" Jan 27 19:43:35 crc kubenswrapper[4853]: I0127 19:43:35.070899 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_dbf533bd-2499-4724-b558-cf94c7017f3d/mysql-bootstrap/0.log" Jan 27 19:43:35 crc kubenswrapper[4853]: I0127 19:43:35.124859 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_dbf533bd-2499-4724-b558-cf94c7017f3d/galera/0.log" Jan 27 19:43:35 crc kubenswrapper[4853]: I0127 19:43:35.284386 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_ccbab76c-f034-4f3b-9dfe-fcaf98d45d87/mysql-bootstrap/0.log" Jan 27 19:43:35 crc kubenswrapper[4853]: I0127 19:43:35.536647 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_ccbab76c-f034-4f3b-9dfe-fcaf98d45d87/mysql-bootstrap/0.log" Jan 27 19:43:35 crc kubenswrapper[4853]: I0127 19:43:35.572358 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_ccbab76c-f034-4f3b-9dfe-fcaf98d45d87/galera/0.log" Jan 27 19:43:35 crc kubenswrapper[4853]: I0127 19:43:35.758854 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_57e7a062-e8a4-457a-909c-7f7922327a1e/openstackclient/0.log" Jan 27 19:43:35 crc kubenswrapper[4853]: I0127 19:43:35.781311 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_1fb249c2-c72b-4f50-bee6-8d461fc5b613/nova-metadata-metadata/0.log" Jan 27 19:43:35 crc kubenswrapper[4853]: I0127 19:43:35.829597 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-99cll_98d689c7-0b2e-46b3-95f7-5c43aafac340/openstack-network-exporter/0.log" Jan 27 19:43:36 crc kubenswrapper[4853]: I0127 
19:43:36.066991 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-qgd5v_2b33e2a4-b173-4d0a-b3b4-9ee1c3b92704/ovsdb-server-init/0.log" Jan 27 19:43:36 crc kubenswrapper[4853]: I0127 19:43:36.306595 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-qgd5v_2b33e2a4-b173-4d0a-b3b4-9ee1c3b92704/ovs-vswitchd/0.log" Jan 27 19:43:36 crc kubenswrapper[4853]: I0127 19:43:36.328582 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-qgd5v_2b33e2a4-b173-4d0a-b3b4-9ee1c3b92704/ovsdb-server-init/0.log" Jan 27 19:43:36 crc kubenswrapper[4853]: I0127 19:43:36.353745 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-qgd5v_2b33e2a4-b173-4d0a-b3b4-9ee1c3b92704/ovsdb-server/0.log" Jan 27 19:43:36 crc kubenswrapper[4853]: I0127 19:43:36.596940 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-xkd2q_4d52eb59-75a5-4074-8bfb-c9dab8b0c97f/ovn-controller/0.log" Jan 27 19:43:36 crc kubenswrapper[4853]: I0127 19:43:36.652158 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-edpm-deployment-openstack-edpm-ipam-49xvk_a8e00930-5920-4f3f-9f05-62da3fdcdd88/ovn-edpm-deployment-openstack-edpm-ipam/0.log" Jan 27 19:43:36 crc kubenswrapper[4853]: I0127 19:43:36.801891 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_24f3c135-8664-4bbd-87bf-dd93c3595195/ovn-northd/0.log" Jan 27 19:43:36 crc kubenswrapper[4853]: I0127 19:43:36.831962 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_24f3c135-8664-4bbd-87bf-dd93c3595195/openstack-network-exporter/0.log" Jan 27 19:43:37 crc kubenswrapper[4853]: I0127 19:43:37.212800 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_c1d29cf4-2fdf-46ef-8470-e42a8226dd7c/openstack-network-exporter/0.log" Jan 27 19:43:37 crc kubenswrapper[4853]: I0127 19:43:37.261600 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_c1d29cf4-2fdf-46ef-8470-e42a8226dd7c/ovsdbserver-nb/0.log" Jan 27 19:43:37 crc kubenswrapper[4853]: I0127 19:43:37.397439 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_368a8f46-825c-43ad-803b-c7fdf6ca048c/openstack-network-exporter/0.log" Jan 27 19:43:37 crc kubenswrapper[4853]: I0127 19:43:37.457884 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_368a8f46-825c-43ad-803b-c7fdf6ca048c/ovsdbserver-sb/0.log" Jan 27 19:43:37 crc kubenswrapper[4853]: I0127 19:43:37.701290 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-f5866f968-d652z_81ce3654-e156-4fa9-9399-3824ff16a228/placement-log/0.log" Jan 27 19:43:37 crc kubenswrapper[4853]: I0127 19:43:37.717199 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-f5866f968-d652z_81ce3654-e156-4fa9-9399-3824ff16a228/placement-api/0.log" Jan 27 19:43:37 crc kubenswrapper[4853]: I0127 19:43:37.816139 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_b6e38e4d-fbc2-4702-9767-e0376655776a/setup-container/0.log" Jan 27 19:43:38 crc kubenswrapper[4853]: I0127 19:43:38.001505 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_b6e38e4d-fbc2-4702-9767-e0376655776a/setup-container/0.log" Jan 27 19:43:38 crc kubenswrapper[4853]: 
I0127 19:43:38.034896 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_b6e38e4d-fbc2-4702-9767-e0376655776a/rabbitmq/0.log" Jan 27 19:43:38 crc kubenswrapper[4853]: I0127 19:43:38.084983 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_e1ba655b-12d8-4f9d-882f-1d7faeb1f65f/setup-container/0.log" Jan 27 19:43:38 crc kubenswrapper[4853]: I0127 19:43:38.320707 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_e1ba655b-12d8-4f9d-882f-1d7faeb1f65f/setup-container/0.log" Jan 27 19:43:38 crc kubenswrapper[4853]: I0127 19:43:38.353497 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_e1ba655b-12d8-4f9d-882f-1d7faeb1f65f/rabbitmq/0.log" Jan 27 19:43:38 crc kubenswrapper[4853]: I0127 19:43:38.379674 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_reboot-os-edpm-deployment-openstack-edpm-ipam-2lpzk_57fa2fc5-a6d4-444b-8e22-e4e9064ad6d7/reboot-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 27 19:43:38 crc kubenswrapper[4853]: I0127 19:43:38.669650 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_redhat-edpm-deployment-openstack-edpm-ipam-9hbp9_c936a14f-519a-4f53-a09b-f7cb85bcdd6b/redhat-edpm-deployment-openstack-edpm-ipam/0.log" Jan 27 19:43:38 crc kubenswrapper[4853]: I0127 19:43:38.723387 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_repo-setup-edpm-deployment-openstack-edpm-ipam-2xtfg_327b1d19-709e-4efa-b5b3-11513e5dbdac/repo-setup-edpm-deployment-openstack-edpm-ipam/0.log" Jan 27 19:43:38 crc kubenswrapper[4853]: I0127 19:43:38.972092 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ssh-known-hosts-edpm-deployment-p45vd_eb152926-dd69-4634-9220-0074823b049b/ssh-known-hosts-edpm-deployment/0.log" Jan 27 19:43:38 crc kubenswrapper[4853]: I0127 19:43:38.995716 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_run-os-edpm-deployment-openstack-edpm-ipam-qq4jx_42b00b77-5a5c-4880-a0f0-2556bab179fd/run-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 27 19:43:39 crc kubenswrapper[4853]: I0127 19:43:39.296768 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-6dff6d999f-xr8nv_c029593d-ff63-4033-8bc5-39cf7e0457bd/proxy-server/0.log" Jan 27 19:43:39 crc kubenswrapper[4853]: I0127 19:43:39.379540 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-6dff6d999f-xr8nv_c029593d-ff63-4033-8bc5-39cf7e0457bd/proxy-httpd/0.log" Jan 27 19:43:39 crc kubenswrapper[4853]: I0127 19:43:39.380887 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-ring-rebalance-pfpph_119564cc-719b-4691-91d5-672513ed9acf/swift-ring-rebalance/0.log" Jan 27 19:43:39 crc kubenswrapper[4853]: I0127 19:43:39.527523 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_b1859766-1c8c-471c-bae5-4ae46086e8a5/account-auditor/0.log" Jan 27 19:43:39 crc kubenswrapper[4853]: I0127 19:43:39.571913 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_b1859766-1c8c-471c-bae5-4ae46086e8a5/account-reaper/0.log" Jan 27 19:43:39 crc kubenswrapper[4853]: I0127 19:43:39.675442 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_b1859766-1c8c-471c-bae5-4ae46086e8a5/account-replicator/0.log" Jan 27 19:43:39 crc kubenswrapper[4853]: I0127 19:43:39.794075 4853 
log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_b1859766-1c8c-471c-bae5-4ae46086e8a5/container-replicator/0.log" Jan 27 19:43:39 crc kubenswrapper[4853]: I0127 19:43:39.801149 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_b1859766-1c8c-471c-bae5-4ae46086e8a5/container-auditor/0.log" Jan 27 19:43:39 crc kubenswrapper[4853]: I0127 19:43:39.814179 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_b1859766-1c8c-471c-bae5-4ae46086e8a5/account-server/0.log" Jan 27 19:43:39 crc kubenswrapper[4853]: I0127 19:43:39.929184 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_b1859766-1c8c-471c-bae5-4ae46086e8a5/container-server/0.log" Jan 27 19:43:40 crc kubenswrapper[4853]: I0127 19:43:40.025908 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_b1859766-1c8c-471c-bae5-4ae46086e8a5/container-updater/0.log" Jan 27 19:43:40 crc kubenswrapper[4853]: I0127 19:43:40.069601 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_b1859766-1c8c-471c-bae5-4ae46086e8a5/object-expirer/0.log" Jan 27 19:43:40 crc kubenswrapper[4853]: I0127 19:43:40.128278 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_b1859766-1c8c-471c-bae5-4ae46086e8a5/object-auditor/0.log" Jan 27 19:43:40 crc kubenswrapper[4853]: I0127 19:43:40.208726 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_b1859766-1c8c-471c-bae5-4ae46086e8a5/object-replicator/0.log" Jan 27 19:43:40 crc kubenswrapper[4853]: I0127 19:43:40.329574 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_b1859766-1c8c-471c-bae5-4ae46086e8a5/object-updater/0.log" Jan 27 19:43:40 crc kubenswrapper[4853]: I0127 19:43:40.339313 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_b1859766-1c8c-471c-bae5-4ae46086e8a5/object-server/0.log" Jan 27 19:43:40 crc kubenswrapper[4853]: I0127 19:43:40.343991 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_b1859766-1c8c-471c-bae5-4ae46086e8a5/rsync/0.log" Jan 27 19:43:40 crc kubenswrapper[4853]: I0127 19:43:40.498882 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_b1859766-1c8c-471c-bae5-4ae46086e8a5/swift-recon-cron/0.log" Jan 27 19:43:40 crc kubenswrapper[4853]: I0127 19:43:40.722605 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_telemetry-edpm-deployment-openstack-edpm-ipam-2v797_7f436e8d-9923-47a6-ab8c-ee0c8e3bde82/telemetry-edpm-deployment-openstack-edpm-ipam/0.log" Jan 27 19:43:40 crc kubenswrapper[4853]: I0127 19:43:40.797955 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_tempest-tests-tempest_6275c0bd-3255-4c3d-88bc-30f5d1ee27ca/tempest-tests-tempest-tests-runner/0.log" Jan 27 19:43:41 crc kubenswrapper[4853]: I0127 19:43:41.113633 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_test-operator-logs-pod-tempest-tempest-tests-tempest_34624963-57cc-4683-b919-e1b2e1183b0a/test-operator-logs-container/0.log" Jan 27 19:43:41 crc kubenswrapper[4853]: I0127 19:43:41.274930 4853 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_validate-network-edpm-deployment-openstack-edpm-ipam-vwklk_51db77a7-69eb-4145-b87c-abfbb514f2c7/validate-network-edpm-deployment-openstack-edpm-ipam/0.log" Jan 27 19:43:46 crc kubenswrapper[4853]: I0127 19:43:46.112849 4853 scope.go:117] "RemoveContainer" containerID="326a72bf85c07d6d0eec5a967b1feeaa73cf47af49f41769bb0b175d310c1432" Jan 27 19:43:46 crc kubenswrapper[4853]: E0127 19:43:46.113907 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6gqj2_openshift-machine-config-operator(b8a89b1e-bef8-4cb7-930c-480d3125778c)\"" pod="openshift-machine-config-operator/machine-config-daemon-6gqj2" podUID="b8a89b1e-bef8-4cb7-930c-480d3125778c" Jan 27 19:43:50 crc kubenswrapper[4853]: I0127 19:43:50.590400 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_94965b7d-5efe-4ef3-aadf-41a550c47752/memcached/0.log" Jan 27 19:43:59 crc kubenswrapper[4853]: I0127 19:43:59.113321 4853 scope.go:117] "RemoveContainer" containerID="326a72bf85c07d6d0eec5a967b1feeaa73cf47af49f41769bb0b175d310c1432" Jan 27 19:43:59 crc kubenswrapper[4853]: E0127 19:43:59.114511 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6gqj2_openshift-machine-config-operator(b8a89b1e-bef8-4cb7-930c-480d3125778c)\"" pod="openshift-machine-config-operator/machine-config-daemon-6gqj2" podUID="b8a89b1e-bef8-4cb7-930c-480d3125778c" Jan 27 19:44:08 crc kubenswrapper[4853]: I0127 19:44:08.032964 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_73cd039ee0ff571e389a67eb6c1ec9ce429f21c65ba53bd39c09be46278bngd_3411d3c3-ab77-45ea-af40-a2708164348e/util/0.log" Jan 27 19:44:08 crc kubenswrapper[4853]: I0127 19:44:08.273231 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_73cd039ee0ff571e389a67eb6c1ec9ce429f21c65ba53bd39c09be46278bngd_3411d3c3-ab77-45ea-af40-a2708164348e/pull/0.log" Jan 27 19:44:08 crc kubenswrapper[4853]: I0127 19:44:08.287894 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_73cd039ee0ff571e389a67eb6c1ec9ce429f21c65ba53bd39c09be46278bngd_3411d3c3-ab77-45ea-af40-a2708164348e/util/0.log" Jan 27 19:44:08 crc kubenswrapper[4853]: I0127 19:44:08.317101 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_73cd039ee0ff571e389a67eb6c1ec9ce429f21c65ba53bd39c09be46278bngd_3411d3c3-ab77-45ea-af40-a2708164348e/pull/0.log" Jan 27 19:44:08 crc kubenswrapper[4853]: I0127 19:44:08.450527 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_73cd039ee0ff571e389a67eb6c1ec9ce429f21c65ba53bd39c09be46278bngd_3411d3c3-ab77-45ea-af40-a2708164348e/pull/0.log" Jan 27 19:44:08 crc kubenswrapper[4853]: I0127 19:44:08.459308 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_73cd039ee0ff571e389a67eb6c1ec9ce429f21c65ba53bd39c09be46278bngd_3411d3c3-ab77-45ea-af40-a2708164348e/util/0.log" Jan 27 19:44:08 crc kubenswrapper[4853]: I0127 19:44:08.517454 4853 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_73cd039ee0ff571e389a67eb6c1ec9ce429f21c65ba53bd39c09be46278bngd_3411d3c3-ab77-45ea-af40-a2708164348e/extract/0.log" Jan 27 19:44:08 crc kubenswrapper[4853]: I0127 19:44:08.769135 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-65ff799cfd-jh7mx_f6e35929-3b14-49b4-9e0e-bbebc88c2ce2/manager/0.log" Jan 27 19:44:08 crc kubenswrapper[4853]: I0127 19:44:08.801937 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-655bf9cfbb-sj29r_5db9a86f-dff3-4c54-a478-79ce384d78f7/manager/0.log" Jan 27 19:44:08 crc kubenswrapper[4853]: I0127 19:44:08.941088 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-77554cdc5c-mn6nj_0ee7eba6-8efe-4de9-bb26-69c5b47d0312/manager/0.log" Jan 27 19:44:09 crc kubenswrapper[4853]: I0127 19:44:09.128969 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-67dd55ff59-n89p8_7b18ea7d-8f47-450b-aa4b-0b75fc0c0581/manager/0.log" Jan 27 19:44:09 crc kubenswrapper[4853]: I0127 19:44:09.130377 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-575ffb885b-bx595_89a3cd80-89b0-41f9-a469-ef001d9be747/manager/0.log" Jan 27 19:44:09 crc kubenswrapper[4853]: I0127 19:44:09.310434 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-77d5c5b54f-4pmv9_01b08d09-41bb-4a7a-9af2-7fe597572169/manager/0.log" Jan 27 19:44:09 crc kubenswrapper[4853]: I0127 19:44:09.620104 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-768b776ffb-dj2lw_e279285c-c536-46b4-b133-7c23811a725a/manager/0.log" Jan 27 19:44:09 crc kubenswrapper[4853]: I0127 19:44:09.622023 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-7d75bc88d5-qrs25_613d8e60-1314-45a2-8bcc-250151f708d1/manager/0.log" Jan 27 19:44:09 crc kubenswrapper[4853]: I0127 19:44:09.856744 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-849fcfbb6b-dbzp2_0a29796d-a7c3-480a-8379-4d4e7731d5b3/manager/0.log" Jan 27 19:44:09 crc kubenswrapper[4853]: I0127 19:44:09.886977 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-55f684fd56-flf9f_bee4ca26-dd1a-4747-8bf3-f152d8236270/manager/0.log" Jan 27 19:44:10 crc kubenswrapper[4853]: I0127 19:44:10.077550 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-6b9fb5fdcb-qkdmn_a5adf651-f6c5-4b00-a32f-bbd1ac9d5b43/manager/0.log" Jan 27 19:44:10 crc kubenswrapper[4853]: I0127 19:44:10.159891 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-7ffd8d76d4-gvm5r_ace486ae-a8c2-4aca-8719-528ecbed879f/manager/0.log" Jan 27 19:44:10 crc kubenswrapper[4853]: I0127 19:44:10.387217 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-ddcbfd695-mrb2s_d9757c33-a50c-4fa4-ab8d-270c2bed1459/manager/0.log" Jan 27 19:44:10 crc kubenswrapper[4853]: I0127 19:44:10.477858 4853 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-7875d7675-cq4p4_85e5832e-902f-4f65-b659-60abf5d14654/manager/0.log" Jan 27 19:44:10 crc kubenswrapper[4853]: I0127 19:44:10.597395 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-6b68b8b854gdmwx_9bd5a06a-f084-42ba-8f88-9be1cee0554a/manager/0.log" Jan 27 19:44:10 crc kubenswrapper[4853]: I0127 19:44:10.835736 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-init-67d88b5675-p6llj_fd2257c2-1b25-4d5f-8953-19f01df9c309/operator/0.log" Jan 27 19:44:11 crc kubenswrapper[4853]: I0127 19:44:11.020475 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-nc7fr_7f5aa97a-2a3f-4a6d-8e75-521db38570d9/registry-server/0.log" Jan 27 19:44:11 crc kubenswrapper[4853]: I0127 19:44:11.113871 4853 scope.go:117] "RemoveContainer" containerID="326a72bf85c07d6d0eec5a967b1feeaa73cf47af49f41769bb0b175d310c1432" Jan 27 19:44:11 crc kubenswrapper[4853]: E0127 19:44:11.114189 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6gqj2_openshift-machine-config-operator(b8a89b1e-bef8-4cb7-930c-480d3125778c)\"" pod="openshift-machine-config-operator/machine-config-daemon-6gqj2" podUID="b8a89b1e-bef8-4cb7-930c-480d3125778c" Jan 27 19:44:11 crc kubenswrapper[4853]: I0127 19:44:11.286846 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-6f75f45d54-qgstv_f593e788-ce4a-47ad-a08c-96e1ec0cc92c/manager/0.log" Jan 27 19:44:11 crc kubenswrapper[4853]: I0127 19:44:11.493660 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-79d5ccc684-gl44q_7d1a71be-07cb-43e0-8584-75e5c48f4175/manager/0.log" Jan 27 19:44:11 crc kubenswrapper[4853]: I0127 19:44:11.648545 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-2ls6m_98c9ef8d-ccf0-4c4e-83f3-53451532f0ad/operator/0.log" Jan 27 19:44:11 crc kubenswrapper[4853]: I0127 19:44:11.944597 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-547cbdb99f-bn7wr_5b33f408-e905-4298-adfc-b113f89ecd36/manager/0.log" Jan 27 19:44:12 crc kubenswrapper[4853]: I0127 19:44:12.112242 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-bf776578d-kb6wk_fede2ab9-a2b5-45f5-bac7-daa8d576d23f/manager/0.log" Jan 27 19:44:12 crc kubenswrapper[4853]: I0127 19:44:12.158646 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-799bc87c89-mzmbv_e32b4f39-5c23-4e91-92bc-ffd6b7694a5a/manager/0.log" Jan 27 19:44:12 crc kubenswrapper[4853]: I0127 19:44:12.225202 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-69797bbcbd-cpsgc_8621d6dd-2bac-4631-bad9-ed1f5ce6c9b5/manager/0.log" Jan 27 19:44:12 crc kubenswrapper[4853]: I0127 19:44:12.384690 4853 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-767b8bc766-d4dcp_aacb2032-25f3-4faf-a0ca-f980411b4ae2/manager/0.log" Jan 27 19:44:25 crc kubenswrapper[4853]: I0127 19:44:25.112686 4853 scope.go:117] "RemoveContainer" containerID="326a72bf85c07d6d0eec5a967b1feeaa73cf47af49f41769bb0b175d310c1432" Jan 27 19:44:25 crc kubenswrapper[4853]: E0127 19:44:25.113423 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6gqj2_openshift-machine-config-operator(b8a89b1e-bef8-4cb7-930c-480d3125778c)\"" pod="openshift-machine-config-operator/machine-config-daemon-6gqj2" podUID="b8a89b1e-bef8-4cb7-930c-480d3125778c" Jan 27 19:44:32 crc kubenswrapper[4853]: I0127 19:44:32.152601 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-t4wdl_5244d6c6-721d-44cf-8175-48408b3780b0/control-plane-machine-set-operator/0.log" Jan 27 19:44:32 crc kubenswrapper[4853]: I0127 19:44:32.371000 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-kmkjx_bb1a45ce-530f-4492-a7e2-9432e194001d/machine-api-operator/0.log" Jan 27 19:44:32 crc kubenswrapper[4853]: I0127 19:44:32.386495 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-kmkjx_bb1a45ce-530f-4492-a7e2-9432e194001d/kube-rbac-proxy/0.log" Jan 27 19:44:39 crc kubenswrapper[4853]: I0127 19:44:39.113076 4853 scope.go:117] "RemoveContainer" containerID="326a72bf85c07d6d0eec5a967b1feeaa73cf47af49f41769bb0b175d310c1432" Jan 27 19:44:39 crc kubenswrapper[4853]: E0127 19:44:39.113807 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6gqj2_openshift-machine-config-operator(b8a89b1e-bef8-4cb7-930c-480d3125778c)\"" pod="openshift-machine-config-operator/machine-config-daemon-6gqj2" podUID="b8a89b1e-bef8-4cb7-930c-480d3125778c" Jan 27 19:44:45 crc kubenswrapper[4853]: I0127 19:44:45.092309 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-858654f9db-ltznx_ecbb7636-0b7d-4212-99ce-b28e191b5dde/cert-manager-controller/0.log" Jan 27 19:44:45 crc kubenswrapper[4853]: I0127 19:44:45.332596 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-cf98fcc89-85ml7_26226a5a-7c8e-4247-8441-43c981f5d894/cert-manager-cainjector/0.log" Jan 27 19:44:45 crc kubenswrapper[4853]: I0127 19:44:45.342296 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-687f57d79b-b7s9n_36ad0b1f-b18e-48b1-84f2-bfe1343b1257/cert-manager-webhook/0.log" Jan 27 19:44:52 crc kubenswrapper[4853]: I0127 19:44:52.116906 4853 scope.go:117] "RemoveContainer" containerID="326a72bf85c07d6d0eec5a967b1feeaa73cf47af49f41769bb0b175d310c1432" Jan 27 19:44:52 crc kubenswrapper[4853]: E0127 19:44:52.119485 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6gqj2_openshift-machine-config-operator(b8a89b1e-bef8-4cb7-930c-480d3125778c)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-6gqj2" podUID="b8a89b1e-bef8-4cb7-930c-480d3125778c" Jan 27 19:44:58 crc kubenswrapper[4853]: I0127 19:44:58.848241 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-7754f76f8b-bnd9d_fdac6187-5b6f-4375-a09a-42efb7d0eaf6/nmstate-console-plugin/0.log" Jan 27 19:44:59 crc kubenswrapper[4853]: I0127 19:44:59.052635 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-t77h4_9f0d7951-c2e9-4857-a367-2426f842e3af/nmstate-handler/0.log" Jan 27 19:44:59 crc kubenswrapper[4853]: I0127 19:44:59.194523 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-dl7m9_b5d820ba-3b41-444c-b92b-1754909e56a0/kube-rbac-proxy/0.log" Jan 27 19:44:59 crc kubenswrapper[4853]: I0127 19:44:59.271513 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-dl7m9_b5d820ba-3b41-444c-b92b-1754909e56a0/nmstate-metrics/0.log" Jan 27 19:44:59 crc kubenswrapper[4853]: I0127 19:44:59.370914 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-646758c888-mj8nh_a903dd65-5d9d-48da-b24d-d9ae9ad3a734/nmstate-operator/0.log" Jan 27 19:44:59 crc kubenswrapper[4853]: I0127 19:44:59.480007 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-8474b5b9d8-znknt_9fd55339-43c5-45d5-9789-2f69da655baf/nmstate-webhook/0.log" Jan 27 19:45:00 crc kubenswrapper[4853]: I0127 19:45:00.169989 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492385-bzwqn"] Jan 27 19:45:00 crc kubenswrapper[4853]: E0127 19:45:00.170527 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b7d60993-8fac-4fe9-ad2d-e45f0596bdb8" containerName="container-00" Jan 27 19:45:00 crc kubenswrapper[4853]: I0127 19:45:00.170545 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="b7d60993-8fac-4fe9-ad2d-e45f0596bdb8" containerName="container-00" Jan 27 19:45:00 crc kubenswrapper[4853]: I0127 19:45:00.170770 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="b7d60993-8fac-4fe9-ad2d-e45f0596bdb8" containerName="container-00" Jan 27 19:45:00 crc kubenswrapper[4853]: I0127 19:45:00.171642 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492385-bzwqn" Jan 27 19:45:00 crc kubenswrapper[4853]: I0127 19:45:00.174039 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 27 19:45:00 crc kubenswrapper[4853]: I0127 19:45:00.175429 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 27 19:45:00 crc kubenswrapper[4853]: I0127 19:45:00.280216 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492385-bzwqn"] Jan 27 19:45:00 crc kubenswrapper[4853]: I0127 19:45:00.346449 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d54397ec-0e9e-4c49-8024-e721a565dc11-secret-volume\") pod \"collect-profiles-29492385-bzwqn\" (UID: \"d54397ec-0e9e-4c49-8024-e721a565dc11\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492385-bzwqn" Jan 27 19:45:00 crc kubenswrapper[4853]: I0127 19:45:00.346514 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d54397ec-0e9e-4c49-8024-e721a565dc11-config-volume\") pod \"collect-profiles-29492385-bzwqn\" (UID: \"d54397ec-0e9e-4c49-8024-e721a565dc11\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492385-bzwqn" Jan 27 19:45:00 crc kubenswrapper[4853]: I0127 19:45:00.346570 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gs85l\" (UniqueName: \"kubernetes.io/projected/d54397ec-0e9e-4c49-8024-e721a565dc11-kube-api-access-gs85l\") pod \"collect-profiles-29492385-bzwqn\" (UID: \"d54397ec-0e9e-4c49-8024-e721a565dc11\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492385-bzwqn" Jan 27 19:45:00 crc kubenswrapper[4853]: I0127 19:45:00.450328 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d54397ec-0e9e-4c49-8024-e721a565dc11-secret-volume\") pod \"collect-profiles-29492385-bzwqn\" (UID: \"d54397ec-0e9e-4c49-8024-e721a565dc11\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492385-bzwqn" Jan 27 19:45:00 crc kubenswrapper[4853]: I0127 19:45:00.450795 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d54397ec-0e9e-4c49-8024-e721a565dc11-config-volume\") pod \"collect-profiles-29492385-bzwqn\" (UID: \"d54397ec-0e9e-4c49-8024-e721a565dc11\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492385-bzwqn" Jan 27 19:45:00 crc kubenswrapper[4853]: I0127 19:45:00.450905 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gs85l\" (UniqueName: \"kubernetes.io/projected/d54397ec-0e9e-4c49-8024-e721a565dc11-kube-api-access-gs85l\") pod \"collect-profiles-29492385-bzwqn\" (UID: \"d54397ec-0e9e-4c49-8024-e721a565dc11\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492385-bzwqn" Jan 27 19:45:00 crc kubenswrapper[4853]: I0127 19:45:00.452298 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d54397ec-0e9e-4c49-8024-e721a565dc11-config-volume\") pod 
\"collect-profiles-29492385-bzwqn\" (UID: \"d54397ec-0e9e-4c49-8024-e721a565dc11\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492385-bzwqn" Jan 27 19:45:00 crc kubenswrapper[4853]: I0127 19:45:00.458210 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d54397ec-0e9e-4c49-8024-e721a565dc11-secret-volume\") pod \"collect-profiles-29492385-bzwqn\" (UID: \"d54397ec-0e9e-4c49-8024-e721a565dc11\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492385-bzwqn" Jan 27 19:45:00 crc kubenswrapper[4853]: I0127 19:45:00.472364 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gs85l\" (UniqueName: \"kubernetes.io/projected/d54397ec-0e9e-4c49-8024-e721a565dc11-kube-api-access-gs85l\") pod \"collect-profiles-29492385-bzwqn\" (UID: \"d54397ec-0e9e-4c49-8024-e721a565dc11\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492385-bzwqn" Jan 27 19:45:00 crc kubenswrapper[4853]: I0127 19:45:00.531863 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492385-bzwqn" Jan 27 19:45:01 crc kubenswrapper[4853]: I0127 19:45:01.137915 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492385-bzwqn"] Jan 27 19:45:01 crc kubenswrapper[4853]: I0127 19:45:01.838713 4853 generic.go:334] "Generic (PLEG): container finished" podID="d54397ec-0e9e-4c49-8024-e721a565dc11" containerID="cffd79bbe94c599a1edb668ac2610b7f71fa76fad50a65d029eb164a4df3770d" exitCode=0 Jan 27 19:45:01 crc kubenswrapper[4853]: I0127 19:45:01.838800 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29492385-bzwqn" event={"ID":"d54397ec-0e9e-4c49-8024-e721a565dc11","Type":"ContainerDied","Data":"cffd79bbe94c599a1edb668ac2610b7f71fa76fad50a65d029eb164a4df3770d"} Jan 27 19:45:01 crc kubenswrapper[4853]: I0127 19:45:01.838992 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29492385-bzwqn" event={"ID":"d54397ec-0e9e-4c49-8024-e721a565dc11","Type":"ContainerStarted","Data":"c7466547e989df810cee46f7a71e4b6d548c8c21763435bdc293c2bd58b0fd68"} Jan 27 19:45:03 crc kubenswrapper[4853]: I0127 19:45:03.243580 4853 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492385-bzwqn" Jan 27 19:45:03 crc kubenswrapper[4853]: I0127 19:45:03.424460 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d54397ec-0e9e-4c49-8024-e721a565dc11-config-volume\") pod \"d54397ec-0e9e-4c49-8024-e721a565dc11\" (UID: \"d54397ec-0e9e-4c49-8024-e721a565dc11\") " Jan 27 19:45:03 crc kubenswrapper[4853]: I0127 19:45:03.424777 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d54397ec-0e9e-4c49-8024-e721a565dc11-secret-volume\") pod \"d54397ec-0e9e-4c49-8024-e721a565dc11\" (UID: \"d54397ec-0e9e-4c49-8024-e721a565dc11\") " Jan 27 19:45:03 crc kubenswrapper[4853]: I0127 19:45:03.424855 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gs85l\" (UniqueName: \"kubernetes.io/projected/d54397ec-0e9e-4c49-8024-e721a565dc11-kube-api-access-gs85l\") pod \"d54397ec-0e9e-4c49-8024-e721a565dc11\" (UID: \"d54397ec-0e9e-4c49-8024-e721a565dc11\") " Jan 27 19:45:03 crc kubenswrapper[4853]: I0127 19:45:03.425276 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d54397ec-0e9e-4c49-8024-e721a565dc11-config-volume" (OuterVolumeSpecName: "config-volume") pod "d54397ec-0e9e-4c49-8024-e721a565dc11" (UID: "d54397ec-0e9e-4c49-8024-e721a565dc11"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 19:45:03 crc kubenswrapper[4853]: I0127 19:45:03.443985 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d54397ec-0e9e-4c49-8024-e721a565dc11-kube-api-access-gs85l" (OuterVolumeSpecName: "kube-api-access-gs85l") pod "d54397ec-0e9e-4c49-8024-e721a565dc11" (UID: "d54397ec-0e9e-4c49-8024-e721a565dc11"). InnerVolumeSpecName "kube-api-access-gs85l". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 19:45:03 crc kubenswrapper[4853]: I0127 19:45:03.444156 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d54397ec-0e9e-4c49-8024-e721a565dc11-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "d54397ec-0e9e-4c49-8024-e721a565dc11" (UID: "d54397ec-0e9e-4c49-8024-e721a565dc11"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 19:45:03 crc kubenswrapper[4853]: I0127 19:45:03.527768 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gs85l\" (UniqueName: \"kubernetes.io/projected/d54397ec-0e9e-4c49-8024-e721a565dc11-kube-api-access-gs85l\") on node \"crc\" DevicePath \"\"" Jan 27 19:45:03 crc kubenswrapper[4853]: I0127 19:45:03.527827 4853 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d54397ec-0e9e-4c49-8024-e721a565dc11-config-volume\") on node \"crc\" DevicePath \"\"" Jan 27 19:45:03 crc kubenswrapper[4853]: I0127 19:45:03.527842 4853 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d54397ec-0e9e-4c49-8024-e721a565dc11-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 27 19:45:03 crc kubenswrapper[4853]: I0127 19:45:03.865106 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29492385-bzwqn" event={"ID":"d54397ec-0e9e-4c49-8024-e721a565dc11","Type":"ContainerDied","Data":"c7466547e989df810cee46f7a71e4b6d548c8c21763435bdc293c2bd58b0fd68"} Jan 27 19:45:03 crc kubenswrapper[4853]: I0127 19:45:03.865167 4853 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c7466547e989df810cee46f7a71e4b6d548c8c21763435bdc293c2bd58b0fd68" Jan 27 19:45:03 crc kubenswrapper[4853]: I0127 19:45:03.865236 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492385-bzwqn" Jan 27 19:45:04 crc kubenswrapper[4853]: I0127 19:45:04.328751 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492340-bdc99"] Jan 27 19:45:04 crc kubenswrapper[4853]: I0127 19:45:04.337725 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492340-bdc99"] Jan 27 19:45:06 crc kubenswrapper[4853]: I0127 19:45:06.113503 4853 scope.go:117] "RemoveContainer" containerID="326a72bf85c07d6d0eec5a967b1feeaa73cf47af49f41769bb0b175d310c1432" Jan 27 19:45:06 crc kubenswrapper[4853]: E0127 19:45:06.114034 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6gqj2_openshift-machine-config-operator(b8a89b1e-bef8-4cb7-930c-480d3125778c)\"" pod="openshift-machine-config-operator/machine-config-daemon-6gqj2" podUID="b8a89b1e-bef8-4cb7-930c-480d3125778c" Jan 27 19:45:06 crc kubenswrapper[4853]: I0127 19:45:06.124057 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b75d2295-47d6-44cb-b492-f2f84fcb7964" path="/var/lib/kubelet/pods/b75d2295-47d6-44cb-b492-f2f84fcb7964/volumes" Jan 27 19:45:18 crc kubenswrapper[4853]: I0127 19:45:18.119360 4853 scope.go:117] "RemoveContainer" containerID="326a72bf85c07d6d0eec5a967b1feeaa73cf47af49f41769bb0b175d310c1432" Jan 27 19:45:18 crc kubenswrapper[4853]: E0127 19:45:18.120317 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6gqj2_openshift-machine-config-operator(b8a89b1e-bef8-4cb7-930c-480d3125778c)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-6gqj2" podUID="b8a89b1e-bef8-4cb7-930c-480d3125778c" Jan 27 19:45:27 crc kubenswrapper[4853]: I0127 19:45:27.899513 4853 scope.go:117] "RemoveContainer" containerID="131a1e60eadf9f011dfb1dde545bd245105f1afecfb6695769a430c4df77d3c2" Jan 27 19:45:30 crc kubenswrapper[4853]: I0127 19:45:30.868337 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-bkdp4_471ac2ca-b99c-449c-b910-80b44e9a7941/kube-rbac-proxy/0.log" Jan 27 19:45:30 crc kubenswrapper[4853]: I0127 19:45:30.987617 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-bkdp4_471ac2ca-b99c-449c-b910-80b44e9a7941/controller/0.log" Jan 27 19:45:31 crc kubenswrapper[4853]: I0127 19:45:31.170982 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-4zj9c_a95a2a56-e8a9-418a-95ce-895b555038fa/cp-frr-files/0.log" Jan 27 19:45:31 crc kubenswrapper[4853]: I0127 19:45:31.305734 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-4zj9c_a95a2a56-e8a9-418a-95ce-895b555038fa/cp-frr-files/0.log" Jan 27 19:45:31 crc kubenswrapper[4853]: I0127 19:45:31.337837 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-4zj9c_a95a2a56-e8a9-418a-95ce-895b555038fa/cp-metrics/0.log" Jan 27 19:45:31 crc kubenswrapper[4853]: I0127 19:45:31.353769 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-4zj9c_a95a2a56-e8a9-418a-95ce-895b555038fa/cp-reloader/0.log" Jan 27 19:45:31 crc kubenswrapper[4853]: I0127 19:45:31.420676 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-4zj9c_a95a2a56-e8a9-418a-95ce-895b555038fa/cp-reloader/0.log" Jan 27 19:45:31 crc kubenswrapper[4853]: I0127 19:45:31.632198 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-4zj9c_a95a2a56-e8a9-418a-95ce-895b555038fa/cp-frr-files/0.log" Jan 27 19:45:31 crc kubenswrapper[4853]: I0127 19:45:31.652342 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-4zj9c_a95a2a56-e8a9-418a-95ce-895b555038fa/cp-reloader/0.log" Jan 27 19:45:31 crc kubenswrapper[4853]: I0127 19:45:31.697919 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-4zj9c_a95a2a56-e8a9-418a-95ce-895b555038fa/cp-metrics/0.log" Jan 27 19:45:31 crc kubenswrapper[4853]: I0127 19:45:31.700175 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-4zj9c_a95a2a56-e8a9-418a-95ce-895b555038fa/cp-metrics/0.log" Jan 27 19:45:31 crc kubenswrapper[4853]: I0127 19:45:31.887541 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-4zj9c_a95a2a56-e8a9-418a-95ce-895b555038fa/cp-metrics/0.log" Jan 27 19:45:31 crc kubenswrapper[4853]: I0127 19:45:31.902212 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-4zj9c_a95a2a56-e8a9-418a-95ce-895b555038fa/cp-frr-files/0.log" Jan 27 19:45:31 crc kubenswrapper[4853]: I0127 19:45:31.913815 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-4zj9c_a95a2a56-e8a9-418a-95ce-895b555038fa/controller/0.log" Jan 27 19:45:31 crc kubenswrapper[4853]: I0127 19:45:31.922637 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-4zj9c_a95a2a56-e8a9-418a-95ce-895b555038fa/cp-reloader/0.log" Jan 27 19:45:32 crc kubenswrapper[4853]: I0127 
19:45:32.113440 4853 scope.go:117] "RemoveContainer" containerID="326a72bf85c07d6d0eec5a967b1feeaa73cf47af49f41769bb0b175d310c1432" Jan 27 19:45:32 crc kubenswrapper[4853]: E0127 19:45:32.114754 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6gqj2_openshift-machine-config-operator(b8a89b1e-bef8-4cb7-930c-480d3125778c)\"" pod="openshift-machine-config-operator/machine-config-daemon-6gqj2" podUID="b8a89b1e-bef8-4cb7-930c-480d3125778c" Jan 27 19:45:32 crc kubenswrapper[4853]: I0127 19:45:32.114953 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-4zj9c_a95a2a56-e8a9-418a-95ce-895b555038fa/frr-metrics/0.log" Jan 27 19:45:32 crc kubenswrapper[4853]: I0127 19:45:32.158462 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-4zj9c_a95a2a56-e8a9-418a-95ce-895b555038fa/kube-rbac-proxy/0.log" Jan 27 19:45:32 crc kubenswrapper[4853]: I0127 19:45:32.237984 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-4zj9c_a95a2a56-e8a9-418a-95ce-895b555038fa/kube-rbac-proxy-frr/0.log" Jan 27 19:45:32 crc kubenswrapper[4853]: I0127 19:45:32.384409 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-4zj9c_a95a2a56-e8a9-418a-95ce-895b555038fa/reloader/0.log" Jan 27 19:45:32 crc kubenswrapper[4853]: I0127 19:45:32.474724 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7df86c4f6c-srh2s_5d610e65-a0f1-4304-a7f9-f8b49e86d372/frr-k8s-webhook-server/0.log" Jan 27 19:45:32 crc kubenswrapper[4853]: I0127 19:45:32.735492 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-57d46b5cf6-rcn4b_729dbe0f-d26d-4eeb-b813-e4be40033e44/manager/0.log" Jan 27 19:45:32 crc kubenswrapper[4853]: I0127 19:45:32.964880 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-559d6879b9-6w56b_8a3f66ba-be42-476c-b03b-6ba6c92acd0f/webhook-server/0.log" Jan 27 19:45:33 crc kubenswrapper[4853]: I0127 19:45:33.043246 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-l2pvs_67e6561c-3f3b-45dd-b166-ca67a1abd96b/kube-rbac-proxy/0.log" Jan 27 19:45:33 crc kubenswrapper[4853]: I0127 19:45:33.656893 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-l2pvs_67e6561c-3f3b-45dd-b166-ca67a1abd96b/speaker/0.log" Jan 27 19:45:33 crc kubenswrapper[4853]: I0127 19:45:33.721102 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-4zj9c_a95a2a56-e8a9-418a-95ce-895b555038fa/frr/0.log" Jan 27 19:45:45 crc kubenswrapper[4853]: I0127 19:45:45.112801 4853 scope.go:117] "RemoveContainer" containerID="326a72bf85c07d6d0eec5a967b1feeaa73cf47af49f41769bb0b175d310c1432" Jan 27 19:45:45 crc kubenswrapper[4853]: E0127 19:45:45.114553 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6gqj2_openshift-machine-config-operator(b8a89b1e-bef8-4cb7-930c-480d3125778c)\"" pod="openshift-machine-config-operator/machine-config-daemon-6gqj2" podUID="b8a89b1e-bef8-4cb7-930c-480d3125778c" Jan 27 19:45:46 crc 
kubenswrapper[4853]: I0127 19:45:46.682378 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcxd5g7_13e74d47-44ea-4d71-abca-c805139dc4a9/util/0.log" Jan 27 19:45:46 crc kubenswrapper[4853]: I0127 19:45:46.937862 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcxd5g7_13e74d47-44ea-4d71-abca-c805139dc4a9/pull/0.log" Jan 27 19:45:46 crc kubenswrapper[4853]: I0127 19:45:46.983891 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcxd5g7_13e74d47-44ea-4d71-abca-c805139dc4a9/util/0.log" Jan 27 19:45:47 crc kubenswrapper[4853]: I0127 19:45:47.004226 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcxd5g7_13e74d47-44ea-4d71-abca-c805139dc4a9/pull/0.log" Jan 27 19:45:47 crc kubenswrapper[4853]: I0127 19:45:47.118732 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcxd5g7_13e74d47-44ea-4d71-abca-c805139dc4a9/util/0.log" Jan 27 19:45:47 crc kubenswrapper[4853]: I0127 19:45:47.158080 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcxd5g7_13e74d47-44ea-4d71-abca-c805139dc4a9/pull/0.log" Jan 27 19:45:47 crc kubenswrapper[4853]: I0127 19:45:47.173894 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcxd5g7_13e74d47-44ea-4d71-abca-c805139dc4a9/extract/0.log" Jan 27 19:45:47 crc kubenswrapper[4853]: I0127 19:45:47.315153 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713r6vfl_495b4ff2-7320-4ab3-b6d6-79c5d575cfe4/util/0.log" Jan 27 19:45:47 crc kubenswrapper[4853]: I0127 19:45:47.485479 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713r6vfl_495b4ff2-7320-4ab3-b6d6-79c5d575cfe4/util/0.log" Jan 27 19:45:47 crc kubenswrapper[4853]: I0127 19:45:47.488861 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713r6vfl_495b4ff2-7320-4ab3-b6d6-79c5d575cfe4/pull/0.log" Jan 27 19:45:47 crc kubenswrapper[4853]: I0127 19:45:47.526746 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713r6vfl_495b4ff2-7320-4ab3-b6d6-79c5d575cfe4/pull/0.log" Jan 27 19:45:47 crc kubenswrapper[4853]: I0127 19:45:47.654854 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713r6vfl_495b4ff2-7320-4ab3-b6d6-79c5d575cfe4/util/0.log" Jan 27 19:45:47 crc kubenswrapper[4853]: I0127 19:45:47.706166 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713r6vfl_495b4ff2-7320-4ab3-b6d6-79c5d575cfe4/pull/0.log" Jan 27 19:45:47 crc kubenswrapper[4853]: I0127 19:45:47.723917 4853 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713r6vfl_495b4ff2-7320-4ab3-b6d6-79c5d575cfe4/extract/0.log" Jan 27 19:45:47 crc kubenswrapper[4853]: I0127 19:45:47.846325 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-l9rvb_4ccbf17f-6d23-4e6e-85e3-73c1275e767b/extract-utilities/0.log" Jan 27 19:45:48 crc kubenswrapper[4853]: I0127 19:45:48.049538 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-l9rvb_4ccbf17f-6d23-4e6e-85e3-73c1275e767b/extract-utilities/0.log" Jan 27 19:45:48 crc kubenswrapper[4853]: I0127 19:45:48.056833 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-l9rvb_4ccbf17f-6d23-4e6e-85e3-73c1275e767b/extract-content/0.log" Jan 27 19:45:48 crc kubenswrapper[4853]: I0127 19:45:48.061216 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-l9rvb_4ccbf17f-6d23-4e6e-85e3-73c1275e767b/extract-content/0.log" Jan 27 19:45:48 crc kubenswrapper[4853]: I0127 19:45:48.264270 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-l9rvb_4ccbf17f-6d23-4e6e-85e3-73c1275e767b/extract-content/0.log" Jan 27 19:45:48 crc kubenswrapper[4853]: I0127 19:45:48.314159 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-l9rvb_4ccbf17f-6d23-4e6e-85e3-73c1275e767b/extract-utilities/0.log" Jan 27 19:45:48 crc kubenswrapper[4853]: I0127 19:45:48.491163 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-5ppsb_a91a8685-4537-45c7-bb32-30b4885322b6/extract-utilities/0.log" Jan 27 19:45:48 crc kubenswrapper[4853]: I0127 19:45:48.690774 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-5ppsb_a91a8685-4537-45c7-bb32-30b4885322b6/extract-utilities/0.log" Jan 27 19:45:48 crc kubenswrapper[4853]: I0127 19:45:48.760903 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-5ppsb_a91a8685-4537-45c7-bb32-30b4885322b6/extract-content/0.log" Jan 27 19:45:48 crc kubenswrapper[4853]: I0127 19:45:48.778844 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-5ppsb_a91a8685-4537-45c7-bb32-30b4885322b6/extract-content/0.log" Jan 27 19:45:48 crc kubenswrapper[4853]: I0127 19:45:48.869885 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-l9rvb_4ccbf17f-6d23-4e6e-85e3-73c1275e767b/registry-server/0.log" Jan 27 19:45:48 crc kubenswrapper[4853]: I0127 19:45:48.994928 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-5ppsb_a91a8685-4537-45c7-bb32-30b4885322b6/extract-content/0.log" Jan 27 19:45:49 crc kubenswrapper[4853]: I0127 19:45:49.001902 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-5ppsb_a91a8685-4537-45c7-bb32-30b4885322b6/extract-utilities/0.log" Jan 27 19:45:49 crc kubenswrapper[4853]: I0127 19:45:49.156019 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-5ppsb_a91a8685-4537-45c7-bb32-30b4885322b6/registry-server/0.log" Jan 27 19:45:49 crc kubenswrapper[4853]: I0127 19:45:49.195580 4853 log.go:25] "Finished 
parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-x2rhc_beef4152-90a1-4027-8971-dd9dbdd93fb3/marketplace-operator/0.log" Jan 27 19:45:49 crc kubenswrapper[4853]: I0127 19:45:49.334522 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-6kwgl_958cd7a3-4aba-4ee4-a63a-dc75ef76970f/extract-utilities/0.log" Jan 27 19:45:49 crc kubenswrapper[4853]: I0127 19:45:49.500998 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-6kwgl_958cd7a3-4aba-4ee4-a63a-dc75ef76970f/extract-content/0.log" Jan 27 19:45:49 crc kubenswrapper[4853]: I0127 19:45:49.526447 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-6kwgl_958cd7a3-4aba-4ee4-a63a-dc75ef76970f/extract-content/0.log" Jan 27 19:45:49 crc kubenswrapper[4853]: I0127 19:45:49.528963 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-6kwgl_958cd7a3-4aba-4ee4-a63a-dc75ef76970f/extract-utilities/0.log" Jan 27 19:45:49 crc kubenswrapper[4853]: I0127 19:45:49.682249 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-6kwgl_958cd7a3-4aba-4ee4-a63a-dc75ef76970f/extract-utilities/0.log" Jan 27 19:45:49 crc kubenswrapper[4853]: I0127 19:45:49.767982 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-6kwgl_958cd7a3-4aba-4ee4-a63a-dc75ef76970f/extract-content/0.log" Jan 27 19:45:49 crc kubenswrapper[4853]: I0127 19:45:49.932892 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-tftml_4535e463-44ac-45f4-befb-6e68eae6e688/extract-utilities/0.log" Jan 27 19:45:49 crc kubenswrapper[4853]: I0127 19:45:49.977269 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-6kwgl_958cd7a3-4aba-4ee4-a63a-dc75ef76970f/registry-server/0.log" Jan 27 19:45:50 crc kubenswrapper[4853]: I0127 19:45:50.124714 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-tftml_4535e463-44ac-45f4-befb-6e68eae6e688/extract-utilities/0.log" Jan 27 19:45:50 crc kubenswrapper[4853]: I0127 19:45:50.145871 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-tftml_4535e463-44ac-45f4-befb-6e68eae6e688/extract-content/0.log" Jan 27 19:45:50 crc kubenswrapper[4853]: I0127 19:45:50.180148 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-tftml_4535e463-44ac-45f4-befb-6e68eae6e688/extract-content/0.log" Jan 27 19:45:50 crc kubenswrapper[4853]: I0127 19:45:50.325333 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-tftml_4535e463-44ac-45f4-befb-6e68eae6e688/extract-content/0.log" Jan 27 19:45:50 crc kubenswrapper[4853]: I0127 19:45:50.340732 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-tftml_4535e463-44ac-45f4-befb-6e68eae6e688/extract-utilities/0.log" Jan 27 19:45:51 crc kubenswrapper[4853]: I0127 19:45:51.127338 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-tftml_4535e463-44ac-45f4-befb-6e68eae6e688/registry-server/0.log" Jan 27 19:46:00 crc kubenswrapper[4853]: I0127 19:46:00.113083 4853 scope.go:117] "RemoveContainer" 
containerID="326a72bf85c07d6d0eec5a967b1feeaa73cf47af49f41769bb0b175d310c1432" Jan 27 19:46:00 crc kubenswrapper[4853]: E0127 19:46:00.114066 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6gqj2_openshift-machine-config-operator(b8a89b1e-bef8-4cb7-930c-480d3125778c)\"" pod="openshift-machine-config-operator/machine-config-daemon-6gqj2" podUID="b8a89b1e-bef8-4cb7-930c-480d3125778c" Jan 27 19:46:11 crc kubenswrapper[4853]: I0127 19:46:11.669949 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-l7mmn"] Jan 27 19:46:11 crc kubenswrapper[4853]: E0127 19:46:11.671151 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d54397ec-0e9e-4c49-8024-e721a565dc11" containerName="collect-profiles" Jan 27 19:46:11 crc kubenswrapper[4853]: I0127 19:46:11.671167 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="d54397ec-0e9e-4c49-8024-e721a565dc11" containerName="collect-profiles" Jan 27 19:46:11 crc kubenswrapper[4853]: I0127 19:46:11.671385 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="d54397ec-0e9e-4c49-8024-e721a565dc11" containerName="collect-profiles" Jan 27 19:46:11 crc kubenswrapper[4853]: I0127 19:46:11.672871 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-l7mmn" Jan 27 19:46:11 crc kubenswrapper[4853]: I0127 19:46:11.697484 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-l7mmn"] Jan 27 19:46:11 crc kubenswrapper[4853]: I0127 19:46:11.756194 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7d11d1ed-f8ae-4631-8d86-9203effc38b8-catalog-content\") pod \"redhat-operators-l7mmn\" (UID: \"7d11d1ed-f8ae-4631-8d86-9203effc38b8\") " pod="openshift-marketplace/redhat-operators-l7mmn" Jan 27 19:46:11 crc kubenswrapper[4853]: I0127 19:46:11.756300 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7d11d1ed-f8ae-4631-8d86-9203effc38b8-utilities\") pod \"redhat-operators-l7mmn\" (UID: \"7d11d1ed-f8ae-4631-8d86-9203effc38b8\") " pod="openshift-marketplace/redhat-operators-l7mmn" Jan 27 19:46:11 crc kubenswrapper[4853]: I0127 19:46:11.756352 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2lww6\" (UniqueName: \"kubernetes.io/projected/7d11d1ed-f8ae-4631-8d86-9203effc38b8-kube-api-access-2lww6\") pod \"redhat-operators-l7mmn\" (UID: \"7d11d1ed-f8ae-4631-8d86-9203effc38b8\") " pod="openshift-marketplace/redhat-operators-l7mmn" Jan 27 19:46:11 crc kubenswrapper[4853]: I0127 19:46:11.858653 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2lww6\" (UniqueName: \"kubernetes.io/projected/7d11d1ed-f8ae-4631-8d86-9203effc38b8-kube-api-access-2lww6\") pod \"redhat-operators-l7mmn\" (UID: \"7d11d1ed-f8ae-4631-8d86-9203effc38b8\") " pod="openshift-marketplace/redhat-operators-l7mmn" Jan 27 19:46:11 crc kubenswrapper[4853]: I0127 19:46:11.859358 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/7d11d1ed-f8ae-4631-8d86-9203effc38b8-catalog-content\") pod \"redhat-operators-l7mmn\" (UID: \"7d11d1ed-f8ae-4631-8d86-9203effc38b8\") " pod="openshift-marketplace/redhat-operators-l7mmn" Jan 27 19:46:11 crc kubenswrapper[4853]: I0127 19:46:11.859439 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7d11d1ed-f8ae-4631-8d86-9203effc38b8-utilities\") pod \"redhat-operators-l7mmn\" (UID: \"7d11d1ed-f8ae-4631-8d86-9203effc38b8\") " pod="openshift-marketplace/redhat-operators-l7mmn" Jan 27 19:46:11 crc kubenswrapper[4853]: I0127 19:46:11.860168 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7d11d1ed-f8ae-4631-8d86-9203effc38b8-utilities\") pod \"redhat-operators-l7mmn\" (UID: \"7d11d1ed-f8ae-4631-8d86-9203effc38b8\") " pod="openshift-marketplace/redhat-operators-l7mmn" Jan 27 19:46:11 crc kubenswrapper[4853]: I0127 19:46:11.860363 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7d11d1ed-f8ae-4631-8d86-9203effc38b8-catalog-content\") pod \"redhat-operators-l7mmn\" (UID: \"7d11d1ed-f8ae-4631-8d86-9203effc38b8\") " pod="openshift-marketplace/redhat-operators-l7mmn" Jan 27 19:46:11 crc kubenswrapper[4853]: I0127 19:46:11.895568 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2lww6\" (UniqueName: \"kubernetes.io/projected/7d11d1ed-f8ae-4631-8d86-9203effc38b8-kube-api-access-2lww6\") pod \"redhat-operators-l7mmn\" (UID: \"7d11d1ed-f8ae-4631-8d86-9203effc38b8\") " pod="openshift-marketplace/redhat-operators-l7mmn" Jan 27 19:46:12 crc kubenswrapper[4853]: I0127 19:46:12.003729 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-l7mmn" Jan 27 19:46:12 crc kubenswrapper[4853]: I0127 19:46:12.367278 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-l7mmn"] Jan 27 19:46:12 crc kubenswrapper[4853]: I0127 19:46:12.505885 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-l7mmn" event={"ID":"7d11d1ed-f8ae-4631-8d86-9203effc38b8","Type":"ContainerStarted","Data":"221dbf2f4b4b72b98260e7a84a0686b4e9a0ab91784b5dd665f35dbfef5ecab6"} Jan 27 19:46:13 crc kubenswrapper[4853]: I0127 19:46:13.517448 4853 generic.go:334] "Generic (PLEG): container finished" podID="7d11d1ed-f8ae-4631-8d86-9203effc38b8" containerID="734b52d8bc63e77a0d4b74533e27de2e662c0bac9e26a081dc574f687a4b4f2b" exitCode=0 Jan 27 19:46:13 crc kubenswrapper[4853]: I0127 19:46:13.517957 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-l7mmn" event={"ID":"7d11d1ed-f8ae-4631-8d86-9203effc38b8","Type":"ContainerDied","Data":"734b52d8bc63e77a0d4b74533e27de2e662c0bac9e26a081dc574f687a4b4f2b"} Jan 27 19:46:13 crc kubenswrapper[4853]: I0127 19:46:13.520014 4853 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 27 19:46:14 crc kubenswrapper[4853]: I0127 19:46:14.113614 4853 scope.go:117] "RemoveContainer" containerID="326a72bf85c07d6d0eec5a967b1feeaa73cf47af49f41769bb0b175d310c1432" Jan 27 19:46:14 crc kubenswrapper[4853]: E0127 19:46:14.113961 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6gqj2_openshift-machine-config-operator(b8a89b1e-bef8-4cb7-930c-480d3125778c)\"" pod="openshift-machine-config-operator/machine-config-daemon-6gqj2" podUID="b8a89b1e-bef8-4cb7-930c-480d3125778c" Jan 27 19:46:14 crc kubenswrapper[4853]: I0127 19:46:14.529956 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-l7mmn" event={"ID":"7d11d1ed-f8ae-4631-8d86-9203effc38b8","Type":"ContainerStarted","Data":"80272d698c145da8d0beb65e6b4bf5acb62af2e1b7a75f391b235aedcf23551b"} Jan 27 19:46:16 crc kubenswrapper[4853]: I0127 19:46:16.554747 4853 generic.go:334] "Generic (PLEG): container finished" podID="7d11d1ed-f8ae-4631-8d86-9203effc38b8" containerID="80272d698c145da8d0beb65e6b4bf5acb62af2e1b7a75f391b235aedcf23551b" exitCode=0 Jan 27 19:46:16 crc kubenswrapper[4853]: I0127 19:46:16.556006 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-l7mmn" event={"ID":"7d11d1ed-f8ae-4631-8d86-9203effc38b8","Type":"ContainerDied","Data":"80272d698c145da8d0beb65e6b4bf5acb62af2e1b7a75f391b235aedcf23551b"} Jan 27 19:46:18 crc kubenswrapper[4853]: I0127 19:46:18.579840 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-l7mmn" event={"ID":"7d11d1ed-f8ae-4631-8d86-9203effc38b8","Type":"ContainerStarted","Data":"be5d3e03188f0d7ce0d892c1ec75e619bb322f8bc5590da985481c55fa727b27"} Jan 27 19:46:18 crc kubenswrapper[4853]: I0127 19:46:18.603063 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-l7mmn" podStartSLOduration=3.391306671 podStartE2EDuration="7.603035315s" podCreationTimestamp="2026-01-27 19:46:11 +0000 UTC" firstStartedPulling="2026-01-27 19:46:13.519806214 
+0000 UTC m=+3815.982349097" lastFinishedPulling="2026-01-27 19:46:17.731534858 +0000 UTC m=+3820.194077741" observedRunningTime="2026-01-27 19:46:18.599032921 +0000 UTC m=+3821.061575804" watchObservedRunningTime="2026-01-27 19:46:18.603035315 +0000 UTC m=+3821.065578198" Jan 27 19:46:22 crc kubenswrapper[4853]: I0127 19:46:22.004279 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-l7mmn" Jan 27 19:46:22 crc kubenswrapper[4853]: I0127 19:46:22.004832 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-l7mmn" Jan 27 19:46:23 crc kubenswrapper[4853]: I0127 19:46:23.064640 4853 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-l7mmn" podUID="7d11d1ed-f8ae-4631-8d86-9203effc38b8" containerName="registry-server" probeResult="failure" output=< Jan 27 19:46:23 crc kubenswrapper[4853]: timeout: failed to connect service ":50051" within 1s Jan 27 19:46:23 crc kubenswrapper[4853]: > Jan 27 19:46:25 crc kubenswrapper[4853]: I0127 19:46:25.113595 4853 scope.go:117] "RemoveContainer" containerID="326a72bf85c07d6d0eec5a967b1feeaa73cf47af49f41769bb0b175d310c1432" Jan 27 19:46:25 crc kubenswrapper[4853]: E0127 19:46:25.114243 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6gqj2_openshift-machine-config-operator(b8a89b1e-bef8-4cb7-930c-480d3125778c)\"" pod="openshift-machine-config-operator/machine-config-daemon-6gqj2" podUID="b8a89b1e-bef8-4cb7-930c-480d3125778c" Jan 27 19:46:33 crc kubenswrapper[4853]: I0127 19:46:33.076806 4853 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-l7mmn" podUID="7d11d1ed-f8ae-4631-8d86-9203effc38b8" containerName="registry-server" probeResult="failure" output=< Jan 27 19:46:33 crc kubenswrapper[4853]: timeout: failed to connect service ":50051" within 1s Jan 27 19:46:33 crc kubenswrapper[4853]: > Jan 27 19:46:37 crc kubenswrapper[4853]: I0127 19:46:37.114491 4853 scope.go:117] "RemoveContainer" containerID="326a72bf85c07d6d0eec5a967b1feeaa73cf47af49f41769bb0b175d310c1432" Jan 27 19:46:37 crc kubenswrapper[4853]: E0127 19:46:37.115586 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6gqj2_openshift-machine-config-operator(b8a89b1e-bef8-4cb7-930c-480d3125778c)\"" pod="openshift-machine-config-operator/machine-config-daemon-6gqj2" podUID="b8a89b1e-bef8-4cb7-930c-480d3125778c" Jan 27 19:46:42 crc kubenswrapper[4853]: I0127 19:46:42.062241 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-l7mmn" Jan 27 19:46:42 crc kubenswrapper[4853]: I0127 19:46:42.125820 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-l7mmn" Jan 27 19:46:42 crc kubenswrapper[4853]: I0127 19:46:42.874155 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-l7mmn"] Jan 27 19:46:43 crc kubenswrapper[4853]: I0127 19:46:43.818874 4853 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-marketplace/redhat-operators-l7mmn" podUID="7d11d1ed-f8ae-4631-8d86-9203effc38b8" containerName="registry-server" containerID="cri-o://be5d3e03188f0d7ce0d892c1ec75e619bb322f8bc5590da985481c55fa727b27" gracePeriod=2 Jan 27 19:46:44 crc kubenswrapper[4853]: I0127 19:46:44.290205 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-l7mmn" Jan 27 19:46:44 crc kubenswrapper[4853]: I0127 19:46:44.477883 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7d11d1ed-f8ae-4631-8d86-9203effc38b8-utilities\") pod \"7d11d1ed-f8ae-4631-8d86-9203effc38b8\" (UID: \"7d11d1ed-f8ae-4631-8d86-9203effc38b8\") " Jan 27 19:46:44 crc kubenswrapper[4853]: I0127 19:46:44.478264 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7d11d1ed-f8ae-4631-8d86-9203effc38b8-catalog-content\") pod \"7d11d1ed-f8ae-4631-8d86-9203effc38b8\" (UID: \"7d11d1ed-f8ae-4631-8d86-9203effc38b8\") " Jan 27 19:46:44 crc kubenswrapper[4853]: I0127 19:46:44.478452 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2lww6\" (UniqueName: \"kubernetes.io/projected/7d11d1ed-f8ae-4631-8d86-9203effc38b8-kube-api-access-2lww6\") pod \"7d11d1ed-f8ae-4631-8d86-9203effc38b8\" (UID: \"7d11d1ed-f8ae-4631-8d86-9203effc38b8\") " Jan 27 19:46:44 crc kubenswrapper[4853]: I0127 19:46:44.482814 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7d11d1ed-f8ae-4631-8d86-9203effc38b8-utilities" (OuterVolumeSpecName: "utilities") pod "7d11d1ed-f8ae-4631-8d86-9203effc38b8" (UID: "7d11d1ed-f8ae-4631-8d86-9203effc38b8"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 19:46:44 crc kubenswrapper[4853]: I0127 19:46:44.484706 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7d11d1ed-f8ae-4631-8d86-9203effc38b8-kube-api-access-2lww6" (OuterVolumeSpecName: "kube-api-access-2lww6") pod "7d11d1ed-f8ae-4631-8d86-9203effc38b8" (UID: "7d11d1ed-f8ae-4631-8d86-9203effc38b8"). InnerVolumeSpecName "kube-api-access-2lww6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 19:46:44 crc kubenswrapper[4853]: I0127 19:46:44.580791 4853 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7d11d1ed-f8ae-4631-8d86-9203effc38b8-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 19:46:44 crc kubenswrapper[4853]: I0127 19:46:44.580849 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2lww6\" (UniqueName: \"kubernetes.io/projected/7d11d1ed-f8ae-4631-8d86-9203effc38b8-kube-api-access-2lww6\") on node \"crc\" DevicePath \"\"" Jan 27 19:46:44 crc kubenswrapper[4853]: I0127 19:46:44.611497 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7d11d1ed-f8ae-4631-8d86-9203effc38b8-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7d11d1ed-f8ae-4631-8d86-9203effc38b8" (UID: "7d11d1ed-f8ae-4631-8d86-9203effc38b8"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 19:46:44 crc kubenswrapper[4853]: I0127 19:46:44.682806 4853 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7d11d1ed-f8ae-4631-8d86-9203effc38b8-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 19:46:44 crc kubenswrapper[4853]: I0127 19:46:44.830181 4853 generic.go:334] "Generic (PLEG): container finished" podID="7d11d1ed-f8ae-4631-8d86-9203effc38b8" containerID="be5d3e03188f0d7ce0d892c1ec75e619bb322f8bc5590da985481c55fa727b27" exitCode=0 Jan 27 19:46:44 crc kubenswrapper[4853]: I0127 19:46:44.830246 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-l7mmn" Jan 27 19:46:44 crc kubenswrapper[4853]: I0127 19:46:44.830270 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-l7mmn" event={"ID":"7d11d1ed-f8ae-4631-8d86-9203effc38b8","Type":"ContainerDied","Data":"be5d3e03188f0d7ce0d892c1ec75e619bb322f8bc5590da985481c55fa727b27"} Jan 27 19:46:44 crc kubenswrapper[4853]: I0127 19:46:44.831662 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-l7mmn" event={"ID":"7d11d1ed-f8ae-4631-8d86-9203effc38b8","Type":"ContainerDied","Data":"221dbf2f4b4b72b98260e7a84a0686b4e9a0ab91784b5dd665f35dbfef5ecab6"} Jan 27 19:46:44 crc kubenswrapper[4853]: I0127 19:46:44.831689 4853 scope.go:117] "RemoveContainer" containerID="be5d3e03188f0d7ce0d892c1ec75e619bb322f8bc5590da985481c55fa727b27" Jan 27 19:46:44 crc kubenswrapper[4853]: I0127 19:46:44.877877 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-l7mmn"] Jan 27 19:46:44 crc kubenswrapper[4853]: I0127 19:46:44.887780 4853 scope.go:117] "RemoveContainer" containerID="80272d698c145da8d0beb65e6b4bf5acb62af2e1b7a75f391b235aedcf23551b" Jan 27 19:46:44 crc kubenswrapper[4853]: I0127 19:46:44.891695 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-l7mmn"] Jan 27 19:46:44 crc kubenswrapper[4853]: I0127 19:46:44.910702 4853 scope.go:117] "RemoveContainer" containerID="734b52d8bc63e77a0d4b74533e27de2e662c0bac9e26a081dc574f687a4b4f2b" Jan 27 19:46:44 crc kubenswrapper[4853]: I0127 19:46:44.955098 4853 scope.go:117] "RemoveContainer" containerID="be5d3e03188f0d7ce0d892c1ec75e619bb322f8bc5590da985481c55fa727b27" Jan 27 19:46:44 crc kubenswrapper[4853]: E0127 19:46:44.955921 4853 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"be5d3e03188f0d7ce0d892c1ec75e619bb322f8bc5590da985481c55fa727b27\": container with ID starting with be5d3e03188f0d7ce0d892c1ec75e619bb322f8bc5590da985481c55fa727b27 not found: ID does not exist" containerID="be5d3e03188f0d7ce0d892c1ec75e619bb322f8bc5590da985481c55fa727b27" Jan 27 19:46:44 crc kubenswrapper[4853]: I0127 19:46:44.955965 4853 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"be5d3e03188f0d7ce0d892c1ec75e619bb322f8bc5590da985481c55fa727b27"} err="failed to get container status \"be5d3e03188f0d7ce0d892c1ec75e619bb322f8bc5590da985481c55fa727b27\": rpc error: code = NotFound desc = could not find container \"be5d3e03188f0d7ce0d892c1ec75e619bb322f8bc5590da985481c55fa727b27\": container with ID starting with be5d3e03188f0d7ce0d892c1ec75e619bb322f8bc5590da985481c55fa727b27 not found: ID does not exist" Jan 27 19:46:44 crc 
kubenswrapper[4853]: I0127 19:46:44.955998 4853 scope.go:117] "RemoveContainer" containerID="80272d698c145da8d0beb65e6b4bf5acb62af2e1b7a75f391b235aedcf23551b" Jan 27 19:46:44 crc kubenswrapper[4853]: E0127 19:46:44.956519 4853 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"80272d698c145da8d0beb65e6b4bf5acb62af2e1b7a75f391b235aedcf23551b\": container with ID starting with 80272d698c145da8d0beb65e6b4bf5acb62af2e1b7a75f391b235aedcf23551b not found: ID does not exist" containerID="80272d698c145da8d0beb65e6b4bf5acb62af2e1b7a75f391b235aedcf23551b" Jan 27 19:46:44 crc kubenswrapper[4853]: I0127 19:46:44.956552 4853 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"80272d698c145da8d0beb65e6b4bf5acb62af2e1b7a75f391b235aedcf23551b"} err="failed to get container status \"80272d698c145da8d0beb65e6b4bf5acb62af2e1b7a75f391b235aedcf23551b\": rpc error: code = NotFound desc = could not find container \"80272d698c145da8d0beb65e6b4bf5acb62af2e1b7a75f391b235aedcf23551b\": container with ID starting with 80272d698c145da8d0beb65e6b4bf5acb62af2e1b7a75f391b235aedcf23551b not found: ID does not exist" Jan 27 19:46:44 crc kubenswrapper[4853]: I0127 19:46:44.956578 4853 scope.go:117] "RemoveContainer" containerID="734b52d8bc63e77a0d4b74533e27de2e662c0bac9e26a081dc574f687a4b4f2b" Jan 27 19:46:44 crc kubenswrapper[4853]: E0127 19:46:44.956775 4853 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"734b52d8bc63e77a0d4b74533e27de2e662c0bac9e26a081dc574f687a4b4f2b\": container with ID starting with 734b52d8bc63e77a0d4b74533e27de2e662c0bac9e26a081dc574f687a4b4f2b not found: ID does not exist" containerID="734b52d8bc63e77a0d4b74533e27de2e662c0bac9e26a081dc574f687a4b4f2b" Jan 27 19:46:44 crc kubenswrapper[4853]: I0127 19:46:44.956809 4853 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"734b52d8bc63e77a0d4b74533e27de2e662c0bac9e26a081dc574f687a4b4f2b"} err="failed to get container status \"734b52d8bc63e77a0d4b74533e27de2e662c0bac9e26a081dc574f687a4b4f2b\": rpc error: code = NotFound desc = could not find container \"734b52d8bc63e77a0d4b74533e27de2e662c0bac9e26a081dc574f687a4b4f2b\": container with ID starting with 734b52d8bc63e77a0d4b74533e27de2e662c0bac9e26a081dc574f687a4b4f2b not found: ID does not exist" Jan 27 19:46:46 crc kubenswrapper[4853]: I0127 19:46:46.129435 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7d11d1ed-f8ae-4631-8d86-9203effc38b8" path="/var/lib/kubelet/pods/7d11d1ed-f8ae-4631-8d86-9203effc38b8/volumes" Jan 27 19:46:51 crc kubenswrapper[4853]: I0127 19:46:51.112472 4853 scope.go:117] "RemoveContainer" containerID="326a72bf85c07d6d0eec5a967b1feeaa73cf47af49f41769bb0b175d310c1432" Jan 27 19:46:51 crc kubenswrapper[4853]: E0127 19:46:51.113316 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6gqj2_openshift-machine-config-operator(b8a89b1e-bef8-4cb7-930c-480d3125778c)\"" pod="openshift-machine-config-operator/machine-config-daemon-6gqj2" podUID="b8a89b1e-bef8-4cb7-930c-480d3125778c" Jan 27 19:47:06 crc kubenswrapper[4853]: I0127 19:47:06.112574 4853 scope.go:117] "RemoveContainer" containerID="326a72bf85c07d6d0eec5a967b1feeaa73cf47af49f41769bb0b175d310c1432" 
Jan 27 19:47:06 crc kubenswrapper[4853]: E0127 19:47:06.113465 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6gqj2_openshift-machine-config-operator(b8a89b1e-bef8-4cb7-930c-480d3125778c)\"" pod="openshift-machine-config-operator/machine-config-daemon-6gqj2" podUID="b8a89b1e-bef8-4cb7-930c-480d3125778c" Jan 27 19:47:19 crc kubenswrapper[4853]: I0127 19:47:19.112627 4853 scope.go:117] "RemoveContainer" containerID="326a72bf85c07d6d0eec5a967b1feeaa73cf47af49f41769bb0b175d310c1432" Jan 27 19:47:19 crc kubenswrapper[4853]: E0127 19:47:19.113421 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6gqj2_openshift-machine-config-operator(b8a89b1e-bef8-4cb7-930c-480d3125778c)\"" pod="openshift-machine-config-operator/machine-config-daemon-6gqj2" podUID="b8a89b1e-bef8-4cb7-930c-480d3125778c" Jan 27 19:47:31 crc kubenswrapper[4853]: I0127 19:47:31.113757 4853 scope.go:117] "RemoveContainer" containerID="326a72bf85c07d6d0eec5a967b1feeaa73cf47af49f41769bb0b175d310c1432" Jan 27 19:47:31 crc kubenswrapper[4853]: E0127 19:47:31.114499 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6gqj2_openshift-machine-config-operator(b8a89b1e-bef8-4cb7-930c-480d3125778c)\"" pod="openshift-machine-config-operator/machine-config-daemon-6gqj2" podUID="b8a89b1e-bef8-4cb7-930c-480d3125778c" Jan 27 19:47:45 crc kubenswrapper[4853]: I0127 19:47:45.431703 4853 generic.go:334] "Generic (PLEG): container finished" podID="5f5c9e2b-9e00-4a31-8ef7-7f36927b3d65" containerID="f99ec3611584c755c43570f3ff564c69095a42dd1fdb5ea40ec28d7589da8368" exitCode=0 Jan 27 19:47:45 crc kubenswrapper[4853]: I0127 19:47:45.431796 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-8np9p/must-gather-n65kf" event={"ID":"5f5c9e2b-9e00-4a31-8ef7-7f36927b3d65","Type":"ContainerDied","Data":"f99ec3611584c755c43570f3ff564c69095a42dd1fdb5ea40ec28d7589da8368"} Jan 27 19:47:45 crc kubenswrapper[4853]: I0127 19:47:45.433056 4853 scope.go:117] "RemoveContainer" containerID="f99ec3611584c755c43570f3ff564c69095a42dd1fdb5ea40ec28d7589da8368" Jan 27 19:47:45 crc kubenswrapper[4853]: I0127 19:47:45.797798 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-8np9p_must-gather-n65kf_5f5c9e2b-9e00-4a31-8ef7-7f36927b3d65/gather/0.log" Jan 27 19:47:46 crc kubenswrapper[4853]: I0127 19:47:46.118055 4853 scope.go:117] "RemoveContainer" containerID="326a72bf85c07d6d0eec5a967b1feeaa73cf47af49f41769bb0b175d310c1432" Jan 27 19:47:46 crc kubenswrapper[4853]: E0127 19:47:46.120014 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6gqj2_openshift-machine-config-operator(b8a89b1e-bef8-4cb7-930c-480d3125778c)\"" pod="openshift-machine-config-operator/machine-config-daemon-6gqj2" podUID="b8a89b1e-bef8-4cb7-930c-480d3125778c" Jan 27 19:47:53 crc kubenswrapper[4853]: I0127 19:47:53.441086 4853 
kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-8np9p/must-gather-n65kf"] Jan 27 19:47:53 crc kubenswrapper[4853]: I0127 19:47:53.442384 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-8np9p/must-gather-n65kf" podUID="5f5c9e2b-9e00-4a31-8ef7-7f36927b3d65" containerName="copy" containerID="cri-o://e1c1808a19a33840b968aa39174c2059a619c10cb8b13cc156e7d71342c9aa92" gracePeriod=2 Jan 27 19:47:53 crc kubenswrapper[4853]: I0127 19:47:53.460235 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-8np9p/must-gather-n65kf"] Jan 27 19:47:53 crc kubenswrapper[4853]: I0127 19:47:53.994254 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-8np9p_must-gather-n65kf_5f5c9e2b-9e00-4a31-8ef7-7f36927b3d65/copy/0.log" Jan 27 19:47:53 crc kubenswrapper[4853]: I0127 19:47:53.995353 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-8np9p/must-gather-n65kf" Jan 27 19:47:54 crc kubenswrapper[4853]: I0127 19:47:54.142202 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rd85l\" (UniqueName: \"kubernetes.io/projected/5f5c9e2b-9e00-4a31-8ef7-7f36927b3d65-kube-api-access-rd85l\") pod \"5f5c9e2b-9e00-4a31-8ef7-7f36927b3d65\" (UID: \"5f5c9e2b-9e00-4a31-8ef7-7f36927b3d65\") " Jan 27 19:47:54 crc kubenswrapper[4853]: I0127 19:47:54.142356 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/5f5c9e2b-9e00-4a31-8ef7-7f36927b3d65-must-gather-output\") pod \"5f5c9e2b-9e00-4a31-8ef7-7f36927b3d65\" (UID: \"5f5c9e2b-9e00-4a31-8ef7-7f36927b3d65\") " Jan 27 19:47:54 crc kubenswrapper[4853]: I0127 19:47:54.151408 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5f5c9e2b-9e00-4a31-8ef7-7f36927b3d65-kube-api-access-rd85l" (OuterVolumeSpecName: "kube-api-access-rd85l") pod "5f5c9e2b-9e00-4a31-8ef7-7f36927b3d65" (UID: "5f5c9e2b-9e00-4a31-8ef7-7f36927b3d65"). InnerVolumeSpecName "kube-api-access-rd85l". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 19:47:54 crc kubenswrapper[4853]: I0127 19:47:54.246146 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rd85l\" (UniqueName: \"kubernetes.io/projected/5f5c9e2b-9e00-4a31-8ef7-7f36927b3d65-kube-api-access-rd85l\") on node \"crc\" DevicePath \"\"" Jan 27 19:47:54 crc kubenswrapper[4853]: I0127 19:47:54.299710 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5f5c9e2b-9e00-4a31-8ef7-7f36927b3d65-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "5f5c9e2b-9e00-4a31-8ef7-7f36927b3d65" (UID: "5f5c9e2b-9e00-4a31-8ef7-7f36927b3d65"). InnerVolumeSpecName "must-gather-output". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 19:47:54 crc kubenswrapper[4853]: I0127 19:47:54.349223 4853 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/5f5c9e2b-9e00-4a31-8ef7-7f36927b3d65-must-gather-output\") on node \"crc\" DevicePath \"\"" Jan 27 19:47:54 crc kubenswrapper[4853]: I0127 19:47:54.545015 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-8np9p_must-gather-n65kf_5f5c9e2b-9e00-4a31-8ef7-7f36927b3d65/copy/0.log" Jan 27 19:47:54 crc kubenswrapper[4853]: I0127 19:47:54.545520 4853 generic.go:334] "Generic (PLEG): container finished" podID="5f5c9e2b-9e00-4a31-8ef7-7f36927b3d65" containerID="e1c1808a19a33840b968aa39174c2059a619c10cb8b13cc156e7d71342c9aa92" exitCode=143 Jan 27 19:47:54 crc kubenswrapper[4853]: I0127 19:47:54.545579 4853 scope.go:117] "RemoveContainer" containerID="e1c1808a19a33840b968aa39174c2059a619c10cb8b13cc156e7d71342c9aa92" Jan 27 19:47:54 crc kubenswrapper[4853]: I0127 19:47:54.545732 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-8np9p/must-gather-n65kf" Jan 27 19:47:54 crc kubenswrapper[4853]: I0127 19:47:54.569484 4853 scope.go:117] "RemoveContainer" containerID="f99ec3611584c755c43570f3ff564c69095a42dd1fdb5ea40ec28d7589da8368" Jan 27 19:47:54 crc kubenswrapper[4853]: I0127 19:47:54.624203 4853 scope.go:117] "RemoveContainer" containerID="e1c1808a19a33840b968aa39174c2059a619c10cb8b13cc156e7d71342c9aa92" Jan 27 19:47:54 crc kubenswrapper[4853]: E0127 19:47:54.624730 4853 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e1c1808a19a33840b968aa39174c2059a619c10cb8b13cc156e7d71342c9aa92\": container with ID starting with e1c1808a19a33840b968aa39174c2059a619c10cb8b13cc156e7d71342c9aa92 not found: ID does not exist" containerID="e1c1808a19a33840b968aa39174c2059a619c10cb8b13cc156e7d71342c9aa92" Jan 27 19:47:54 crc kubenswrapper[4853]: I0127 19:47:54.624797 4853 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e1c1808a19a33840b968aa39174c2059a619c10cb8b13cc156e7d71342c9aa92"} err="failed to get container status \"e1c1808a19a33840b968aa39174c2059a619c10cb8b13cc156e7d71342c9aa92\": rpc error: code = NotFound desc = could not find container \"e1c1808a19a33840b968aa39174c2059a619c10cb8b13cc156e7d71342c9aa92\": container with ID starting with e1c1808a19a33840b968aa39174c2059a619c10cb8b13cc156e7d71342c9aa92 not found: ID does not exist" Jan 27 19:47:54 crc kubenswrapper[4853]: I0127 19:47:54.624841 4853 scope.go:117] "RemoveContainer" containerID="f99ec3611584c755c43570f3ff564c69095a42dd1fdb5ea40ec28d7589da8368" Jan 27 19:47:54 crc kubenswrapper[4853]: E0127 19:47:54.625909 4853 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f99ec3611584c755c43570f3ff564c69095a42dd1fdb5ea40ec28d7589da8368\": container with ID starting with f99ec3611584c755c43570f3ff564c69095a42dd1fdb5ea40ec28d7589da8368 not found: ID does not exist" containerID="f99ec3611584c755c43570f3ff564c69095a42dd1fdb5ea40ec28d7589da8368" Jan 27 19:47:54 crc kubenswrapper[4853]: I0127 19:47:54.625965 4853 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f99ec3611584c755c43570f3ff564c69095a42dd1fdb5ea40ec28d7589da8368"} err="failed to get container status 
\"f99ec3611584c755c43570f3ff564c69095a42dd1fdb5ea40ec28d7589da8368\": rpc error: code = NotFound desc = could not find container \"f99ec3611584c755c43570f3ff564c69095a42dd1fdb5ea40ec28d7589da8368\": container with ID starting with f99ec3611584c755c43570f3ff564c69095a42dd1fdb5ea40ec28d7589da8368 not found: ID does not exist" Jan 27 19:47:56 crc kubenswrapper[4853]: I0127 19:47:56.127291 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5f5c9e2b-9e00-4a31-8ef7-7f36927b3d65" path="/var/lib/kubelet/pods/5f5c9e2b-9e00-4a31-8ef7-7f36927b3d65/volumes" Jan 27 19:47:57 crc kubenswrapper[4853]: I0127 19:47:57.113206 4853 scope.go:117] "RemoveContainer" containerID="326a72bf85c07d6d0eec5a967b1feeaa73cf47af49f41769bb0b175d310c1432" Jan 27 19:47:57 crc kubenswrapper[4853]: E0127 19:47:57.113604 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6gqj2_openshift-machine-config-operator(b8a89b1e-bef8-4cb7-930c-480d3125778c)\"" pod="openshift-machine-config-operator/machine-config-daemon-6gqj2" podUID="b8a89b1e-bef8-4cb7-930c-480d3125778c" Jan 27 19:48:10 crc kubenswrapper[4853]: I0127 19:48:10.112752 4853 scope.go:117] "RemoveContainer" containerID="326a72bf85c07d6d0eec5a967b1feeaa73cf47af49f41769bb0b175d310c1432" Jan 27 19:48:10 crc kubenswrapper[4853]: I0127 19:48:10.690510 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6gqj2" event={"ID":"b8a89b1e-bef8-4cb7-930c-480d3125778c","Type":"ContainerStarted","Data":"419f0caa718cb6b8323adeb425867a293b3b008799642768d85dc8419e85b6b1"} Jan 27 19:48:28 crc kubenswrapper[4853]: I0127 19:48:28.007427 4853 scope.go:117] "RemoveContainer" containerID="09ebc4cb775b3ade8677252e3ce83cb5d2669ee20cac01188bfb7c936f6f601b" Jan 27 19:48:49 crc kubenswrapper[4853]: I0127 19:48:49.578989 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-dxbn4"] Jan 27 19:48:49 crc kubenswrapper[4853]: E0127 19:48:49.580194 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7d11d1ed-f8ae-4631-8d86-9203effc38b8" containerName="extract-utilities" Jan 27 19:48:49 crc kubenswrapper[4853]: I0127 19:48:49.580211 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="7d11d1ed-f8ae-4631-8d86-9203effc38b8" containerName="extract-utilities" Jan 27 19:48:49 crc kubenswrapper[4853]: E0127 19:48:49.580229 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7d11d1ed-f8ae-4631-8d86-9203effc38b8" containerName="registry-server" Jan 27 19:48:49 crc kubenswrapper[4853]: I0127 19:48:49.580235 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="7d11d1ed-f8ae-4631-8d86-9203effc38b8" containerName="registry-server" Jan 27 19:48:49 crc kubenswrapper[4853]: E0127 19:48:49.580256 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5f5c9e2b-9e00-4a31-8ef7-7f36927b3d65" containerName="gather" Jan 27 19:48:49 crc kubenswrapper[4853]: I0127 19:48:49.580262 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="5f5c9e2b-9e00-4a31-8ef7-7f36927b3d65" containerName="gather" Jan 27 19:48:49 crc kubenswrapper[4853]: E0127 19:48:49.580278 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7d11d1ed-f8ae-4631-8d86-9203effc38b8" containerName="extract-content" Jan 27 19:48:49 crc kubenswrapper[4853]: I0127 
19:48:49.580284 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="7d11d1ed-f8ae-4631-8d86-9203effc38b8" containerName="extract-content" Jan 27 19:48:49 crc kubenswrapper[4853]: E0127 19:48:49.580302 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5f5c9e2b-9e00-4a31-8ef7-7f36927b3d65" containerName="copy" Jan 27 19:48:49 crc kubenswrapper[4853]: I0127 19:48:49.580308 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="5f5c9e2b-9e00-4a31-8ef7-7f36927b3d65" containerName="copy" Jan 27 19:48:49 crc kubenswrapper[4853]: I0127 19:48:49.580548 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="5f5c9e2b-9e00-4a31-8ef7-7f36927b3d65" containerName="gather" Jan 27 19:48:49 crc kubenswrapper[4853]: I0127 19:48:49.580570 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="7d11d1ed-f8ae-4631-8d86-9203effc38b8" containerName="registry-server" Jan 27 19:48:49 crc kubenswrapper[4853]: I0127 19:48:49.580585 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="5f5c9e2b-9e00-4a31-8ef7-7f36927b3d65" containerName="copy" Jan 27 19:48:49 crc kubenswrapper[4853]: I0127 19:48:49.582445 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-dxbn4" Jan 27 19:48:49 crc kubenswrapper[4853]: I0127 19:48:49.589469 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-dxbn4"] Jan 27 19:48:49 crc kubenswrapper[4853]: I0127 19:48:49.686691 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q4hr7\" (UniqueName: \"kubernetes.io/projected/ca8e5341-d155-47d3-97f9-2d4bf016a53f-kube-api-access-q4hr7\") pod \"certified-operators-dxbn4\" (UID: \"ca8e5341-d155-47d3-97f9-2d4bf016a53f\") " pod="openshift-marketplace/certified-operators-dxbn4" Jan 27 19:48:49 crc kubenswrapper[4853]: I0127 19:48:49.686852 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ca8e5341-d155-47d3-97f9-2d4bf016a53f-utilities\") pod \"certified-operators-dxbn4\" (UID: \"ca8e5341-d155-47d3-97f9-2d4bf016a53f\") " pod="openshift-marketplace/certified-operators-dxbn4" Jan 27 19:48:49 crc kubenswrapper[4853]: I0127 19:48:49.686918 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ca8e5341-d155-47d3-97f9-2d4bf016a53f-catalog-content\") pod \"certified-operators-dxbn4\" (UID: \"ca8e5341-d155-47d3-97f9-2d4bf016a53f\") " pod="openshift-marketplace/certified-operators-dxbn4" Jan 27 19:48:49 crc kubenswrapper[4853]: I0127 19:48:49.789363 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q4hr7\" (UniqueName: \"kubernetes.io/projected/ca8e5341-d155-47d3-97f9-2d4bf016a53f-kube-api-access-q4hr7\") pod \"certified-operators-dxbn4\" (UID: \"ca8e5341-d155-47d3-97f9-2d4bf016a53f\") " pod="openshift-marketplace/certified-operators-dxbn4" Jan 27 19:48:49 crc kubenswrapper[4853]: I0127 19:48:49.789782 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ca8e5341-d155-47d3-97f9-2d4bf016a53f-utilities\") pod \"certified-operators-dxbn4\" (UID: \"ca8e5341-d155-47d3-97f9-2d4bf016a53f\") " pod="openshift-marketplace/certified-operators-dxbn4" Jan 27 19:48:49 crc 
kubenswrapper[4853]: I0127 19:48:49.789848 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ca8e5341-d155-47d3-97f9-2d4bf016a53f-catalog-content\") pod \"certified-operators-dxbn4\" (UID: \"ca8e5341-d155-47d3-97f9-2d4bf016a53f\") " pod="openshift-marketplace/certified-operators-dxbn4" Jan 27 19:48:49 crc kubenswrapper[4853]: I0127 19:48:49.790397 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ca8e5341-d155-47d3-97f9-2d4bf016a53f-utilities\") pod \"certified-operators-dxbn4\" (UID: \"ca8e5341-d155-47d3-97f9-2d4bf016a53f\") " pod="openshift-marketplace/certified-operators-dxbn4" Jan 27 19:48:49 crc kubenswrapper[4853]: I0127 19:48:49.790639 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ca8e5341-d155-47d3-97f9-2d4bf016a53f-catalog-content\") pod \"certified-operators-dxbn4\" (UID: \"ca8e5341-d155-47d3-97f9-2d4bf016a53f\") " pod="openshift-marketplace/certified-operators-dxbn4" Jan 27 19:48:49 crc kubenswrapper[4853]: I0127 19:48:49.821465 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q4hr7\" (UniqueName: \"kubernetes.io/projected/ca8e5341-d155-47d3-97f9-2d4bf016a53f-kube-api-access-q4hr7\") pod \"certified-operators-dxbn4\" (UID: \"ca8e5341-d155-47d3-97f9-2d4bf016a53f\") " pod="openshift-marketplace/certified-operators-dxbn4" Jan 27 19:48:49 crc kubenswrapper[4853]: I0127 19:48:49.904138 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-dxbn4" Jan 27 19:48:50 crc kubenswrapper[4853]: I0127 19:48:50.512937 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-dxbn4"] Jan 27 19:48:50 crc kubenswrapper[4853]: W0127 19:48:50.522461 4853 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podca8e5341_d155_47d3_97f9_2d4bf016a53f.slice/crio-cea84b4946d5950bb260d3c22a71622c10000a6fe39795ba2a6195a254915d6a WatchSource:0}: Error finding container cea84b4946d5950bb260d3c22a71622c10000a6fe39795ba2a6195a254915d6a: Status 404 returned error can't find the container with id cea84b4946d5950bb260d3c22a71622c10000a6fe39795ba2a6195a254915d6a Jan 27 19:48:51 crc kubenswrapper[4853]: I0127 19:48:51.064277 4853 generic.go:334] "Generic (PLEG): container finished" podID="ca8e5341-d155-47d3-97f9-2d4bf016a53f" containerID="ebb2635810b4f2a1d146f3b6e77bfc891ec33e2b1ae99c995e5710483e06856a" exitCode=0 Jan 27 19:48:51 crc kubenswrapper[4853]: I0127 19:48:51.064398 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dxbn4" event={"ID":"ca8e5341-d155-47d3-97f9-2d4bf016a53f","Type":"ContainerDied","Data":"ebb2635810b4f2a1d146f3b6e77bfc891ec33e2b1ae99c995e5710483e06856a"} Jan 27 19:48:51 crc kubenswrapper[4853]: I0127 19:48:51.064593 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dxbn4" event={"ID":"ca8e5341-d155-47d3-97f9-2d4bf016a53f","Type":"ContainerStarted","Data":"cea84b4946d5950bb260d3c22a71622c10000a6fe39795ba2a6195a254915d6a"} Jan 27 19:48:53 crc kubenswrapper[4853]: I0127 19:48:53.085515 4853 generic.go:334] "Generic (PLEG): container finished" podID="ca8e5341-d155-47d3-97f9-2d4bf016a53f" 
containerID="e1f5bd34fe60cbb5774982c2c5869a593a50930242482394d58bb0335fbcafbf" exitCode=0 Jan 27 19:48:53 crc kubenswrapper[4853]: I0127 19:48:53.086060 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dxbn4" event={"ID":"ca8e5341-d155-47d3-97f9-2d4bf016a53f","Type":"ContainerDied","Data":"e1f5bd34fe60cbb5774982c2c5869a593a50930242482394d58bb0335fbcafbf"} Jan 27 19:48:54 crc kubenswrapper[4853]: I0127 19:48:54.098057 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dxbn4" event={"ID":"ca8e5341-d155-47d3-97f9-2d4bf016a53f","Type":"ContainerStarted","Data":"650fe66a65ddb4c23c77d6988365f5eef83d9fe9d973c0c9338d8bb0c26e1fdc"} Jan 27 19:48:54 crc kubenswrapper[4853]: I0127 19:48:54.124282 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-dxbn4" podStartSLOduration=2.7014201079999998 podStartE2EDuration="5.124256594s" podCreationTimestamp="2026-01-27 19:48:49 +0000 UTC" firstStartedPulling="2026-01-27 19:48:51.065683329 +0000 UTC m=+3973.528226212" lastFinishedPulling="2026-01-27 19:48:53.488519815 +0000 UTC m=+3975.951062698" observedRunningTime="2026-01-27 19:48:54.122661249 +0000 UTC m=+3976.585204132" watchObservedRunningTime="2026-01-27 19:48:54.124256594 +0000 UTC m=+3976.586799477" Jan 27 19:48:59 crc kubenswrapper[4853]: I0127 19:48:59.904501 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-dxbn4" Jan 27 19:48:59 crc kubenswrapper[4853]: I0127 19:48:59.905157 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-dxbn4" Jan 27 19:49:00 crc kubenswrapper[4853]: I0127 19:49:00.032639 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-dxbn4" Jan 27 19:49:00 crc kubenswrapper[4853]: I0127 19:49:00.223298 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-dxbn4" Jan 27 19:49:00 crc kubenswrapper[4853]: I0127 19:49:00.288344 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-dxbn4"] Jan 27 19:49:02 crc kubenswrapper[4853]: I0127 19:49:02.199480 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-dxbn4" podUID="ca8e5341-d155-47d3-97f9-2d4bf016a53f" containerName="registry-server" containerID="cri-o://650fe66a65ddb4c23c77d6988365f5eef83d9fe9d973c0c9338d8bb0c26e1fdc" gracePeriod=2 Jan 27 19:49:02 crc kubenswrapper[4853]: I0127 19:49:02.923093 4853 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-dxbn4" Jan 27 19:49:03 crc kubenswrapper[4853]: I0127 19:49:03.094731 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ca8e5341-d155-47d3-97f9-2d4bf016a53f-catalog-content\") pod \"ca8e5341-d155-47d3-97f9-2d4bf016a53f\" (UID: \"ca8e5341-d155-47d3-97f9-2d4bf016a53f\") " Jan 27 19:49:03 crc kubenswrapper[4853]: I0127 19:49:03.095698 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ca8e5341-d155-47d3-97f9-2d4bf016a53f-utilities\") pod \"ca8e5341-d155-47d3-97f9-2d4bf016a53f\" (UID: \"ca8e5341-d155-47d3-97f9-2d4bf016a53f\") " Jan 27 19:49:03 crc kubenswrapper[4853]: I0127 19:49:03.095803 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q4hr7\" (UniqueName: \"kubernetes.io/projected/ca8e5341-d155-47d3-97f9-2d4bf016a53f-kube-api-access-q4hr7\") pod \"ca8e5341-d155-47d3-97f9-2d4bf016a53f\" (UID: \"ca8e5341-d155-47d3-97f9-2d4bf016a53f\") " Jan 27 19:49:03 crc kubenswrapper[4853]: I0127 19:49:03.098804 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ca8e5341-d155-47d3-97f9-2d4bf016a53f-utilities" (OuterVolumeSpecName: "utilities") pod "ca8e5341-d155-47d3-97f9-2d4bf016a53f" (UID: "ca8e5341-d155-47d3-97f9-2d4bf016a53f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 19:49:03 crc kubenswrapper[4853]: I0127 19:49:03.107707 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ca8e5341-d155-47d3-97f9-2d4bf016a53f-kube-api-access-q4hr7" (OuterVolumeSpecName: "kube-api-access-q4hr7") pod "ca8e5341-d155-47d3-97f9-2d4bf016a53f" (UID: "ca8e5341-d155-47d3-97f9-2d4bf016a53f"). InnerVolumeSpecName "kube-api-access-q4hr7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 19:49:03 crc kubenswrapper[4853]: I0127 19:49:03.160875 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ca8e5341-d155-47d3-97f9-2d4bf016a53f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ca8e5341-d155-47d3-97f9-2d4bf016a53f" (UID: "ca8e5341-d155-47d3-97f9-2d4bf016a53f"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 19:49:03 crc kubenswrapper[4853]: I0127 19:49:03.201095 4853 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ca8e5341-d155-47d3-97f9-2d4bf016a53f-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 19:49:03 crc kubenswrapper[4853]: I0127 19:49:03.201151 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q4hr7\" (UniqueName: \"kubernetes.io/projected/ca8e5341-d155-47d3-97f9-2d4bf016a53f-kube-api-access-q4hr7\") on node \"crc\" DevicePath \"\"" Jan 27 19:49:03 crc kubenswrapper[4853]: I0127 19:49:03.201168 4853 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ca8e5341-d155-47d3-97f9-2d4bf016a53f-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 19:49:03 crc kubenswrapper[4853]: I0127 19:49:03.215386 4853 generic.go:334] "Generic (PLEG): container finished" podID="ca8e5341-d155-47d3-97f9-2d4bf016a53f" containerID="650fe66a65ddb4c23c77d6988365f5eef83d9fe9d973c0c9338d8bb0c26e1fdc" exitCode=0 Jan 27 19:49:03 crc kubenswrapper[4853]: I0127 19:49:03.215463 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dxbn4" event={"ID":"ca8e5341-d155-47d3-97f9-2d4bf016a53f","Type":"ContainerDied","Data":"650fe66a65ddb4c23c77d6988365f5eef83d9fe9d973c0c9338d8bb0c26e1fdc"} Jan 27 19:49:03 crc kubenswrapper[4853]: I0127 19:49:03.215493 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-dxbn4" Jan 27 19:49:03 crc kubenswrapper[4853]: I0127 19:49:03.215523 4853 scope.go:117] "RemoveContainer" containerID="650fe66a65ddb4c23c77d6988365f5eef83d9fe9d973c0c9338d8bb0c26e1fdc" Jan 27 19:49:03 crc kubenswrapper[4853]: I0127 19:49:03.215508 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dxbn4" event={"ID":"ca8e5341-d155-47d3-97f9-2d4bf016a53f","Type":"ContainerDied","Data":"cea84b4946d5950bb260d3c22a71622c10000a6fe39795ba2a6195a254915d6a"} Jan 27 19:49:03 crc kubenswrapper[4853]: I0127 19:49:03.241023 4853 scope.go:117] "RemoveContainer" containerID="e1f5bd34fe60cbb5774982c2c5869a593a50930242482394d58bb0335fbcafbf" Jan 27 19:49:03 crc kubenswrapper[4853]: I0127 19:49:03.273370 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-dxbn4"] Jan 27 19:49:03 crc kubenswrapper[4853]: I0127 19:49:03.282220 4853 scope.go:117] "RemoveContainer" containerID="ebb2635810b4f2a1d146f3b6e77bfc891ec33e2b1ae99c995e5710483e06856a" Jan 27 19:49:03 crc kubenswrapper[4853]: I0127 19:49:03.287790 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-dxbn4"] Jan 27 19:49:03 crc kubenswrapper[4853]: I0127 19:49:03.318253 4853 scope.go:117] "RemoveContainer" containerID="650fe66a65ddb4c23c77d6988365f5eef83d9fe9d973c0c9338d8bb0c26e1fdc" Jan 27 19:49:03 crc kubenswrapper[4853]: E0127 19:49:03.318837 4853 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"650fe66a65ddb4c23c77d6988365f5eef83d9fe9d973c0c9338d8bb0c26e1fdc\": container with ID starting with 650fe66a65ddb4c23c77d6988365f5eef83d9fe9d973c0c9338d8bb0c26e1fdc not found: ID does not exist" containerID="650fe66a65ddb4c23c77d6988365f5eef83d9fe9d973c0c9338d8bb0c26e1fdc" Jan 27 19:49:03 crc kubenswrapper[4853]: I0127 19:49:03.318881 
4853 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"650fe66a65ddb4c23c77d6988365f5eef83d9fe9d973c0c9338d8bb0c26e1fdc"} err="failed to get container status \"650fe66a65ddb4c23c77d6988365f5eef83d9fe9d973c0c9338d8bb0c26e1fdc\": rpc error: code = NotFound desc = could not find container \"650fe66a65ddb4c23c77d6988365f5eef83d9fe9d973c0c9338d8bb0c26e1fdc\": container with ID starting with 650fe66a65ddb4c23c77d6988365f5eef83d9fe9d973c0c9338d8bb0c26e1fdc not found: ID does not exist" Jan 27 19:49:03 crc kubenswrapper[4853]: I0127 19:49:03.318906 4853 scope.go:117] "RemoveContainer" containerID="e1f5bd34fe60cbb5774982c2c5869a593a50930242482394d58bb0335fbcafbf" Jan 27 19:49:03 crc kubenswrapper[4853]: E0127 19:49:03.319198 4853 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e1f5bd34fe60cbb5774982c2c5869a593a50930242482394d58bb0335fbcafbf\": container with ID starting with e1f5bd34fe60cbb5774982c2c5869a593a50930242482394d58bb0335fbcafbf not found: ID does not exist" containerID="e1f5bd34fe60cbb5774982c2c5869a593a50930242482394d58bb0335fbcafbf" Jan 27 19:49:03 crc kubenswrapper[4853]: I0127 19:49:03.319230 4853 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e1f5bd34fe60cbb5774982c2c5869a593a50930242482394d58bb0335fbcafbf"} err="failed to get container status \"e1f5bd34fe60cbb5774982c2c5869a593a50930242482394d58bb0335fbcafbf\": rpc error: code = NotFound desc = could not find container \"e1f5bd34fe60cbb5774982c2c5869a593a50930242482394d58bb0335fbcafbf\": container with ID starting with e1f5bd34fe60cbb5774982c2c5869a593a50930242482394d58bb0335fbcafbf not found: ID does not exist" Jan 27 19:49:03 crc kubenswrapper[4853]: I0127 19:49:03.319255 4853 scope.go:117] "RemoveContainer" containerID="ebb2635810b4f2a1d146f3b6e77bfc891ec33e2b1ae99c995e5710483e06856a" Jan 27 19:49:03 crc kubenswrapper[4853]: E0127 19:49:03.319574 4853 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ebb2635810b4f2a1d146f3b6e77bfc891ec33e2b1ae99c995e5710483e06856a\": container with ID starting with ebb2635810b4f2a1d146f3b6e77bfc891ec33e2b1ae99c995e5710483e06856a not found: ID does not exist" containerID="ebb2635810b4f2a1d146f3b6e77bfc891ec33e2b1ae99c995e5710483e06856a" Jan 27 19:49:03 crc kubenswrapper[4853]: I0127 19:49:03.319610 4853 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ebb2635810b4f2a1d146f3b6e77bfc891ec33e2b1ae99c995e5710483e06856a"} err="failed to get container status \"ebb2635810b4f2a1d146f3b6e77bfc891ec33e2b1ae99c995e5710483e06856a\": rpc error: code = NotFound desc = could not find container \"ebb2635810b4f2a1d146f3b6e77bfc891ec33e2b1ae99c995e5710483e06856a\": container with ID starting with ebb2635810b4f2a1d146f3b6e77bfc891ec33e2b1ae99c995e5710483e06856a not found: ID does not exist" Jan 27 19:49:04 crc kubenswrapper[4853]: I0127 19:49:04.126466 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ca8e5341-d155-47d3-97f9-2d4bf016a53f" path="/var/lib/kubelet/pods/ca8e5341-d155-47d3-97f9-2d4bf016a53f/volumes" Jan 27 19:50:35 crc kubenswrapper[4853]: I0127 19:50:35.541103 4853 patch_prober.go:28] interesting pod/machine-config-daemon-6gqj2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 
127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 19:50:35 crc kubenswrapper[4853]: I0127 19:50:35.541770 4853 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6gqj2" podUID="b8a89b1e-bef8-4cb7-930c-480d3125778c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 19:50:38 crc kubenswrapper[4853]: I0127 19:50:38.532111 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-sxfjx/must-gather-td98d"] Jan 27 19:50:38 crc kubenswrapper[4853]: E0127 19:50:38.533366 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ca8e5341-d155-47d3-97f9-2d4bf016a53f" containerName="registry-server" Jan 27 19:50:38 crc kubenswrapper[4853]: I0127 19:50:38.533389 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="ca8e5341-d155-47d3-97f9-2d4bf016a53f" containerName="registry-server" Jan 27 19:50:38 crc kubenswrapper[4853]: E0127 19:50:38.533413 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ca8e5341-d155-47d3-97f9-2d4bf016a53f" containerName="extract-content" Jan 27 19:50:38 crc kubenswrapper[4853]: I0127 19:50:38.533422 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="ca8e5341-d155-47d3-97f9-2d4bf016a53f" containerName="extract-content" Jan 27 19:50:38 crc kubenswrapper[4853]: E0127 19:50:38.533453 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ca8e5341-d155-47d3-97f9-2d4bf016a53f" containerName="extract-utilities" Jan 27 19:50:38 crc kubenswrapper[4853]: I0127 19:50:38.533461 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="ca8e5341-d155-47d3-97f9-2d4bf016a53f" containerName="extract-utilities" Jan 27 19:50:38 crc kubenswrapper[4853]: I0127 19:50:38.533717 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="ca8e5341-d155-47d3-97f9-2d4bf016a53f" containerName="registry-server" Jan 27 19:50:38 crc kubenswrapper[4853]: I0127 19:50:38.535133 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-sxfjx/must-gather-td98d" Jan 27 19:50:38 crc kubenswrapper[4853]: I0127 19:50:38.538421 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-sxfjx"/"openshift-service-ca.crt" Jan 27 19:50:38 crc kubenswrapper[4853]: I0127 19:50:38.538626 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-sxfjx"/"kube-root-ca.crt" Jan 27 19:50:38 crc kubenswrapper[4853]: I0127 19:50:38.561705 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-sxfjx/must-gather-td98d"] Jan 27 19:50:38 crc kubenswrapper[4853]: I0127 19:50:38.714819 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/77545fdc-17ea-4903-90d1-43a6820c8521-must-gather-output\") pod \"must-gather-td98d\" (UID: \"77545fdc-17ea-4903-90d1-43a6820c8521\") " pod="openshift-must-gather-sxfjx/must-gather-td98d" Jan 27 19:50:38 crc kubenswrapper[4853]: I0127 19:50:38.714977 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s79xz\" (UniqueName: \"kubernetes.io/projected/77545fdc-17ea-4903-90d1-43a6820c8521-kube-api-access-s79xz\") pod \"must-gather-td98d\" (UID: \"77545fdc-17ea-4903-90d1-43a6820c8521\") " pod="openshift-must-gather-sxfjx/must-gather-td98d" Jan 27 19:50:38 crc kubenswrapper[4853]: I0127 19:50:38.816576 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/77545fdc-17ea-4903-90d1-43a6820c8521-must-gather-output\") pod \"must-gather-td98d\" (UID: \"77545fdc-17ea-4903-90d1-43a6820c8521\") " pod="openshift-must-gather-sxfjx/must-gather-td98d" Jan 27 19:50:38 crc kubenswrapper[4853]: I0127 19:50:38.816699 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s79xz\" (UniqueName: \"kubernetes.io/projected/77545fdc-17ea-4903-90d1-43a6820c8521-kube-api-access-s79xz\") pod \"must-gather-td98d\" (UID: \"77545fdc-17ea-4903-90d1-43a6820c8521\") " pod="openshift-must-gather-sxfjx/must-gather-td98d" Jan 27 19:50:38 crc kubenswrapper[4853]: I0127 19:50:38.817411 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/77545fdc-17ea-4903-90d1-43a6820c8521-must-gather-output\") pod \"must-gather-td98d\" (UID: \"77545fdc-17ea-4903-90d1-43a6820c8521\") " pod="openshift-must-gather-sxfjx/must-gather-td98d" Jan 27 19:50:38 crc kubenswrapper[4853]: I0127 19:50:38.835008 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s79xz\" (UniqueName: \"kubernetes.io/projected/77545fdc-17ea-4903-90d1-43a6820c8521-kube-api-access-s79xz\") pod \"must-gather-td98d\" (UID: \"77545fdc-17ea-4903-90d1-43a6820c8521\") " pod="openshift-must-gather-sxfjx/must-gather-td98d" Jan 27 19:50:38 crc kubenswrapper[4853]: I0127 19:50:38.877872 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-sxfjx/must-gather-td98d" Jan 27 19:50:39 crc kubenswrapper[4853]: I0127 19:50:39.345544 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-sxfjx/must-gather-td98d"] Jan 27 19:50:40 crc kubenswrapper[4853]: I0127 19:50:40.082024 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-sxfjx/must-gather-td98d" event={"ID":"77545fdc-17ea-4903-90d1-43a6820c8521","Type":"ContainerStarted","Data":"f179af1deffaf85085fe1d285f86cd59d68e8b32baa4e31528650248dcadf006"} Jan 27 19:50:40 crc kubenswrapper[4853]: I0127 19:50:40.082353 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-sxfjx/must-gather-td98d" event={"ID":"77545fdc-17ea-4903-90d1-43a6820c8521","Type":"ContainerStarted","Data":"3f5a4041af6238ff9d2193bfbc386898433451851efe614483be4e5854aaff48"} Jan 27 19:50:40 crc kubenswrapper[4853]: I0127 19:50:40.082368 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-sxfjx/must-gather-td98d" event={"ID":"77545fdc-17ea-4903-90d1-43a6820c8521","Type":"ContainerStarted","Data":"d4123c8defa9e277140088ffb2ed38511a5798a997dd331356be0cdb7d1119d4"} Jan 27 19:50:40 crc kubenswrapper[4853]: I0127 19:50:40.102336 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-sxfjx/must-gather-td98d" podStartSLOduration=2.102310509 podStartE2EDuration="2.102310509s" podCreationTimestamp="2026-01-27 19:50:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 19:50:40.096423121 +0000 UTC m=+4082.558966004" watchObservedRunningTime="2026-01-27 19:50:40.102310509 +0000 UTC m=+4082.564853392" Jan 27 19:50:43 crc kubenswrapper[4853]: I0127 19:50:43.359361 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-sxfjx/crc-debug-z765t"] Jan 27 19:50:43 crc kubenswrapper[4853]: I0127 19:50:43.360891 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-sxfjx/crc-debug-z765t" Jan 27 19:50:43 crc kubenswrapper[4853]: I0127 19:50:43.363558 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-sxfjx"/"default-dockercfg-kj2ph" Jan 27 19:50:43 crc kubenswrapper[4853]: I0127 19:50:43.517432 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/a8c23787-f21b-4b8f-9a8b-2a54ee69266f-host\") pod \"crc-debug-z765t\" (UID: \"a8c23787-f21b-4b8f-9a8b-2a54ee69266f\") " pod="openshift-must-gather-sxfjx/crc-debug-z765t" Jan 27 19:50:43 crc kubenswrapper[4853]: I0127 19:50:43.517782 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wdxbx\" (UniqueName: \"kubernetes.io/projected/a8c23787-f21b-4b8f-9a8b-2a54ee69266f-kube-api-access-wdxbx\") pod \"crc-debug-z765t\" (UID: \"a8c23787-f21b-4b8f-9a8b-2a54ee69266f\") " pod="openshift-must-gather-sxfjx/crc-debug-z765t" Jan 27 19:50:43 crc kubenswrapper[4853]: I0127 19:50:43.619395 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wdxbx\" (UniqueName: \"kubernetes.io/projected/a8c23787-f21b-4b8f-9a8b-2a54ee69266f-kube-api-access-wdxbx\") pod \"crc-debug-z765t\" (UID: \"a8c23787-f21b-4b8f-9a8b-2a54ee69266f\") " pod="openshift-must-gather-sxfjx/crc-debug-z765t" Jan 27 19:50:43 crc kubenswrapper[4853]: I0127 19:50:43.619631 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/a8c23787-f21b-4b8f-9a8b-2a54ee69266f-host\") pod \"crc-debug-z765t\" (UID: \"a8c23787-f21b-4b8f-9a8b-2a54ee69266f\") " pod="openshift-must-gather-sxfjx/crc-debug-z765t" Jan 27 19:50:43 crc kubenswrapper[4853]: I0127 19:50:43.619764 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/a8c23787-f21b-4b8f-9a8b-2a54ee69266f-host\") pod \"crc-debug-z765t\" (UID: \"a8c23787-f21b-4b8f-9a8b-2a54ee69266f\") " pod="openshift-must-gather-sxfjx/crc-debug-z765t" Jan 27 19:50:43 crc kubenswrapper[4853]: I0127 19:50:43.645955 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wdxbx\" (UniqueName: \"kubernetes.io/projected/a8c23787-f21b-4b8f-9a8b-2a54ee69266f-kube-api-access-wdxbx\") pod \"crc-debug-z765t\" (UID: \"a8c23787-f21b-4b8f-9a8b-2a54ee69266f\") " pod="openshift-must-gather-sxfjx/crc-debug-z765t" Jan 27 19:50:43 crc kubenswrapper[4853]: I0127 19:50:43.686453 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-sxfjx/crc-debug-z765t" Jan 27 19:50:43 crc kubenswrapper[4853]: W0127 19:50:43.726620 4853 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda8c23787_f21b_4b8f_9a8b_2a54ee69266f.slice/crio-376b88f201e4fed693fed304e528db6358f1ceb337bcb3a85c9d0c4aa1a1f4a4 WatchSource:0}: Error finding container 376b88f201e4fed693fed304e528db6358f1ceb337bcb3a85c9d0c4aa1a1f4a4: Status 404 returned error can't find the container with id 376b88f201e4fed693fed304e528db6358f1ceb337bcb3a85c9d0c4aa1a1f4a4 Jan 27 19:50:44 crc kubenswrapper[4853]: I0127 19:50:44.127144 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-sxfjx/crc-debug-z765t" event={"ID":"a8c23787-f21b-4b8f-9a8b-2a54ee69266f","Type":"ContainerStarted","Data":"d1a5407a70290142e65dbbdc89a3727138fde60b9a88e710e1445a2352081114"} Jan 27 19:50:44 crc kubenswrapper[4853]: I0127 19:50:44.127601 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-sxfjx/crc-debug-z765t" event={"ID":"a8c23787-f21b-4b8f-9a8b-2a54ee69266f","Type":"ContainerStarted","Data":"376b88f201e4fed693fed304e528db6358f1ceb337bcb3a85c9d0c4aa1a1f4a4"} Jan 27 19:50:44 crc kubenswrapper[4853]: I0127 19:50:44.155309 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-sxfjx/crc-debug-z765t" podStartSLOduration=1.155278842 podStartE2EDuration="1.155278842s" podCreationTimestamp="2026-01-27 19:50:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 19:50:44.144365361 +0000 UTC m=+4086.606908244" watchObservedRunningTime="2026-01-27 19:50:44.155278842 +0000 UTC m=+4086.617821715" Jan 27 19:51:05 crc kubenswrapper[4853]: I0127 19:51:05.541588 4853 patch_prober.go:28] interesting pod/machine-config-daemon-6gqj2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 19:51:05 crc kubenswrapper[4853]: I0127 19:51:05.542576 4853 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6gqj2" podUID="b8a89b1e-bef8-4cb7-930c-480d3125778c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 19:51:06 crc kubenswrapper[4853]: I0127 19:51:06.811068 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-xd595"] Jan 27 19:51:06 crc kubenswrapper[4853]: I0127 19:51:06.814420 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xd595" Jan 27 19:51:06 crc kubenswrapper[4853]: I0127 19:51:06.824039 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-xd595"] Jan 27 19:51:06 crc kubenswrapper[4853]: I0127 19:51:06.880943 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f2b306e7-ccce-41a5-a1ea-5ecda1cf817f-catalog-content\") pod \"redhat-marketplace-xd595\" (UID: \"f2b306e7-ccce-41a5-a1ea-5ecda1cf817f\") " pod="openshift-marketplace/redhat-marketplace-xd595" Jan 27 19:51:06 crc kubenswrapper[4853]: I0127 19:51:06.881036 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6jppb\" (UniqueName: \"kubernetes.io/projected/f2b306e7-ccce-41a5-a1ea-5ecda1cf817f-kube-api-access-6jppb\") pod \"redhat-marketplace-xd595\" (UID: \"f2b306e7-ccce-41a5-a1ea-5ecda1cf817f\") " pod="openshift-marketplace/redhat-marketplace-xd595" Jan 27 19:51:06 crc kubenswrapper[4853]: I0127 19:51:06.881107 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f2b306e7-ccce-41a5-a1ea-5ecda1cf817f-utilities\") pod \"redhat-marketplace-xd595\" (UID: \"f2b306e7-ccce-41a5-a1ea-5ecda1cf817f\") " pod="openshift-marketplace/redhat-marketplace-xd595" Jan 27 19:51:06 crc kubenswrapper[4853]: I0127 19:51:06.983829 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f2b306e7-ccce-41a5-a1ea-5ecda1cf817f-catalog-content\") pod \"redhat-marketplace-xd595\" (UID: \"f2b306e7-ccce-41a5-a1ea-5ecda1cf817f\") " pod="openshift-marketplace/redhat-marketplace-xd595" Jan 27 19:51:06 crc kubenswrapper[4853]: I0127 19:51:06.983917 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6jppb\" (UniqueName: \"kubernetes.io/projected/f2b306e7-ccce-41a5-a1ea-5ecda1cf817f-kube-api-access-6jppb\") pod \"redhat-marketplace-xd595\" (UID: \"f2b306e7-ccce-41a5-a1ea-5ecda1cf817f\") " pod="openshift-marketplace/redhat-marketplace-xd595" Jan 27 19:51:06 crc kubenswrapper[4853]: I0127 19:51:06.983975 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f2b306e7-ccce-41a5-a1ea-5ecda1cf817f-utilities\") pod \"redhat-marketplace-xd595\" (UID: \"f2b306e7-ccce-41a5-a1ea-5ecda1cf817f\") " pod="openshift-marketplace/redhat-marketplace-xd595" Jan 27 19:51:06 crc kubenswrapper[4853]: I0127 19:51:06.984728 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f2b306e7-ccce-41a5-a1ea-5ecda1cf817f-utilities\") pod \"redhat-marketplace-xd595\" (UID: \"f2b306e7-ccce-41a5-a1ea-5ecda1cf817f\") " pod="openshift-marketplace/redhat-marketplace-xd595" Jan 27 19:51:06 crc kubenswrapper[4853]: I0127 19:51:06.985077 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f2b306e7-ccce-41a5-a1ea-5ecda1cf817f-catalog-content\") pod \"redhat-marketplace-xd595\" (UID: \"f2b306e7-ccce-41a5-a1ea-5ecda1cf817f\") " pod="openshift-marketplace/redhat-marketplace-xd595" Jan 27 19:51:07 crc kubenswrapper[4853]: I0127 19:51:07.007839 4853 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-6jppb\" (UniqueName: \"kubernetes.io/projected/f2b306e7-ccce-41a5-a1ea-5ecda1cf817f-kube-api-access-6jppb\") pod \"redhat-marketplace-xd595\" (UID: \"f2b306e7-ccce-41a5-a1ea-5ecda1cf817f\") " pod="openshift-marketplace/redhat-marketplace-xd595" Jan 27 19:51:07 crc kubenswrapper[4853]: I0127 19:51:07.153178 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xd595" Jan 27 19:51:07 crc kubenswrapper[4853]: I0127 19:51:07.746187 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-xd595"] Jan 27 19:51:08 crc kubenswrapper[4853]: I0127 19:51:08.360200 4853 generic.go:334] "Generic (PLEG): container finished" podID="f2b306e7-ccce-41a5-a1ea-5ecda1cf817f" containerID="32f3647cc8722c9a1345b28a144c9abaf9d4722deea4805df2e9007e08b6d3de" exitCode=0 Jan 27 19:51:08 crc kubenswrapper[4853]: I0127 19:51:08.360258 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xd595" event={"ID":"f2b306e7-ccce-41a5-a1ea-5ecda1cf817f","Type":"ContainerDied","Data":"32f3647cc8722c9a1345b28a144c9abaf9d4722deea4805df2e9007e08b6d3de"} Jan 27 19:51:08 crc kubenswrapper[4853]: I0127 19:51:08.360295 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xd595" event={"ID":"f2b306e7-ccce-41a5-a1ea-5ecda1cf817f","Type":"ContainerStarted","Data":"c815b04c470dbdeb5c3bc0cefa0d945b0c7f37b9911dd4137846bc6338e76fe0"} Jan 27 19:51:10 crc kubenswrapper[4853]: I0127 19:51:10.381681 4853 generic.go:334] "Generic (PLEG): container finished" podID="f2b306e7-ccce-41a5-a1ea-5ecda1cf817f" containerID="2f573f979442b2b39bc65f80f06ca6886e539f019c102b6752de13e5d1df349b" exitCode=0 Jan 27 19:51:10 crc kubenswrapper[4853]: I0127 19:51:10.381742 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xd595" event={"ID":"f2b306e7-ccce-41a5-a1ea-5ecda1cf817f","Type":"ContainerDied","Data":"2f573f979442b2b39bc65f80f06ca6886e539f019c102b6752de13e5d1df349b"} Jan 27 19:51:11 crc kubenswrapper[4853]: I0127 19:51:11.396161 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xd595" event={"ID":"f2b306e7-ccce-41a5-a1ea-5ecda1cf817f","Type":"ContainerStarted","Data":"5dd28804045ecf25d043b9ca0f359fda3f99ad0538cf34a74180e1b854521091"} Jan 27 19:51:14 crc kubenswrapper[4853]: I0127 19:51:14.379433 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-xd595" podStartSLOduration=5.744252298 podStartE2EDuration="8.379409569s" podCreationTimestamp="2026-01-27 19:51:06 +0000 UTC" firstStartedPulling="2026-01-27 19:51:08.362664166 +0000 UTC m=+4110.825207049" lastFinishedPulling="2026-01-27 19:51:10.997821437 +0000 UTC m=+4113.460364320" observedRunningTime="2026-01-27 19:51:11.42164137 +0000 UTC m=+4113.884184263" watchObservedRunningTime="2026-01-27 19:51:14.379409569 +0000 UTC m=+4116.841952452" Jan 27 19:51:14 crc kubenswrapper[4853]: I0127 19:51:14.386289 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-vnr5k"] Jan 27 19:51:14 crc kubenswrapper[4853]: I0127 19:51:14.389281 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-vnr5k" Jan 27 19:51:14 crc kubenswrapper[4853]: I0127 19:51:14.410550 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-vnr5k"] Jan 27 19:51:14 crc kubenswrapper[4853]: I0127 19:51:14.469357 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3f684593-40f5-4a61-8cac-a59c4b256f52-utilities\") pod \"community-operators-vnr5k\" (UID: \"3f684593-40f5-4a61-8cac-a59c4b256f52\") " pod="openshift-marketplace/community-operators-vnr5k" Jan 27 19:51:14 crc kubenswrapper[4853]: I0127 19:51:14.469456 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5ktrj\" (UniqueName: \"kubernetes.io/projected/3f684593-40f5-4a61-8cac-a59c4b256f52-kube-api-access-5ktrj\") pod \"community-operators-vnr5k\" (UID: \"3f684593-40f5-4a61-8cac-a59c4b256f52\") " pod="openshift-marketplace/community-operators-vnr5k" Jan 27 19:51:14 crc kubenswrapper[4853]: I0127 19:51:14.469506 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3f684593-40f5-4a61-8cac-a59c4b256f52-catalog-content\") pod \"community-operators-vnr5k\" (UID: \"3f684593-40f5-4a61-8cac-a59c4b256f52\") " pod="openshift-marketplace/community-operators-vnr5k" Jan 27 19:51:14 crc kubenswrapper[4853]: I0127 19:51:14.571520 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5ktrj\" (UniqueName: \"kubernetes.io/projected/3f684593-40f5-4a61-8cac-a59c4b256f52-kube-api-access-5ktrj\") pod \"community-operators-vnr5k\" (UID: \"3f684593-40f5-4a61-8cac-a59c4b256f52\") " pod="openshift-marketplace/community-operators-vnr5k" Jan 27 19:51:14 crc kubenswrapper[4853]: I0127 19:51:14.571646 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3f684593-40f5-4a61-8cac-a59c4b256f52-catalog-content\") pod \"community-operators-vnr5k\" (UID: \"3f684593-40f5-4a61-8cac-a59c4b256f52\") " pod="openshift-marketplace/community-operators-vnr5k" Jan 27 19:51:14 crc kubenswrapper[4853]: I0127 19:51:14.571806 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3f684593-40f5-4a61-8cac-a59c4b256f52-utilities\") pod \"community-operators-vnr5k\" (UID: \"3f684593-40f5-4a61-8cac-a59c4b256f52\") " pod="openshift-marketplace/community-operators-vnr5k" Jan 27 19:51:14 crc kubenswrapper[4853]: I0127 19:51:14.572399 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3f684593-40f5-4a61-8cac-a59c4b256f52-catalog-content\") pod \"community-operators-vnr5k\" (UID: \"3f684593-40f5-4a61-8cac-a59c4b256f52\") " pod="openshift-marketplace/community-operators-vnr5k" Jan 27 19:51:14 crc kubenswrapper[4853]: I0127 19:51:14.572476 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3f684593-40f5-4a61-8cac-a59c4b256f52-utilities\") pod \"community-operators-vnr5k\" (UID: \"3f684593-40f5-4a61-8cac-a59c4b256f52\") " pod="openshift-marketplace/community-operators-vnr5k" Jan 27 19:51:14 crc kubenswrapper[4853]: I0127 19:51:14.624287 4853 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-5ktrj\" (UniqueName: \"kubernetes.io/projected/3f684593-40f5-4a61-8cac-a59c4b256f52-kube-api-access-5ktrj\") pod \"community-operators-vnr5k\" (UID: \"3f684593-40f5-4a61-8cac-a59c4b256f52\") " pod="openshift-marketplace/community-operators-vnr5k" Jan 27 19:51:14 crc kubenswrapper[4853]: I0127 19:51:14.712453 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-vnr5k" Jan 27 19:51:15 crc kubenswrapper[4853]: I0127 19:51:15.409077 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-vnr5k"] Jan 27 19:51:15 crc kubenswrapper[4853]: I0127 19:51:15.436669 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vnr5k" event={"ID":"3f684593-40f5-4a61-8cac-a59c4b256f52","Type":"ContainerStarted","Data":"5d4a6374d9616f9cf5ead77445912aeb398e9269f0b1ad51422f8163fb1ba30c"} Jan 27 19:51:16 crc kubenswrapper[4853]: I0127 19:51:16.454716 4853 generic.go:334] "Generic (PLEG): container finished" podID="3f684593-40f5-4a61-8cac-a59c4b256f52" containerID="9440a9c637d5dd3f7650900fd4b712fbf033d726c854a99c019780804269fb76" exitCode=0 Jan 27 19:51:16 crc kubenswrapper[4853]: I0127 19:51:16.454881 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vnr5k" event={"ID":"3f684593-40f5-4a61-8cac-a59c4b256f52","Type":"ContainerDied","Data":"9440a9c637d5dd3f7650900fd4b712fbf033d726c854a99c019780804269fb76"} Jan 27 19:51:16 crc kubenswrapper[4853]: I0127 19:51:16.461355 4853 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 27 19:51:17 crc kubenswrapper[4853]: I0127 19:51:17.153820 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-xd595" Jan 27 19:51:17 crc kubenswrapper[4853]: I0127 19:51:17.154251 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-xd595" Jan 27 19:51:17 crc kubenswrapper[4853]: I0127 19:51:17.210019 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-xd595" Jan 27 19:51:17 crc kubenswrapper[4853]: I0127 19:51:17.523710 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-xd595" Jan 27 19:51:18 crc kubenswrapper[4853]: I0127 19:51:18.500725 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vnr5k" event={"ID":"3f684593-40f5-4a61-8cac-a59c4b256f52","Type":"ContainerStarted","Data":"bd0d0e18493a004497423e58f95d36fd1379fb56d89eb2e158ebde28db549cbf"} Jan 27 19:51:19 crc kubenswrapper[4853]: I0127 19:51:19.513202 4853 generic.go:334] "Generic (PLEG): container finished" podID="3f684593-40f5-4a61-8cac-a59c4b256f52" containerID="bd0d0e18493a004497423e58f95d36fd1379fb56d89eb2e158ebde28db549cbf" exitCode=0 Jan 27 19:51:19 crc kubenswrapper[4853]: I0127 19:51:19.513258 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vnr5k" event={"ID":"3f684593-40f5-4a61-8cac-a59c4b256f52","Type":"ContainerDied","Data":"bd0d0e18493a004497423e58f95d36fd1379fb56d89eb2e158ebde28db549cbf"} Jan 27 19:51:19 crc kubenswrapper[4853]: I0127 19:51:19.581471 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/redhat-marketplace-xd595"] Jan 27 19:51:19 crc kubenswrapper[4853]: I0127 19:51:19.581769 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-xd595" podUID="f2b306e7-ccce-41a5-a1ea-5ecda1cf817f" containerName="registry-server" containerID="cri-o://5dd28804045ecf25d043b9ca0f359fda3f99ad0538cf34a74180e1b854521091" gracePeriod=2 Jan 27 19:51:20 crc kubenswrapper[4853]: I0127 19:51:20.159349 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xd595" Jan 27 19:51:20 crc kubenswrapper[4853]: I0127 19:51:20.213911 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6jppb\" (UniqueName: \"kubernetes.io/projected/f2b306e7-ccce-41a5-a1ea-5ecda1cf817f-kube-api-access-6jppb\") pod \"f2b306e7-ccce-41a5-a1ea-5ecda1cf817f\" (UID: \"f2b306e7-ccce-41a5-a1ea-5ecda1cf817f\") " Jan 27 19:51:20 crc kubenswrapper[4853]: I0127 19:51:20.214285 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f2b306e7-ccce-41a5-a1ea-5ecda1cf817f-catalog-content\") pod \"f2b306e7-ccce-41a5-a1ea-5ecda1cf817f\" (UID: \"f2b306e7-ccce-41a5-a1ea-5ecda1cf817f\") " Jan 27 19:51:20 crc kubenswrapper[4853]: I0127 19:51:20.214318 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f2b306e7-ccce-41a5-a1ea-5ecda1cf817f-utilities\") pod \"f2b306e7-ccce-41a5-a1ea-5ecda1cf817f\" (UID: \"f2b306e7-ccce-41a5-a1ea-5ecda1cf817f\") " Jan 27 19:51:20 crc kubenswrapper[4853]: I0127 19:51:20.215533 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f2b306e7-ccce-41a5-a1ea-5ecda1cf817f-utilities" (OuterVolumeSpecName: "utilities") pod "f2b306e7-ccce-41a5-a1ea-5ecda1cf817f" (UID: "f2b306e7-ccce-41a5-a1ea-5ecda1cf817f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 19:51:20 crc kubenswrapper[4853]: I0127 19:51:20.229289 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f2b306e7-ccce-41a5-a1ea-5ecda1cf817f-kube-api-access-6jppb" (OuterVolumeSpecName: "kube-api-access-6jppb") pod "f2b306e7-ccce-41a5-a1ea-5ecda1cf817f" (UID: "f2b306e7-ccce-41a5-a1ea-5ecda1cf817f"). InnerVolumeSpecName "kube-api-access-6jppb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 19:51:20 crc kubenswrapper[4853]: I0127 19:51:20.289978 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f2b306e7-ccce-41a5-a1ea-5ecda1cf817f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f2b306e7-ccce-41a5-a1ea-5ecda1cf817f" (UID: "f2b306e7-ccce-41a5-a1ea-5ecda1cf817f"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 19:51:20 crc kubenswrapper[4853]: I0127 19:51:20.316946 4853 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f2b306e7-ccce-41a5-a1ea-5ecda1cf817f-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 19:51:20 crc kubenswrapper[4853]: I0127 19:51:20.316992 4853 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f2b306e7-ccce-41a5-a1ea-5ecda1cf817f-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 19:51:20 crc kubenswrapper[4853]: I0127 19:51:20.317005 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6jppb\" (UniqueName: \"kubernetes.io/projected/f2b306e7-ccce-41a5-a1ea-5ecda1cf817f-kube-api-access-6jppb\") on node \"crc\" DevicePath \"\"" Jan 27 19:51:20 crc kubenswrapper[4853]: I0127 19:51:20.531758 4853 generic.go:334] "Generic (PLEG): container finished" podID="f2b306e7-ccce-41a5-a1ea-5ecda1cf817f" containerID="5dd28804045ecf25d043b9ca0f359fda3f99ad0538cf34a74180e1b854521091" exitCode=0 Jan 27 19:51:20 crc kubenswrapper[4853]: I0127 19:51:20.531984 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xd595" event={"ID":"f2b306e7-ccce-41a5-a1ea-5ecda1cf817f","Type":"ContainerDied","Data":"5dd28804045ecf25d043b9ca0f359fda3f99ad0538cf34a74180e1b854521091"} Jan 27 19:51:20 crc kubenswrapper[4853]: I0127 19:51:20.533031 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xd595" event={"ID":"f2b306e7-ccce-41a5-a1ea-5ecda1cf817f","Type":"ContainerDied","Data":"c815b04c470dbdeb5c3bc0cefa0d945b0c7f37b9911dd4137846bc6338e76fe0"} Jan 27 19:51:20 crc kubenswrapper[4853]: I0127 19:51:20.532222 4853 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xd595" Jan 27 19:51:20 crc kubenswrapper[4853]: I0127 19:51:20.533104 4853 scope.go:117] "RemoveContainer" containerID="5dd28804045ecf25d043b9ca0f359fda3f99ad0538cf34a74180e1b854521091" Jan 27 19:51:20 crc kubenswrapper[4853]: I0127 19:51:20.580309 4853 scope.go:117] "RemoveContainer" containerID="2f573f979442b2b39bc65f80f06ca6886e539f019c102b6752de13e5d1df349b" Jan 27 19:51:20 crc kubenswrapper[4853]: I0127 19:51:20.592949 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-xd595"] Jan 27 19:51:20 crc kubenswrapper[4853]: I0127 19:51:20.613915 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-xd595"] Jan 27 19:51:20 crc kubenswrapper[4853]: I0127 19:51:20.624927 4853 scope.go:117] "RemoveContainer" containerID="32f3647cc8722c9a1345b28a144c9abaf9d4722deea4805df2e9007e08b6d3de" Jan 27 19:51:20 crc kubenswrapper[4853]: I0127 19:51:20.665095 4853 scope.go:117] "RemoveContainer" containerID="5dd28804045ecf25d043b9ca0f359fda3f99ad0538cf34a74180e1b854521091" Jan 27 19:51:20 crc kubenswrapper[4853]: E0127 19:51:20.667180 4853 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5dd28804045ecf25d043b9ca0f359fda3f99ad0538cf34a74180e1b854521091\": container with ID starting with 5dd28804045ecf25d043b9ca0f359fda3f99ad0538cf34a74180e1b854521091 not found: ID does not exist" containerID="5dd28804045ecf25d043b9ca0f359fda3f99ad0538cf34a74180e1b854521091" Jan 27 19:51:20 crc kubenswrapper[4853]: I0127 19:51:20.667461 4853 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5dd28804045ecf25d043b9ca0f359fda3f99ad0538cf34a74180e1b854521091"} err="failed to get container status \"5dd28804045ecf25d043b9ca0f359fda3f99ad0538cf34a74180e1b854521091\": rpc error: code = NotFound desc = could not find container \"5dd28804045ecf25d043b9ca0f359fda3f99ad0538cf34a74180e1b854521091\": container with ID starting with 5dd28804045ecf25d043b9ca0f359fda3f99ad0538cf34a74180e1b854521091 not found: ID does not exist" Jan 27 19:51:20 crc kubenswrapper[4853]: I0127 19:51:20.667654 4853 scope.go:117] "RemoveContainer" containerID="2f573f979442b2b39bc65f80f06ca6886e539f019c102b6752de13e5d1df349b" Jan 27 19:51:20 crc kubenswrapper[4853]: E0127 19:51:20.668342 4853 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2f573f979442b2b39bc65f80f06ca6886e539f019c102b6752de13e5d1df349b\": container with ID starting with 2f573f979442b2b39bc65f80f06ca6886e539f019c102b6752de13e5d1df349b not found: ID does not exist" containerID="2f573f979442b2b39bc65f80f06ca6886e539f019c102b6752de13e5d1df349b" Jan 27 19:51:20 crc kubenswrapper[4853]: I0127 19:51:20.668536 4853 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2f573f979442b2b39bc65f80f06ca6886e539f019c102b6752de13e5d1df349b"} err="failed to get container status \"2f573f979442b2b39bc65f80f06ca6886e539f019c102b6752de13e5d1df349b\": rpc error: code = NotFound desc = could not find container \"2f573f979442b2b39bc65f80f06ca6886e539f019c102b6752de13e5d1df349b\": container with ID starting with 2f573f979442b2b39bc65f80f06ca6886e539f019c102b6752de13e5d1df349b not found: ID does not exist" Jan 27 19:51:20 crc kubenswrapper[4853]: I0127 19:51:20.668688 4853 scope.go:117] "RemoveContainer" 
containerID="32f3647cc8722c9a1345b28a144c9abaf9d4722deea4805df2e9007e08b6d3de" Jan 27 19:51:20 crc kubenswrapper[4853]: E0127 19:51:20.669494 4853 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"32f3647cc8722c9a1345b28a144c9abaf9d4722deea4805df2e9007e08b6d3de\": container with ID starting with 32f3647cc8722c9a1345b28a144c9abaf9d4722deea4805df2e9007e08b6d3de not found: ID does not exist" containerID="32f3647cc8722c9a1345b28a144c9abaf9d4722deea4805df2e9007e08b6d3de" Jan 27 19:51:20 crc kubenswrapper[4853]: I0127 19:51:20.669568 4853 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"32f3647cc8722c9a1345b28a144c9abaf9d4722deea4805df2e9007e08b6d3de"} err="failed to get container status \"32f3647cc8722c9a1345b28a144c9abaf9d4722deea4805df2e9007e08b6d3de\": rpc error: code = NotFound desc = could not find container \"32f3647cc8722c9a1345b28a144c9abaf9d4722deea4805df2e9007e08b6d3de\": container with ID starting with 32f3647cc8722c9a1345b28a144c9abaf9d4722deea4805df2e9007e08b6d3de not found: ID does not exist" Jan 27 19:51:21 crc kubenswrapper[4853]: I0127 19:51:21.575608 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vnr5k" event={"ID":"3f684593-40f5-4a61-8cac-a59c4b256f52","Type":"ContainerStarted","Data":"abd7cb365a386cfdfe41022c70dd95a7283b05642c4cfc62f739c2427691eb29"} Jan 27 19:51:21 crc kubenswrapper[4853]: I0127 19:51:21.620400 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-vnr5k" podStartSLOduration=3.433986768 podStartE2EDuration="7.620373585s" podCreationTimestamp="2026-01-27 19:51:14 +0000 UTC" firstStartedPulling="2026-01-27 19:51:16.461048048 +0000 UTC m=+4118.923590931" lastFinishedPulling="2026-01-27 19:51:20.647434865 +0000 UTC m=+4123.109977748" observedRunningTime="2026-01-27 19:51:21.612327686 +0000 UTC m=+4124.074870569" watchObservedRunningTime="2026-01-27 19:51:21.620373585 +0000 UTC m=+4124.082916468" Jan 27 19:51:22 crc kubenswrapper[4853]: I0127 19:51:22.132473 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f2b306e7-ccce-41a5-a1ea-5ecda1cf817f" path="/var/lib/kubelet/pods/f2b306e7-ccce-41a5-a1ea-5ecda1cf817f/volumes" Jan 27 19:51:24 crc kubenswrapper[4853]: I0127 19:51:24.610864 4853 generic.go:334] "Generic (PLEG): container finished" podID="a8c23787-f21b-4b8f-9a8b-2a54ee69266f" containerID="d1a5407a70290142e65dbbdc89a3727138fde60b9a88e710e1445a2352081114" exitCode=0 Jan 27 19:51:24 crc kubenswrapper[4853]: I0127 19:51:24.610972 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-sxfjx/crc-debug-z765t" event={"ID":"a8c23787-f21b-4b8f-9a8b-2a54ee69266f","Type":"ContainerDied","Data":"d1a5407a70290142e65dbbdc89a3727138fde60b9a88e710e1445a2352081114"} Jan 27 19:51:24 crc kubenswrapper[4853]: I0127 19:51:24.713491 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-vnr5k" Jan 27 19:51:24 crc kubenswrapper[4853]: I0127 19:51:24.713654 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-vnr5k" Jan 27 19:51:24 crc kubenswrapper[4853]: I0127 19:51:24.763462 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-vnr5k" Jan 27 19:51:25 crc kubenswrapper[4853]: I0127 19:51:25.686847 
4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-vnr5k" Jan 27 19:51:25 crc kubenswrapper[4853]: I0127 19:51:25.754523 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-sxfjx/crc-debug-z765t" Jan 27 19:51:25 crc kubenswrapper[4853]: I0127 19:51:25.802209 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-sxfjx/crc-debug-z765t"] Jan 27 19:51:25 crc kubenswrapper[4853]: I0127 19:51:25.821899 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-sxfjx/crc-debug-z765t"] Jan 27 19:51:25 crc kubenswrapper[4853]: I0127 19:51:25.944781 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/a8c23787-f21b-4b8f-9a8b-2a54ee69266f-host\") pod \"a8c23787-f21b-4b8f-9a8b-2a54ee69266f\" (UID: \"a8c23787-f21b-4b8f-9a8b-2a54ee69266f\") " Jan 27 19:51:25 crc kubenswrapper[4853]: I0127 19:51:25.944862 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wdxbx\" (UniqueName: \"kubernetes.io/projected/a8c23787-f21b-4b8f-9a8b-2a54ee69266f-kube-api-access-wdxbx\") pod \"a8c23787-f21b-4b8f-9a8b-2a54ee69266f\" (UID: \"a8c23787-f21b-4b8f-9a8b-2a54ee69266f\") " Jan 27 19:51:25 crc kubenswrapper[4853]: I0127 19:51:25.944927 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a8c23787-f21b-4b8f-9a8b-2a54ee69266f-host" (OuterVolumeSpecName: "host") pod "a8c23787-f21b-4b8f-9a8b-2a54ee69266f" (UID: "a8c23787-f21b-4b8f-9a8b-2a54ee69266f"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 19:51:25 crc kubenswrapper[4853]: I0127 19:51:25.945463 4853 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/a8c23787-f21b-4b8f-9a8b-2a54ee69266f-host\") on node \"crc\" DevicePath \"\"" Jan 27 19:51:25 crc kubenswrapper[4853]: I0127 19:51:25.954064 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a8c23787-f21b-4b8f-9a8b-2a54ee69266f-kube-api-access-wdxbx" (OuterVolumeSpecName: "kube-api-access-wdxbx") pod "a8c23787-f21b-4b8f-9a8b-2a54ee69266f" (UID: "a8c23787-f21b-4b8f-9a8b-2a54ee69266f"). InnerVolumeSpecName "kube-api-access-wdxbx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 19:51:25 crc kubenswrapper[4853]: I0127 19:51:25.975309 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-vnr5k"] Jan 27 19:51:26 crc kubenswrapper[4853]: I0127 19:51:26.048444 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wdxbx\" (UniqueName: \"kubernetes.io/projected/a8c23787-f21b-4b8f-9a8b-2a54ee69266f-kube-api-access-wdxbx\") on node \"crc\" DevicePath \"\"" Jan 27 19:51:26 crc kubenswrapper[4853]: I0127 19:51:26.141839 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a8c23787-f21b-4b8f-9a8b-2a54ee69266f" path="/var/lib/kubelet/pods/a8c23787-f21b-4b8f-9a8b-2a54ee69266f/volumes" Jan 27 19:51:26 crc kubenswrapper[4853]: I0127 19:51:26.638438 4853 scope.go:117] "RemoveContainer" containerID="d1a5407a70290142e65dbbdc89a3727138fde60b9a88e710e1445a2352081114" Jan 27 19:51:26 crc kubenswrapper[4853]: I0127 19:51:26.638926 4853 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-sxfjx/crc-debug-z765t" Jan 27 19:51:27 crc kubenswrapper[4853]: I0127 19:51:27.073696 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-sxfjx/crc-debug-jhtfs"] Jan 27 19:51:27 crc kubenswrapper[4853]: E0127 19:51:27.074303 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f2b306e7-ccce-41a5-a1ea-5ecda1cf817f" containerName="extract-content" Jan 27 19:51:27 crc kubenswrapper[4853]: I0127 19:51:27.074318 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="f2b306e7-ccce-41a5-a1ea-5ecda1cf817f" containerName="extract-content" Jan 27 19:51:27 crc kubenswrapper[4853]: E0127 19:51:27.074347 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f2b306e7-ccce-41a5-a1ea-5ecda1cf817f" containerName="extract-utilities" Jan 27 19:51:27 crc kubenswrapper[4853]: I0127 19:51:27.074353 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="f2b306e7-ccce-41a5-a1ea-5ecda1cf817f" containerName="extract-utilities" Jan 27 19:51:27 crc kubenswrapper[4853]: E0127 19:51:27.074369 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a8c23787-f21b-4b8f-9a8b-2a54ee69266f" containerName="container-00" Jan 27 19:51:27 crc kubenswrapper[4853]: I0127 19:51:27.074377 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="a8c23787-f21b-4b8f-9a8b-2a54ee69266f" containerName="container-00" Jan 27 19:51:27 crc kubenswrapper[4853]: E0127 19:51:27.074393 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f2b306e7-ccce-41a5-a1ea-5ecda1cf817f" containerName="registry-server" Jan 27 19:51:27 crc kubenswrapper[4853]: I0127 19:51:27.074399 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="f2b306e7-ccce-41a5-a1ea-5ecda1cf817f" containerName="registry-server" Jan 27 19:51:27 crc kubenswrapper[4853]: I0127 19:51:27.074587 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="f2b306e7-ccce-41a5-a1ea-5ecda1cf817f" containerName="registry-server" Jan 27 19:51:27 crc kubenswrapper[4853]: I0127 19:51:27.074603 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="a8c23787-f21b-4b8f-9a8b-2a54ee69266f" containerName="container-00" Jan 27 19:51:27 crc kubenswrapper[4853]: I0127 19:51:27.075445 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-sxfjx/crc-debug-jhtfs" Jan 27 19:51:27 crc kubenswrapper[4853]: I0127 19:51:27.077980 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-sxfjx"/"default-dockercfg-kj2ph" Jan 27 19:51:27 crc kubenswrapper[4853]: I0127 19:51:27.171818 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/d6f9b0c3-dbcc-4ff9-9f15-d7a5bd143ef6-host\") pod \"crc-debug-jhtfs\" (UID: \"d6f9b0c3-dbcc-4ff9-9f15-d7a5bd143ef6\") " pod="openshift-must-gather-sxfjx/crc-debug-jhtfs" Jan 27 19:51:27 crc kubenswrapper[4853]: I0127 19:51:27.171887 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4bf6b\" (UniqueName: \"kubernetes.io/projected/d6f9b0c3-dbcc-4ff9-9f15-d7a5bd143ef6-kube-api-access-4bf6b\") pod \"crc-debug-jhtfs\" (UID: \"d6f9b0c3-dbcc-4ff9-9f15-d7a5bd143ef6\") " pod="openshift-must-gather-sxfjx/crc-debug-jhtfs" Jan 27 19:51:27 crc kubenswrapper[4853]: I0127 19:51:27.274311 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/d6f9b0c3-dbcc-4ff9-9f15-d7a5bd143ef6-host\") pod \"crc-debug-jhtfs\" (UID: \"d6f9b0c3-dbcc-4ff9-9f15-d7a5bd143ef6\") " pod="openshift-must-gather-sxfjx/crc-debug-jhtfs" Jan 27 19:51:27 crc kubenswrapper[4853]: I0127 19:51:27.274706 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4bf6b\" (UniqueName: \"kubernetes.io/projected/d6f9b0c3-dbcc-4ff9-9f15-d7a5bd143ef6-kube-api-access-4bf6b\") pod \"crc-debug-jhtfs\" (UID: \"d6f9b0c3-dbcc-4ff9-9f15-d7a5bd143ef6\") " pod="openshift-must-gather-sxfjx/crc-debug-jhtfs" Jan 27 19:51:27 crc kubenswrapper[4853]: I0127 19:51:27.274543 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/d6f9b0c3-dbcc-4ff9-9f15-d7a5bd143ef6-host\") pod \"crc-debug-jhtfs\" (UID: \"d6f9b0c3-dbcc-4ff9-9f15-d7a5bd143ef6\") " pod="openshift-must-gather-sxfjx/crc-debug-jhtfs" Jan 27 19:51:27 crc kubenswrapper[4853]: I0127 19:51:27.295904 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4bf6b\" (UniqueName: \"kubernetes.io/projected/d6f9b0c3-dbcc-4ff9-9f15-d7a5bd143ef6-kube-api-access-4bf6b\") pod \"crc-debug-jhtfs\" (UID: \"d6f9b0c3-dbcc-4ff9-9f15-d7a5bd143ef6\") " pod="openshift-must-gather-sxfjx/crc-debug-jhtfs" Jan 27 19:51:27 crc kubenswrapper[4853]: I0127 19:51:27.398028 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-sxfjx/crc-debug-jhtfs" Jan 27 19:51:27 crc kubenswrapper[4853]: I0127 19:51:27.655295 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-sxfjx/crc-debug-jhtfs" event={"ID":"d6f9b0c3-dbcc-4ff9-9f15-d7a5bd143ef6","Type":"ContainerStarted","Data":"42db8c6931a184a5615a7c945afc7514ea1e08be0e8d8a930a45522f8cd4ec59"} Jan 27 19:51:27 crc kubenswrapper[4853]: I0127 19:51:27.655498 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-vnr5k" podUID="3f684593-40f5-4a61-8cac-a59c4b256f52" containerName="registry-server" containerID="cri-o://abd7cb365a386cfdfe41022c70dd95a7283b05642c4cfc62f739c2427691eb29" gracePeriod=2 Jan 27 19:51:28 crc kubenswrapper[4853]: I0127 19:51:28.203696 4853 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-vnr5k" Jan 27 19:51:28 crc kubenswrapper[4853]: I0127 19:51:28.300415 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3f684593-40f5-4a61-8cac-a59c4b256f52-utilities\") pod \"3f684593-40f5-4a61-8cac-a59c4b256f52\" (UID: \"3f684593-40f5-4a61-8cac-a59c4b256f52\") " Jan 27 19:51:28 crc kubenswrapper[4853]: I0127 19:51:28.300525 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5ktrj\" (UniqueName: \"kubernetes.io/projected/3f684593-40f5-4a61-8cac-a59c4b256f52-kube-api-access-5ktrj\") pod \"3f684593-40f5-4a61-8cac-a59c4b256f52\" (UID: \"3f684593-40f5-4a61-8cac-a59c4b256f52\") " Jan 27 19:51:28 crc kubenswrapper[4853]: I0127 19:51:28.300745 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3f684593-40f5-4a61-8cac-a59c4b256f52-catalog-content\") pod \"3f684593-40f5-4a61-8cac-a59c4b256f52\" (UID: \"3f684593-40f5-4a61-8cac-a59c4b256f52\") " Jan 27 19:51:28 crc kubenswrapper[4853]: I0127 19:51:28.303077 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3f684593-40f5-4a61-8cac-a59c4b256f52-utilities" (OuterVolumeSpecName: "utilities") pod "3f684593-40f5-4a61-8cac-a59c4b256f52" (UID: "3f684593-40f5-4a61-8cac-a59c4b256f52"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 19:51:28 crc kubenswrapper[4853]: I0127 19:51:28.310487 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3f684593-40f5-4a61-8cac-a59c4b256f52-kube-api-access-5ktrj" (OuterVolumeSpecName: "kube-api-access-5ktrj") pod "3f684593-40f5-4a61-8cac-a59c4b256f52" (UID: "3f684593-40f5-4a61-8cac-a59c4b256f52"). InnerVolumeSpecName "kube-api-access-5ktrj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 19:51:28 crc kubenswrapper[4853]: I0127 19:51:28.375044 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3f684593-40f5-4a61-8cac-a59c4b256f52-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3f684593-40f5-4a61-8cac-a59c4b256f52" (UID: "3f684593-40f5-4a61-8cac-a59c4b256f52"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 19:51:28 crc kubenswrapper[4853]: I0127 19:51:28.403077 4853 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3f684593-40f5-4a61-8cac-a59c4b256f52-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 19:51:28 crc kubenswrapper[4853]: I0127 19:51:28.403138 4853 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3f684593-40f5-4a61-8cac-a59c4b256f52-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 19:51:28 crc kubenswrapper[4853]: I0127 19:51:28.403155 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5ktrj\" (UniqueName: \"kubernetes.io/projected/3f684593-40f5-4a61-8cac-a59c4b256f52-kube-api-access-5ktrj\") on node \"crc\" DevicePath \"\"" Jan 27 19:51:28 crc kubenswrapper[4853]: I0127 19:51:28.666289 4853 generic.go:334] "Generic (PLEG): container finished" podID="d6f9b0c3-dbcc-4ff9-9f15-d7a5bd143ef6" containerID="7ed2f68a2a1ea0fa96a2af49207ea238b5207308d1290f2ca6ca5f7c1fe02b47" exitCode=0 Jan 27 19:51:28 crc kubenswrapper[4853]: I0127 19:51:28.666347 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-sxfjx/crc-debug-jhtfs" event={"ID":"d6f9b0c3-dbcc-4ff9-9f15-d7a5bd143ef6","Type":"ContainerDied","Data":"7ed2f68a2a1ea0fa96a2af49207ea238b5207308d1290f2ca6ca5f7c1fe02b47"} Jan 27 19:51:28 crc kubenswrapper[4853]: I0127 19:51:28.674461 4853 generic.go:334] "Generic (PLEG): container finished" podID="3f684593-40f5-4a61-8cac-a59c4b256f52" containerID="abd7cb365a386cfdfe41022c70dd95a7283b05642c4cfc62f739c2427691eb29" exitCode=0 Jan 27 19:51:28 crc kubenswrapper[4853]: I0127 19:51:28.674519 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vnr5k" event={"ID":"3f684593-40f5-4a61-8cac-a59c4b256f52","Type":"ContainerDied","Data":"abd7cb365a386cfdfe41022c70dd95a7283b05642c4cfc62f739c2427691eb29"} Jan 27 19:51:28 crc kubenswrapper[4853]: I0127 19:51:28.674547 4853 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-vnr5k" Jan 27 19:51:28 crc kubenswrapper[4853]: I0127 19:51:28.674575 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vnr5k" event={"ID":"3f684593-40f5-4a61-8cac-a59c4b256f52","Type":"ContainerDied","Data":"5d4a6374d9616f9cf5ead77445912aeb398e9269f0b1ad51422f8163fb1ba30c"} Jan 27 19:51:28 crc kubenswrapper[4853]: I0127 19:51:28.674602 4853 scope.go:117] "RemoveContainer" containerID="abd7cb365a386cfdfe41022c70dd95a7283b05642c4cfc62f739c2427691eb29" Jan 27 19:51:28 crc kubenswrapper[4853]: I0127 19:51:28.702887 4853 scope.go:117] "RemoveContainer" containerID="bd0d0e18493a004497423e58f95d36fd1379fb56d89eb2e158ebde28db549cbf" Jan 27 19:51:28 crc kubenswrapper[4853]: I0127 19:51:28.738496 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-vnr5k"] Jan 27 19:51:28 crc kubenswrapper[4853]: I0127 19:51:28.759240 4853 scope.go:117] "RemoveContainer" containerID="9440a9c637d5dd3f7650900fd4b712fbf033d726c854a99c019780804269fb76" Jan 27 19:51:28 crc kubenswrapper[4853]: I0127 19:51:28.767741 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-vnr5k"] Jan 27 19:51:28 crc kubenswrapper[4853]: I0127 19:51:28.812188 4853 scope.go:117] "RemoveContainer" containerID="abd7cb365a386cfdfe41022c70dd95a7283b05642c4cfc62f739c2427691eb29" Jan 27 19:51:28 crc kubenswrapper[4853]: E0127 19:51:28.812943 4853 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"abd7cb365a386cfdfe41022c70dd95a7283b05642c4cfc62f739c2427691eb29\": container with ID starting with abd7cb365a386cfdfe41022c70dd95a7283b05642c4cfc62f739c2427691eb29 not found: ID does not exist" containerID="abd7cb365a386cfdfe41022c70dd95a7283b05642c4cfc62f739c2427691eb29" Jan 27 19:51:28 crc kubenswrapper[4853]: I0127 19:51:28.813016 4853 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"abd7cb365a386cfdfe41022c70dd95a7283b05642c4cfc62f739c2427691eb29"} err="failed to get container status \"abd7cb365a386cfdfe41022c70dd95a7283b05642c4cfc62f739c2427691eb29\": rpc error: code = NotFound desc = could not find container \"abd7cb365a386cfdfe41022c70dd95a7283b05642c4cfc62f739c2427691eb29\": container with ID starting with abd7cb365a386cfdfe41022c70dd95a7283b05642c4cfc62f739c2427691eb29 not found: ID does not exist" Jan 27 19:51:28 crc kubenswrapper[4853]: I0127 19:51:28.813056 4853 scope.go:117] "RemoveContainer" containerID="bd0d0e18493a004497423e58f95d36fd1379fb56d89eb2e158ebde28db549cbf" Jan 27 19:51:28 crc kubenswrapper[4853]: E0127 19:51:28.813672 4853 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bd0d0e18493a004497423e58f95d36fd1379fb56d89eb2e158ebde28db549cbf\": container with ID starting with bd0d0e18493a004497423e58f95d36fd1379fb56d89eb2e158ebde28db549cbf not found: ID does not exist" containerID="bd0d0e18493a004497423e58f95d36fd1379fb56d89eb2e158ebde28db549cbf" Jan 27 19:51:28 crc kubenswrapper[4853]: I0127 19:51:28.813704 4853 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bd0d0e18493a004497423e58f95d36fd1379fb56d89eb2e158ebde28db549cbf"} err="failed to get container status \"bd0d0e18493a004497423e58f95d36fd1379fb56d89eb2e158ebde28db549cbf\": rpc error: code = NotFound desc = could not find 
container \"bd0d0e18493a004497423e58f95d36fd1379fb56d89eb2e158ebde28db549cbf\": container with ID starting with bd0d0e18493a004497423e58f95d36fd1379fb56d89eb2e158ebde28db549cbf not found: ID does not exist" Jan 27 19:51:28 crc kubenswrapper[4853]: I0127 19:51:28.813730 4853 scope.go:117] "RemoveContainer" containerID="9440a9c637d5dd3f7650900fd4b712fbf033d726c854a99c019780804269fb76" Jan 27 19:51:28 crc kubenswrapper[4853]: E0127 19:51:28.814187 4853 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9440a9c637d5dd3f7650900fd4b712fbf033d726c854a99c019780804269fb76\": container with ID starting with 9440a9c637d5dd3f7650900fd4b712fbf033d726c854a99c019780804269fb76 not found: ID does not exist" containerID="9440a9c637d5dd3f7650900fd4b712fbf033d726c854a99c019780804269fb76" Jan 27 19:51:28 crc kubenswrapper[4853]: I0127 19:51:28.814243 4853 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9440a9c637d5dd3f7650900fd4b712fbf033d726c854a99c019780804269fb76"} err="failed to get container status \"9440a9c637d5dd3f7650900fd4b712fbf033d726c854a99c019780804269fb76\": rpc error: code = NotFound desc = could not find container \"9440a9c637d5dd3f7650900fd4b712fbf033d726c854a99c019780804269fb76\": container with ID starting with 9440a9c637d5dd3f7650900fd4b712fbf033d726c854a99c019780804269fb76 not found: ID does not exist" Jan 27 19:51:29 crc kubenswrapper[4853]: I0127 19:51:29.110485 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-sxfjx/crc-debug-jhtfs"] Jan 27 19:51:29 crc kubenswrapper[4853]: I0127 19:51:29.135801 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-sxfjx/crc-debug-jhtfs"] Jan 27 19:51:29 crc kubenswrapper[4853]: I0127 19:51:29.801522 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-sxfjx/crc-debug-jhtfs" Jan 27 19:51:29 crc kubenswrapper[4853]: I0127 19:51:29.936592 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4bf6b\" (UniqueName: \"kubernetes.io/projected/d6f9b0c3-dbcc-4ff9-9f15-d7a5bd143ef6-kube-api-access-4bf6b\") pod \"d6f9b0c3-dbcc-4ff9-9f15-d7a5bd143ef6\" (UID: \"d6f9b0c3-dbcc-4ff9-9f15-d7a5bd143ef6\") " Jan 27 19:51:29 crc kubenswrapper[4853]: I0127 19:51:29.936776 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/d6f9b0c3-dbcc-4ff9-9f15-d7a5bd143ef6-host\") pod \"d6f9b0c3-dbcc-4ff9-9f15-d7a5bd143ef6\" (UID: \"d6f9b0c3-dbcc-4ff9-9f15-d7a5bd143ef6\") " Jan 27 19:51:29 crc kubenswrapper[4853]: I0127 19:51:29.936949 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d6f9b0c3-dbcc-4ff9-9f15-d7a5bd143ef6-host" (OuterVolumeSpecName: "host") pod "d6f9b0c3-dbcc-4ff9-9f15-d7a5bd143ef6" (UID: "d6f9b0c3-dbcc-4ff9-9f15-d7a5bd143ef6"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 19:51:29 crc kubenswrapper[4853]: I0127 19:51:29.937357 4853 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/d6f9b0c3-dbcc-4ff9-9f15-d7a5bd143ef6-host\") on node \"crc\" DevicePath \"\"" Jan 27 19:51:29 crc kubenswrapper[4853]: I0127 19:51:29.942771 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d6f9b0c3-dbcc-4ff9-9f15-d7a5bd143ef6-kube-api-access-4bf6b" (OuterVolumeSpecName: "kube-api-access-4bf6b") pod "d6f9b0c3-dbcc-4ff9-9f15-d7a5bd143ef6" (UID: "d6f9b0c3-dbcc-4ff9-9f15-d7a5bd143ef6"). InnerVolumeSpecName "kube-api-access-4bf6b". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 19:51:30 crc kubenswrapper[4853]: I0127 19:51:30.039389 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4bf6b\" (UniqueName: \"kubernetes.io/projected/d6f9b0c3-dbcc-4ff9-9f15-d7a5bd143ef6-kube-api-access-4bf6b\") on node \"crc\" DevicePath \"\"" Jan 27 19:51:30 crc kubenswrapper[4853]: I0127 19:51:30.125806 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3f684593-40f5-4a61-8cac-a59c4b256f52" path="/var/lib/kubelet/pods/3f684593-40f5-4a61-8cac-a59c4b256f52/volumes" Jan 27 19:51:30 crc kubenswrapper[4853]: I0127 19:51:30.126878 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d6f9b0c3-dbcc-4ff9-9f15-d7a5bd143ef6" path="/var/lib/kubelet/pods/d6f9b0c3-dbcc-4ff9-9f15-d7a5bd143ef6/volumes" Jan 27 19:51:30 crc kubenswrapper[4853]: I0127 19:51:30.380451 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-sxfjx/crc-debug-4rwj2"] Jan 27 19:51:30 crc kubenswrapper[4853]: E0127 19:51:30.381044 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3f684593-40f5-4a61-8cac-a59c4b256f52" containerName="extract-content" Jan 27 19:51:30 crc kubenswrapper[4853]: I0127 19:51:30.381064 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f684593-40f5-4a61-8cac-a59c4b256f52" containerName="extract-content" Jan 27 19:51:30 crc kubenswrapper[4853]: E0127 19:51:30.381079 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3f684593-40f5-4a61-8cac-a59c4b256f52" containerName="extract-utilities" Jan 27 19:51:30 crc kubenswrapper[4853]: I0127 19:51:30.381087 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f684593-40f5-4a61-8cac-a59c4b256f52" containerName="extract-utilities" Jan 27 19:51:30 crc kubenswrapper[4853]: E0127 19:51:30.381153 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d6f9b0c3-dbcc-4ff9-9f15-d7a5bd143ef6" containerName="container-00" Jan 27 19:51:30 crc kubenswrapper[4853]: I0127 19:51:30.381161 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="d6f9b0c3-dbcc-4ff9-9f15-d7a5bd143ef6" containerName="container-00" Jan 27 19:51:30 crc kubenswrapper[4853]: E0127 19:51:30.381185 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3f684593-40f5-4a61-8cac-a59c4b256f52" containerName="registry-server" Jan 27 19:51:30 crc kubenswrapper[4853]: I0127 19:51:30.381193 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f684593-40f5-4a61-8cac-a59c4b256f52" containerName="registry-server" Jan 27 19:51:30 crc kubenswrapper[4853]: I0127 19:51:30.381431 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="3f684593-40f5-4a61-8cac-a59c4b256f52" containerName="registry-server" Jan 27 19:51:30 crc kubenswrapper[4853]: I0127 
19:51:30.381471 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="d6f9b0c3-dbcc-4ff9-9f15-d7a5bd143ef6" containerName="container-00" Jan 27 19:51:30 crc kubenswrapper[4853]: I0127 19:51:30.382388 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-sxfjx/crc-debug-4rwj2" Jan 27 19:51:30 crc kubenswrapper[4853]: I0127 19:51:30.550497 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zxsrg\" (UniqueName: \"kubernetes.io/projected/70c2c2bb-5c51-45be-9f99-487e64108cdb-kube-api-access-zxsrg\") pod \"crc-debug-4rwj2\" (UID: \"70c2c2bb-5c51-45be-9f99-487e64108cdb\") " pod="openshift-must-gather-sxfjx/crc-debug-4rwj2" Jan 27 19:51:30 crc kubenswrapper[4853]: I0127 19:51:30.550568 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/70c2c2bb-5c51-45be-9f99-487e64108cdb-host\") pod \"crc-debug-4rwj2\" (UID: \"70c2c2bb-5c51-45be-9f99-487e64108cdb\") " pod="openshift-must-gather-sxfjx/crc-debug-4rwj2" Jan 27 19:51:30 crc kubenswrapper[4853]: I0127 19:51:30.652849 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zxsrg\" (UniqueName: \"kubernetes.io/projected/70c2c2bb-5c51-45be-9f99-487e64108cdb-kube-api-access-zxsrg\") pod \"crc-debug-4rwj2\" (UID: \"70c2c2bb-5c51-45be-9f99-487e64108cdb\") " pod="openshift-must-gather-sxfjx/crc-debug-4rwj2" Jan 27 19:51:30 crc kubenswrapper[4853]: I0127 19:51:30.652935 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/70c2c2bb-5c51-45be-9f99-487e64108cdb-host\") pod \"crc-debug-4rwj2\" (UID: \"70c2c2bb-5c51-45be-9f99-487e64108cdb\") " pod="openshift-must-gather-sxfjx/crc-debug-4rwj2" Jan 27 19:51:30 crc kubenswrapper[4853]: I0127 19:51:30.653075 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/70c2c2bb-5c51-45be-9f99-487e64108cdb-host\") pod \"crc-debug-4rwj2\" (UID: \"70c2c2bb-5c51-45be-9f99-487e64108cdb\") " pod="openshift-must-gather-sxfjx/crc-debug-4rwj2" Jan 27 19:51:30 crc kubenswrapper[4853]: I0127 19:51:30.683715 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zxsrg\" (UniqueName: \"kubernetes.io/projected/70c2c2bb-5c51-45be-9f99-487e64108cdb-kube-api-access-zxsrg\") pod \"crc-debug-4rwj2\" (UID: \"70c2c2bb-5c51-45be-9f99-487e64108cdb\") " pod="openshift-must-gather-sxfjx/crc-debug-4rwj2" Jan 27 19:51:30 crc kubenswrapper[4853]: I0127 19:51:30.706565 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-sxfjx/crc-debug-4rwj2" Jan 27 19:51:30 crc kubenswrapper[4853]: I0127 19:51:30.722630 4853 scope.go:117] "RemoveContainer" containerID="7ed2f68a2a1ea0fa96a2af49207ea238b5207308d1290f2ca6ca5f7c1fe02b47" Jan 27 19:51:30 crc kubenswrapper[4853]: I0127 19:51:30.722898 4853 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-sxfjx/crc-debug-jhtfs" Jan 27 19:51:31 crc kubenswrapper[4853]: I0127 19:51:31.738551 4853 generic.go:334] "Generic (PLEG): container finished" podID="70c2c2bb-5c51-45be-9f99-487e64108cdb" containerID="599ad6c498abb57cab6e9dea4201f8bde3a073f22c3a76821b37c6cc3b4a1421" exitCode=0 Jan 27 19:51:31 crc kubenswrapper[4853]: I0127 19:51:31.738644 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-sxfjx/crc-debug-4rwj2" event={"ID":"70c2c2bb-5c51-45be-9f99-487e64108cdb","Type":"ContainerDied","Data":"599ad6c498abb57cab6e9dea4201f8bde3a073f22c3a76821b37c6cc3b4a1421"} Jan 27 19:51:31 crc kubenswrapper[4853]: I0127 19:51:31.739061 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-sxfjx/crc-debug-4rwj2" event={"ID":"70c2c2bb-5c51-45be-9f99-487e64108cdb","Type":"ContainerStarted","Data":"7c40bea5e7d3d859041a2899105fe64bf851be482e1349b4e24b54d40b55ae63"} Jan 27 19:51:31 crc kubenswrapper[4853]: I0127 19:51:31.788783 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-sxfjx/crc-debug-4rwj2"] Jan 27 19:51:31 crc kubenswrapper[4853]: I0127 19:51:31.802999 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-sxfjx/crc-debug-4rwj2"] Jan 27 19:51:32 crc kubenswrapper[4853]: I0127 19:51:32.857243 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-sxfjx/crc-debug-4rwj2" Jan 27 19:51:33 crc kubenswrapper[4853]: I0127 19:51:33.005481 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zxsrg\" (UniqueName: \"kubernetes.io/projected/70c2c2bb-5c51-45be-9f99-487e64108cdb-kube-api-access-zxsrg\") pod \"70c2c2bb-5c51-45be-9f99-487e64108cdb\" (UID: \"70c2c2bb-5c51-45be-9f99-487e64108cdb\") " Jan 27 19:51:33 crc kubenswrapper[4853]: I0127 19:51:33.005550 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/70c2c2bb-5c51-45be-9f99-487e64108cdb-host\") pod \"70c2c2bb-5c51-45be-9f99-487e64108cdb\" (UID: \"70c2c2bb-5c51-45be-9f99-487e64108cdb\") " Jan 27 19:51:33 crc kubenswrapper[4853]: I0127 19:51:33.005764 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/70c2c2bb-5c51-45be-9f99-487e64108cdb-host" (OuterVolumeSpecName: "host") pod "70c2c2bb-5c51-45be-9f99-487e64108cdb" (UID: "70c2c2bb-5c51-45be-9f99-487e64108cdb"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 19:51:33 crc kubenswrapper[4853]: I0127 19:51:33.006140 4853 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/70c2c2bb-5c51-45be-9f99-487e64108cdb-host\") on node \"crc\" DevicePath \"\"" Jan 27 19:51:33 crc kubenswrapper[4853]: I0127 19:51:33.495382 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/70c2c2bb-5c51-45be-9f99-487e64108cdb-kube-api-access-zxsrg" (OuterVolumeSpecName: "kube-api-access-zxsrg") pod "70c2c2bb-5c51-45be-9f99-487e64108cdb" (UID: "70c2c2bb-5c51-45be-9f99-487e64108cdb"). InnerVolumeSpecName "kube-api-access-zxsrg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 19:51:33 crc kubenswrapper[4853]: I0127 19:51:33.516252 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zxsrg\" (UniqueName: \"kubernetes.io/projected/70c2c2bb-5c51-45be-9f99-487e64108cdb-kube-api-access-zxsrg\") on node \"crc\" DevicePath \"\"" Jan 27 19:51:33 crc kubenswrapper[4853]: I0127 19:51:33.760574 4853 scope.go:117] "RemoveContainer" containerID="599ad6c498abb57cab6e9dea4201f8bde3a073f22c3a76821b37c6cc3b4a1421" Jan 27 19:51:33 crc kubenswrapper[4853]: I0127 19:51:33.760648 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-sxfjx/crc-debug-4rwj2" Jan 27 19:51:34 crc kubenswrapper[4853]: I0127 19:51:34.126640 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="70c2c2bb-5c51-45be-9f99-487e64108cdb" path="/var/lib/kubelet/pods/70c2c2bb-5c51-45be-9f99-487e64108cdb/volumes" Jan 27 19:51:35 crc kubenswrapper[4853]: I0127 19:51:35.541591 4853 patch_prober.go:28] interesting pod/machine-config-daemon-6gqj2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 19:51:35 crc kubenswrapper[4853]: I0127 19:51:35.541932 4853 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6gqj2" podUID="b8a89b1e-bef8-4cb7-930c-480d3125778c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 19:51:35 crc kubenswrapper[4853]: I0127 19:51:35.541992 4853 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-6gqj2" Jan 27 19:51:35 crc kubenswrapper[4853]: I0127 19:51:35.543860 4853 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"419f0caa718cb6b8323adeb425867a293b3b008799642768d85dc8419e85b6b1"} pod="openshift-machine-config-operator/machine-config-daemon-6gqj2" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 27 19:51:35 crc kubenswrapper[4853]: I0127 19:51:35.543941 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-6gqj2" podUID="b8a89b1e-bef8-4cb7-930c-480d3125778c" containerName="machine-config-daemon" containerID="cri-o://419f0caa718cb6b8323adeb425867a293b3b008799642768d85dc8419e85b6b1" gracePeriod=600 Jan 27 19:51:36 crc kubenswrapper[4853]: I0127 19:51:36.810293 4853 generic.go:334] "Generic (PLEG): container finished" podID="b8a89b1e-bef8-4cb7-930c-480d3125778c" containerID="419f0caa718cb6b8323adeb425867a293b3b008799642768d85dc8419e85b6b1" exitCode=0 Jan 27 19:51:36 crc kubenswrapper[4853]: I0127 19:51:36.810426 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6gqj2" event={"ID":"b8a89b1e-bef8-4cb7-930c-480d3125778c","Type":"ContainerDied","Data":"419f0caa718cb6b8323adeb425867a293b3b008799642768d85dc8419e85b6b1"} Jan 27 19:51:36 crc kubenswrapper[4853]: I0127 19:51:36.810919 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6gqj2" 
event={"ID":"b8a89b1e-bef8-4cb7-930c-480d3125778c","Type":"ContainerStarted","Data":"1495a7d2ce41db54f43cd71f1504a040594d9f9b2b41463b9061b57b26e8d3c2"} Jan 27 19:51:36 crc kubenswrapper[4853]: I0127 19:51:36.810958 4853 scope.go:117] "RemoveContainer" containerID="326a72bf85c07d6d0eec5a967b1feeaa73cf47af49f41769bb0b175d310c1432" Jan 27 19:52:11 crc kubenswrapper[4853]: I0127 19:52:11.838038 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-57cbf989c8-gmwvx_10b90707-26fd-41f4-b020-0458facda8ba/barbican-api/0.log" Jan 27 19:52:12 crc kubenswrapper[4853]: I0127 19:52:12.042672 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-57cbf989c8-gmwvx_10b90707-26fd-41f4-b020-0458facda8ba/barbican-api-log/0.log" Jan 27 19:52:12 crc kubenswrapper[4853]: I0127 19:52:12.043861 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-757c6cc6c8-b7v22_d2f2b676-e83e-4107-9cce-525426cd6cbc/barbican-keystone-listener/0.log" Jan 27 19:52:12 crc kubenswrapper[4853]: I0127 19:52:12.129372 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-757c6cc6c8-b7v22_d2f2b676-e83e-4107-9cce-525426cd6cbc/barbican-keystone-listener-log/0.log" Jan 27 19:52:12 crc kubenswrapper[4853]: I0127 19:52:12.257269 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-7d5dd7f58c-gdxtv_d47c94df-0d90-409c-8bd4-2a237d641021/barbican-worker/0.log" Jan 27 19:52:12 crc kubenswrapper[4853]: I0127 19:52:12.289767 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-7d5dd7f58c-gdxtv_d47c94df-0d90-409c-8bd4-2a237d641021/barbican-worker-log/0.log" Jan 27 19:52:12 crc kubenswrapper[4853]: I0127 19:52:12.529619 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_bootstrap-edpm-deployment-openstack-edpm-ipam-28nw4_7f4e6043-7a79-455d-97be-20aff374a38d/bootstrap-edpm-deployment-openstack-edpm-ipam/0.log" Jan 27 19:52:12 crc kubenswrapper[4853]: I0127 19:52:12.575633 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_03c6fb37-6ad9-412a-b0fc-851c7b5e4a89/ceilometer-central-agent/0.log" Jan 27 19:52:12 crc kubenswrapper[4853]: I0127 19:52:12.752055 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_03c6fb37-6ad9-412a-b0fc-851c7b5e4a89/proxy-httpd/0.log" Jan 27 19:52:12 crc kubenswrapper[4853]: I0127 19:52:12.782109 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_03c6fb37-6ad9-412a-b0fc-851c7b5e4a89/ceilometer-notification-agent/0.log" Jan 27 19:52:12 crc kubenswrapper[4853]: I0127 19:52:12.825753 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_03c6fb37-6ad9-412a-b0fc-851c7b5e4a89/sg-core/0.log" Jan 27 19:52:13 crc kubenswrapper[4853]: I0127 19:52:13.064354 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_bdc98336-c980-4a4c-b453-fb72f6d34185/cinder-api-log/0.log" Jan 27 19:52:13 crc kubenswrapper[4853]: I0127 19:52:13.078869 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_bdc98336-c980-4a4c-b453-fb72f6d34185/cinder-api/0.log" Jan 27 19:52:13 crc kubenswrapper[4853]: I0127 19:52:13.186578 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_13916d35-368a-417b-bfea-4f82d71797c3/cinder-scheduler/0.log" Jan 27 19:52:13 crc 
kubenswrapper[4853]: I0127 19:52:13.301314 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_13916d35-368a-417b-bfea-4f82d71797c3/probe/0.log" Jan 27 19:52:13 crc kubenswrapper[4853]: I0127 19:52:13.354187 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-network-edpm-deployment-openstack-edpm-ipam-t84cd_b48e7be3-8341-4d63-bb9e-3b665b27591b/configure-network-edpm-deployment-openstack-edpm-ipam/0.log" Jan 27 19:52:13 crc kubenswrapper[4853]: I0127 19:52:13.516112 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-os-edpm-deployment-openstack-edpm-ipam-7n2cz_d3496093-310a-422a-a09c-d796470ad2c0/configure-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 27 19:52:13 crc kubenswrapper[4853]: I0127 19:52:13.610786 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-55478c4467-hjhl4_7225b878-e91a-4d57-8f13-19de93bd506d/init/0.log" Jan 27 19:52:13 crc kubenswrapper[4853]: I0127 19:52:13.815826 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-55478c4467-hjhl4_7225b878-e91a-4d57-8f13-19de93bd506d/init/0.log" Jan 27 19:52:13 crc kubenswrapper[4853]: I0127 19:52:13.874003 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-szg9m_e4809563-3f03-4361-9794-87f5705115b8/download-cache-edpm-deployment-openstack-edpm-ipam/0.log" Jan 27 19:52:13 crc kubenswrapper[4853]: I0127 19:52:13.881986 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-55478c4467-hjhl4_7225b878-e91a-4d57-8f13-19de93bd506d/dnsmasq-dns/0.log" Jan 27 19:52:14 crc kubenswrapper[4853]: I0127 19:52:14.312041 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_1ea8e822-c78e-4fc2-8afe-09c0ef609d47/glance-log/0.log" Jan 27 19:52:14 crc kubenswrapper[4853]: I0127 19:52:14.315696 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_1ea8e822-c78e-4fc2-8afe-09c0ef609d47/glance-httpd/0.log" Jan 27 19:52:14 crc kubenswrapper[4853]: I0127 19:52:14.511078 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_40f9ab82-cf2e-4b60-bcfc-a41137752ef7/glance-log/0.log" Jan 27 19:52:14 crc kubenswrapper[4853]: I0127 19:52:14.572424 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_40f9ab82-cf2e-4b60-bcfc-a41137752ef7/glance-httpd/0.log" Jan 27 19:52:14 crc kubenswrapper[4853]: I0127 19:52:14.701142 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-69967664fb-pbqhr_66d621f7-387b-470d-8e42-bebbfada3bbc/horizon/1.log" Jan 27 19:52:14 crc kubenswrapper[4853]: I0127 19:52:14.895754 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-69967664fb-pbqhr_66d621f7-387b-470d-8e42-bebbfada3bbc/horizon/0.log" Jan 27 19:52:14 crc kubenswrapper[4853]: I0127 19:52:14.981104 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-certs-edpm-deployment-openstack-edpm-ipam-s4g5b_bbb5fe03-6098-4e03-ab85-5a28e090f13c/install-certs-edpm-deployment-openstack-edpm-ipam/0.log" Jan 27 19:52:15 crc kubenswrapper[4853]: I0127 19:52:15.199694 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-69967664fb-pbqhr_66d621f7-387b-470d-8e42-bebbfada3bbc/horizon-log/0.log" Jan 27 19:52:15 crc 
kubenswrapper[4853]: I0127 19:52:15.254164 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-os-edpm-deployment-openstack-edpm-ipam-dcnnf_f23ab0fa-bd1a-4494-a7fe-428f0b8ea536/install-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 27 19:52:15 crc kubenswrapper[4853]: I0127 19:52:15.476402 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29492341-snj9s_7d4283a6-8ac9-4d5d-9a33-c753064f6930/keystone-cron/0.log" Jan 27 19:52:15 crc kubenswrapper[4853]: I0127 19:52:15.580188 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-54f5975d7b-jvtmz_b033907b-77e1-47e8-8921-6cb6e40f5f06/keystone-api/0.log" Jan 27 19:52:15 crc kubenswrapper[4853]: I0127 19:52:15.735107 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_c6327a68-b665-423b-85ed-3b1a4d3ffaa2/kube-state-metrics/0.log" Jan 27 19:52:15 crc kubenswrapper[4853]: I0127 19:52:15.812139 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_libvirt-edpm-deployment-openstack-edpm-ipam-hk5l9_91e90160-3a76-416b-a3e6-cf5d105f892d/libvirt-edpm-deployment-openstack-edpm-ipam/0.log" Jan 27 19:52:16 crc kubenswrapper[4853]: I0127 19:52:16.198698 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-64c8bd57d9-g88k8_911dc005-42f8-4086-9ee9-04490f7120f4/neutron-httpd/0.log" Jan 27 19:52:16 crc kubenswrapper[4853]: I0127 19:52:16.207632 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-64c8bd57d9-g88k8_911dc005-42f8-4086-9ee9-04490f7120f4/neutron-api/0.log" Jan 27 19:52:16 crc kubenswrapper[4853]: I0127 19:52:16.339844 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-metadata-edpm-deployment-openstack-edpm-ipam-bkndk_149036fd-39f5-4bd0-a585-f495af3a55d1/neutron-metadata-edpm-deployment-openstack-edpm-ipam/0.log" Jan 27 19:52:16 crc kubenswrapper[4853]: I0127 19:52:16.936245 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_a649598f-69be-4de2-9a79-b5581f1fc8f9/nova-api-log/0.log" Jan 27 19:52:17 crc kubenswrapper[4853]: I0127 19:52:17.007558 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_d238c8e7-40ad-4834-8af2-0d942d49852a/nova-cell0-conductor-conductor/0.log" Jan 27 19:52:17 crc kubenswrapper[4853]: I0127 19:52:17.459980 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_0f9c3933-7f75-4c32-95e2-bac827abcb76/nova-cell1-conductor-conductor/0.log" Jan 27 19:52:17 crc kubenswrapper[4853]: I0127 19:52:17.484707 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_eb83c723-2f1b-419a-bd58-51e56534cb23/nova-cell1-novncproxy-novncproxy/0.log" Jan 27 19:52:17 crc kubenswrapper[4853]: I0127 19:52:17.528013 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_a649598f-69be-4de2-9a79-b5581f1fc8f9/nova-api-api/0.log" Jan 27 19:52:17 crc kubenswrapper[4853]: I0127 19:52:17.753308 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-edpm-deployment-openstack-edpm-ipam-l8ptd_8b54da38-cda9-486f-bb52-e18ebfa81cc8/nova-edpm-deployment-openstack-edpm-ipam/0.log" Jan 27 19:52:17 crc kubenswrapper[4853]: I0127 19:52:17.837499 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_1fb249c2-c72b-4f50-bee6-8d461fc5b613/nova-metadata-log/0.log" Jan 27 19:52:18 crc 
kubenswrapper[4853]: I0127 19:52:18.261716 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_dbf533bd-2499-4724-b558-cf94c7017f3d/mysql-bootstrap/0.log" Jan 27 19:52:18 crc kubenswrapper[4853]: I0127 19:52:18.354650 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_28787444-e1bd-43c7-a22c-f3ce3678986d/nova-scheduler-scheduler/0.log" Jan 27 19:52:18 crc kubenswrapper[4853]: I0127 19:52:18.964115 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_dbf533bd-2499-4724-b558-cf94c7017f3d/mysql-bootstrap/0.log" Jan 27 19:52:19 crc kubenswrapper[4853]: I0127 19:52:19.068211 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_dbf533bd-2499-4724-b558-cf94c7017f3d/galera/0.log" Jan 27 19:52:19 crc kubenswrapper[4853]: I0127 19:52:19.180733 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_ccbab76c-f034-4f3b-9dfe-fcaf98d45d87/mysql-bootstrap/0.log" Jan 27 19:52:19 crc kubenswrapper[4853]: I0127 19:52:19.435674 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_ccbab76c-f034-4f3b-9dfe-fcaf98d45d87/mysql-bootstrap/0.log" Jan 27 19:52:19 crc kubenswrapper[4853]: I0127 19:52:19.462831 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_ccbab76c-f034-4f3b-9dfe-fcaf98d45d87/galera/0.log" Jan 27 19:52:19 crc kubenswrapper[4853]: I0127 19:52:19.619338 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_1fb249c2-c72b-4f50-bee6-8d461fc5b613/nova-metadata-metadata/0.log" Jan 27 19:52:19 crc kubenswrapper[4853]: I0127 19:52:19.675547 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_57e7a062-e8a4-457a-909c-7f7922327a1e/openstackclient/0.log" Jan 27 19:52:19 crc kubenswrapper[4853]: I0127 19:52:19.686523 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-99cll_98d689c7-0b2e-46b3-95f7-5c43aafac340/openstack-network-exporter/0.log" Jan 27 19:52:19 crc kubenswrapper[4853]: I0127 19:52:19.878718 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-qgd5v_2b33e2a4-b173-4d0a-b3b4-9ee1c3b92704/ovsdb-server-init/0.log" Jan 27 19:52:20 crc kubenswrapper[4853]: I0127 19:52:20.107454 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-qgd5v_2b33e2a4-b173-4d0a-b3b4-9ee1c3b92704/ovs-vswitchd/0.log" Jan 27 19:52:20 crc kubenswrapper[4853]: I0127 19:52:20.112332 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-qgd5v_2b33e2a4-b173-4d0a-b3b4-9ee1c3b92704/ovsdb-server-init/0.log" Jan 27 19:52:20 crc kubenswrapper[4853]: I0127 19:52:20.148605 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-qgd5v_2b33e2a4-b173-4d0a-b3b4-9ee1c3b92704/ovsdb-server/0.log" Jan 27 19:52:20 crc kubenswrapper[4853]: I0127 19:52:20.880185 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-xkd2q_4d52eb59-75a5-4074-8bfb-c9dab8b0c97f/ovn-controller/0.log" Jan 27 19:52:20 crc kubenswrapper[4853]: I0127 19:52:20.991799 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-edpm-deployment-openstack-edpm-ipam-49xvk_a8e00930-5920-4f3f-9f05-62da3fdcdd88/ovn-edpm-deployment-openstack-edpm-ipam/0.log" Jan 27 19:52:21 
Jan 27 19:52:21 crc kubenswrapper[4853]: I0127 19:52:21.158888 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_24f3c135-8664-4bbd-87bf-dd93c3595195/ovn-northd/0.log"
Jan 27 19:52:21 crc kubenswrapper[4853]: I0127 19:52:21.260707 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_c1d29cf4-2fdf-46ef-8470-e42a8226dd7c/openstack-network-exporter/0.log"
Jan 27 19:52:21 crc kubenswrapper[4853]: I0127 19:52:21.356931 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_c1d29cf4-2fdf-46ef-8470-e42a8226dd7c/ovsdbserver-nb/0.log"
Jan 27 19:52:21 crc kubenswrapper[4853]: I0127 19:52:21.468281 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_368a8f46-825c-43ad-803b-c7fdf6ca048c/openstack-network-exporter/0.log"
Jan 27 19:52:21 crc kubenswrapper[4853]: I0127 19:52:21.526159 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_368a8f46-825c-43ad-803b-c7fdf6ca048c/ovsdbserver-sb/0.log"
Jan 27 19:52:21 crc kubenswrapper[4853]: I0127 19:52:21.709870 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-f5866f968-d652z_81ce3654-e156-4fa9-9399-3824ff16a228/placement-api/0.log"
Jan 27 19:52:21 crc kubenswrapper[4853]: I0127 19:52:21.819870 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-f5866f968-d652z_81ce3654-e156-4fa9-9399-3824ff16a228/placement-log/0.log"
Jan 27 19:52:21 crc kubenswrapper[4853]: I0127 19:52:21.936363 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_b6e38e4d-fbc2-4702-9767-e0376655776a/setup-container/0.log"
Jan 27 19:52:22 crc kubenswrapper[4853]: I0127 19:52:22.095934 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_b6e38e4d-fbc2-4702-9767-e0376655776a/rabbitmq/0.log"
Jan 27 19:52:22 crc kubenswrapper[4853]: I0127 19:52:22.146449 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_b6e38e4d-fbc2-4702-9767-e0376655776a/setup-container/0.log"
Jan 27 19:52:22 crc kubenswrapper[4853]: I0127 19:52:22.154951 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_e1ba655b-12d8-4f9d-882f-1d7faeb1f65f/setup-container/0.log"
Jan 27 19:52:22 crc kubenswrapper[4853]: I0127 19:52:22.430592 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_e1ba655b-12d8-4f9d-882f-1d7faeb1f65f/rabbitmq/0.log"
Jan 27 19:52:22 crc kubenswrapper[4853]: I0127 19:52:22.453026 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_e1ba655b-12d8-4f9d-882f-1d7faeb1f65f/setup-container/0.log"
Jan 27 19:52:22 crc kubenswrapper[4853]: I0127 19:52:22.466101 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_reboot-os-edpm-deployment-openstack-edpm-ipam-2lpzk_57fa2fc5-a6d4-444b-8e22-e4e9064ad6d7/reboot-os-edpm-deployment-openstack-edpm-ipam/0.log"
Jan 27 19:52:22 crc kubenswrapper[4853]: I0127 19:52:22.660395 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_redhat-edpm-deployment-openstack-edpm-ipam-9hbp9_c936a14f-519a-4f53-a09b-f7cb85bcdd6b/redhat-edpm-deployment-openstack-edpm-ipam/0.log"
Jan 27 19:52:22 crc kubenswrapper[4853]: I0127 19:52:22.754218 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_repo-setup-edpm-deployment-openstack-edpm-ipam-2xtfg_327b1d19-709e-4efa-b5b3-11513e5dbdac/repo-setup-edpm-deployment-openstack-edpm-ipam/0.log"
Jan 27 19:52:23 crc kubenswrapper[4853]: I0127 19:52:23.013533 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_run-os-edpm-deployment-openstack-edpm-ipam-qq4jx_42b00b77-5a5c-4880-a0f0-2556bab179fd/run-os-edpm-deployment-openstack-edpm-ipam/0.log"
Jan 27 19:52:23 crc kubenswrapper[4853]: I0127 19:52:23.099750 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ssh-known-hosts-edpm-deployment-p45vd_eb152926-dd69-4634-9220-0074823b049b/ssh-known-hosts-edpm-deployment/0.log"
Jan 27 19:52:23 crc kubenswrapper[4853]: I0127 19:52:23.316889 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-6dff6d999f-xr8nv_c029593d-ff63-4033-8bc5-39cf7e0457bd/proxy-server/0.log"
Jan 27 19:52:23 crc kubenswrapper[4853]: I0127 19:52:23.395453 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-6dff6d999f-xr8nv_c029593d-ff63-4033-8bc5-39cf7e0457bd/proxy-httpd/0.log"
Jan 27 19:52:23 crc kubenswrapper[4853]: I0127 19:52:23.459548 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-ring-rebalance-pfpph_119564cc-719b-4691-91d5-672513ed9acf/swift-ring-rebalance/0.log"
Jan 27 19:52:23 crc kubenswrapper[4853]: I0127 19:52:23.632759 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_b1859766-1c8c-471c-bae5-4ae46086e8a5/account-auditor/0.log"
Jan 27 19:52:23 crc kubenswrapper[4853]: I0127 19:52:23.676008 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_b1859766-1c8c-471c-bae5-4ae46086e8a5/account-reaper/0.log"
Jan 27 19:52:23 crc kubenswrapper[4853]: I0127 19:52:23.754330 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_b1859766-1c8c-471c-bae5-4ae46086e8a5/account-replicator/0.log"
Jan 27 19:52:23 crc kubenswrapper[4853]: I0127 19:52:23.819686 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_b1859766-1c8c-471c-bae5-4ae46086e8a5/container-auditor/0.log"
Jan 27 19:52:23 crc kubenswrapper[4853]: I0127 19:52:23.834485 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_b1859766-1c8c-471c-bae5-4ae46086e8a5/account-server/0.log"
Jan 27 19:52:23 crc kubenswrapper[4853]: I0127 19:52:23.926894 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_b1859766-1c8c-471c-bae5-4ae46086e8a5/container-replicator/0.log"
Jan 27 19:52:23 crc kubenswrapper[4853]: I0127 19:52:23.993471 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_b1859766-1c8c-471c-bae5-4ae46086e8a5/container-server/0.log"
Jan 27 19:52:24 crc kubenswrapper[4853]: I0127 19:52:24.054649 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_b1859766-1c8c-471c-bae5-4ae46086e8a5/container-updater/0.log"
Jan 27 19:52:24 crc kubenswrapper[4853]: I0127 19:52:24.093648 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_b1859766-1c8c-471c-bae5-4ae46086e8a5/object-auditor/0.log"
Jan 27 19:52:24 crc kubenswrapper[4853]: I0127 19:52:24.166225 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_b1859766-1c8c-471c-bae5-4ae46086e8a5/object-expirer/0.log"
Jan 27 19:52:24 crc kubenswrapper[4853]: I0127 19:52:24.265934 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_b1859766-1c8c-471c-bae5-4ae46086e8a5/object-server/0.log"
Jan 27 19:52:24 crc kubenswrapper[4853]: I0127 19:52:24.313825 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_b1859766-1c8c-471c-bae5-4ae46086e8a5/object-replicator/0.log"
Jan 27 19:52:24 crc kubenswrapper[4853]: I0127 19:52:24.335950 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_b1859766-1c8c-471c-bae5-4ae46086e8a5/object-updater/0.log"
Jan 27 19:52:24 crc kubenswrapper[4853]: I0127 19:52:24.411102 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_b1859766-1c8c-471c-bae5-4ae46086e8a5/rsync/0.log"
Jan 27 19:52:24 crc kubenswrapper[4853]: I0127 19:52:24.542328 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_b1859766-1c8c-471c-bae5-4ae46086e8a5/swift-recon-cron/0.log"
Jan 27 19:52:24 crc kubenswrapper[4853]: I0127 19:52:24.753581 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_telemetry-edpm-deployment-openstack-edpm-ipam-2v797_7f436e8d-9923-47a6-ab8c-ee0c8e3bde82/telemetry-edpm-deployment-openstack-edpm-ipam/0.log"
Jan 27 19:52:24 crc kubenswrapper[4853]: I0127 19:52:24.852175 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_tempest-tests-tempest_6275c0bd-3255-4c3d-88bc-30f5d1ee27ca/tempest-tests-tempest-tests-runner/0.log"
Jan 27 19:52:24 crc kubenswrapper[4853]: I0127 19:52:24.998199 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_test-operator-logs-pod-tempest-tempest-tests-tempest_34624963-57cc-4683-b919-e1b2e1183b0a/test-operator-logs-container/0.log"
Jan 27 19:52:25 crc kubenswrapper[4853]: I0127 19:52:25.081357 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_validate-network-edpm-deployment-openstack-edpm-ipam-vwklk_51db77a7-69eb-4145-b87c-abfbb514f2c7/validate-network-edpm-deployment-openstack-edpm-ipam/0.log"
Jan 27 19:52:37 crc kubenswrapper[4853]: I0127 19:52:37.507853 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_94965b7d-5efe-4ef3-aadf-41a550c47752/memcached/0.log"
Jan 27 19:52:54 crc kubenswrapper[4853]: I0127 19:52:54.051946 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_73cd039ee0ff571e389a67eb6c1ec9ce429f21c65ba53bd39c09be46278bngd_3411d3c3-ab77-45ea-af40-a2708164348e/util/0.log"
Jan 27 19:52:54 crc kubenswrapper[4853]: I0127 19:52:54.187796 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_73cd039ee0ff571e389a67eb6c1ec9ce429f21c65ba53bd39c09be46278bngd_3411d3c3-ab77-45ea-af40-a2708164348e/util/0.log"
Jan 27 19:52:54 crc kubenswrapper[4853]: I0127 19:52:54.248065 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_73cd039ee0ff571e389a67eb6c1ec9ce429f21c65ba53bd39c09be46278bngd_3411d3c3-ab77-45ea-af40-a2708164348e/pull/0.log"
Jan 27 19:52:54 crc kubenswrapper[4853]: I0127 19:52:54.328196 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_73cd039ee0ff571e389a67eb6c1ec9ce429f21c65ba53bd39c09be46278bngd_3411d3c3-ab77-45ea-af40-a2708164348e/pull/0.log"
Jan 27 19:52:54 crc kubenswrapper[4853]: I0127 19:52:54.557152 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_73cd039ee0ff571e389a67eb6c1ec9ce429f21c65ba53bd39c09be46278bngd_3411d3c3-ab77-45ea-af40-a2708164348e/extract/0.log"
Jan 27 19:52:54 crc kubenswrapper[4853]: I0127 19:52:54.558004 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_73cd039ee0ff571e389a67eb6c1ec9ce429f21c65ba53bd39c09be46278bngd_3411d3c3-ab77-45ea-af40-a2708164348e/pull/0.log"
Jan 27 19:52:54 crc kubenswrapper[4853]: I0127 19:52:54.745684 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-65ff799cfd-jh7mx_f6e35929-3b14-49b4-9e0e-bbebc88c2ce2/manager/0.log"
Jan 27 19:52:54 crc kubenswrapper[4853]: I0127 19:52:54.798562 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-655bf9cfbb-sj29r_5db9a86f-dff3-4c54-a478-79ce384d78f7/manager/0.log"
Jan 27 19:52:54 crc kubenswrapper[4853]: I0127 19:52:54.803817 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_73cd039ee0ff571e389a67eb6c1ec9ce429f21c65ba53bd39c09be46278bngd_3411d3c3-ab77-45ea-af40-a2708164348e/util/0.log"
Jan 27 19:52:55 crc kubenswrapper[4853]: I0127 19:52:55.000456 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-77554cdc5c-mn6nj_0ee7eba6-8efe-4de9-bb26-69c5b47d0312/manager/0.log"
Jan 27 19:52:55 crc kubenswrapper[4853]: I0127 19:52:55.052896 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-67dd55ff59-n89p8_7b18ea7d-8f47-450b-aa4b-0b75fc0c0581/manager/0.log"
Jan 27 19:52:55 crc kubenswrapper[4853]: I0127 19:52:55.245749 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-575ffb885b-bx595_89a3cd80-89b0-41f9-a469-ef001d9be747/manager/0.log"
Jan 27 19:52:55 crc kubenswrapper[4853]: I0127 19:52:55.346508 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-77d5c5b54f-4pmv9_01b08d09-41bb-4a7a-9af2-7fe597572169/manager/0.log"
Jan 27 19:52:55 crc kubenswrapper[4853]: I0127 19:52:55.620187 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-768b776ffb-dj2lw_e279285c-c536-46b4-b133-7c23811a725a/manager/0.log"
Jan 27 19:52:55 crc kubenswrapper[4853]: I0127 19:52:55.728075 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-7d75bc88d5-qrs25_613d8e60-1314-45a2-8bcc-250151f708d1/manager/0.log"
Jan 27 19:52:55 crc kubenswrapper[4853]: I0127 19:52:55.890733 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-55f684fd56-flf9f_bee4ca26-dd1a-4747-8bf3-f152d8236270/manager/0.log"
Jan 27 19:52:55 crc kubenswrapper[4853]: I0127 19:52:55.930046 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-849fcfbb6b-dbzp2_0a29796d-a7c3-480a-8379-4d4e7731d5b3/manager/0.log"
Jan 27 19:52:56 crc kubenswrapper[4853]: I0127 19:52:56.115848 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-6b9fb5fdcb-qkdmn_a5adf651-f6c5-4b00-a32f-bbd1ac9d5b43/manager/0.log"
Jan 27 19:52:56 crc kubenswrapper[4853]: I0127 19:52:56.184950 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-7ffd8d76d4-gvm5r_ace486ae-a8c2-4aca-8719-528ecbed879f/manager/0.log"
Jan 27 19:52:56 crc kubenswrapper[4853]: I0127 19:52:56.407713 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-ddcbfd695-mrb2s_d9757c33-a50c-4fa4-ab8d-270c2bed1459/manager/0.log"
Jan 27 19:52:56 crc kubenswrapper[4853]: I0127 19:52:56.414029 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-7875d7675-cq4p4_85e5832e-902f-4f65-b659-60abf5d14654/manager/0.log"
Jan 27 19:52:57 crc kubenswrapper[4853]: I0127 19:52:57.051749 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-6b68b8b854gdmwx_9bd5a06a-f084-42ba-8f88-9be1cee0554a/manager/0.log"
Jan 27 19:52:57 crc kubenswrapper[4853]: I0127 19:52:57.221048 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-init-67d88b5675-p6llj_fd2257c2-1b25-4d5f-8953-19f01df9c309/operator/0.log"
Jan 27 19:52:57 crc kubenswrapper[4853]: I0127 19:52:57.447596 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-nc7fr_7f5aa97a-2a3f-4a6d-8e75-521db38570d9/registry-server/0.log"
Jan 27 19:52:57 crc kubenswrapper[4853]: I0127 19:52:57.720477 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-6f75f45d54-qgstv_f593e788-ce4a-47ad-a08c-96e1ec0cc92c/manager/0.log"
Jan 27 19:52:57 crc kubenswrapper[4853]: I0127 19:52:57.840070 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-79d5ccc684-gl44q_7d1a71be-07cb-43e0-8584-75e5c48f4175/manager/0.log"
Jan 27 19:52:58 crc kubenswrapper[4853]: I0127 19:52:58.024814 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-2ls6m_98c9ef8d-ccf0-4c4e-83f3-53451532f0ad/operator/0.log"
Jan 27 19:52:58 crc kubenswrapper[4853]: I0127 19:52:58.348289 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-547cbdb99f-bn7wr_5b33f408-e905-4298-adfc-b113f89ecd36/manager/0.log"
Jan 27 19:52:58 crc kubenswrapper[4853]: I0127 19:52:58.391997 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-799bc87c89-mzmbv_e32b4f39-5c23-4e91-92bc-ffd6b7694a5a/manager/0.log"
Jan 27 19:52:58 crc kubenswrapper[4853]: I0127 19:52:58.579386 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-69797bbcbd-cpsgc_8621d6dd-2bac-4631-bad9-ed1f5ce6c9b5/manager/0.log"
Jan 27 19:52:58 crc kubenswrapper[4853]: I0127 19:52:58.622930 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-bf776578d-kb6wk_fede2ab9-a2b5-45f5-bac7-daa8d576d23f/manager/0.log"
Jan 27 19:52:59 crc kubenswrapper[4853]: I0127 19:52:59.214896 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-767b8bc766-d4dcp_aacb2032-25f3-4faf-a0ca-f980411b4ae2/manager/0.log"
Jan 27 19:53:19 crc kubenswrapper[4853]: I0127 19:53:19.207421 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-t4wdl_5244d6c6-721d-44cf-8175-48408b3780b0/control-plane-machine-set-operator/0.log"
Jan 27 19:53:19 crc kubenswrapper[4853]: I0127 19:53:19.376988 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-kmkjx_bb1a45ce-530f-4492-a7e2-9432e194001d/machine-api-operator/0.log"
Jan 27 19:53:19 crc kubenswrapper[4853]: I0127 19:53:19.396469 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-kmkjx_bb1a45ce-530f-4492-a7e2-9432e194001d/kube-rbac-proxy/0.log"
Jan 27 19:53:33 crc kubenswrapper[4853]: I0127 19:53:33.438072 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-858654f9db-ltznx_ecbb7636-0b7d-4212-99ce-b28e191b5dde/cert-manager-controller/0.log"
Jan 27 19:53:33 crc kubenswrapper[4853]: I0127 19:53:33.673619 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-687f57d79b-b7s9n_36ad0b1f-b18e-48b1-84f2-bfe1343b1257/cert-manager-webhook/0.log"
Jan 27 19:53:33 crc kubenswrapper[4853]: I0127 19:53:33.710893 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-cf98fcc89-85ml7_26226a5a-7c8e-4247-8441-43c981f5d894/cert-manager-cainjector/0.log"
Jan 27 19:53:49 crc kubenswrapper[4853]: I0127 19:53:49.131027 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-7754f76f8b-bnd9d_fdac6187-5b6f-4375-a09a-42efb7d0eaf6/nmstate-console-plugin/0.log"
Jan 27 19:53:49 crc kubenswrapper[4853]: I0127 19:53:49.315204 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-dl7m9_b5d820ba-3b41-444c-b92b-1754909e56a0/kube-rbac-proxy/0.log"
Jan 27 19:53:49 crc kubenswrapper[4853]: I0127 19:53:49.331994 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-t77h4_9f0d7951-c2e9-4857-a367-2426f842e3af/nmstate-handler/0.log"
Jan 27 19:53:49 crc kubenswrapper[4853]: I0127 19:53:49.449501 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-dl7m9_b5d820ba-3b41-444c-b92b-1754909e56a0/nmstate-metrics/0.log"
Jan 27 19:53:49 crc kubenswrapper[4853]: I0127 19:53:49.530683 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-646758c888-mj8nh_a903dd65-5d9d-48da-b24d-d9ae9ad3a734/nmstate-operator/0.log"
Jan 27 19:53:49 crc kubenswrapper[4853]: I0127 19:53:49.627740 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-8474b5b9d8-znknt_9fd55339-43c5-45d5-9789-2f69da655baf/nmstate-webhook/0.log"
Jan 27 19:54:05 crc kubenswrapper[4853]: I0127 19:54:05.541399 4853 patch_prober.go:28] interesting pod/machine-config-daemon-6gqj2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 27 19:54:05 crc kubenswrapper[4853]: I0127 19:54:05.543259 4853 prober.go:107] "Probe failed"
probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6gqj2" podUID="b8a89b1e-bef8-4cb7-930c-480d3125778c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 19:54:18 crc kubenswrapper[4853]: I0127 19:54:18.577935 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-bkdp4_471ac2ca-b99c-449c-b910-80b44e9a7941/kube-rbac-proxy/0.log" Jan 27 19:54:18 crc kubenswrapper[4853]: I0127 19:54:18.755245 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-bkdp4_471ac2ca-b99c-449c-b910-80b44e9a7941/controller/0.log" Jan 27 19:54:18 crc kubenswrapper[4853]: I0127 19:54:18.883907 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-4zj9c_a95a2a56-e8a9-418a-95ce-895b555038fa/cp-frr-files/0.log" Jan 27 19:54:19 crc kubenswrapper[4853]: I0127 19:54:19.061448 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-4zj9c_a95a2a56-e8a9-418a-95ce-895b555038fa/cp-frr-files/0.log" Jan 27 19:54:19 crc kubenswrapper[4853]: I0127 19:54:19.130259 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-4zj9c_a95a2a56-e8a9-418a-95ce-895b555038fa/cp-reloader/0.log" Jan 27 19:54:19 crc kubenswrapper[4853]: I0127 19:54:19.136275 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-4zj9c_a95a2a56-e8a9-418a-95ce-895b555038fa/cp-reloader/0.log" Jan 27 19:54:19 crc kubenswrapper[4853]: I0127 19:54:19.166328 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-4zj9c_a95a2a56-e8a9-418a-95ce-895b555038fa/cp-metrics/0.log" Jan 27 19:54:19 crc kubenswrapper[4853]: I0127 19:54:19.329475 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-4zj9c_a95a2a56-e8a9-418a-95ce-895b555038fa/cp-frr-files/0.log" Jan 27 19:54:19 crc kubenswrapper[4853]: I0127 19:54:19.360433 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-4zj9c_a95a2a56-e8a9-418a-95ce-895b555038fa/cp-reloader/0.log" Jan 27 19:54:19 crc kubenswrapper[4853]: I0127 19:54:19.395714 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-4zj9c_a95a2a56-e8a9-418a-95ce-895b555038fa/cp-metrics/0.log" Jan 27 19:54:19 crc kubenswrapper[4853]: I0127 19:54:19.405608 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-4zj9c_a95a2a56-e8a9-418a-95ce-895b555038fa/cp-metrics/0.log" Jan 27 19:54:19 crc kubenswrapper[4853]: I0127 19:54:19.554145 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-4zj9c_a95a2a56-e8a9-418a-95ce-895b555038fa/cp-reloader/0.log" Jan 27 19:54:19 crc kubenswrapper[4853]: I0127 19:54:19.577255 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-4zj9c_a95a2a56-e8a9-418a-95ce-895b555038fa/cp-frr-files/0.log" Jan 27 19:54:19 crc kubenswrapper[4853]: I0127 19:54:19.601563 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-4zj9c_a95a2a56-e8a9-418a-95ce-895b555038fa/cp-metrics/0.log" Jan 27 19:54:19 crc kubenswrapper[4853]: I0127 19:54:19.628808 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-4zj9c_a95a2a56-e8a9-418a-95ce-895b555038fa/controller/0.log" Jan 27 19:54:19 crc kubenswrapper[4853]: I0127 
19:54:19.760643 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-4zj9c_a95a2a56-e8a9-418a-95ce-895b555038fa/frr-metrics/0.log" Jan 27 19:54:19 crc kubenswrapper[4853]: I0127 19:54:19.804894 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-4zj9c_a95a2a56-e8a9-418a-95ce-895b555038fa/kube-rbac-proxy/0.log" Jan 27 19:54:19 crc kubenswrapper[4853]: I0127 19:54:19.844999 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-4zj9c_a95a2a56-e8a9-418a-95ce-895b555038fa/kube-rbac-proxy-frr/0.log" Jan 27 19:54:19 crc kubenswrapper[4853]: I0127 19:54:19.987575 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-4zj9c_a95a2a56-e8a9-418a-95ce-895b555038fa/reloader/0.log" Jan 27 19:54:20 crc kubenswrapper[4853]: I0127 19:54:20.148405 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7df86c4f6c-srh2s_5d610e65-a0f1-4304-a7f9-f8b49e86d372/frr-k8s-webhook-server/0.log" Jan 27 19:54:20 crc kubenswrapper[4853]: I0127 19:54:20.310349 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-57d46b5cf6-rcn4b_729dbe0f-d26d-4eeb-b813-e4be40033e44/manager/0.log" Jan 27 19:54:20 crc kubenswrapper[4853]: I0127 19:54:20.483943 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-559d6879b9-6w56b_8a3f66ba-be42-476c-b03b-6ba6c92acd0f/webhook-server/0.log" Jan 27 19:54:20 crc kubenswrapper[4853]: I0127 19:54:20.723901 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-l2pvs_67e6561c-3f3b-45dd-b166-ca67a1abd96b/kube-rbac-proxy/0.log" Jan 27 19:54:21 crc kubenswrapper[4853]: I0127 19:54:21.326750 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-l2pvs_67e6561c-3f3b-45dd-b166-ca67a1abd96b/speaker/0.log" Jan 27 19:54:21 crc kubenswrapper[4853]: I0127 19:54:21.353267 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-4zj9c_a95a2a56-e8a9-418a-95ce-895b555038fa/frr/0.log" Jan 27 19:54:35 crc kubenswrapper[4853]: I0127 19:54:35.541025 4853 patch_prober.go:28] interesting pod/machine-config-daemon-6gqj2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 19:54:35 crc kubenswrapper[4853]: I0127 19:54:35.541628 4853 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6gqj2" podUID="b8a89b1e-bef8-4cb7-930c-480d3125778c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 19:54:35 crc kubenswrapper[4853]: I0127 19:54:35.770615 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcxd5g7_13e74d47-44ea-4d71-abca-c805139dc4a9/util/0.log" Jan 27 19:54:36 crc kubenswrapper[4853]: I0127 19:54:36.559972 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcxd5g7_13e74d47-44ea-4d71-abca-c805139dc4a9/pull/0.log" Jan 27 19:54:36 crc kubenswrapper[4853]: I0127 19:54:36.603912 4853 log.go:25] "Finished parsing 
log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcxd5g7_13e74d47-44ea-4d71-abca-c805139dc4a9/pull/0.log" Jan 27 19:54:36 crc kubenswrapper[4853]: I0127 19:54:36.688019 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcxd5g7_13e74d47-44ea-4d71-abca-c805139dc4a9/util/0.log" Jan 27 19:54:36 crc kubenswrapper[4853]: I0127 19:54:36.815310 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcxd5g7_13e74d47-44ea-4d71-abca-c805139dc4a9/pull/0.log" Jan 27 19:54:36 crc kubenswrapper[4853]: I0127 19:54:36.845868 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcxd5g7_13e74d47-44ea-4d71-abca-c805139dc4a9/util/0.log" Jan 27 19:54:36 crc kubenswrapper[4853]: I0127 19:54:36.880106 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcxd5g7_13e74d47-44ea-4d71-abca-c805139dc4a9/extract/0.log" Jan 27 19:54:37 crc kubenswrapper[4853]: I0127 19:54:37.058843 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713r6vfl_495b4ff2-7320-4ab3-b6d6-79c5d575cfe4/util/0.log" Jan 27 19:54:37 crc kubenswrapper[4853]: I0127 19:54:37.236438 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713r6vfl_495b4ff2-7320-4ab3-b6d6-79c5d575cfe4/pull/0.log" Jan 27 19:54:37 crc kubenswrapper[4853]: I0127 19:54:37.259476 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713r6vfl_495b4ff2-7320-4ab3-b6d6-79c5d575cfe4/pull/0.log" Jan 27 19:54:37 crc kubenswrapper[4853]: I0127 19:54:37.272224 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713r6vfl_495b4ff2-7320-4ab3-b6d6-79c5d575cfe4/util/0.log" Jan 27 19:54:37 crc kubenswrapper[4853]: I0127 19:54:37.478969 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713r6vfl_495b4ff2-7320-4ab3-b6d6-79c5d575cfe4/pull/0.log" Jan 27 19:54:37 crc kubenswrapper[4853]: I0127 19:54:37.482879 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713r6vfl_495b4ff2-7320-4ab3-b6d6-79c5d575cfe4/extract/0.log" Jan 27 19:54:37 crc kubenswrapper[4853]: I0127 19:54:37.511384 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713r6vfl_495b4ff2-7320-4ab3-b6d6-79c5d575cfe4/util/0.log" Jan 27 19:54:37 crc kubenswrapper[4853]: I0127 19:54:37.697982 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-l9rvb_4ccbf17f-6d23-4e6e-85e3-73c1275e767b/extract-utilities/0.log" Jan 27 19:54:38 crc kubenswrapper[4853]: I0127 19:54:38.368574 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-l9rvb_4ccbf17f-6d23-4e6e-85e3-73c1275e767b/extract-utilities/0.log" Jan 27 19:54:38 crc 
kubenswrapper[4853]: I0127 19:54:38.405186 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-l9rvb_4ccbf17f-6d23-4e6e-85e3-73c1275e767b/extract-content/0.log" Jan 27 19:54:38 crc kubenswrapper[4853]: I0127 19:54:38.445530 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-l9rvb_4ccbf17f-6d23-4e6e-85e3-73c1275e767b/extract-content/0.log" Jan 27 19:54:38 crc kubenswrapper[4853]: I0127 19:54:38.646747 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-l9rvb_4ccbf17f-6d23-4e6e-85e3-73c1275e767b/extract-utilities/0.log" Jan 27 19:54:38 crc kubenswrapper[4853]: I0127 19:54:38.658794 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-l9rvb_4ccbf17f-6d23-4e6e-85e3-73c1275e767b/extract-content/0.log" Jan 27 19:54:38 crc kubenswrapper[4853]: I0127 19:54:38.913453 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-5ppsb_a91a8685-4537-45c7-bb32-30b4885322b6/extract-utilities/0.log" Jan 27 19:54:39 crc kubenswrapper[4853]: I0127 19:54:39.160386 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-5ppsb_a91a8685-4537-45c7-bb32-30b4885322b6/extract-content/0.log" Jan 27 19:54:39 crc kubenswrapper[4853]: I0127 19:54:39.200529 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-5ppsb_a91a8685-4537-45c7-bb32-30b4885322b6/extract-content/0.log" Jan 27 19:54:39 crc kubenswrapper[4853]: I0127 19:54:39.219904 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-5ppsb_a91a8685-4537-45c7-bb32-30b4885322b6/extract-utilities/0.log" Jan 27 19:54:39 crc kubenswrapper[4853]: I0127 19:54:39.232554 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-l9rvb_4ccbf17f-6d23-4e6e-85e3-73c1275e767b/registry-server/0.log" Jan 27 19:54:39 crc kubenswrapper[4853]: I0127 19:54:39.454950 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-5ppsb_a91a8685-4537-45c7-bb32-30b4885322b6/extract-content/0.log" Jan 27 19:54:39 crc kubenswrapper[4853]: I0127 19:54:39.468066 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-5ppsb_a91a8685-4537-45c7-bb32-30b4885322b6/extract-utilities/0.log" Jan 27 19:54:39 crc kubenswrapper[4853]: I0127 19:54:39.736372 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-5ppsb_a91a8685-4537-45c7-bb32-30b4885322b6/registry-server/0.log" Jan 27 19:54:39 crc kubenswrapper[4853]: I0127 19:54:39.737655 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-6kwgl_958cd7a3-4aba-4ee4-a63a-dc75ef76970f/extract-utilities/0.log" Jan 27 19:54:39 crc kubenswrapper[4853]: I0127 19:54:39.742965 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-x2rhc_beef4152-90a1-4027-8971-dd9dbdd93fb3/marketplace-operator/0.log" Jan 27 19:54:39 crc kubenswrapper[4853]: I0127 19:54:39.925968 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-6kwgl_958cd7a3-4aba-4ee4-a63a-dc75ef76970f/extract-content/0.log" Jan 27 19:54:39 crc 
kubenswrapper[4853]: I0127 19:54:39.947708 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-6kwgl_958cd7a3-4aba-4ee4-a63a-dc75ef76970f/extract-utilities/0.log" Jan 27 19:54:39 crc kubenswrapper[4853]: I0127 19:54:39.952220 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-6kwgl_958cd7a3-4aba-4ee4-a63a-dc75ef76970f/extract-content/0.log" Jan 27 19:54:40 crc kubenswrapper[4853]: I0127 19:54:40.097949 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-6kwgl_958cd7a3-4aba-4ee4-a63a-dc75ef76970f/extract-content/0.log" Jan 27 19:54:40 crc kubenswrapper[4853]: I0127 19:54:40.201675 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-6kwgl_958cd7a3-4aba-4ee4-a63a-dc75ef76970f/extract-utilities/0.log" Jan 27 19:54:40 crc kubenswrapper[4853]: I0127 19:54:40.362719 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-tftml_4535e463-44ac-45f4-befb-6e68eae6e688/extract-utilities/0.log" Jan 27 19:54:40 crc kubenswrapper[4853]: I0127 19:54:40.599223 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-6kwgl_958cd7a3-4aba-4ee4-a63a-dc75ef76970f/registry-server/0.log" Jan 27 19:54:40 crc kubenswrapper[4853]: I0127 19:54:40.647831 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-tftml_4535e463-44ac-45f4-befb-6e68eae6e688/extract-content/0.log" Jan 27 19:54:40 crc kubenswrapper[4853]: I0127 19:54:40.655273 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-tftml_4535e463-44ac-45f4-befb-6e68eae6e688/extract-content/0.log" Jan 27 19:54:40 crc kubenswrapper[4853]: I0127 19:54:40.663155 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-tftml_4535e463-44ac-45f4-befb-6e68eae6e688/extract-utilities/0.log" Jan 27 19:54:40 crc kubenswrapper[4853]: I0127 19:54:40.870145 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-tftml_4535e463-44ac-45f4-befb-6e68eae6e688/extract-content/0.log" Jan 27 19:54:40 crc kubenswrapper[4853]: I0127 19:54:40.880386 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-tftml_4535e463-44ac-45f4-befb-6e68eae6e688/extract-utilities/0.log" Jan 27 19:54:41 crc kubenswrapper[4853]: I0127 19:54:41.359035 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-tftml_4535e463-44ac-45f4-befb-6e68eae6e688/registry-server/0.log" Jan 27 19:55:05 crc kubenswrapper[4853]: I0127 19:55:05.541152 4853 patch_prober.go:28] interesting pod/machine-config-daemon-6gqj2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 19:55:05 crc kubenswrapper[4853]: I0127 19:55:05.541778 4853 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6gqj2" podUID="b8a89b1e-bef8-4cb7-930c-480d3125778c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 19:55:05 crc 
kubenswrapper[4853]: I0127 19:55:05.541846 4853 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-6gqj2" Jan 27 19:55:05 crc kubenswrapper[4853]: I0127 19:55:05.542972 4853 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"1495a7d2ce41db54f43cd71f1504a040594d9f9b2b41463b9061b57b26e8d3c2"} pod="openshift-machine-config-operator/machine-config-daemon-6gqj2" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 27 19:55:05 crc kubenswrapper[4853]: I0127 19:55:05.543049 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-6gqj2" podUID="b8a89b1e-bef8-4cb7-930c-480d3125778c" containerName="machine-config-daemon" containerID="cri-o://1495a7d2ce41db54f43cd71f1504a040594d9f9b2b41463b9061b57b26e8d3c2" gracePeriod=600 Jan 27 19:55:05 crc kubenswrapper[4853]: I0127 19:55:05.932234 4853 generic.go:334] "Generic (PLEG): container finished" podID="b8a89b1e-bef8-4cb7-930c-480d3125778c" containerID="1495a7d2ce41db54f43cd71f1504a040594d9f9b2b41463b9061b57b26e8d3c2" exitCode=0 Jan 27 19:55:05 crc kubenswrapper[4853]: I0127 19:55:05.932289 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6gqj2" event={"ID":"b8a89b1e-bef8-4cb7-930c-480d3125778c","Type":"ContainerDied","Data":"1495a7d2ce41db54f43cd71f1504a040594d9f9b2b41463b9061b57b26e8d3c2"} Jan 27 19:55:05 crc kubenswrapper[4853]: I0127 19:55:05.932333 4853 scope.go:117] "RemoveContainer" containerID="419f0caa718cb6b8323adeb425867a293b3b008799642768d85dc8419e85b6b1" Jan 27 19:55:06 crc kubenswrapper[4853]: E0127 19:55:06.191040 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6gqj2_openshift-machine-config-operator(b8a89b1e-bef8-4cb7-930c-480d3125778c)\"" pod="openshift-machine-config-operator/machine-config-daemon-6gqj2" podUID="b8a89b1e-bef8-4cb7-930c-480d3125778c" Jan 27 19:55:06 crc kubenswrapper[4853]: I0127 19:55:06.942778 4853 scope.go:117] "RemoveContainer" containerID="1495a7d2ce41db54f43cd71f1504a040594d9f9b2b41463b9061b57b26e8d3c2" Jan 27 19:55:06 crc kubenswrapper[4853]: E0127 19:55:06.943809 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6gqj2_openshift-machine-config-operator(b8a89b1e-bef8-4cb7-930c-480d3125778c)\"" pod="openshift-machine-config-operator/machine-config-daemon-6gqj2" podUID="b8a89b1e-bef8-4cb7-930c-480d3125778c" Jan 27 19:55:18 crc kubenswrapper[4853]: I0127 19:55:18.119748 4853 scope.go:117] "RemoveContainer" containerID="1495a7d2ce41db54f43cd71f1504a040594d9f9b2b41463b9061b57b26e8d3c2" Jan 27 19:55:18 crc kubenswrapper[4853]: E0127 19:55:18.120585 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6gqj2_openshift-machine-config-operator(b8a89b1e-bef8-4cb7-930c-480d3125778c)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-6gqj2" podUID="b8a89b1e-bef8-4cb7-930c-480d3125778c" Jan 27 19:55:30 crc kubenswrapper[4853]: I0127 19:55:30.112088 4853 scope.go:117] "RemoveContainer" containerID="1495a7d2ce41db54f43cd71f1504a040594d9f9b2b41463b9061b57b26e8d3c2" Jan 27 19:55:30 crc kubenswrapper[4853]: E0127 19:55:30.112878 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6gqj2_openshift-machine-config-operator(b8a89b1e-bef8-4cb7-930c-480d3125778c)\"" pod="openshift-machine-config-operator/machine-config-daemon-6gqj2" podUID="b8a89b1e-bef8-4cb7-930c-480d3125778c" Jan 27 19:55:41 crc kubenswrapper[4853]: I0127 19:55:41.112681 4853 scope.go:117] "RemoveContainer" containerID="1495a7d2ce41db54f43cd71f1504a040594d9f9b2b41463b9061b57b26e8d3c2" Jan 27 19:55:41 crc kubenswrapper[4853]: E0127 19:55:41.113471 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6gqj2_openshift-machine-config-operator(b8a89b1e-bef8-4cb7-930c-480d3125778c)\"" pod="openshift-machine-config-operator/machine-config-daemon-6gqj2" podUID="b8a89b1e-bef8-4cb7-930c-480d3125778c" Jan 27 19:55:56 crc kubenswrapper[4853]: I0127 19:55:56.112556 4853 scope.go:117] "RemoveContainer" containerID="1495a7d2ce41db54f43cd71f1504a040594d9f9b2b41463b9061b57b26e8d3c2" Jan 27 19:55:56 crc kubenswrapper[4853]: E0127 19:55:56.113441 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6gqj2_openshift-machine-config-operator(b8a89b1e-bef8-4cb7-930c-480d3125778c)\"" pod="openshift-machine-config-operator/machine-config-daemon-6gqj2" podUID="b8a89b1e-bef8-4cb7-930c-480d3125778c" Jan 27 19:56:11 crc kubenswrapper[4853]: I0127 19:56:11.112154 4853 scope.go:117] "RemoveContainer" containerID="1495a7d2ce41db54f43cd71f1504a040594d9f9b2b41463b9061b57b26e8d3c2" Jan 27 19:56:11 crc kubenswrapper[4853]: E0127 19:56:11.112957 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6gqj2_openshift-machine-config-operator(b8a89b1e-bef8-4cb7-930c-480d3125778c)\"" pod="openshift-machine-config-operator/machine-config-daemon-6gqj2" podUID="b8a89b1e-bef8-4cb7-930c-480d3125778c" Jan 27 19:56:25 crc kubenswrapper[4853]: I0127 19:56:25.112776 4853 scope.go:117] "RemoveContainer" containerID="1495a7d2ce41db54f43cd71f1504a040594d9f9b2b41463b9061b57b26e8d3c2" Jan 27 19:56:25 crc kubenswrapper[4853]: E0127 19:56:25.113523 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6gqj2_openshift-machine-config-operator(b8a89b1e-bef8-4cb7-930c-480d3125778c)\"" pod="openshift-machine-config-operator/machine-config-daemon-6gqj2" podUID="b8a89b1e-bef8-4cb7-930c-480d3125778c" Jan 27 19:56:35 crc kubenswrapper[4853]: I0127 19:56:35.859788 4853 
generic.go:334] "Generic (PLEG): container finished" podID="77545fdc-17ea-4903-90d1-43a6820c8521" containerID="3f5a4041af6238ff9d2193bfbc386898433451851efe614483be4e5854aaff48" exitCode=0 Jan 27 19:56:35 crc kubenswrapper[4853]: I0127 19:56:35.859876 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-sxfjx/must-gather-td98d" event={"ID":"77545fdc-17ea-4903-90d1-43a6820c8521","Type":"ContainerDied","Data":"3f5a4041af6238ff9d2193bfbc386898433451851efe614483be4e5854aaff48"} Jan 27 19:56:35 crc kubenswrapper[4853]: I0127 19:56:35.861344 4853 scope.go:117] "RemoveContainer" containerID="3f5a4041af6238ff9d2193bfbc386898433451851efe614483be4e5854aaff48" Jan 27 19:56:35 crc kubenswrapper[4853]: I0127 19:56:35.984053 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-sxfjx_must-gather-td98d_77545fdc-17ea-4903-90d1-43a6820c8521/gather/0.log" Jan 27 19:56:36 crc kubenswrapper[4853]: I0127 19:56:36.113239 4853 scope.go:117] "RemoveContainer" containerID="1495a7d2ce41db54f43cd71f1504a040594d9f9b2b41463b9061b57b26e8d3c2" Jan 27 19:56:36 crc kubenswrapper[4853]: E0127 19:56:36.113524 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6gqj2_openshift-machine-config-operator(b8a89b1e-bef8-4cb7-930c-480d3125778c)\"" pod="openshift-machine-config-operator/machine-config-daemon-6gqj2" podUID="b8a89b1e-bef8-4cb7-930c-480d3125778c" Jan 27 19:56:47 crc kubenswrapper[4853]: I0127 19:56:47.740378 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-sxfjx/must-gather-td98d"] Jan 27 19:56:47 crc kubenswrapper[4853]: I0127 19:56:47.741274 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-sxfjx/must-gather-td98d" podUID="77545fdc-17ea-4903-90d1-43a6820c8521" containerName="copy" containerID="cri-o://f179af1deffaf85085fe1d285f86cd59d68e8b32baa4e31528650248dcadf006" gracePeriod=2 Jan 27 19:56:47 crc kubenswrapper[4853]: I0127 19:56:47.753032 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-sxfjx/must-gather-td98d"] Jan 27 19:56:47 crc kubenswrapper[4853]: I0127 19:56:47.985513 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-sxfjx_must-gather-td98d_77545fdc-17ea-4903-90d1-43a6820c8521/copy/0.log" Jan 27 19:56:47 crc kubenswrapper[4853]: I0127 19:56:47.989898 4853 generic.go:334] "Generic (PLEG): container finished" podID="77545fdc-17ea-4903-90d1-43a6820c8521" containerID="f179af1deffaf85085fe1d285f86cd59d68e8b32baa4e31528650248dcadf006" exitCode=143 Jan 27 19:56:48 crc kubenswrapper[4853]: I0127 19:56:48.696172 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-sxfjx_must-gather-td98d_77545fdc-17ea-4903-90d1-43a6820c8521/copy/0.log" Jan 27 19:56:48 crc kubenswrapper[4853]: I0127 19:56:48.696885 4853 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-sxfjx/must-gather-td98d" Jan 27 19:56:48 crc kubenswrapper[4853]: I0127 19:56:48.828222 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/77545fdc-17ea-4903-90d1-43a6820c8521-must-gather-output\") pod \"77545fdc-17ea-4903-90d1-43a6820c8521\" (UID: \"77545fdc-17ea-4903-90d1-43a6820c8521\") " Jan 27 19:56:48 crc kubenswrapper[4853]: I0127 19:56:48.828320 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s79xz\" (UniqueName: \"kubernetes.io/projected/77545fdc-17ea-4903-90d1-43a6820c8521-kube-api-access-s79xz\") pod \"77545fdc-17ea-4903-90d1-43a6820c8521\" (UID: \"77545fdc-17ea-4903-90d1-43a6820c8521\") " Jan 27 19:56:48 crc kubenswrapper[4853]: I0127 19:56:48.834891 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/77545fdc-17ea-4903-90d1-43a6820c8521-kube-api-access-s79xz" (OuterVolumeSpecName: "kube-api-access-s79xz") pod "77545fdc-17ea-4903-90d1-43a6820c8521" (UID: "77545fdc-17ea-4903-90d1-43a6820c8521"). InnerVolumeSpecName "kube-api-access-s79xz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 19:56:48 crc kubenswrapper[4853]: I0127 19:56:48.930813 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s79xz\" (UniqueName: \"kubernetes.io/projected/77545fdc-17ea-4903-90d1-43a6820c8521-kube-api-access-s79xz\") on node \"crc\" DevicePath \"\"" Jan 27 19:56:48 crc kubenswrapper[4853]: I0127 19:56:48.989001 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/77545fdc-17ea-4903-90d1-43a6820c8521-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "77545fdc-17ea-4903-90d1-43a6820c8521" (UID: "77545fdc-17ea-4903-90d1-43a6820c8521"). InnerVolumeSpecName "must-gather-output". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 19:56:49 crc kubenswrapper[4853]: I0127 19:56:49.001731 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-sxfjx_must-gather-td98d_77545fdc-17ea-4903-90d1-43a6820c8521/copy/0.log" Jan 27 19:56:49 crc kubenswrapper[4853]: I0127 19:56:49.002553 4853 scope.go:117] "RemoveContainer" containerID="f179af1deffaf85085fe1d285f86cd59d68e8b32baa4e31528650248dcadf006" Jan 27 19:56:49 crc kubenswrapper[4853]: I0127 19:56:49.002621 4853 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-sxfjx/must-gather-td98d" Jan 27 19:56:49 crc kubenswrapper[4853]: I0127 19:56:49.033651 4853 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/77545fdc-17ea-4903-90d1-43a6820c8521-must-gather-output\") on node \"crc\" DevicePath \"\"" Jan 27 19:56:49 crc kubenswrapper[4853]: I0127 19:56:49.040235 4853 scope.go:117] "RemoveContainer" containerID="3f5a4041af6238ff9d2193bfbc386898433451851efe614483be4e5854aaff48" Jan 27 19:56:50 crc kubenswrapper[4853]: I0127 19:56:50.112196 4853 scope.go:117] "RemoveContainer" containerID="1495a7d2ce41db54f43cd71f1504a040594d9f9b2b41463b9061b57b26e8d3c2" Jan 27 19:56:50 crc kubenswrapper[4853]: E0127 19:56:50.113210 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6gqj2_openshift-machine-config-operator(b8a89b1e-bef8-4cb7-930c-480d3125778c)\"" pod="openshift-machine-config-operator/machine-config-daemon-6gqj2" podUID="b8a89b1e-bef8-4cb7-930c-480d3125778c" Jan 27 19:56:50 crc kubenswrapper[4853]: I0127 19:56:50.123030 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="77545fdc-17ea-4903-90d1-43a6820c8521" path="/var/lib/kubelet/pods/77545fdc-17ea-4903-90d1-43a6820c8521/volumes" Jan 27 19:57:04 crc kubenswrapper[4853]: I0127 19:57:04.113156 4853 scope.go:117] "RemoveContainer" containerID="1495a7d2ce41db54f43cd71f1504a040594d9f9b2b41463b9061b57b26e8d3c2" Jan 27 19:57:04 crc kubenswrapper[4853]: E0127 19:57:04.113874 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6gqj2_openshift-machine-config-operator(b8a89b1e-bef8-4cb7-930c-480d3125778c)\"" pod="openshift-machine-config-operator/machine-config-daemon-6gqj2" podUID="b8a89b1e-bef8-4cb7-930c-480d3125778c" Jan 27 19:57:19 crc kubenswrapper[4853]: I0127 19:57:19.114172 4853 scope.go:117] "RemoveContainer" containerID="1495a7d2ce41db54f43cd71f1504a040594d9f9b2b41463b9061b57b26e8d3c2" Jan 27 19:57:19 crc kubenswrapper[4853]: E0127 19:57:19.114934 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6gqj2_openshift-machine-config-operator(b8a89b1e-bef8-4cb7-930c-480d3125778c)\"" pod="openshift-machine-config-operator/machine-config-daemon-6gqj2" podUID="b8a89b1e-bef8-4cb7-930c-480d3125778c" Jan 27 19:57:34 crc kubenswrapper[4853]: I0127 19:57:34.112238 4853 scope.go:117] "RemoveContainer" containerID="1495a7d2ce41db54f43cd71f1504a040594d9f9b2b41463b9061b57b26e8d3c2" Jan 27 19:57:34 crc kubenswrapper[4853]: E0127 19:57:34.113669 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6gqj2_openshift-machine-config-operator(b8a89b1e-bef8-4cb7-930c-480d3125778c)\"" pod="openshift-machine-config-operator/machine-config-daemon-6gqj2" podUID="b8a89b1e-bef8-4cb7-930c-480d3125778c" Jan 27 19:57:35 crc kubenswrapper[4853]: I0127 19:57:35.239933 4853 kubelet.go:2421] 
"SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-7fv4x"] Jan 27 19:57:35 crc kubenswrapper[4853]: E0127 19:57:35.240946 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="70c2c2bb-5c51-45be-9f99-487e64108cdb" containerName="container-00" Jan 27 19:57:35 crc kubenswrapper[4853]: I0127 19:57:35.240961 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="70c2c2bb-5c51-45be-9f99-487e64108cdb" containerName="container-00" Jan 27 19:57:35 crc kubenswrapper[4853]: E0127 19:57:35.240975 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="77545fdc-17ea-4903-90d1-43a6820c8521" containerName="copy" Jan 27 19:57:35 crc kubenswrapper[4853]: I0127 19:57:35.240980 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="77545fdc-17ea-4903-90d1-43a6820c8521" containerName="copy" Jan 27 19:57:35 crc kubenswrapper[4853]: E0127 19:57:35.240992 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="77545fdc-17ea-4903-90d1-43a6820c8521" containerName="gather" Jan 27 19:57:35 crc kubenswrapper[4853]: I0127 19:57:35.240998 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="77545fdc-17ea-4903-90d1-43a6820c8521" containerName="gather" Jan 27 19:57:35 crc kubenswrapper[4853]: I0127 19:57:35.241275 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="77545fdc-17ea-4903-90d1-43a6820c8521" containerName="gather" Jan 27 19:57:35 crc kubenswrapper[4853]: I0127 19:57:35.241300 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="77545fdc-17ea-4903-90d1-43a6820c8521" containerName="copy" Jan 27 19:57:35 crc kubenswrapper[4853]: I0127 19:57:35.241315 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="70c2c2bb-5c51-45be-9f99-487e64108cdb" containerName="container-00" Jan 27 19:57:35 crc kubenswrapper[4853]: I0127 19:57:35.242852 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-7fv4x" Jan 27 19:57:35 crc kubenswrapper[4853]: I0127 19:57:35.257285 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-7fv4x"] Jan 27 19:57:35 crc kubenswrapper[4853]: I0127 19:57:35.445832 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bb77b387-5ead-408b-bdd7-7b080e7d5df4-catalog-content\") pod \"redhat-operators-7fv4x\" (UID: \"bb77b387-5ead-408b-bdd7-7b080e7d5df4\") " pod="openshift-marketplace/redhat-operators-7fv4x" Jan 27 19:57:35 crc kubenswrapper[4853]: I0127 19:57:35.445902 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bb77b387-5ead-408b-bdd7-7b080e7d5df4-utilities\") pod \"redhat-operators-7fv4x\" (UID: \"bb77b387-5ead-408b-bdd7-7b080e7d5df4\") " pod="openshift-marketplace/redhat-operators-7fv4x" Jan 27 19:57:35 crc kubenswrapper[4853]: I0127 19:57:35.445949 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lxbr6\" (UniqueName: \"kubernetes.io/projected/bb77b387-5ead-408b-bdd7-7b080e7d5df4-kube-api-access-lxbr6\") pod \"redhat-operators-7fv4x\" (UID: \"bb77b387-5ead-408b-bdd7-7b080e7d5df4\") " pod="openshift-marketplace/redhat-operators-7fv4x" Jan 27 19:57:35 crc kubenswrapper[4853]: I0127 19:57:35.548212 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bb77b387-5ead-408b-bdd7-7b080e7d5df4-catalog-content\") pod \"redhat-operators-7fv4x\" (UID: \"bb77b387-5ead-408b-bdd7-7b080e7d5df4\") " pod="openshift-marketplace/redhat-operators-7fv4x" Jan 27 19:57:35 crc kubenswrapper[4853]: I0127 19:57:35.548274 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bb77b387-5ead-408b-bdd7-7b080e7d5df4-utilities\") pod \"redhat-operators-7fv4x\" (UID: \"bb77b387-5ead-408b-bdd7-7b080e7d5df4\") " pod="openshift-marketplace/redhat-operators-7fv4x" Jan 27 19:57:35 crc kubenswrapper[4853]: I0127 19:57:35.548325 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lxbr6\" (UniqueName: \"kubernetes.io/projected/bb77b387-5ead-408b-bdd7-7b080e7d5df4-kube-api-access-lxbr6\") pod \"redhat-operators-7fv4x\" (UID: \"bb77b387-5ead-408b-bdd7-7b080e7d5df4\") " pod="openshift-marketplace/redhat-operators-7fv4x" Jan 27 19:57:35 crc kubenswrapper[4853]: I0127 19:57:35.548898 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bb77b387-5ead-408b-bdd7-7b080e7d5df4-utilities\") pod \"redhat-operators-7fv4x\" (UID: \"bb77b387-5ead-408b-bdd7-7b080e7d5df4\") " pod="openshift-marketplace/redhat-operators-7fv4x" Jan 27 19:57:35 crc kubenswrapper[4853]: I0127 19:57:35.549015 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bb77b387-5ead-408b-bdd7-7b080e7d5df4-catalog-content\") pod \"redhat-operators-7fv4x\" (UID: \"bb77b387-5ead-408b-bdd7-7b080e7d5df4\") " pod="openshift-marketplace/redhat-operators-7fv4x" Jan 27 19:57:35 crc kubenswrapper[4853]: I0127 19:57:35.589013 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-lxbr6\" (UniqueName: \"kubernetes.io/projected/bb77b387-5ead-408b-bdd7-7b080e7d5df4-kube-api-access-lxbr6\") pod \"redhat-operators-7fv4x\" (UID: \"bb77b387-5ead-408b-bdd7-7b080e7d5df4\") " pod="openshift-marketplace/redhat-operators-7fv4x" Jan 27 19:57:35 crc kubenswrapper[4853]: I0127 19:57:35.872675 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-7fv4x" Jan 27 19:57:36 crc kubenswrapper[4853]: I0127 19:57:36.452051 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-7fv4x"] Jan 27 19:57:36 crc kubenswrapper[4853]: I0127 19:57:36.481173 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7fv4x" event={"ID":"bb77b387-5ead-408b-bdd7-7b080e7d5df4","Type":"ContainerStarted","Data":"c44437e261eabe0409a5cd7c736d0dc2e633fabe2c6eb6668225198b1dc0ca37"} Jan 27 19:57:37 crc kubenswrapper[4853]: I0127 19:57:37.497938 4853 generic.go:334] "Generic (PLEG): container finished" podID="bb77b387-5ead-408b-bdd7-7b080e7d5df4" containerID="6c8d555a110b00b6d566456cbaa175caad02c1f067524d648dfc82acca8bd713" exitCode=0 Jan 27 19:57:37 crc kubenswrapper[4853]: I0127 19:57:37.498037 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7fv4x" event={"ID":"bb77b387-5ead-408b-bdd7-7b080e7d5df4","Type":"ContainerDied","Data":"6c8d555a110b00b6d566456cbaa175caad02c1f067524d648dfc82acca8bd713"} Jan 27 19:57:37 crc kubenswrapper[4853]: I0127 19:57:37.502584 4853 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 27 19:57:38 crc kubenswrapper[4853]: I0127 19:57:38.510151 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7fv4x" event={"ID":"bb77b387-5ead-408b-bdd7-7b080e7d5df4","Type":"ContainerStarted","Data":"cb4a6b8c586999d0eefa3cdd0d768de833624bdd38f19cc6de98b51e88cba9ea"} Jan 27 19:57:39 crc kubenswrapper[4853]: I0127 19:57:39.523301 4853 generic.go:334] "Generic (PLEG): container finished" podID="bb77b387-5ead-408b-bdd7-7b080e7d5df4" containerID="cb4a6b8c586999d0eefa3cdd0d768de833624bdd38f19cc6de98b51e88cba9ea" exitCode=0 Jan 27 19:57:39 crc kubenswrapper[4853]: I0127 19:57:39.523434 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7fv4x" event={"ID":"bb77b387-5ead-408b-bdd7-7b080e7d5df4","Type":"ContainerDied","Data":"cb4a6b8c586999d0eefa3cdd0d768de833624bdd38f19cc6de98b51e88cba9ea"} Jan 27 19:57:41 crc kubenswrapper[4853]: I0127 19:57:41.553462 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7fv4x" event={"ID":"bb77b387-5ead-408b-bdd7-7b080e7d5df4","Type":"ContainerStarted","Data":"993651458fabd6bdd62751389e51e4a51acbe85d766ce89c270c00389e726f44"} Jan 27 19:57:41 crc kubenswrapper[4853]: I0127 19:57:41.584861 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-7fv4x" podStartSLOduration=4.141341662 podStartE2EDuration="6.584830557s" podCreationTimestamp="2026-01-27 19:57:35 +0000 UTC" firstStartedPulling="2026-01-27 19:57:37.50223277 +0000 UTC m=+4499.964775653" lastFinishedPulling="2026-01-27 19:57:39.945721665 +0000 UTC m=+4502.408264548" observedRunningTime="2026-01-27 19:57:41.576625883 +0000 UTC m=+4504.039168796" watchObservedRunningTime="2026-01-27 19:57:41.584830557 +0000 UTC m=+4504.047373440" Jan 27 19:57:45 crc 
kubenswrapper[4853]: I0127 19:57:45.873393 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-7fv4x" Jan 27 19:57:45 crc kubenswrapper[4853]: I0127 19:57:45.874569 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-7fv4x" Jan 27 19:57:46 crc kubenswrapper[4853]: I0127 19:57:46.914412 4853 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-7fv4x" podUID="bb77b387-5ead-408b-bdd7-7b080e7d5df4" containerName="registry-server" probeResult="failure" output=< Jan 27 19:57:46 crc kubenswrapper[4853]: timeout: failed to connect service ":50051" within 1s Jan 27 19:57:46 crc kubenswrapper[4853]: > Jan 27 19:57:47 crc kubenswrapper[4853]: I0127 19:57:47.113710 4853 scope.go:117] "RemoveContainer" containerID="1495a7d2ce41db54f43cd71f1504a040594d9f9b2b41463b9061b57b26e8d3c2" Jan 27 19:57:47 crc kubenswrapper[4853]: E0127 19:57:47.114360 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6gqj2_openshift-machine-config-operator(b8a89b1e-bef8-4cb7-930c-480d3125778c)\"" pod="openshift-machine-config-operator/machine-config-daemon-6gqj2" podUID="b8a89b1e-bef8-4cb7-930c-480d3125778c" Jan 27 19:57:55 crc kubenswrapper[4853]: I0127 19:57:55.927961 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-7fv4x" Jan 27 19:57:55 crc kubenswrapper[4853]: I0127 19:57:55.983730 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-7fv4x" Jan 27 19:57:56 crc kubenswrapper[4853]: I0127 19:57:56.167581 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-7fv4x"] Jan 27 19:57:57 crc kubenswrapper[4853]: I0127 19:57:57.736817 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-7fv4x" podUID="bb77b387-5ead-408b-bdd7-7b080e7d5df4" containerName="registry-server" containerID="cri-o://993651458fabd6bdd62751389e51e4a51acbe85d766ce89c270c00389e726f44" gracePeriod=2 Jan 27 19:57:58 crc kubenswrapper[4853]: I0127 19:57:58.194512 4853 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-7fv4x" Jan 27 19:57:58 crc kubenswrapper[4853]: I0127 19:57:58.267761 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lxbr6\" (UniqueName: \"kubernetes.io/projected/bb77b387-5ead-408b-bdd7-7b080e7d5df4-kube-api-access-lxbr6\") pod \"bb77b387-5ead-408b-bdd7-7b080e7d5df4\" (UID: \"bb77b387-5ead-408b-bdd7-7b080e7d5df4\") " Jan 27 19:57:58 crc kubenswrapper[4853]: I0127 19:57:58.267855 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bb77b387-5ead-408b-bdd7-7b080e7d5df4-catalog-content\") pod \"bb77b387-5ead-408b-bdd7-7b080e7d5df4\" (UID: \"bb77b387-5ead-408b-bdd7-7b080e7d5df4\") " Jan 27 19:57:58 crc kubenswrapper[4853]: I0127 19:57:58.267951 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bb77b387-5ead-408b-bdd7-7b080e7d5df4-utilities\") pod \"bb77b387-5ead-408b-bdd7-7b080e7d5df4\" (UID: \"bb77b387-5ead-408b-bdd7-7b080e7d5df4\") " Jan 27 19:57:58 crc kubenswrapper[4853]: I0127 19:57:58.270454 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bb77b387-5ead-408b-bdd7-7b080e7d5df4-utilities" (OuterVolumeSpecName: "utilities") pod "bb77b387-5ead-408b-bdd7-7b080e7d5df4" (UID: "bb77b387-5ead-408b-bdd7-7b080e7d5df4"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 19:57:58 crc kubenswrapper[4853]: I0127 19:57:58.278436 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bb77b387-5ead-408b-bdd7-7b080e7d5df4-kube-api-access-lxbr6" (OuterVolumeSpecName: "kube-api-access-lxbr6") pod "bb77b387-5ead-408b-bdd7-7b080e7d5df4" (UID: "bb77b387-5ead-408b-bdd7-7b080e7d5df4"). InnerVolumeSpecName "kube-api-access-lxbr6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 19:57:58 crc kubenswrapper[4853]: I0127 19:57:58.370114 4853 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bb77b387-5ead-408b-bdd7-7b080e7d5df4-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 19:57:58 crc kubenswrapper[4853]: I0127 19:57:58.370172 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lxbr6\" (UniqueName: \"kubernetes.io/projected/bb77b387-5ead-408b-bdd7-7b080e7d5df4-kube-api-access-lxbr6\") on node \"crc\" DevicePath \"\"" Jan 27 19:57:58 crc kubenswrapper[4853]: I0127 19:57:58.402147 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bb77b387-5ead-408b-bdd7-7b080e7d5df4-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "bb77b387-5ead-408b-bdd7-7b080e7d5df4" (UID: "bb77b387-5ead-408b-bdd7-7b080e7d5df4"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 19:57:58 crc kubenswrapper[4853]: I0127 19:57:58.472339 4853 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bb77b387-5ead-408b-bdd7-7b080e7d5df4-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 19:57:58 crc kubenswrapper[4853]: I0127 19:57:58.747736 4853 generic.go:334] "Generic (PLEG): container finished" podID="bb77b387-5ead-408b-bdd7-7b080e7d5df4" containerID="993651458fabd6bdd62751389e51e4a51acbe85d766ce89c270c00389e726f44" exitCode=0 Jan 27 19:57:58 crc kubenswrapper[4853]: I0127 19:57:58.747792 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7fv4x" event={"ID":"bb77b387-5ead-408b-bdd7-7b080e7d5df4","Type":"ContainerDied","Data":"993651458fabd6bdd62751389e51e4a51acbe85d766ce89c270c00389e726f44"} Jan 27 19:57:58 crc kubenswrapper[4853]: I0127 19:57:58.747828 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7fv4x" event={"ID":"bb77b387-5ead-408b-bdd7-7b080e7d5df4","Type":"ContainerDied","Data":"c44437e261eabe0409a5cd7c736d0dc2e633fabe2c6eb6668225198b1dc0ca37"} Jan 27 19:57:58 crc kubenswrapper[4853]: I0127 19:57:58.747840 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-7fv4x" Jan 27 19:57:58 crc kubenswrapper[4853]: I0127 19:57:58.747854 4853 scope.go:117] "RemoveContainer" containerID="993651458fabd6bdd62751389e51e4a51acbe85d766ce89c270c00389e726f44" Jan 27 19:57:58 crc kubenswrapper[4853]: I0127 19:57:58.766462 4853 scope.go:117] "RemoveContainer" containerID="cb4a6b8c586999d0eefa3cdd0d768de833624bdd38f19cc6de98b51e88cba9ea" Jan 27 19:57:58 crc kubenswrapper[4853]: I0127 19:57:58.781240 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-7fv4x"] Jan 27 19:57:58 crc kubenswrapper[4853]: I0127 19:57:58.790548 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-7fv4x"] Jan 27 19:57:58 crc kubenswrapper[4853]: I0127 19:57:58.809428 4853 scope.go:117] "RemoveContainer" containerID="6c8d555a110b00b6d566456cbaa175caad02c1f067524d648dfc82acca8bd713" Jan 27 19:57:58 crc kubenswrapper[4853]: I0127 19:57:58.842323 4853 scope.go:117] "RemoveContainer" containerID="993651458fabd6bdd62751389e51e4a51acbe85d766ce89c270c00389e726f44" Jan 27 19:57:58 crc kubenswrapper[4853]: E0127 19:57:58.842744 4853 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"993651458fabd6bdd62751389e51e4a51acbe85d766ce89c270c00389e726f44\": container with ID starting with 993651458fabd6bdd62751389e51e4a51acbe85d766ce89c270c00389e726f44 not found: ID does not exist" containerID="993651458fabd6bdd62751389e51e4a51acbe85d766ce89c270c00389e726f44" Jan 27 19:57:58 crc kubenswrapper[4853]: I0127 19:57:58.842783 4853 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"993651458fabd6bdd62751389e51e4a51acbe85d766ce89c270c00389e726f44"} err="failed to get container status \"993651458fabd6bdd62751389e51e4a51acbe85d766ce89c270c00389e726f44\": rpc error: code = NotFound desc = could not find container \"993651458fabd6bdd62751389e51e4a51acbe85d766ce89c270c00389e726f44\": container with ID starting with 993651458fabd6bdd62751389e51e4a51acbe85d766ce89c270c00389e726f44 not found: ID does not exist" Jan 27 19:57:58 crc 
Jan 27 19:57:58 crc kubenswrapper[4853]: I0127 19:57:58.842806 4853 scope.go:117] "RemoveContainer" containerID="cb4a6b8c586999d0eefa3cdd0d768de833624bdd38f19cc6de98b51e88cba9ea"
Jan 27 19:57:58 crc kubenswrapper[4853]: E0127 19:57:58.843033 4853 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cb4a6b8c586999d0eefa3cdd0d768de833624bdd38f19cc6de98b51e88cba9ea\": container with ID starting with cb4a6b8c586999d0eefa3cdd0d768de833624bdd38f19cc6de98b51e88cba9ea not found: ID does not exist" containerID="cb4a6b8c586999d0eefa3cdd0d768de833624bdd38f19cc6de98b51e88cba9ea"
Jan 27 19:57:58 crc kubenswrapper[4853]: I0127 19:57:58.843081 4853 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cb4a6b8c586999d0eefa3cdd0d768de833624bdd38f19cc6de98b51e88cba9ea"} err="failed to get container status \"cb4a6b8c586999d0eefa3cdd0d768de833624bdd38f19cc6de98b51e88cba9ea\": rpc error: code = NotFound desc = could not find container \"cb4a6b8c586999d0eefa3cdd0d768de833624bdd38f19cc6de98b51e88cba9ea\": container with ID starting with cb4a6b8c586999d0eefa3cdd0d768de833624bdd38f19cc6de98b51e88cba9ea not found: ID does not exist"
Jan 27 19:57:58 crc kubenswrapper[4853]: I0127 19:57:58.843114 4853 scope.go:117] "RemoveContainer" containerID="6c8d555a110b00b6d566456cbaa175caad02c1f067524d648dfc82acca8bd713"
Jan 27 19:57:58 crc kubenswrapper[4853]: E0127 19:57:58.843527 4853 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6c8d555a110b00b6d566456cbaa175caad02c1f067524d648dfc82acca8bd713\": container with ID starting with 6c8d555a110b00b6d566456cbaa175caad02c1f067524d648dfc82acca8bd713 not found: ID does not exist" containerID="6c8d555a110b00b6d566456cbaa175caad02c1f067524d648dfc82acca8bd713"
Jan 27 19:57:58 crc kubenswrapper[4853]: I0127 19:57:58.843551 4853 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6c8d555a110b00b6d566456cbaa175caad02c1f067524d648dfc82acca8bd713"} err="failed to get container status \"6c8d555a110b00b6d566456cbaa175caad02c1f067524d648dfc82acca8bd713\": rpc error: code = NotFound desc = could not find container \"6c8d555a110b00b6d566456cbaa175caad02c1f067524d648dfc82acca8bd713\": container with ID starting with 6c8d555a110b00b6d566456cbaa175caad02c1f067524d648dfc82acca8bd713 not found: ID does not exist"
Jan 27 19:57:59 crc kubenswrapper[4853]: I0127 19:57:59.112371 4853 scope.go:117] "RemoveContainer" containerID="1495a7d2ce41db54f43cd71f1504a040594d9f9b2b41463b9061b57b26e8d3c2"
Jan 27 19:57:59 crc kubenswrapper[4853]: E0127 19:57:59.112860 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6gqj2_openshift-machine-config-operator(b8a89b1e-bef8-4cb7-930c-480d3125778c)\"" pod="openshift-machine-config-operator/machine-config-daemon-6gqj2" podUID="b8a89b1e-bef8-4cb7-930c-480d3125778c"
Jan 27 19:58:00 crc kubenswrapper[4853]: I0127 19:58:00.124782 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bb77b387-5ead-408b-bdd7-7b080e7d5df4" path="/var/lib/kubelet/pods/bb77b387-5ead-408b-bdd7-7b080e7d5df4/volumes"
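The three "RemoveContainer" / "ContainerStatus from runtime service failed" / "DeleteContainer returned error" triples above are a benign race: by the time the kubelet asks CRI-O for the status of a container it just removed, the runtime has already deleted it and answers with gRPC NotFound, which the kubelet records at info level and moves past. A minimal sketch of that idempotent-delete pattern (the function and stub names are illustrative, not kubelet source; only the grpc-go status/codes usage is standard API):

```go
// Minimal sketch of the idempotent-delete pattern the entries above
// suggest; removeContainer and the stub below are illustrative, not
// kubelet source. Only the grpc-go status/codes usage is standard API.
package main

import (
	"fmt"

	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
)

// removeContainer treats NotFound from the runtime as success: the
// container was already garbage-collected, so there is nothing to do.
func removeContainer(id string, remove func(string) error) error {
	err := remove(id)
	if status.Code(err) == codes.NotFound {
		return nil // already gone; the race with runtime GC is benign
	}
	return err
}

func main() {
	// Stub runtime call: the container has already been deleted.
	gone := func(id string) error {
		return status.Errorf(codes.NotFound, "could not find container %q", id)
	}
	fmt.Println(removeContainer("993651458fab", gone)) // prints <nil>
}
```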
Jan 27 19:58:12 crc kubenswrapper[4853]: I0127 19:58:12.112543 4853 scope.go:117] "RemoveContainer" containerID="1495a7d2ce41db54f43cd71f1504a040594d9f9b2b41463b9061b57b26e8d3c2"
Jan 27 19:58:12 crc kubenswrapper[4853]: E0127 19:58:12.113483 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6gqj2_openshift-machine-config-operator(b8a89b1e-bef8-4cb7-930c-480d3125778c)\"" pod="openshift-machine-config-operator/machine-config-daemon-6gqj2" podUID="b8a89b1e-bef8-4cb7-930c-480d3125778c"
Jan 27 19:58:27 crc kubenswrapper[4853]: I0127 19:58:27.113367 4853 scope.go:117] "RemoveContainer" containerID="1495a7d2ce41db54f43cd71f1504a040594d9f9b2b41463b9061b57b26e8d3c2"
Jan 27 19:58:27 crc kubenswrapper[4853]: E0127 19:58:27.114249 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6gqj2_openshift-machine-config-operator(b8a89b1e-bef8-4cb7-930c-480d3125778c)\"" pod="openshift-machine-config-operator/machine-config-daemon-6gqj2" podUID="b8a89b1e-bef8-4cb7-930c-480d3125778c"
Jan 27 19:58:39 crc kubenswrapper[4853]: I0127 19:58:39.112934 4853 scope.go:117] "RemoveContainer" containerID="1495a7d2ce41db54f43cd71f1504a040594d9f9b2b41463b9061b57b26e8d3c2"
Jan 27 19:58:39 crc kubenswrapper[4853]: E0127 19:58:39.114225 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6gqj2_openshift-machine-config-operator(b8a89b1e-bef8-4cb7-930c-480d3125778c)\"" pod="openshift-machine-config-operator/machine-config-daemon-6gqj2" podUID="b8a89b1e-bef8-4cb7-930c-480d3125778c"
Jan 27 19:58:52 crc kubenswrapper[4853]: I0127 19:58:52.113280 4853 scope.go:117] "RemoveContainer" containerID="1495a7d2ce41db54f43cd71f1504a040594d9f9b2b41463b9061b57b26e8d3c2"
Jan 27 19:58:52 crc kubenswrapper[4853]: E0127 19:58:52.116607 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6gqj2_openshift-machine-config-operator(b8a89b1e-bef8-4cb7-930c-480d3125778c)\"" pod="openshift-machine-config-operator/machine-config-daemon-6gqj2" podUID="b8a89b1e-bef8-4cb7-930c-480d3125778c"
Jan 27 19:59:06 crc kubenswrapper[4853]: I0127 19:59:06.113793 4853 scope.go:117] "RemoveContainer" containerID="1495a7d2ce41db54f43cd71f1504a040594d9f9b2b41463b9061b57b26e8d3c2"
Jan 27 19:59:06 crc kubenswrapper[4853]: E0127 19:59:06.114692 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6gqj2_openshift-machine-config-operator(b8a89b1e-bef8-4cb7-930c-480d3125778c)\"" pod="openshift-machine-config-operator/machine-config-daemon-6gqj2" podUID="b8a89b1e-bef8-4cb7-930c-480d3125778c"
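The machine-config-daemon-6gqj2 pairs repeating through the rest of the log show the kubelet's restart backoff pinned at its ceiling: each sync asks to start the container again, and pod_workers refuses with "back-off 5m0s". The entries recur every 12–15 seconds because the pod worker re-syncs on that cadence; 5m0s is how long the gate stays closed, not the retry interval. A sketch of the doubling-with-cap schedule, assuming the commonly documented kubelet defaults of a 10s base and 5m cap (assumptions, not values read from this node's configuration):

```go
// Sketch of a doubling restart backoff with a ceiling. The 10s base and
// 5m cap are the commonly documented kubelet defaults; they are assumed
// here, not read from this node's configuration.
package main

import (
	"fmt"
	"time"
)

func restartBackoff(restarts int, base, ceiling time.Duration) time.Duration {
	d := base
	for i := 0; i < restarts; i++ {
		d *= 2
		if d > ceiling {
			return ceiling
		}
	}
	return d
}

func main() {
	for r := 0; r <= 6; r++ {
		// By the 5th restart the delay pins at the 5m0s seen in the log.
		fmt.Printf("restart %d -> %v\n", r, restartBackoff(r, 10*time.Second, 5*time.Minute))
	}
}
```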
CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6gqj2_openshift-machine-config-operator(b8a89b1e-bef8-4cb7-930c-480d3125778c)\"" pod="openshift-machine-config-operator/machine-config-daemon-6gqj2" podUID="b8a89b1e-bef8-4cb7-930c-480d3125778c" Jan 27 19:59:35 crc kubenswrapper[4853]: I0127 19:59:35.113652 4853 scope.go:117] "RemoveContainer" containerID="1495a7d2ce41db54f43cd71f1504a040594d9f9b2b41463b9061b57b26e8d3c2" Jan 27 19:59:35 crc kubenswrapper[4853]: E0127 19:59:35.115147 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6gqj2_openshift-machine-config-operator(b8a89b1e-bef8-4cb7-930c-480d3125778c)\"" pod="openshift-machine-config-operator/machine-config-daemon-6gqj2" podUID="b8a89b1e-bef8-4cb7-930c-480d3125778c" Jan 27 19:59:40 crc kubenswrapper[4853]: I0127 19:59:40.246394 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-nx7hx"] Jan 27 19:59:40 crc kubenswrapper[4853]: E0127 19:59:40.247727 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bb77b387-5ead-408b-bdd7-7b080e7d5df4" containerName="extract-utilities" Jan 27 19:59:40 crc kubenswrapper[4853]: I0127 19:59:40.247748 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb77b387-5ead-408b-bdd7-7b080e7d5df4" containerName="extract-utilities" Jan 27 19:59:40 crc kubenswrapper[4853]: E0127 19:59:40.247767 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bb77b387-5ead-408b-bdd7-7b080e7d5df4" containerName="extract-content" Jan 27 19:59:40 crc kubenswrapper[4853]: I0127 19:59:40.247776 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb77b387-5ead-408b-bdd7-7b080e7d5df4" containerName="extract-content" Jan 27 19:59:40 crc kubenswrapper[4853]: E0127 19:59:40.247826 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bb77b387-5ead-408b-bdd7-7b080e7d5df4" containerName="registry-server" Jan 27 19:59:40 crc kubenswrapper[4853]: I0127 19:59:40.247835 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb77b387-5ead-408b-bdd7-7b080e7d5df4" containerName="registry-server" Jan 27 19:59:40 crc kubenswrapper[4853]: I0127 19:59:40.248090 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="bb77b387-5ead-408b-bdd7-7b080e7d5df4" containerName="registry-server" Jan 27 19:59:40 crc kubenswrapper[4853]: I0127 19:59:40.250107 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-nx7hx" Jan 27 19:59:40 crc kubenswrapper[4853]: I0127 19:59:40.264920 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-nx7hx"] Jan 27 19:59:40 crc kubenswrapper[4853]: I0127 19:59:40.418215 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v5p8j\" (UniqueName: \"kubernetes.io/projected/397c27f7-d93e-4b95-9e04-3b9cbf2cad6a-kube-api-access-v5p8j\") pod \"certified-operators-nx7hx\" (UID: \"397c27f7-d93e-4b95-9e04-3b9cbf2cad6a\") " pod="openshift-marketplace/certified-operators-nx7hx" Jan 27 19:59:40 crc kubenswrapper[4853]: I0127 19:59:40.418294 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/397c27f7-d93e-4b95-9e04-3b9cbf2cad6a-utilities\") pod \"certified-operators-nx7hx\" (UID: \"397c27f7-d93e-4b95-9e04-3b9cbf2cad6a\") " pod="openshift-marketplace/certified-operators-nx7hx" Jan 27 19:59:40 crc kubenswrapper[4853]: I0127 19:59:40.419010 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/397c27f7-d93e-4b95-9e04-3b9cbf2cad6a-catalog-content\") pod \"certified-operators-nx7hx\" (UID: \"397c27f7-d93e-4b95-9e04-3b9cbf2cad6a\") " pod="openshift-marketplace/certified-operators-nx7hx" Jan 27 19:59:40 crc kubenswrapper[4853]: I0127 19:59:40.521485 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v5p8j\" (UniqueName: \"kubernetes.io/projected/397c27f7-d93e-4b95-9e04-3b9cbf2cad6a-kube-api-access-v5p8j\") pod \"certified-operators-nx7hx\" (UID: \"397c27f7-d93e-4b95-9e04-3b9cbf2cad6a\") " pod="openshift-marketplace/certified-operators-nx7hx" Jan 27 19:59:40 crc kubenswrapper[4853]: I0127 19:59:40.521573 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/397c27f7-d93e-4b95-9e04-3b9cbf2cad6a-utilities\") pod \"certified-operators-nx7hx\" (UID: \"397c27f7-d93e-4b95-9e04-3b9cbf2cad6a\") " pod="openshift-marketplace/certified-operators-nx7hx" Jan 27 19:59:40 crc kubenswrapper[4853]: I0127 19:59:40.521640 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/397c27f7-d93e-4b95-9e04-3b9cbf2cad6a-catalog-content\") pod \"certified-operators-nx7hx\" (UID: \"397c27f7-d93e-4b95-9e04-3b9cbf2cad6a\") " pod="openshift-marketplace/certified-operators-nx7hx" Jan 27 19:59:40 crc kubenswrapper[4853]: I0127 19:59:40.522225 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/397c27f7-d93e-4b95-9e04-3b9cbf2cad6a-catalog-content\") pod \"certified-operators-nx7hx\" (UID: \"397c27f7-d93e-4b95-9e04-3b9cbf2cad6a\") " pod="openshift-marketplace/certified-operators-nx7hx" Jan 27 19:59:40 crc kubenswrapper[4853]: I0127 19:59:40.522292 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/397c27f7-d93e-4b95-9e04-3b9cbf2cad6a-utilities\") pod \"certified-operators-nx7hx\" (UID: \"397c27f7-d93e-4b95-9e04-3b9cbf2cad6a\") " pod="openshift-marketplace/certified-operators-nx7hx" Jan 27 19:59:40 crc kubenswrapper[4853]: I0127 19:59:40.549243 4853 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-v5p8j\" (UniqueName: \"kubernetes.io/projected/397c27f7-d93e-4b95-9e04-3b9cbf2cad6a-kube-api-access-v5p8j\") pod \"certified-operators-nx7hx\" (UID: \"397c27f7-d93e-4b95-9e04-3b9cbf2cad6a\") " pod="openshift-marketplace/certified-operators-nx7hx" Jan 27 19:59:40 crc kubenswrapper[4853]: I0127 19:59:40.579625 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-nx7hx" Jan 27 19:59:41 crc kubenswrapper[4853]: I0127 19:59:41.151736 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-nx7hx"] Jan 27 19:59:41 crc kubenswrapper[4853]: I0127 19:59:41.186221 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nx7hx" event={"ID":"397c27f7-d93e-4b95-9e04-3b9cbf2cad6a","Type":"ContainerStarted","Data":"33d586789c6b995066282fc1d4eed3d6c14b5637dbf667c8183d3fa68be75c39"} Jan 27 19:59:42 crc kubenswrapper[4853]: I0127 19:59:42.198201 4853 generic.go:334] "Generic (PLEG): container finished" podID="397c27f7-d93e-4b95-9e04-3b9cbf2cad6a" containerID="9bb1b64f6474505566d599334adc0906df4c771c7032a587b70bf3646001d0e0" exitCode=0 Jan 27 19:59:42 crc kubenswrapper[4853]: I0127 19:59:42.198269 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nx7hx" event={"ID":"397c27f7-d93e-4b95-9e04-3b9cbf2cad6a","Type":"ContainerDied","Data":"9bb1b64f6474505566d599334adc0906df4c771c7032a587b70bf3646001d0e0"} Jan 27 19:59:43 crc kubenswrapper[4853]: I0127 19:59:43.216389 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nx7hx" event={"ID":"397c27f7-d93e-4b95-9e04-3b9cbf2cad6a","Type":"ContainerStarted","Data":"52eca1509e10e17678f301034c51cd94d91506de087fdf08f3fb83dd0110b57d"} Jan 27 19:59:44 crc kubenswrapper[4853]: I0127 19:59:44.231065 4853 generic.go:334] "Generic (PLEG): container finished" podID="397c27f7-d93e-4b95-9e04-3b9cbf2cad6a" containerID="52eca1509e10e17678f301034c51cd94d91506de087fdf08f3fb83dd0110b57d" exitCode=0 Jan 27 19:59:44 crc kubenswrapper[4853]: I0127 19:59:44.231167 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nx7hx" event={"ID":"397c27f7-d93e-4b95-9e04-3b9cbf2cad6a","Type":"ContainerDied","Data":"52eca1509e10e17678f301034c51cd94d91506de087fdf08f3fb83dd0110b57d"} Jan 27 19:59:45 crc kubenswrapper[4853]: I0127 19:59:45.244166 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nx7hx" event={"ID":"397c27f7-d93e-4b95-9e04-3b9cbf2cad6a","Type":"ContainerStarted","Data":"086ccd55a1b833bb620b86479dd8c0fe8826a5cdcc9348f64ae68594522ae26b"} Jan 27 19:59:45 crc kubenswrapper[4853]: I0127 19:59:45.274022 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-nx7hx" podStartSLOduration=2.850673011 podStartE2EDuration="5.273986198s" podCreationTimestamp="2026-01-27 19:59:40 +0000 UTC" firstStartedPulling="2026-01-27 19:59:42.202235529 +0000 UTC m=+4624.664778412" lastFinishedPulling="2026-01-27 19:59:44.625548696 +0000 UTC m=+4627.088091599" observedRunningTime="2026-01-27 19:59:45.263769247 +0000 UTC m=+4627.726312140" watchObservedRunningTime="2026-01-27 19:59:45.273986198 +0000 UTC m=+4627.736529091" Jan 27 19:59:47 crc kubenswrapper[4853]: I0127 19:59:47.112682 4853 scope.go:117] "RemoveContainer" 
containerID="1495a7d2ce41db54f43cd71f1504a040594d9f9b2b41463b9061b57b26e8d3c2" Jan 27 19:59:47 crc kubenswrapper[4853]: E0127 19:59:47.113405 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6gqj2_openshift-machine-config-operator(b8a89b1e-bef8-4cb7-930c-480d3125778c)\"" pod="openshift-machine-config-operator/machine-config-daemon-6gqj2" podUID="b8a89b1e-bef8-4cb7-930c-480d3125778c" Jan 27 19:59:50 crc kubenswrapper[4853]: I0127 19:59:50.580490 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-nx7hx" Jan 27 19:59:50 crc kubenswrapper[4853]: I0127 19:59:50.581273 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-nx7hx" Jan 27 19:59:50 crc kubenswrapper[4853]: I0127 19:59:50.630633 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-nx7hx" Jan 27 19:59:51 crc kubenswrapper[4853]: I0127 19:59:51.352292 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-nx7hx" Jan 27 19:59:51 crc kubenswrapper[4853]: I0127 19:59:51.422601 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-nx7hx"] Jan 27 19:59:53 crc kubenswrapper[4853]: I0127 19:59:53.319522 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-nx7hx" podUID="397c27f7-d93e-4b95-9e04-3b9cbf2cad6a" containerName="registry-server" containerID="cri-o://086ccd55a1b833bb620b86479dd8c0fe8826a5cdcc9348f64ae68594522ae26b" gracePeriod=2 Jan 27 19:59:53 crc kubenswrapper[4853]: I0127 19:59:53.855370 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-nx7hx" Jan 27 19:59:54 crc kubenswrapper[4853]: I0127 19:59:54.044169 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/397c27f7-d93e-4b95-9e04-3b9cbf2cad6a-utilities\") pod \"397c27f7-d93e-4b95-9e04-3b9cbf2cad6a\" (UID: \"397c27f7-d93e-4b95-9e04-3b9cbf2cad6a\") " Jan 27 19:59:54 crc kubenswrapper[4853]: I0127 19:59:54.044302 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/397c27f7-d93e-4b95-9e04-3b9cbf2cad6a-catalog-content\") pod \"397c27f7-d93e-4b95-9e04-3b9cbf2cad6a\" (UID: \"397c27f7-d93e-4b95-9e04-3b9cbf2cad6a\") " Jan 27 19:59:54 crc kubenswrapper[4853]: I0127 19:59:54.044591 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v5p8j\" (UniqueName: \"kubernetes.io/projected/397c27f7-d93e-4b95-9e04-3b9cbf2cad6a-kube-api-access-v5p8j\") pod \"397c27f7-d93e-4b95-9e04-3b9cbf2cad6a\" (UID: \"397c27f7-d93e-4b95-9e04-3b9cbf2cad6a\") " Jan 27 19:59:54 crc kubenswrapper[4853]: I0127 19:59:54.045530 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/397c27f7-d93e-4b95-9e04-3b9cbf2cad6a-utilities" (OuterVolumeSpecName: "utilities") pod "397c27f7-d93e-4b95-9e04-3b9cbf2cad6a" (UID: "397c27f7-d93e-4b95-9e04-3b9cbf2cad6a"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 19:59:54 crc kubenswrapper[4853]: I0127 19:59:54.062462 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/397c27f7-d93e-4b95-9e04-3b9cbf2cad6a-kube-api-access-v5p8j" (OuterVolumeSpecName: "kube-api-access-v5p8j") pod "397c27f7-d93e-4b95-9e04-3b9cbf2cad6a" (UID: "397c27f7-d93e-4b95-9e04-3b9cbf2cad6a"). InnerVolumeSpecName "kube-api-access-v5p8j". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 19:59:54 crc kubenswrapper[4853]: I0127 19:59:54.147117 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v5p8j\" (UniqueName: \"kubernetes.io/projected/397c27f7-d93e-4b95-9e04-3b9cbf2cad6a-kube-api-access-v5p8j\") on node \"crc\" DevicePath \"\"" Jan 27 19:59:54 crc kubenswrapper[4853]: I0127 19:59:54.148163 4853 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/397c27f7-d93e-4b95-9e04-3b9cbf2cad6a-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 19:59:54 crc kubenswrapper[4853]: I0127 19:59:54.333012 4853 generic.go:334] "Generic (PLEG): container finished" podID="397c27f7-d93e-4b95-9e04-3b9cbf2cad6a" containerID="086ccd55a1b833bb620b86479dd8c0fe8826a5cdcc9348f64ae68594522ae26b" exitCode=0 Jan 27 19:59:54 crc kubenswrapper[4853]: I0127 19:59:54.333083 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nx7hx" event={"ID":"397c27f7-d93e-4b95-9e04-3b9cbf2cad6a","Type":"ContainerDied","Data":"086ccd55a1b833bb620b86479dd8c0fe8826a5cdcc9348f64ae68594522ae26b"} Jan 27 19:59:54 crc kubenswrapper[4853]: I0127 19:59:54.333155 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nx7hx" event={"ID":"397c27f7-d93e-4b95-9e04-3b9cbf2cad6a","Type":"ContainerDied","Data":"33d586789c6b995066282fc1d4eed3d6c14b5637dbf667c8183d3fa68be75c39"} Jan 27 19:59:54 crc kubenswrapper[4853]: I0127 19:59:54.333187 4853 scope.go:117] "RemoveContainer" containerID="086ccd55a1b833bb620b86479dd8c0fe8826a5cdcc9348f64ae68594522ae26b" Jan 27 19:59:54 crc kubenswrapper[4853]: I0127 19:59:54.334781 4853 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-nx7hx" Jan 27 19:59:54 crc kubenswrapper[4853]: I0127 19:59:54.357623 4853 scope.go:117] "RemoveContainer" containerID="52eca1509e10e17678f301034c51cd94d91506de087fdf08f3fb83dd0110b57d" Jan 27 19:59:54 crc kubenswrapper[4853]: I0127 19:59:54.390616 4853 scope.go:117] "RemoveContainer" containerID="9bb1b64f6474505566d599334adc0906df4c771c7032a587b70bf3646001d0e0" Jan 27 19:59:54 crc kubenswrapper[4853]: I0127 19:59:54.437985 4853 scope.go:117] "RemoveContainer" containerID="086ccd55a1b833bb620b86479dd8c0fe8826a5cdcc9348f64ae68594522ae26b" Jan 27 19:59:54 crc kubenswrapper[4853]: E0127 19:59:54.438669 4853 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"086ccd55a1b833bb620b86479dd8c0fe8826a5cdcc9348f64ae68594522ae26b\": container with ID starting with 086ccd55a1b833bb620b86479dd8c0fe8826a5cdcc9348f64ae68594522ae26b not found: ID does not exist" containerID="086ccd55a1b833bb620b86479dd8c0fe8826a5cdcc9348f64ae68594522ae26b" Jan 27 19:59:54 crc kubenswrapper[4853]: I0127 19:59:54.438726 4853 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"086ccd55a1b833bb620b86479dd8c0fe8826a5cdcc9348f64ae68594522ae26b"} err="failed to get container status \"086ccd55a1b833bb620b86479dd8c0fe8826a5cdcc9348f64ae68594522ae26b\": rpc error: code = NotFound desc = could not find container \"086ccd55a1b833bb620b86479dd8c0fe8826a5cdcc9348f64ae68594522ae26b\": container with ID starting with 086ccd55a1b833bb620b86479dd8c0fe8826a5cdcc9348f64ae68594522ae26b not found: ID does not exist" Jan 27 19:59:54 crc kubenswrapper[4853]: I0127 19:59:54.438756 4853 scope.go:117] "RemoveContainer" containerID="52eca1509e10e17678f301034c51cd94d91506de087fdf08f3fb83dd0110b57d" Jan 27 19:59:54 crc kubenswrapper[4853]: E0127 19:59:54.439408 4853 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"52eca1509e10e17678f301034c51cd94d91506de087fdf08f3fb83dd0110b57d\": container with ID starting with 52eca1509e10e17678f301034c51cd94d91506de087fdf08f3fb83dd0110b57d not found: ID does not exist" containerID="52eca1509e10e17678f301034c51cd94d91506de087fdf08f3fb83dd0110b57d" Jan 27 19:59:54 crc kubenswrapper[4853]: I0127 19:59:54.439469 4853 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"52eca1509e10e17678f301034c51cd94d91506de087fdf08f3fb83dd0110b57d"} err="failed to get container status \"52eca1509e10e17678f301034c51cd94d91506de087fdf08f3fb83dd0110b57d\": rpc error: code = NotFound desc = could not find container \"52eca1509e10e17678f301034c51cd94d91506de087fdf08f3fb83dd0110b57d\": container with ID starting with 52eca1509e10e17678f301034c51cd94d91506de087fdf08f3fb83dd0110b57d not found: ID does not exist" Jan 27 19:59:54 crc kubenswrapper[4853]: I0127 19:59:54.439511 4853 scope.go:117] "RemoveContainer" containerID="9bb1b64f6474505566d599334adc0906df4c771c7032a587b70bf3646001d0e0" Jan 27 19:59:54 crc kubenswrapper[4853]: E0127 19:59:54.439984 4853 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9bb1b64f6474505566d599334adc0906df4c771c7032a587b70bf3646001d0e0\": container with ID starting with 9bb1b64f6474505566d599334adc0906df4c771c7032a587b70bf3646001d0e0 not found: ID does not exist" containerID="9bb1b64f6474505566d599334adc0906df4c771c7032a587b70bf3646001d0e0" 
Jan 27 19:59:54 crc kubenswrapper[4853]: I0127 19:59:54.440147 4853 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9bb1b64f6474505566d599334adc0906df4c771c7032a587b70bf3646001d0e0"} err="failed to get container status \"9bb1b64f6474505566d599334adc0906df4c771c7032a587b70bf3646001d0e0\": rpc error: code = NotFound desc = could not find container \"9bb1b64f6474505566d599334adc0906df4c771c7032a587b70bf3646001d0e0\": container with ID starting with 9bb1b64f6474505566d599334adc0906df4c771c7032a587b70bf3646001d0e0 not found: ID does not exist"
Jan 27 19:59:54 crc kubenswrapper[4853]: I0127 19:59:54.571829 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/397c27f7-d93e-4b95-9e04-3b9cbf2cad6a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "397c27f7-d93e-4b95-9e04-3b9cbf2cad6a" (UID: "397c27f7-d93e-4b95-9e04-3b9cbf2cad6a"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 27 19:59:54 crc kubenswrapper[4853]: I0127 19:59:54.657623 4853 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/397c27f7-d93e-4b95-9e04-3b9cbf2cad6a-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 27 19:59:54 crc kubenswrapper[4853]: I0127 19:59:54.682391 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-nx7hx"]
Jan 27 19:59:54 crc kubenswrapper[4853]: I0127 19:59:54.694793 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-nx7hx"]
Jan 27 19:59:56 crc kubenswrapper[4853]: I0127 19:59:56.130970 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="397c27f7-d93e-4b95-9e04-3b9cbf2cad6a" path="/var/lib/kubelet/pods/397c27f7-d93e-4b95-9e04-3b9cbf2cad6a/volumes"
Jan 27 19:59:59 crc kubenswrapper[4853]: I0127 19:59:59.112678 4853 scope.go:117] "RemoveContainer" containerID="1495a7d2ce41db54f43cd71f1504a040594d9f9b2b41463b9061b57b26e8d3c2"
Jan 27 19:59:59 crc kubenswrapper[4853]: E0127 19:59:59.113832 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6gqj2_openshift-machine-config-operator(b8a89b1e-bef8-4cb7-930c-480d3125778c)\"" pod="openshift-machine-config-operator/machine-config-daemon-6gqj2" podUID="b8a89b1e-bef8-4cb7-930c-480d3125778c"
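The section ends with certified-operators-nx7hx fully cleaned up (orphaned volumes dir removed at 19:59:56) while machine-config-daemon-6gqj2 remains held in CrashLoopBackOff. A usual next diagnostic step is to read the crashing container's last termination state from the API; a hedged client-go sketch (the kubeconfig path is illustrative and error handling is minimal):

```go
// Illustrative client-go sketch, not taken from this log: fetch the pod
// and print the last termination state of the crash-looping container.
package main

import (
	"context"
	"fmt"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // assumed path
	if err != nil {
		log.Fatal(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	pod, err := client.CoreV1().Pods("openshift-machine-config-operator").
		Get(context.TODO(), "machine-config-daemon-6gqj2", metav1.GetOptions{})
	if err != nil {
		log.Fatal(err)
	}
	for _, cs := range pod.Status.ContainerStatuses {
		if t := cs.LastTerminationState.Terminated; t != nil {
			// Exit code and reason of the previous failed run.
			fmt.Printf("%s: exit=%d reason=%s\n", cs.Name, t.ExitCode, t.Reason)
		}
	}
}
```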